NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 10/23/20 in Posts

  1. Hi. When is VMware 7 Update 3 support planned?
    2 points
  2. Hi NAKIVO, is it possible to add an MFA option for the web interface? I have some customers where MFA is required for the backup. Thanks, Mario
    2 points
  3. Great to get 2FA for the login. I just found the option "Store backups in separate files": "Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does this have an effect on the global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of NAKIVO; if there is an improvement here, it is very welcome!
    2 points
  4. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  5. Here is how to fix it for QNAP. Log in to the QNAP via SSH as admin:
    cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
    ./nkv-bhsvc stop
    (Stopping NAKIVO Backup & Replication Transporter service)
    ./bhsvc -b "<UsePassword>"
    ./nkv-bhsvc start
    (Starting NAKIVO Backup & Replication Transporter service)
    Use the Chrome browser to open https://<qnapip>:4443 and go to Settings / Transporters / Onboard transporter / Edit. Press F12, go to the Console, and run the following JavaScript (paste the commands one by one and press Enter):
    var a=Ext.ComponentQuery.query("transporterEditView")[0]
    a.masterPasswordContainer.show()
    a.connectBtn.show()
    You get new fields in the browser: a "Master password" field and a "Connect" button. In "Master password", enter the password from the SSH command (the same one used in bhsvc -b "<UsePassword>") and press the Connect button. Then refresh the Transporter and the Repository. by Boris
    2 points
  6. Has anyone else had problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups work correctly and the retention is correct, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both my sites where I have tape libraries. (They expire 11 years from now.) I opened a ticket, but they were not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned that on and they work, but they didn't affect my problem with yearly backups, so for now I'm turning it off to see what happens with just weekly and monthly backups. These are my settings:
    Retain one recovery point per day for 7 days
    Retain one recovery point per week for 4 weeks
    Retain one recovery point per month for 12 months
    Retain one recovery point per year for 11 years
    Tape job options: Create full backup: every Saturday. Tape appending: start full backup with an empty tape.**
    **I found that now that daily backups are turned on, NAKIVO will start writing daily backups to my full-backup tape before I get a chance to eject it. This is not desirable, but there are no options to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and only offsite my weekend full backups.
    2 points
  7. In a multi-tenant environment, if the VPN drops for any reason (ISP problem, customer's AC problem, etc.), will the scheduled jobs still start on time, or will they wait for the Director's command to start? Thanks and best regards
    2 points
  8. The shortest way to integrate this without going vendor-specific is adding SAML 2.0. This would allow logins based on Microsoft 365 or other MFA vendors.
    2 points
  9. I have 4 jobs defined to run for different VMs at different times. I can click only one of them inside the "Onboard repository" under the section "Backups". The other three, if I hover over them, just display "Job no longer exists." But I can confirm these are the correct jobs and they are most definitely working. I can click on any item in the first column and see the list of the last recovery points. The last run was yesterday night, as specified in the job. Is this just a display bug, or do I need to be concerned that jobs will suddenly disappear? Thanks in advance, Michael Reetz
    1 point
  10. No problem. Please use this reference to update a Windows-based installation: https://maglar.box.com/s/l34lsb0uuv1nc2cg1imzos8hg5depg5p If you have any additional questions, let me know.
    1 point
  11. I have to apologize, I wasn't clear enough: I'm on the Windows platform, NBR is 10.4 b58736... please post the relevant link! Kind regards
    1 point
  12. Thanks for the info. I'll wait for 10.5 before upgrading
    1 point
  13. I have two issues on ESXi 7.0 U3:
    VM backup job: "VM cannot be started. The VM cannot be processed. cannot find field: vmOpNotificationToAppEnabled (NoSuchFieldException)"
    ESXi host connection problem: "ESXi host is inaccessible. An internal error has occurred. Reason: cannot find field: qualifiedName (NoSuchFieldException)."
    1 point
  14. Hi, @darkhorse! I also had it once. In the end, turned out that I had no disk space.
    1 point
  15. Solved! I've found the problem: I think there is a config issue in the factory-default TCP config on Intel QNAPs:
    Default receive TCP window on Intel QNAP: net.ipv4.tcp_rmem = "4096 1638400 1638400"
    Default receive TCP window on ARM QNAP: net.ipv4.tcp_rmem = "32768 87380 16777216"
    I think the Intel config had too low a minimal window. As soon as I changed the values on the Intel to the values from the ARM, SSH data transfer went double and beyond. So I tweaked the TCP windows (send and receive) and got my beloved 1 Gbps per job. Commands (SSH as root on the QNAP):
    sysctl -w net.ipv4.tcp_rmem="131072 524288 16777216"
    sysctl -w net.ipv4.tcp_wmem="131072 524288 16777216"
    Also, I think it's on by default, but it's not a bad idea to force autotuning:
    sysctl -w net.ipv4.tcp_window_scaling=1
    I hope it's useful if anyone has the same issue. Just be sure to include the tweaks in autorun.sh so they survive reboots. Thanks
    1 point
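To make the tweaks above survive reboots, they need to land in the QNAP's autorun.sh, as the post notes. Here is a minimal sketch of doing that; the `./autorun.sh` path is a placeholder, since on a real QNAP autorun.sh lives on a model-specific config partition:

```shell
# Sketch: persist the TCP window tweaks from the post so they survive reboots.
# "./autorun.sh" is a stand-in path; locate the real autorun.sh for your model.
AUTORUN=./autorun.sh
cat >> "$AUTORUN" <<'EOF'
sysctl -w net.ipv4.tcp_rmem="131072 524288 16777216"
sysctl -w net.ipv4.tcp_wmem="131072 524288 16777216"
sysctl -w net.ipv4.tcp_window_scaling=1
EOF
# Confirm the three tweak lines were appended
grep -c '^sysctl' "$AUTORUN"
```

Running the `sysctl -w` commands interactively changes the values immediately; the autorun.sh copy only re-applies them at boot.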
  16. I got a new version from support, now it's working. Version: 10.4.0.r57249
    1 point
  17. Hello everyone, I have a Microsoft Hyper-V Server 2019, and I implemented a backup of two virtual machines (1 domain controller 2019 and 1 Exchange Server 2019). I noticed that some folders are missing in the domain controller backup, and only the main Exchange database is shown in the Exchange Server backup, but there should be other databases in it, too. I noticed this while doing the recovery procedure. How can I solve the problem? Thanks.
    1 point
  18. Ran into the same problem! Please put pressure on Synology to get this working again!
    1 point
  19. Hi, @Gavino! In a new release, we have updated the UI for the Repository screen. The repository type menu was removed. "Store backups in separate files" option was added (enabled by default). Backup repositories created in the previous versions of the product keep operating in the same way they were configured. Storing backups in separate files is recommended. Downside: it has higher space requirements, no built-in deduplication. Upsides: it is more resilient to network, power, and hardware failures; it does not require running space reclaim; it is suitable for enterprise-grade deduplication storage devices; it supports immutable recovery points. If you still want to use the "forever incremental" repository and built-in data deduplication, please uncheck the "Store backups in separate files" checkbox when creating the backup repository. Please let me know if you have further questions or concerns.
    1 point
  20. I have a fresh install of Ubuntu Server 18.04 with nothing but the following done to it before installing NAKIVO ("NAKIVO_Backup_Replication_v10.4.0.56979_Installer-TRIAL.sh"):
    sudo apt update
    sudo apt install nfs-common
    sudo apt install cifs-utils
    sudo apt install open-iscsi
    sudo apt install ntfs-3g
    I installed NAKIVO with the flags "--eula-accept -f -C" since I am using NFS instead of the onboard repository. You can see in 10.4 that multiple fields are missing. Where is the data storage policy ("Forever incremental" etc.)? Where is the deduplication option? I don't want compression in NAKIVO as my TrueNAS storage does LZ4 compression already. Even though Data Size Reduction is set to Disabled, it's still telling me that fast compression is set (even though I toggled that to Disabled before setting Data Size Reduction to Disabled). If I experiment and turn Data Size Reduction to Enabled, I am then able to change compression to Disabled and have that stick, but still, deduplication remains off, and that matters greatly to me. Still no idea what my data storage policy is. I uninstalled 10.4 and installed 10.3 on the same box, and that is behaving correctly. Compare the screenshots. 10.4 is a hot mess for NFS right now and I simply cannot use it. You broke login wallpapers in 10.3, which was annoying, but this is simply horrible. Where are your testing procedures? A support bundle from 10.4 has been sent. I am running 10.3 now, so you'll have to replicate it in your lab, which should be easy to do.
    1 point
  21. OK, I worked out that "Store backups in separate files" means exactly the same as "no deduplication". If I am wrong, then please let me know. NAKIVO should have just kept it simple by keeping the "deduplication" wording and not confused people with another option that means the same thing. It's a bit crazy to look at a repository after it's been created and see "Deduplication" and "Store backups in separate files" as completely separate line items (which are far apart). Both refer to the same thing. I bet there is a product designer pointing his finger at a colleague right now going "see, I told you that customers would not like this!". And they would be right.
    It's pretty strange that for this feature, the new docs are now saying "Enabling this option is highly recommended to ensure higher reliability and performance". WHAT?! Dedup was one of the main reasons I bought NAKIVO! https://www.nakivo.com/vmware-backup/#benefits - this talks up dedup as a benefit and cost saving. Nowhere in the sales pitch do you say that it's highly recommended not to use it!! It's interesting to note also that deduplication and an immutable backup repository are incompatible with each other, as I found out from emailing tech support.
    Some feedback on documentation: NAKIVO should put 10.3 or 10.4 (or whatever the version is) at the top of each documentation page. That would be very handy. I thought I was reading the latest version when I posted above, as the Release Notes page mentioned 10.4, so it looked like the right place for the latest info. https://helpcenter.nakivo.com/display/RN/Release+Notes I didn't realise until after contacting tech support that it was the older documentation. Also, the images zoom in the 10.3 docs but not the 10.4 ones. I prefer zoom, as sometimes the info is unreadable until it's zoomed in. Please fix this in the 10.4 docs. Thanks.
    An issue I found here: https://helpcenter.nakivo.com/User-Guide/Content/Deployment/System-Requirements/Feature-Requirements.htm#Ransomware
    ------ quote
    For Local Folder type of Backup Repository, the following conditions must be met: The Backup Repository data storage type must be set to Incremental with full backups.
    ----- end quote
    Whoever wrote that maybe doesn't realise that "Incremental with full backups" no longer exists as an option in 10.4, as it did in 10.3. I have been told in an email that "...there is the option "store backups in separate files". When this option is selected, the repository is "Incremental with Full Backup" and Immutable is supported." It would be handy if you explained that in your documentation. The line above should probably say: The Backup Repository data storage type must be set to "store backups in separate files" (i.e. deduplication switched off). This implicitly sets the data storage policy to Incremental with full backups.
    1 point
  22. The problem was with the Check_MK 2.0 agent: MaxConnectionsPerSource was set to 3, which is why the agent stopped responding. I'm running some scripts via MRPE to check the status of jobs.
    1 point
  23. Good morning. I have a NAKIVO installation with a single ESXi server as inventory. Currently I back up all VMs to an Intel-CPU-based QNAP running the NAKIVO Transporter app. I added a new QNAP to the network and I'm trying to use it as a second transporter/repository. No problem adding it as a transporter in NAKIVO's dashboard, but I can't add the folder as a CIFS repository. The error is that "the folder doesn't exist", so it can't create the NakivoBackup folder. I now suspect that I chose the wrong NAS: it has an ARM processor. I opened a support ticket, but I'm also trying this forum. Thanks
    1 point
  24. Hi, I did the update to 10.4 and again had some trouble with certificates and passwords on Windows / Hyper-V hosts. After some tests I found the following way to get it running.
    Before you start the update: create a copy of your current NAKIVO directory at C:\Program Files\NAKIVO Backup & Replication.
    Do the update; if it's working, be happy. If not:
    - Stop the NAKIVO service.
    - Copy certificate.pem and bhsvc.id from the backup directory to your current directory.
    - Start the service.
    - Refresh the transporter; now it should work.
    Greetings, Mario
    1 point
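The pre-update safety copy described above can be sketched roughly as follows. This is only an illustration: the directory names are stand-ins (on a real host the source is C:\Program Files\NAKIVO Backup & Replication), and the commands here use Unix-style shell purely to show the flow:

```shell
# Sketch: before updating, copy the transporter identity files so they can be
# restored if the updated transporter fails to connect.
# "./nakivo-current" stands in for the real NAKIVO install directory.
SRC=./nakivo-current
BKP=./nakivo-backup
mkdir -p "$SRC" "$BKP"
touch "$SRC/certificate.pem" "$SRC/bhsvc.id"   # the two files the fix restores
cp -a "$SRC/certificate.pem" "$SRC/bhsvc.id" "$BKP/"
# Recovery after a failed update: stop the service, copy both files back from
# $BKP to $SRC, start the service, then refresh the transporter in the UI.
ls "$BKP"
```

The key point from the post is that only certificate.pem and bhsvc.id need to come back from the copy; the rest of the updated installation stays in place.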
  25. I added 4 workstations, which were shut down for the night, to my inventory. Is it possible to stop the errors for the non-responding workstations, and the errors for their transporters (the same ones as the workstations) that do not respond? (Or can I turn them into alerts so I won't receive any emails?) For the workstation errors I receive an email every 2 hours, and another email for the transporters. Would you consider creating a dedicated section where we can configure alerts and error messages? It would be useful to be able to check for the presence of the workstation and the transporter only when the backup is due to run. I understand that a server should always be on, but a workstation may be turned off for the night, the weekend, or the holidays. As currently designed, the management of physical machines is unmanageable for an inventory with more than 3 or 4 workstations. The error messages generated in this case risk drowning out the really important ones (real transporters not responding, storage not responding, vCenters or resources that must always be present not responding). Loris
    1 point
  26. After a successful backup of our (virtual) file server, when I test a recovery task I cannot find the data partition (BitLocker encrypted); only the OS partition is available for granular selection of files. Where am I going wrong? The backup size seems correct (1 TB), but I can see only 3 partitions; the most important one (the data partition) seems to have been skipped for some reason. Has anybody experienced this issue? Alex
    1 point
  27. Hi - could this article be revised? It doesn't say what to change, only what the requirements are, and it's a little vague. It just needs directions to modify /etc/ssh/sshd_config, changing the lines mentioned above, e.g.:
    KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,rsa-sha2-256,rsa-sha2-512,ssh-rsa,ssh-dss
    Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1-96,hmac-sha1,hmac-md5-96,hmac-md5
    This adds the right KexAlgorithms, HostKeyAlgorithms, Ciphers, and MACs. That way, it's more specific to the solution. Thanks!!
    1 point
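A hedged sketch of applying the lines from the post above. The algorithm lists are copied verbatim from the post; the file path here is a local stand-in, since on a real host you would edit /etc/ssh/sshd_config (after backing it up) and validate with `sshd -t` before restarting the SSH service:

```shell
# Sketch: write the suggested algorithm lines into an sshd_config.
# "./sshd_config" is a local stand-in for /etc/ssh/sshd_config.
CFG=./sshd_config
: > "$CFG"
cat >> "$CFG" <<'EOF'
KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,rsa-sha2-256,rsa-sha2-512,ssh-rsa,ssh-dss
Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1-96,hmac-sha1,hmac-md5-96,hmac-md5
EOF
# Count the four directives that were written
grep -cE '^(KexAlgorithms|HostKeyAlgorithms|Ciphers|MACs) ' "$CFG"
```

Note that sshd only picks up the change after a config reload or service restart, which is why validating with `sshd -t` first matters: a typo in these lines can lock you out.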
  28. ok thanks it's good
    1 point
  29. Same problem with the NAKIVO appliance and Transporter + Repository on Synology. The same trick worked like a charm, BUT the first stage on Synology is:
    cd /volume1/@appstore/NBR-Transporter
    ./nkv-bhsvc stop
    ./bhsvc -b "<UsePassword>"
    After these, restart the service from the Synology console, connect to the NAKIVO console, edit the transporter, press Connect... et voilà :))
    1 point
  30. I have a question (suggestion) regarding licensing. Why don't you make an independent license server that the Directors could rely on regardless of the installation? Example: because I will have more than 800 transporters in the near future, I installed a multi-tenant Director to separate eleven tenants. With an independent license server I could have installed several single-tenant Directors, one for each organization. With this architecture I would secure my Directors (today, if I lose the multi-tenant Director (crash, cyber attack, ...), I block all my backups and restores). Likewise, it would make updates easier (not a big bang). Sorry for my approximate English.
    1 point
  31. Thanks, we looked at Wasabi before and the cost was adding up quickly. That is the issue we keep running into: backups locally are great, but offsite storage of the backups is so expensive that clients don't want to pay.
    1 point
  32. I'm doing the update right now. I have the same error and requested support for this. Conclusion: if you can wait with the upgrade, then wait....
    1 point
  33. Hello, @Mr.Ed! I am very sorry to hear that you didn't resolve your issue with the Support Team. Please continue the discussion in the opened ticket until your problem is fixed. Let me know when you find a solution for your case. As for appending all daily backups to the same tape and keeping it in the library while storing offsite your weekend full backups only, unfortunately, this feature isn't currently implemented. However, the feature request will be submitted to the Product Team.
    1 point
  34. Hello, I'd like to ask what the best way is to set up backup to the cloud - Amazon S3. I don't have much Amazon S3 experience but would like to keep the backups fast and the cost low, and I'm not sure how to proceed. * If I set up backup with incrementals and weekly full backups, my S3 costs will surely be low because I don't download anything. However, my weekend backups will be fulls and take considerable time. Correct? * If I set up backup with incrementals and synthetic fulls, does that work well with S3? Will my backups be faster, or will S3 use my local transporters to merge the backups and create high costs due to downloading from S3? * In a local test (not to S3) I saw that backup copy jobs ignore the synthetic setting and revert to full backups - in normal backup jobs it works. Is this true for S3 also? I'd love to read some advice from people currently running backups to S3 so I can make the right choice. Thanks!
    1 point
  35. Good day. When is VMware 7 Update 2 support planned?
    1 point
  36. Hi, thank you very much for the information. It is unfortunate that it is not possible in our version But thanks anyway for the quick reply. Kind regards, Jens
    1 point
  37. Nakivo seems to run self-backups to all repositories automatically. In my case, some of my repositories are hard drives that are ejected and stored off-site. The issue is that Nakivo automatically adds repositories as a destination for the self-backup, it would be great if it didn't do this because it generates failure notifications and the Self-Backup fails because the repositories aren't available.
    1 point
  38. I noticed that on Windows Server, I can only run one job at a time. When I back up to tape or Wasabi, I cannot run my scheduled backups. QNAP says that NAKIVO does not have this limitation when running on an Intel- or AMD-based QNAP NAS. I assume that on QNAP the backup copy jobs run from a snapshot, freeing the primary backup to be used at the same time? What other repositories allow replication or backup copy jobs while source backups are running? Synology? FreeNAS? Linux? - Ed Hammond
    1 point
  39. Hello, I had the same problem here! Do you have any package for QNAP QTS? Thanks in advance!
    1 point
  40. It's done, and it doesn't change anything. I sent an email updating #80384 with the install.log files.
    1 point
  41. Why not try OAuth 2.0? It's cheaper and much easier to use.
    1 point
  42. It's a bug, and it will be fixed in an upcoming release.
    1 point
  43. The problem of Windows antivirus flagging NAKIVO backup repository files is still an issue. Is there any roadmap for fixing the repository files being recognized as a virus? Exclusion lists aside, we need to know which folders and processes to exclude.
    1 point
  44. I can tell you how it works manually or how it works in R1Soft / Idera backup software (basically it is the same way). I can do it here or in some ticket.
    1 point
  45. When it comes to MySQL, it is not as simple as MSSQL. On MSSQL you can recover the .mdf and .ldf files and just attach the DB to any other compatible MSSQL engine. With MySQL it is not so simple: to recover a MySQL DB you need access not only to the DB directory but also to the InnoDB logs, the mysql DB (the main DB in the MySQL engine), and the performance_schema DB too. Investigate it on your end to fully understand the whole complexity of this operation. Bottom line, just like I said, this is not so simple.
    1 point
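To illustrate the point above about why a physical MySQL restore is more than grabbing one database's folder, here is a rough sketch (all paths are hypothetical) of what has to move together:

```shell
# Sketch: a physical MySQL restore must carry the whole datadir, not just one
# database's directory: the InnoDB system tablespace (ibdata1) and redo logs
# (ib_logfile*), plus the `mysql` and `performance_schema` system schemas,
# travel together with the user database.
SRC=./backup/datadir    # hypothetical backup location
DST=./restore/datadir   # hypothetical restore target
# Simulate a backed-up datadir layout for the sketch
mkdir -p "$SRC/mysql" "$SRC/performance_schema" "$SRC/mydb"
touch "$SRC/ibdata1" "$SRC/ib_logfile0" "$SRC/ib_logfile1"
mkdir -p "$DST"
cp -a "$SRC/." "$DST/"  # copy everything, preserving the layout
ls "$DST"
```

Restoring only `mydb/` and skipping the InnoDB files or system schemas generally leaves the engine unable to start or unable to see the tables, which is the asymmetry with MSSQL's attach-the-files workflow that the post describes.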
  46. I got an answer from Support which sparked a realization. I believe I have figured out the problem. The transporter instance size was much smaller than the target EC2 instance. I deployed a new transporter of a size that matched the target instance and kicked off the backup. It is running now and no errors so far, knock wood.
    1 point
  47. Does NBR support the AHV Community Edition?
    1 point
  48. My apologies, I forgot to reply here. Updating for anyone in the future: we upgraded NAKIVO B&R to v10.0.0, which resolved the issue. The VMs are now all on Server 2019 hosts and boot verification is working as expected.
    1 point