NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 05/28/21 in Posts

  1. Hi. When is VMware 7 Update 3 support planned?
    2 points
  2. @Bedders, please try replacing "q" with "-q":
  zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
  Let me know if it works for you!
    2 points
  3. Hi, @JurajZ and @Bedders! NAKIVO Backup & Replication uses the Apache Log4j library, which is part of Apache Logging Services. You can manually fix the CVE-2021-44228 vulnerability by removing JndiLookup.class from libs\log4j-core-2.2.jar. Note: If the libs folder contains log4j-core-fixed-2.2.jar instead of log4j-core-2.2.jar, the issue has already been fixed for your version of NAKIVO Backup & Replication.
  For Linux:
  - Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
  - To remove JndiLookup.class from the jar file, run the following command:
  zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
  For Windows:
  - Ensure you have the 7z tool installed.
  - Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
  - Use 7z to open log4j-core-2.2.jar and remove JndiLookup.class from the jar file.
  - Restart NAKIVO Backup & Replication.
  For NAS devices: If you are using a NAS, open an SSH connection to your device and locate the NAKIVO Backup & Replication installation folder here:
  - ASUSTOR NAS: /usr/local/AppCentral/NBR
  - FreeNAS/TrueNAS (inside the jail): /usr/local/nakivo/director
  - NETGEAR NAS: /apps/nbr
  - QNAP NAS: /share/CACHEDEV1_DATA/.qpkg/NBR
  - Raspberry Pi: /opt/nakivo/director
  - Synology NAS: /volume1/@appstore/NBR
  - Western Digital NAS: /mnt/HD/HD_a2/Nas_Prog/NBR
  Note: Refer to the NAS vendor documentation to learn how to open an SSH connection to your NAS device.
  IMPORTANT: CVE-2021-44228 is a severe vulnerability. We strongly advise you to apply the manual fix as soon as you can. This is the best way to avoid the risk of a security breach. Please contact customer support if you require a custom build of NAKIVO Backup & Replication that includes the fix. (A quick way to verify the removal is sketched after this post.)
    2 points
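To confirm the class is actually gone after applying the fix above, a minimal check from the libs folder (assuming the standard unzip tool is available):

  unzip -l log4j-core*.jar | grep JndiLookup   # no output means JndiLookup.class was removed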
  4. Hi, @SALEEL! I am very sorry that we put you in such a position. The latest changes in DSM 7 were unexpected for everyone, and it makes us very sad that we can't provide you with our best service as usual. Our team has been doing everything to accelerate the process, but at this point there is nothing more we can do. I understand your disappointment, and I apologize for all the inconvenience this has caused you. Customer satisfaction is one of our main values, which is why this situation is so frustrating. If only it depended on us! At the moment I can only offer you the following options:
  1. Temporarily install NAKIVO Backup & Replication on some other host and use the NAS-based repository as a CIFS/NFS share. When DSM 7 is officially supported, it will be possible to install NAKIVO on the NAS again.
  2. Workaround for DSM 7:
  - Deploy a new NAKIVO installation somewhere else as a Virtual Appliance (VA), or install it on a supported VM, physical machine, or NAS;
  - Share the repository folder on the Synology NAS with DSM 7 as an NFS / CIFS (SMB) share;
  - Add this repository as an existing repository to the new NAKIVO installation;
  - Restore the configuration of the old NAKIVO installation from the Self-Backup in the original repository.
  For now, we are working closely with Synology to prepare the release, and we are almost at the final stage. However, we expect to need a few more weeks until the final release. Again, I am very sorry for all the inconvenience!
    2 points
  5. Great to get 2FA for the login. I just found the option "Store backups in separate files": "Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does it have an effect on the global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of NAKIVO, so if this is an improvement, it is very welcome!
    2 points
  6. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  7. Hi, I updated the VA to NAKIVO 10.6. I also have a transporter on a Synology with DSM 7.1-42661, but when I try to update the transporter, I get the error "This package requires DSM 7.0-4000 or lower". Any solution? Thanks.
    1 point
  8. @AalHai Thank you for getting back to us! In this case, we can recommend the following workaround (one possible trigger is sketched after this post):
  1. Create a script on the NAKIVO Director's side with the following actions (note that there should be a separate script for each physical machine to be backed up):
  - Refresh the transporter that is assigned to the physical machine.
  - Start the backup job for that physical machine.
  2. Create a task on the physical machine's side that initiates the above script remotely on some trigger (boot, wake-up, etc.).
  That said, this workflow has not been tested by our QA Team, and we cannot guarantee its reliability. Thank you so much for your attention and participation. We look forward to your feedback.
    1 point
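One possible shape for step 2 on a Linux physical machine, as referenced above - a sketch only: the Director host name, SSH user, key, and script path are all hypothetical, and the Director-side script is whatever was built in step 1:

  # crontab entry on the physical machine: at boot, remotely run the Director-side script
  @reboot ssh -i ~/.ssh/nakivo_key nakivo@director.example.com '/opt/nakivo/scripts/refresh-and-backup-host01.sh'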
  9. Thanks, but your maglar.box link is no longer available. Can you please put this .sh file somewhere else? The NAKIVO download site would be nice.
    1 point
  10. Hi, I disabled the auto-refresh of the repositories, but it did not make any change to the usage. After some digging, it looks like the transporter (or pgsql) is repeatedly trying to determine the used and free space in the repository. Looking at all_pg_sql.log, it keeps logging events like this: I suspect that "Get used space" is the culprit: it walks the directory structure and adds up the size of all the files in the repo (which contains ~3.3M files in this case). It appears to be triggered at tenant startup (or repo mount) and then 30 minutes after the previous iteration completes. I have also tried switching off "system.repository.refresh.backup.size.calculation" under expert settings, but "Get used space" still runs.
    1 point
  11. Hi, detaching/attaching the backup repository seems a good solution to prevent data corruption. Why don't you add this feature directly to the backup job or backup copy job?
    1 point
  12. Hi, does NAKIVO now support recovery from physical to physical?
    1 point
  13. Hello everyone, I'm experiencing a problem with recovering a VM. It always stops at 21.6% and doesn't proceed. I can see on my NAS interface and the VMware ESXi (tried versions 6.7 and 7.0) interface that it starts to transfer data (creates a folder) and tries to build a VM. The recovery job stops on its own after a while (23 minutes on my last try), and the VM gets destroyed and the folder disappears. So far I've played around with different options (choosing the VM server as datastore instead of the NAS, different recovered VM names, using just the default options), and I've also experienced this exact same phenomenon with a different VM and NAS before. Has anyone else had this problem and might know how to solve it?
    1 point
  14. That's great. It's still marked as unsupported in Synology's DSM 7 third-party packages support matrix, but I found the same topic on their community site confirming that NAKIVO 10.5 is supported on DSM 7. I've already moved to DSM 7 with NBR 10.5. Thanks
    1 point
  15. @Garry.web I guess it is always worth a try in life, but in business, if you are satisfied with M365, why would you get into the drama of transferring all the data and work process to Google? Makes no sense.
    1 point
  16. Hi. I updated one of my offsite NAS units to DSM 7 and found out through support that 10.4 does not support it. Now I am left with having to do a physical initial seed on a DSM 6 unit or wait for certification on DSM 7. They are hoping it will be done this month! Is there anywhere I can find a compatibility matrix for Synology and NAKIVO? Very frustrating.
    1 point
  17. Hello. I've got two Synology SANs, with NAKIVO installed on both. One of them (SAN1) takes backups from vCenter and copies them to two repositories: the first one is local, and the second one is sent to SAN2 (the second Synology). In case SAN1 fails, I'd like to be able to recover backups from SAN2, but I'm not able to mount a local repository on SAN2 pointing at the same folder SAN1 is writing to. I get the error "The repository cannot be found within the specified location (/volume1/SecureIT/NakivoBackup)." I've tried changing the access rights on the folders, but no luck. Any suggestion or idea? Thank you very much!
    1 point
  18. Hi, @Tayler Odling! The biggest difference between FTP and SFTP is the encryption included in SFTP (the SSH File Transfer Protocol). When you send data with FTP, it isn't encrypted. Note: you can use a secure connection, but your file and the transmission in general won't be encrypted. SFTP, on the contrary, doesn't transfer data in clear text, as it is encrypted. If you decide to stay with FTP, use passwords and additional security options, like requesting passwords to access the transferred data. If you choose to set up SFTP, use this step-by-step guide: https://www.nakivo.com/blog/how-to-build-powershell-tools-to-setup-sftp/ Let me know if you have any additional questions. (A minimal sftp example follows this post.)
    1 point
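To illustrate the difference in practice, a minimal sketch using the stock OpenSSH sftp client (the user and host names are placeholders):

  sftp backupuser@nas.example.com      # the whole session runs inside SSH encryption (port 22)
  # inside the session:
  #   put backup.tar.gz /backups/      # upload travels over the encrypted channel
  #   get /backups/backup.tar.gz .     # download, likewise encrypted

With plain ftp, the same put/get commands would send both credentials and file contents in clear text.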
  19. @Cian Joyce, hey! We primarily take into account the optimal performance of your network, ease of maintenance, scalability options and, of course, the network building costs. I guess the best strategy is to define your priorities and then check what types of networks fit the best.
    1 point
  20. @Tayler Odling, Depends on what you need it for. But generally speaking, it's like playing with fire as you get no encryption. It isn't that hard to switch. Try to consider it.
    1 point
  21. 1 point
  22. Hello, I just opened my Synology running DSM 7. Among the installed packages, NAKIVO told me to do an update to solve the issue, and NAKIVO is working again after the upgrade! (version 10.4.0.59271-3000) Michel
    1 point
  23. Hi, once every few days I get the following error: I see this only with a Synology NAS attached over SMB. Of course it isn't a username/password problem; this was set up once and then never changed. When I go to the repo and press "Refresh", the repo comes back online. I don't know if the problem is on the Synology side or the NAKIVO side. Does anyone know this problem?
    1 point
  24. Today we tested the 10.5 beta version and are still having the "The VM cannot be processed. cannot find field: vmOpNotificationToAppEnabled (NoSuchFieldException)" issue; no backups at all.
    1 point
  25. The report says the speed was about 960 Mbit/s. But the data size is 47 GB, and it was restored in 13 min 46 sec. If we calculate, the real speed comes out at about 3.41 GB/min (roughly 455 Mbit/s). How can you explain this? (See the calculation after this post.)
  Summary
  Status: Successful
  Started: Tue, 28 Sep at 14:06 (UTC +03:00)
  Finished: Tue, 28 Sep at 14:19 (UTC +03:00)
  Data & speed: 47.0 GB at 960.46 Mbit/s
  Duration: 00:13:46
    1 point
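For reference, the whole-job average works out like this (a quick shell check; 13 min 46 sec = 826 s):

  echo '47.0 * 8 * 1000 / 826' | bc -l    # GB -> Mbit, divided by seconds: ~455 Mbit/s

So the reported 960.46 Mbit/s is roughly double the whole-job average, which suggests the speed counter reflects only active transfer time rather than the total job duration.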
  26. Thanks! Support got me sorted after I sent the bundle like you suggested.
    1 point
  27. Hi, I have two questions that are easily answered with any backup software, but I cannot find how to do it with NAKIVO:
  1. How do I back up a remote file share, e.g. \\myserver\abc? Every bit of documentation and every thread describes how to put a repository on an SMB share, but I need to back up the contents of that share. I cannot install anything on the host of that share; I just need to grab the files using a username/password, as any backup software does. I have a transporter in the same network as the share.
  1.1. If that share is instead on a NAS like Synology, where you can actually install the Transporter, how do you use it to back up that share that is local to the transporter?
  I know these are dumb and easy questions, but this seems impossible to do with NAKIVO... Thanks
    1 point
  28. I have to apologize, I wasn't clear enough: I'm on the Windows platform, NBR is 10.4 b58736... Please post the relevant link! Kind regards
    1 point
  29. Hello to all of you. I've managed to install NAKIVO 10 in Docker on DSM 7, reusing the existing repository. I tested recovery and backup, and it's working fine. It could be a temporary solution. (One possible shape of such a setup is sketched after this post.) RJ
    1 point
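A sketch of one way this could look - not an official image and not necessarily how RJ did it; the container name, volume paths, and installer filename are assumptions, and the port mapping assumes the usual Director HTTPS port 4443:

  docker run -d --name nakivo -p 4443:4443 \
    -v /volume1/nakivo-repo:/opt/nakivo/repository ubuntu:20.04 sleep infinity
  docker cp NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh nakivo:/tmp/installer.sh
  docker exec -it nakivo bash /tmp/installer.sh    # then browse to https://<NAS-IP>:4443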
  30. I have two issues on ESXi 7.0 U3:
  VM backup job: VM cannot be started. The VM cannot be processed. cannot find field: vmOpNotificationToAppEnabled (NoSuchFieldException)
  ESXi host connection problem: ESXi host is inaccessible. An internal error has occurred. Reason: cannot find field: qualifiedName (NoSuchFieldException).
    1 point
  31. @fio_fan, I've checked it already, not my case. Any other suggestions? Thank you anyways!
    1 point
  32. Is there any way to get on a notification list for when the updated product is released?
    1 point
  33. Ok, my apologies... It seems to be a QNAP issue... or a network issue... SSH (SCP or rsync) transfers from any remote server to the Intel QNAP top out at 200 Mbps... The weird thing is that the same SCP from the internal network has no cap. My bet was wrong: I assumed the NAKIVO Transporter must use some kind of SSH transfer method, but it's not at fault. Maybe an internal firewall is throttling connections from outside the network? (I have all "visible" firewalls offline.) I totally reset the QNAP, haven't reinstalled NAKIVO yet, and still have the cap. The other QNAP right beside this one (the ARM one) is performing fine. Any clue...? I will send a support ticket to the QNAP guys... Sorry again for all the misunderstanding.
    1 point
  34. I got a new version from support, now it's working. Version: 10.4.0.r57249
    1 point
  35. Hello everyone, I have a Microsoft Hyper-V Server 2019 host, and I set up a backup of two virtual machines (one Domain Controller 2019 and one Exchange Server 2019). I noticed that some folders are missing in the domain controller backup, and only the main Exchange database is shown in the Exchange server backup, though there should be other databases in it too. I noticed this while doing a recovery. How can I solve this problem? Thanks.
    1 point
  36. There were mistakes on my end - I didn't realise I was reading out-of-date documentation, as the documentation only seems to have the version number on the front page, and I arrived at the info via a direct link from a Google search. Even with the most current documentation, I think many long-term users will struggle to fully understand what is going on with these changes. Thanks for following up.
    1 point
  37. Thanks for the link, Mario. For my use I still really like global deduplication for the space/cost savings, and indeed it might be necessary given my storage space requirements.
  I've recently upgraded from an older Dell R710 6-disk setup to a newer Dell R730 8-disk setup, and switched from SATA to SAS drives. I definitely recommend SAS over SATA - especially for VM Flash Boot and Screenshot Verification. It's worth the extra $$ IMHO. The firmware quality is usually better too, as is the SMART reporting. Previously with SATA I found it too slow to have "Enforce explicit file system sync" on. The SATA bus is only half-duplex and only 6 Gbps, vs the full-duplex 12 Gbps of SAS3. Now with my new setup I have explicit filesystem sync turned on and still get about 1.2-1.5 Gbps, which I'm happy with. It was terrible on the old system, so I had it off there.
  With my old setup (without filesystem sync) I did have one corruption on a single VM with about 2 TB of data - a Flash VM Boot would always come up with disk errors and would take forever to try and fix those on boot with checkdisk, which meant Screenshot Verification wouldn't work for that one, but my other 8 VMs were fine.
  My main repo is now a high-quality TrueNAS Core 12.0-U5 ZFS setup with enterprise SAS drives and ECC buffered RDIMMs, and the ZFS dataset, shared via NFS to my NAKIVO Director on Ubuntu 18.04, respects the explicit filesystem sync setting set by NAKIVO. I have everything on a power-conditioning UPS, so I'm hoping I don't have any further repo corruptions. I put a 2 TB NVMe drive as L2ARC for my ZFS pool, hoping that gives a nice speed boost to VM Flash Boot and Screenshot Verification, even after months of backups (and dedup complexity). Another thing I did was turn off compression in NAKIVO, as my NFS datastore uses LZ4 ZFS compression natively, avoiding double-compression issues (the ZFS side of this is sketched after this post).
  Now that NAKIVO only has to worry about dedup, and not dedup + compression, I am hoping my new setup will be super reliable, even with global dedup on. I really think global dedup is still fine to use from a reliability standpoint, but you have to manage the setup with more care (quality ECC RAM, quality UPS, high-quality (SAS) drives, a high-quality drive controller, filesystem sync, optimal compression settings, keeping the BIOS and controller+drive firmware up to date, etc.). Anyway, I'll see how I go.
    1 point
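For anyone replicating the compression side of that setup, the ZFS half is just (a sketch - the pool/dataset name is a placeholder):

  zfs set compression=lz4 tank/nakivo-repo             # compress at the filesystem layer
  zfs get compression,compressratio tank/nakivo-repo   # verify the setting and watch the achieved ratio

with compression then turned off in the NAKIVO repository settings so the data isn't compressed twice.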
  38. Thanks for the info. What's strange is that "Deduplication" and "Store backups in separate files" are both listed as repository parameters at the same time - when really they are the negation of each other, aren't they? It's not possible to have both enabled, or both disabled, is it? It's as if you haven't introduced a new feature but just added another name for an existing one, greatly confusing people in the process. I suspect "store backups as separate files" (aka no deduplication) would take up too much space. But I also find that global deduplication is risky, for the reliability reasons you've listed above. What I was hoping you were doing with this new feature was per-VM deduplication, rather than global. That way, if there were any corruption, it would/should only affect a single VM, and I could just start backing up that VM from scratch again, rather than blowing everything away - which takes a long time for that initial backup (I can only start that on a Saturday afternoon). To me, that's a great trade-off between global dedup and no dedup. I was really hoping this new feature would be "store VM backups separately" with the option to still have deduplication on (on a per-VM basis).
    1 point
  39. Either collect it remotely using WinSCP and edit it locally, or use vi. You just alter the 3 lines shown in the thread. You can do it remotely if you can SSH into the box (switch this on in VMware).
    1 point
  40. Thanks for the response. Is your roadmap accessible online?
    1 point
  41. @Mike Spragg, thanks a lot for your contribution to our blog! Sharing is caring
    1 point
  42. That's not the relevant part - the "file" (as I can't attach it) is shown above - you only need to modify these lines:
  KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
  HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,rsa-sha2-256,rsa-sha2-512,ssh-rsa,ssh-dss
  Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1-96,hmac-sha1,hmac-md5-96,hmac-md5
  This adds the right KexAlgorithms, HostKeyAlgorithms and Ciphers (the original shows what they are now in 7.0U1). The file you need to modify is /etc/ssh/sshd_config. (See the restart note after this post.)
    1 point
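After saving /etc/ssh/sshd_config, the SSH service must be restarted for the new algorithms to take effect - on an ESXi host that should be (assuming shell access to the host):

  /etc/init.d/SSH restart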
  43. In HotAdd mode, virtual disks from the virtual machines being backed up are automatically mounted to the proxy, so they can be accessed by the proxy as local disks. The ESXi host the proxy is running on must have access to all datastores for the virtual machine. If the virtual machine and the proxy are not on the same host, all datastores must be shared among the hosts, and the hosts must be within the same datacenter. If SAN mode isn't available, HotAdd mode can achieve close to SAN mode performance. To enable incremental backups of virtual disks, Changed Block Tracking (CBT) must be used for the first full backup. (CBT is enabled for backups by default.)
    1 point
  44. Correct, you didn't. In 7.0U2 they [VMware] uprated/hardened the SSH security requirements. By making this change you've reverted that change by VMware.
    1 point
  45. I was wondering if we could choose Amazon S3 for the repository but use any S3-compatible service. Backblaze B2, for example, is supposed to be S3-compatible. Could I use it for a repository? We are trying to find a good solution for offsite copies of customers' backups. (A quick endpoint check is sketched after this post.)
    1 point
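One quick way to sanity-check an S3-compatible service before pointing backups at it is the AWS CLI with a custom endpoint - a sketch; the bucket name is a placeholder and the endpoint follows Backblaze's s3.<region>.backblazeb2.com scheme:

  aws s3 ls s3://my-offsite-bucket --endpoint-url https://s3.us-west-002.backblazeb2.com
  # credentials (the B2 application key ID/secret) go in via 'aws configure' beforehand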
  46. Thanks, we looked at Wasabi before and the cost was adding up quickly. That is the issue we keep running into: local backups are great, but offsite storage of the backups is so expensive that clients don't want to pay.
    1 point
  47. Dear all, I have a problem when trying to install NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh under Ubuntu 20.04. I know that it isn't fully supported at the moment, but the problem occurs at a very early step of the installation - I can't even see the EULA; it stops before that step. I tried ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh, then sudo ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh, and also sudo su followed by ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh. Every time it's the same (a possible cause is sketched after this post):
  Support for current OS was not tested.
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
  To see the list of supported systems, please visit https://www.nakivo.com/helpcenter/display/NH/System+Requirements
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
  Successful operation of all features and good performance are not guaranteed.
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
  Proceed with installation at your own risk [y/N]? y
  *******************************************************
  *                                                     *
  * Nakivo Backup & Replication Installation and Update *
  *                                                     *
  *******************************************************
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 793: cannot create /tmp/nkv-install.log: Permission denied
  Verifying Installer...
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
  Extracting NAKIVO Backup & Replication...
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 807: cannot create /tmp/nkv-install.log: Permission denied
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 808: cannot create /tmp/nkv-install.log: Permission denied
  NAKIVO Backup & Replication extraction failed.
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
  Aborting...
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
  ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 812: cannot create /tmp/nkv-install.log: Permission denied
    1 point
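The repeated "cannot create /tmp/nkv-install.log: Permission denied" lines point at /tmp itself rather than at the installer. A quick check - a guess at the cause, not a confirmed fix:

  ls -ld /tmp             # should show drwxrwxrwt (sticky bit, mode 1777)
  sudo chmod 1777 /tmp    # restore standard /tmp permissions if they differ
  mount | grep ' /tmp '   # also check /tmp isn't mounted read-only or noexec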
  48. I've got the following link from support: https://maglar.box.com/shared/static/ebzm01v0080ibtvdh2ubc11s7ops58wa.sh This is a newer build, and it should fix the problems. Since I was able to fix my problems with a reinstall, I won't upgrade, but maybe someone else can minimize their trouble. Mario
    1 point