NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 08/19/19 in all areas

  1. Hi Nakivo, is it possible to add an MFA option for the web interface? I have some customers where MFA is required for the backup. Thanks, Mario
    2 points
  2. @Bedders, Please try replacing "q" with "-q": zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class Let me know if it works for you!
    2 points
  3. Hi, @JurajZ and @Bedders! NAKIVO Backup & Replication uses the Apache Log4j library, which is part of Apache Logging Services. You can manually fix the CVE-2021-44228 vulnerability by removing JndiLookup.class located in libs\log4j-core-2.2.jar. Note: If the libs folder contains log4j-core-fixed-2.2.jar instead of log4j-core-2.2.jar, the issue has already been fixed for your version of NAKIVO Backup & Replication.
     For Linux: Go to the libs folder located inside the NAKIVO Backup & Replication installation folder. To remove JndiLookup.class from the jar file, run the following command (a shell sketch follows below this item): zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
     For Windows: Ensure you have the 7z tool installed. Go to the libs folder located inside the NAKIVO Backup & Replication installation folder. Use 7z to open log4j-core-2.2.jar and remove JndiLookup.class from the jar file. Restart NAKIVO Backup & Replication.
     For NAS devices: If you are using a NAS, open an SSH connection to your device and locate the NAKIVO Backup & Replication installation folder here:
     For ASUSTOR NAS: /usr/local/AppCentral/NBR
     For FreeNAS/TrueNAS (inside the jail): /usr/local/nakivo/director
     For NETGEAR NAS: /apps/nbr
     For QNAP NAS: /share/CACHEDEV1_DATA/.qpkg/NBR
     For Raspberry Pi: /opt/nakivo/director
     For Synology NAS: /volume1/@appstore/NBR
     For Western Digital NAS: /mnt/HD/HD_a2/Nas_Prog/NBR
     Note: Refer to the NAS vendor documentation to learn how to open an SSH connection to your NAS device.
     IMPORTANT: CVE-2021-44228 is a severe vulnerability. We strongly advise you to apply the manual fix as soon as you can. This is the best way to avoid the risk of security breaches. Please contact customer support if you require a custom build of NAKIVO Backup & Replication that has the fix.
    2 points
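     A minimal sketch of the Linux steps above, assuming the installation lives under /opt/nakivo/director (adjust the path for your platform or NAS model using the list above):
       cd /opt/nakivo/director/libs                 # path is an assumption; see the per-NAS paths listed above
       ls log4j-core*                               # if log4j-core-fixed-*.jar is present, the fix is already applied
       zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
       # afterwards, restart NAKIVO Backup & Replication using your platform's usual service controls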
  4. Hi, @SALEEL! I am very sorry that we put you in such a position. The latest changes in DSM 7 were unexpected for everyone, and it makes us very sad that we can't provide you with our best service as usual. Our team has been doing everything to accelerate the process; however, at this point, there is nothing we can do. I understand your disappointment, and I apologize for all the inconvenience it has caused you. One of our main values is customer satisfaction, which is why this situation is frustrating. If only it depended on us! At the moment I can only offer you the following options:
     1. You can temporarily install NAKIVO Backup & Replication on some other host and use the NAS-based repository as a CIFS/NFS share. When DSM 7 is officially supported, it will be possible to install NAKIVO on the NAS again.
     2. Workaround for DSM 7:
     - Deploy a new NAKIVO installation somewhere else as a Virtual Appliance (VA) or install it on a supported VM, physical machine, or NAS;
     - Share the repository folder on the Synology NAS with DSM 7 as an NFS / CIFS (SMB) share;
     - Add this repository as an existing repository to the new NAKIVO installation;
     - Restore the configuration of the old NAKIVO installation from the Self-Backup stored in the original repository.
     For now, we are working closely with Synology to prepare the release, and we are almost at the final stage. However, we expect that we will need a few more weeks until the final release. Again, I am very sorry for all the inconvenience!
    2 points
  5. Hi. When is VMware 7 Update 3 support planned?
    2 points
  6. Great to get 2FA for the login. I just found the option "Store backups in separate files: Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does this have an effect on the global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of Nakivo. If there is an improvement here, it is very welcome!
    2 points
  7. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  8. Here is how to fix it for QNAP:
     Log in to the QNAP via SSH as admin:
     cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
     # ./nkv-bhsvc stop
     Stopping NAKIVO Backup & Replication Transporter service:
     [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./bhsvc -b "<UsePassword>"
     [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./nkv-bhsvc start
     Starting NAKIVO Backup & Replication Transporter service:
     Use the Chrome browser: https://<qnapip>:4443
     Go to Settings / Transporters / Onboard transporter / Edit.
     Press F12, go to Console, and type the following JavaScript commands (paste them one by one and press Enter):
     var a=Ext.ComponentQuery.query("transporterEditView")[0]
     a.masterPasswordContainer.show()
     a.connectBtn.show()
     You get new fields in the browser: a "Master password" field and a "Connect" button.
     In Master password, enter <ThePasswordFromTheSSHCommand> (the same one used in the bhsvc -b "<UsePassword>" command above).
     Press the Connect button, then refresh the Transporter and Repository.
     by Boris
    2 points
  9. Anyone else have problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups work correctly and their retention is correct, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both of my sites where I have tape libraries. (They expire 11 years from now.) I opened a ticket, but they were not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned them on and they work, but they didn't affect my problem with yearly backups, so for now I'm turning them off to see what happens with just weekly and monthly backups. These are my settings:
     Retain one recovery point per day for 7 days
     Retain one recovery point per week for 4 weeks
     Retain one recovery point per month for 12 months
     Retain one recovery point per year for 11 years
     Tape Job Options:
     Create Full Backup: Every Saturday
     Tape appending: Start full backup with an empty tape**
     **I found that now that daily backups are turned on, Nakivo will start writing daily backups to my full backup tape before I get a chance to eject it. This is not desirable, but there are no options to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and only offsite my weekend full backups.
    2 points
  10. In a multi-tenant environment, if the VPN drops for any reason (ISP problem, customer's AC problem, etc.), will the scheduled jobs still start on time, or will they wait for the Director's command to start? Thanks and best regards
    2 points
  11. The shortest way to integrate this without going vendor-specific is adding SAML 2.0. This would allow logins based on Microsoft 365 or other MFA vendors.
    2 points
  12. Could you please do a video for Nakivo B&R to Microsoft Azure?
    2 points
  13. 2 points
  14. It is something that I may be interested in, but I would like to see a "how to" video on the process on your YouTube channel. I saw that you have a "how to" video on Amazon EC2 backups (which is old now, by the way).
    2 points
  15. Thank you. Someone from NAKIVO Support helped me already. This is what I did to increase the Java memory of the NAKIVO application (a shell sketch follows below this item):
     1. Stop the NAKIVO service from the AppCenter.
     2. Connect to the Synology via SSH as "root".
     3. Edit the nkv-dirsvc file (usually located at /volume1/@appstore/NBR/nkv-dirsvc) and replace:
     SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx320m"
     with:
     SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx640m"
     4. Start the NAKIVO service.
     Thank you. Juraj.
    2 points
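     A minimal sketch of step 3 above as shell commands, assuming the default file location on the Synology; back up the file first and check the result before restarting the service:
       cp /volume1/@appstore/NBR/nkv-dirsvc /volume1/@appstore/NBR/nkv-dirsvc.bak
       sed -i 's/-Xmx320m/-Xmx640m/' /volume1/@appstore/NBR/nkv-dirsvc
       grep Xmx /volume1/@appstore/NBR/nkv-dirsvc   # SVC_OPTIONS should now end with -Xmx640m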
  16. Hello, thanks for the support, it works. Regards, Francesco
    2 points
  17. Our team is excited to announce the official release of NAKIVO Backup & Replication v9.1! The most anticipated features are finally available: Backup to Tape, Linux Server Backup, Instant Verification and Windows Workstation Backup. The fully-functional Free Trial version is here https://www.nakivo.com/resources/releases/v9.1/
    2 points
  18. NAKIVO Backup & Replication v9.0 allows you to perform an incremental, application-aware backup of Windows Servers. With our solution, you can instantly recover files as well as application objects! Download our full-featured free trial to try out NAKIVO Backup & Replication for yourself: https://www.nakivo.com/resources/releases/v9.0/
    2 points
  19. Hi. Before I update to DSM 7.0.1-42218 Update 2, I was wondering whether it will cause issues with Nakivo 10.4.0 (build 56979 from 30 Jul 2021). Do I need to update Nakivo first?
    1 point
  20. @Official Moderator Hi there, it has been a while since your post. Could you please tell me what the status of this request is?
    1 point
  21. @darkhorse Hi! We have always used Microsoft 365 plans because they have a lot of functionality. Even the cheapest options offer many different services and apps, while the pricing is much lower than getting them separately. If you are on a budget, check the most affordable plan that includes Exchange Online.
    1 point
  22. I have a question, mostly for sysadmins or anyone in a related role. How do you monitor your environment? What tools do you like? I feel that my company needs an upgrade in terms of monitoring potential issues. Thanks for sharing your experience and any relevant suggestions.
    1 point
  23. @Martijn and @Jason! Thank you for your suggestion. I've escalated your request to management, and it is being considered. As soon as I have any information, I will let you know!
    1 point
  24. Hello, it looks like it's not possible to add more than one schedule per attach and detach operation. I know I can do that with the CLI, but that means a script with a readable password has to be stored somewhere. So it would be nice to have the same facility for adding schedules as we have for backup jobs. If there is really no way to do that, is it planned on the roadmap? Thanks.
    1 point
  25. Hi @Moderator, I don't want to encrypt the partition; I would like to set up encryption for folders as described in the Help Center: https://helpcenter.nakivo.com/display/KB/How+Backup+Repository+Encryption+Works "The ecryptfs package provides encryption of folders. Ubuntu: When you create a Backup Repository on an Ubuntu machine, you can enable encryption for that Backup Repository. The cryptsetup approach will be used for devices and partitions; the file system of the encrypted device/partition will be transformed to ext4. At the same time, the ecryptfs approach will be used for encrypting folders." (A sketch of ecryptfs usage follows this item.) Thanks, Mario
    1 point
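     A minimal sketch of what ecryptfs folder encryption looks like on Ubuntu, for illustration only; the /srv/nakivo-repo path is an example, and this is not necessarily the exact mechanism NAKIVO applies internally:
       sudo apt install ecryptfs-utils                              # package mentioned in the Help Center excerpt above
       sudo mount -t ecryptfs /srv/nakivo-repo /srv/nakivo-repo     # example folder; mounts it over itself and prompts for passphrase and cipher options
       sudo umount /srv/nakivo-repo                                 # once unmounted, the data on disk stays encrypted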
  26. Thanks for the link, Mario. For my use I still really like global deduplication, for the space/cost savings, and indeed it might be necessary given my storage space requirements. I've recently upgraded from an older Dell R710 6-disk setup to a newer Dell R730 8-disk setup, and switched from SATA to SAS drives. I definitely recommend SAS over SATA, especially for VM Flash Boot and Screenshot Verification. It's worth the extra $$ IMHO. The firmware quality is usually better too, as is the SMART reporting.
     Previously with SATA I found it too slow to have "Enforce explicit file system sync" on. The SATA bus is only half-duplex and only 6 Gbps, vs the full-duplex 12 Gbps of SAS3. Now with my new setup I have explicit filesystem sync turned on and still get about 1.2-1.5 Gbps speeds, which I'm happy with. It was terrible on the old system, so I had it off there. With my old setup (without filesystem sync) I did have one corruption on a single VM which has about 2 TB of data: a Flash VM Boot would always come up with disk errors and would take forever to try and fix those on boot with checkdisk, which meant Screenshot Verification wouldn't work for that one, but my other 8 VMs were fine.
     My main repo is now a high-quality TrueNAS Core 12.0-U5 ZFS setup with enterprise SAS drives and ECC buffered RDIMMs, and the ZFS dataset, shared via NFS to my Nakivo Director on Ubuntu 18.04, respects the explicit filesystem sync setting set by Nakivo. I have everything on a power-conditioning UPS, so I'm hoping I don't have any further repo corruptions. I put a 2 TB NVMe drive in as L2ARC for my ZFS pool, hoping that gives a nice speed boost to VM Flash Boot and Screenshot Verification, even after months of backups (and dedup complexity). Another thing I did was turn off compression in Nakivo, as my NFS datastore uses LZ4 ZFS compression natively, so I avoid double-compression issues. (A ZFS sketch of these settings follows this item.)
     Now that Nakivo only has to worry about dedup, and not dedup + compression, I am hoping that my new setup will be super reliable, even with global dedup on. I really think global dedup is still fine to use from a reliability standpoint, but you just have to manage the setup with more care (quality ECC RAM, quality UPS, high-quality (SAS) drives, a high-quality drive controller, filesystem sync, optimal compression settings, keeping the BIOS and controller/drive firmware up to date, etc.). Anyway, I'll see how I go.
    1 point
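     A minimal sketch of the ZFS-side settings mentioned above (native LZ4 compression, sync honoured, NVMe L2ARC), assuming an illustrative pool "tank", dataset "tank/nakivo-repo", and NVMe device nvd0:
       zfs set compression=lz4 tank/nakivo-repo     # compress natively in ZFS so compression can stay off in Nakivo
       zfs set sync=standard tank/nakivo-repo       # honour explicit filesystem sync requests coming in over NFS
       zpool add tank cache nvd0                    # add the NVMe drive as an L2ARC read cache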
  27. Hi Gavino, I can fully understand your annoyance with the documentation and the labelling in the GUI. I have already asked some questions about the new storage system; see the following link:
    1 point
  28. My bad, it was my monitoring agent (Check_mk) that stopped responding, not Nakivo.
    1 point
  29. Hello, we need to be able to encrypt the backup repositories that are stored on Windows servers. Is this something that is being considered for future Nakivo versions? We would like to have safe backups even if the Nakivo console were compromised by hackers.
    1 point
  30. Hi, regarding the lines "# running from inetd" and "# Port 22": what is inetd, a program? I do it via PuTTY. Thanks
    1 point
  31. Hi, oh how nice that I found your tutorial. After updating to 10.3, not only did things not work, everything simply broke and I could not do anything. I don't know what caused it, but I'm going to reinstall using your guide! Thank you so much.
    1 point
  32. Dear Moderator, as stated above - it worked for me. I just moved the installer from root (/) to /home and there I could start the installer without problems. Thanks a lot.
    1 point
  33. That was done this morning, with a link to the URL of this discussion in the summary, as well as the question that started this.
    1 point
  34. Hi Andrea, I don't know if it works with ARM, since I don't use QNAP. According to the release notes for 10.3, you lose some functions if you are not using an Intel CPU. But don't worry, just create an SMB/CIFS share on the QNAP and use it this way: https://helpcenter.nakivo.com/display/NH/Backup+Repository+on+CIFS+Share Hope this helps, Mario
    1 point
  35. Hello, I think it would be great if you could add a reporting feature to the product. Nakivo is a key part of my DR strategy, and being able to have reports that show, for example, how long it took to restore backups, perform backups, etc. would help when talking to business stakeholders.
    1 point
  36. It's done, and it doesn't change anything. I sent an email updating #80384 with the install.log files.
    1 point
  37. Hello, I already sent you a support bundle and I am in contact with support. I'm trying the forum to see if someone has already had this problem.
    1 point
  38. Hello, just like in the title: can you consider adding the year to the file/directory modification date shown in the restore individual files window? Having only the day and month makes it not so obvious when some files were last modified. Thanks
    1 point
  39. I can tell you how it works manually or how it works in the R1Soft / Idera backup software (basically it is the same way). I can do it here or in a ticket.
    1 point
  40. In Nakivo I see my Wasabi repo is 1.33 TB, but when logging into Wasabi, there is 12 TB there. Why?
    1 point
  41. There are 3 options for overwrite behavior:
     1. rename if the item exists
     2. skip if the item exists
     3. overwrite the original if the item exists
     Can we request a few more options? For example: skip files older than a specific date. Also, please fix the "skip if item exists" option: it only compares the folder name and does not go into the files inside if the folder exists. There should be an option to choose between skipping the folder if it exists or skipping individual files if those files exist. That would be helpful for us when doing recovery. Thanks
    1 point
  42. I also would like to know how to get 9.4.2, because I don't want to update to v10 yet; I want to wait a while. I tried v10, but it has very bad performance issues.
    1 point
  43. Okay, that's what I thought, but I wanted to confirm.
    1 point
  44. Thank you for your suggestion. We have forwarded your request to our Product Team for possible further implementation.
    1 point
  45. 1 point
  46. This article will surely help you: https://www.nakivo.com/blog/how-to-patch-esxi-using-vmware-update-manager
    1 point
  47. 1 point
  48. Is there something special that I need to do to my transporter to get it connected to a tape drive? I'm working on this in a demo environment before I roll it into production. I have a Windows transporter set up and the machine is hooked up to a VTL, but I don't have a tape "tab" under settings.
    1 point
  49. Hi, on an ESXi server I had trouble with the snapshot of one VM (also when I take snapshots from the VMware interface, so it's not a Nakivo issue). Since I don't have the possibility to take the server offline, I removed the VM from the backup job but left the data on the repository. Then I added the VM directly as a "physical server" to my inventory and created a new backup job to the same repo. Before I started the new job, I had 1.03 TB of free space; the VM has a disk size of 200 GB and about 150 GB used. After the new backup I had 900 GB of free space (100 GB used by the new backup). According to your website, you have "Global deduplication for economical storage space utilization" in your product. So why does a second backup of the same VM take this much space? I just use a different way to back up the same data... regards Mario
    1 point
  50. Yes, it is possible to do so and delete backups in bulk, for example backups that do not belong to any job, backups older than a certain date, etc. You can refer to the NAKIVO User Guide article for more information ( https://helpcenter.nakivo.com/display/NH/Deleting+VM+Backups#DeletingVMBackups-DeletingBackupsinBulk )
    1 point