NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 01/23/21 in all areas

  1. @Bedders, Please try replacing "q" with "-q":
     zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
     Let me know if it works for you!
    2 points
  2. Hi, @JurajZ and @Bedders! NAKIVO Backup & Replication uses the Apache Log4j library, which is part of Apache Logging Services. You can manually fix the CVE-2021-44228 vulnerability by removing JndiLookup.class from libs\log4j-core-2.2.jar. Note: If the libs folder contains log4j-core-fixed-2.2.jar instead of log4j-core-2.2.jar, the issue has already been fixed for your version of NAKIVO Backup & Replication.
     For Linux:
     1. Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
     2. Run the following command to remove JndiLookup.class from the jar file:
        zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
     For Windows:
     1. Ensure you have the 7z tool installed.
     2. Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
     3. Use 7z to open log4j-core-2.2.jar and remove JndiLookup.class from the jar file.
     4. Restart NAKIVO Backup & Replication.
     For NAS devices: If you are using a NAS, open an SSH connection to your device and locate the NAKIVO Backup & Replication installation folder here:
     - ASUSTOR NAS: /usr/local/AppCentral/NBR
     - FreeNAS/TrueNAS (inside the jail): /usr/local/nakivo/director
     - NETGEAR NAS: /apps/nbr
     - QNAP NAS: /share/CACHEDEV1_DATA/.qpkg/NBR
     - Raspberry Pi: /opt/nakivo/director
     - Synology NAS: /volume1/@appstore/NBR
     - Western Digital NAS: /mnt/HD/HD_a2/Nas_Prog/NBR
     Note: Refer to the NAS vendor documentation to learn how to open an SSH connection to your NAS device.
     IMPORTANT: CVE-2021-44228 is a severe vulnerability. We strongly advise you to apply the manual fix as soon as you can; this is the best way to avoid the risk of a security breach. Please contact customer support if you require a custom build of NAKIVO Backup & Replication that includes the fix.
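     To confirm the class is gone after applying either procedure, you can list the archive contents (a quick check, assuming the standard unzip tool is available on your system):
        # prints nothing once JndiLookup.class has been removed
        unzip -l log4j-core*.jar | grep JndiLookup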
    2 points
  3. Hi, @SALEEL! I am very sorry that we put you in such a position. The latest changes in DSM 7 were unexpected for everyone, and it makes us very sad that we can't provide you with our best service as usual. Our team has been doing everything to accelerate the process, but at this point, there is nothing more we can do. I understand your disappointment, and I apologize for all the inconvenience it has caused you. Customer satisfaction is one of our main values, which is why this situation is so frustrating. If only it depended on us! At the moment I can only offer you the following options:
     1. Temporarily install NAKIVO Backup & Replication on some other host and use the NAS-based repository as a CIFS/NFS share. When DSM 7 is officially supported, it will be possible to install NAKIVO on the NAS again.
     2. Workaround for DSM 7:
        - Deploy a new NAKIVO installation somewhere else as a Virtual Appliance (VA) or install it on a supported VM, physical machine, or NAS;
        - Share the repository folder on the Synology NAS with DSM 7 as an NFS / CIFS (SMB) share;
        - Add this repository as an existing repository to the new NAKIVO installation;
        - Restore the configuration of the old NAKIVO installation from the Self-Backup in the original repository.
     For now, we are working closely with Synology to prepare the release, and we are almost at the final stage. However, we expect to need a few more weeks until the final release. Again, I am very sorry for all the inconvenience!
    2 points
  4. Hi. When is VMware 7 Update 3 support planned?
    2 points
  5. Great to get 2FA for the login. I just found the option "Store backups in separate files": "Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does this have an effect on the global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of NAKIVO. If there is an improvement with this, it is very welcome!
    2 points
  6. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  7. Here is how to fix it... for QNAP:
     Log in to the QNAP via SSH as admin:
        cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
        [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./nkv-bhsvc stop
        Stopping NAKIVO Backup & Replication Transporter service:
        [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./bhsvc -b "<UsePassword>"
        [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./nkv-bhsvc start
        Starting NAKIVO Backup & Replication Transporter service:
     Use the Chrome browser to open https://<qnapip>:4443 and go to Settings / Transporters / Onboard transporter / Edit. Press F12, go to the Console, and enter the following JavaScript commands (paste them one by one and press Enter):
        var a=Ext.ComponentQuery.query("transporterEditView")[0]
        a.masterPasswordContainer.show()
        a.connectBtn.show()
     You get new fields in the browser: a "Master password" field and a "Connect" button. In "Master password", enter the password from the SSH command (the same one used in ./bhsvc -b "<UsePassword>"), then press the Connect button. Then refresh the Transporter and the Repository.
     by Boris
    2 points
  8. Is anyone else having problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups work correctly and the retention is correct, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both of my sites where I have tape libraries. (They expire 11 years from now.) I opened a ticket, but they were not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned that on, and they work, but they didn't affect my problem with yearly backups, so for now I'm turning it off to see what happens with just weekly and monthly backups. These are my settings:
     - Retain one recovery point per day for 7 days
     - Retain one recovery point per week for 4 weeks
     - Retain one recovery point per month for 12 months
     - Retain one recovery point per year for 11 years
     Tape Job Options:
     - Create Full Backup: Every Saturday
     - Tape appending: Start full backup with an empty tape**
     ** I found that now that daily backups are turned on, NAKIVO will start writing daily backups to my full backup tape before I get a chance to eject it. This is not desirable, but there is no option to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, which is also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and only offsite my weekend full backups.
    2 points
  9. In a multi-tenant environment, if the VPN drops for any reason (ISP problem, customer's AC problem, etc.), will the scheduled jobs still start on time, or will they wait for the Director's command to start? Thanks and best regards
    2 points
  10. The shortest way to integrate this without going vendor-specific is to add SAML 2.0. This would allow logins based on Microsoft 365 or other MFA vendors.
    2 points
  11. Hello. I've got two Synology SANs with NAKIVO installed on both. One of them (SAN1) takes backups from vCenter and copies them to two repositories: the first one is local, and the second one is sent to SAN2 (the second Synology). In case SAN1 fails, I'd like to be able to recover backups from SAN2, but I'm not able to mount a local repository on SAN2 pointing to the same folder SAN1 is writing to. I get the error "The repository cannot be found within the specified location (/volume1/SecureIT/NakivoBackup)." I've tried changing the access rights on the folders, but no luck. Any suggestion or idea? Thank you very much!
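     A couple of quick checks over SSH on SAN2 can help rule out a path or permission issue (a generic troubleshooting sketch; the path is taken from the error message above):
        # verify the folder exists and note its owner and permissions
        ls -ld /volume1/SecureIT/NakivoBackup
        # a repository folder should not be empty; if this prints nothing, the path points to the wrong folder
        ls /volume1/SecureIT/NakivoBackup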
    1 point
  12. Hi NAKIVO, is it possible to add an MFA option for the web interface? I have some customers where MFA is required for the backup. Thanks, Mario
    1 point
  13. @Official Moderator Hi there, it has been a while since your post. Could you please tell me what the status of this request is?
    1 point
  14. @Tayler Odling, it depends on what you need it for. But generally speaking, it's like playing with fire, as you get no encryption. It isn't that hard to switch; try to consider it.
    1 point
  15. 1. I have a Backup Repository on a NAS device that is mounted as an NFS share. 2. I have a Backup Copy Job with a retention policy of the last 3 recovery points. 3. In the backup repository itself, you can see that the VM has 31 restore points instead of the 3 that are set up. My problem is that all my repositories always fill up.
    1 point
  16. I get the following error when I try this:
      root@SynologyNAS:/volume1/@appstore/NBR/libs# zip q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
      zip warning: q.zip not found or empty
      zip warning: name not matched: org/apache/logging/log4j/core/lookup/JndiLookup.class
      Should I contact Support, or am I doing something wrong?
    1 point
  17. Hello, it looks like it's not possible to add more than one schedule per attach and detach operation. I know I can do that with the CLI, but that means a script with a readable password has to be stored somewhere. So it would be nice to have the same facility for adding schedules as we have for backup jobs. If there is really no way to do that, is it planned on the roadmap? Thanks.
    1 point
  18. Support bundle was successfully sent to support. (on Mon, 18 Oct 2021 at 10:14 (UTC +02:00) - 51 seconds ago, 35 MB)
    1 point
  19. Hi, @JurajZ! For the x86-based CPU QNAP NAS, please update your NAKIVO instance with the following latest builds to resolve the issue:
      1. Director + Transporter - https://drive.google.com/file/d/1d_3HB1SulpLzb4hTKuDfo-UvRDgQwLfI/view?usp=sharing
      2. Transporter only - https://drive.google.com/file/d/1LPnGno2DcYv3r9VQT5H7RutE3h_T_mXj/view?usp=sharing
      This article could be helpful when updating NAKIVO on QNAP - https://helpcenter.nakivo.com/User-Guide/Content/Deployment/Updating-NAKIVO-Backup-and-Replication/Updating-on-QNAP-NAS.htm#Updating2
      For other architectures, please let me know the exact model and firmware version of your QNAP NAS. If you have additional questions, don't hesitate to contact me!
    1 point
  20. No problem. Please use this reference to update a Windows-based installation: https://maglar.box.com/s/l34lsb0uuv1nc2cg1imzos8hg5depg5p If you have any additional questions, let me know.
    1 point
  21. Hello to all of you. I've managed to install NAKIVO 10 in Docker on DSM 7 using the existing repository. I tested recovery and backup, and it's working fine. It could be a temporary solution. RJ
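     For anyone trying to reproduce this, the general shape of such a container is sketched below. The image name and volume paths here are illustrative assumptions, not values from this post or from NAKIVO documentation; port 4443 is the Director web UI port used elsewhere in this thread.
        # hypothetical image name and paths; adjust to your own build or download
        docker run -d --name nakivo \
          --restart unless-stopped \
          -p 4443:4443 \
          -v /volume1/NakivoRepo:/opt/nakivo/repository \
          <your-nakivo-image>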
    1 point
  22. Hi, @darkhorse! I also had it once. In the end, it turned out that I had no disk space.
    1 point
  23. Doesn't Microsoft 365 back up the data? Why do you even need another solution for that?
    1 point
  24. Hi. I have a VMware server with six different applications installed. I am looking for free software that I can use to back up these applications in OVA format to a remote desktop computer. Is there any free software available? Do I have to shut down the applications before taking the backup?
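     One free tool that can do this is VMware's ovftool, which exports a VM to an OVA over the network. A minimal sketch (the host name, VM name, and credentials are placeholders; the VM must be powered off for the export, which also answers the shutdown question for this particular approach):
        # export a VM from an ESXi host to a local OVA file on this computer
        ovftool "vi://root@<esxi-host>/<vm-name>" "<vm-name>.ova"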
    1 point
  25. If I have a Basic license and want to explore the NAKIVO Backup tool using the API to integrate and automate its daily tasks, can I do that?
    1 point
  26. "action":"AuthenticationManagement","method":"login","tid":"1","type":"exception","message":"The API access is not licensed. Please contact support.","where":null,"cause":null,"data":null} this is the output when i do login using the API. Can you please tell me what is wrong I am doing here.
    1 point
  27. I have a fresh install of Ubuntu Server 18.04 with nothing but the following done to it before installing NAKIVO ("NAKIVO_Backup_Replication_v10.4.0.56979_Installer-TRIAL.sh"):
      sudo apt update
      sudo apt install nfs-common
      sudo apt install cifs-utils
      sudo apt install open-iscsi
      sudo apt install ntfs-3g
      I installed NAKIVO with the flags "--eula-accept -f -C" since I am using NFS instead of an onboard repository. You can see in 10.4 that multiple fields are missing. Where is the data storage policy ("Forever incremental", etc.)? Where is the deduplication option? I don't want compression in NAKIVO, as my TrueNAS storage does LZ4 compression already. Even though Data Size Reduction is set to Disabled, it still tells me that fast compression is set (even though I toggled that to Disabled before setting Data Size Reduction to Disabled). If I experiment and turn Data Size Reduction to Enabled, I am then able to change compression to Disabled and have that stick, but deduplication still remains off, and that matters greatly to me. Still no idea what my data storage policy is. I uninstalled 10.4 and installed 10.3 on the same box, and that is behaving correctly. Compare the screenshots. 10.4 is a hot mess for NFS right now and I simply cannot use it. You broke login wallpapers in 10.3, which was annoying, but this is simply horrible. Where are your testing procedures? A support bundle from 10.4 has been sent. I am running 10.3 now, so you'll have to replicate it in your lab, which should be easy to do.
    1 point
  28. 1 point
  29. Hi,
      # running from inetd
      # Port 22
      What is inetd, a program? I do it via PuTTY. Thanks
    1 point
  30. Correct, you didn't. In 7.0U2, they [VMware] upgraded/hardened the SSH security requirements. By making this change, you've reverted that VMware change.
    1 point
  31. Thank you! I hit this problem pretty much straight away as soon as 10.3 came out. Unfortunately, there is a downside insofar as you are weakening what was a hardened system, so hopefully this is fixed in 10.4 without the need to do this.
    1 point
  32. I have a question (suggestion) regarding licensing. Why don't you make an independent license server that the Directors could rely on regardless of the installation? Example: because I will have more than 800 transporters in the near future, I installed a multi-tenant setup to separate eleven tenants. With an independent license server, I could have installed several single-tenant Directors, one for each organization. With this architecture I would also secure my Directors (today, if I lose the multi-tenant Director (crash, cyber attack, ...), I block all my backups and restores). Likewise, it would make updates easier (not a big bang). Sorry for my approximate English.
    1 point
  33. Hello all, I have a problem: is it possible to make a backup of a physical server with Hyper-V installed? When I install the transporter on the Microsoft Hyper-V host, I see all the virtual machines and can back them up, but I can't back up the physical machine itself. So I tried to add it as Physical machine(s), but I get an error because I installed the transporter earlier.
    1 point
  34. Hi Boris, thank you for this trick. I did a reinstall of all the needed stuff; it took the complete day, but now it's working again. (Multi-tenant setup)
    1 point
  35. Hello, I dropped the old 'Onboard transporter' and created a new one. That worked for me.
    1 point
  36. Hello, I'd like to ask what the best way is to set up backup to the cloud - Amazon S3. I don't have much Amazon S3 experience but would like to keep the backups fast and the cost low, and I'm not sure how to proceed.
      * If I set up backups with incrementals and weekly full backups, my S3 costs will surely be low because I don't download anything. However, my weekend backups will be fulls and take considerable time. Correct?
      * If I set up backups with incrementals and synthetic fulls, does that work well with S3? Will my backups be faster, or will S3 use my local transporters to merge the backups and create high costs due to downloading from S3?
      * In a local test (not to S3) I saw that backup copy jobs ignore the synthetic setting and revert to full backups - in normal backup jobs it works. Is this true for S3 also?
      I'd love to read some advice from people running backups to S3 at this moment so I can make the right choice. Thanks!
    1 point
  37. The problem is fixed via a remote connection from the great support team... thanks for the fast fix.
    1 point
  38. Good day. When is VMware 7 Update 2 support planned?
    1 point
  39. Hi, thank you very much for the information. It is unfortunate that it is not possible in our version. But thanks anyway for the quick reply. Kind regards, Jens
    1 point
  40. Hi, I deleted the restore job because I "experimented" with the jobs. But I believe that I did the restore in "Production mode", so the same vc.uuid was used in the vmx file and the backup found the wrong VM (Murphy says hello!). I think the recovery mode was the reason. Kind regards, Jens
    1 point
  41. NAKIVO seems to run self-backups to all repositories automatically. In my case, some of my repositories are hard drives that are ejected and stored off-site. The issue is that NAKIVO automatically adds repositories as destinations for the self-backup. It would be great if it didn't do this, because the self-backup fails when the repositories aren't available and generates failure notifications.
    1 point
  42. The difference is that VMware offers dynamic memory support for any guest OS, and Hyper-V has historically supported dynamic memory only for VMs that run Windows. However, Microsoft added dynamic memory support for Linux VMs in Windows Server 2012 R2 Hyper-V.
    1 point
  43. 1 point
  44. I don't have a "C:\ProgramData\Package Cache\FC6260C33678BB17FB8B88536C476B4015B7C5E9" directory. I have already tried to repair C++ from Control Panel > Programs > Programs and Features. It does not work.
    1 point
  45. I can tell you how it works manually or how it works in the R1Soft / Idera backup software (basically it is the same way). I can do it here or in a support ticket.
    1 point
  46. I still had this issue... Support told me to exclude the backup folder, but they still did not answer why the files were flagged as false-positive virus files...
    1 point