NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 06/25/21 in all areas

  1. Hello, community! Let's gather general and more specific tips here on how to speed up backups. Any suggestions and comments will be helpful. Thank you!
    3 points
  2. Hello, @Loki Rodriguez, great idea! From my side, I can recommend reading these worthwhile blog posts and references: How to Increase VM Backup Speed; Increasing VM Backup and Replication Speed with Network Acceleration; Network Acceleration; Best Practices for Efficient Hyper-V VM Backup; Advanced Bandwidth Throttling; VMware Backup: Top 10 Best Practices; Hyper-V Best Practices for Administration; An Overview of Microsoft Office 365 Backup Best Practices
    3 points
  3. When I created this thread, I thought we would mostly be talking about network load and its critical impact on backups. (Update: yes, I have found this information in one of the references posted by @Official Moderator.) Reducing the network load is the first thing I would do when trying to increase backup server performance.
    2 points
  4. Hello, @Hakim, I think you should find out more about NAKIVO Backup for Ransomware Protection: https://www.nakivo.com/ransomware-protection/ Anti-ransomware software alone is not sufficient to protect your organization's data. I also recommend reading the best practices for ransomware protection and recovery: https://www.nakivo.com/ransomware-protection/white-paper/best-practices-for-ransomware-protection-and-recovery/ Sorry for going off topic, I just wanted to support your remark.
    2 points
  5. Hello, guys, two remarks from me: on the one hand, anti-virus solutions can slow down writing and reading data from a disk, but on the other hand, ransomware issues can cause performance degradation. So you should always keep a balance.
    2 points
  6. Hello, @tommy.cash, thank you for your post. Here is a list of questions you should ask yourself before creating the backup policy for your university: 1) Frequency of system/application backups. 2) Frequency of network file backups. 3) Frequency of email backups. 4) Frequency of desktop backups. 5) Storage. 6) Recovery testing. About Microsoft 365 backup policies: there are built-in configuration options in your account that can help prevent data loss. However, under Microsoft's Shared Responsibility Model, it is also necessary to back up your Microsoft 365 data with a third-party solution such as NAKIVO Backup & Replication. Please check this useful how-to guide on setting up Microsoft 365 backup policies: https://www.nakivo.com/blog/setting-up-microsoft-office-365-backup-policies/
    2 points
  7. Hi. When is VMware 7 Update 3 support planned?
    2 points
  8. @Bedders, Please try replacing "q" with "-q": zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class Let me know if it works for you!
    2 points
  9. Hi, @JurajZ and @Bedders! NAKIVO Backup & Replication uses the Apache Log4j library, which is part of Apache Logging Services. You can manually fix the CVE-2021-44228 vulnerability by removing JndiLookup.class, located in libs\log4j-core-2.2.jar. Note: if the libs folder contains log4j-core-fixed-2.2.jar instead of log4j-core-2.2.jar, the issue has already been fixed for your version of NAKIVO Backup & Replication.
     For Linux: go to the libs folder inside the NAKIVO Backup & Replication installation folder. To remove JndiLookup.class from the jar file, run the following command: zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
     For Windows: ensure you have the 7z tool installed. Go to the libs folder inside the NAKIVO Backup & Replication installation folder. Use 7z to open log4j-core-2.2.jar and remove JndiLookup.class from the jar file. Restart NAKIVO Backup & Replication.
     For NAS devices: if you are using a NAS, open an SSH connection to your device and locate the NAKIVO Backup & Replication installation folder here: ASUSTOR NAS: /usr/local/AppCentral/NBR; FreeNAS/TrueNAS (inside the jail): /usr/local/nakivo/director; NETGEAR NAS: /apps/nbr; QNAP NAS: /share/CACHEDEV1_DATA/.qpkg/NBR; Raspberry Pi: /opt/nakivo/director; Synology NAS: /volume1/@appstore/NBR; Western Digital NAS: /mnt/HD/HD_a2/Nas_Prog/NBR. Note: refer to the NAS vendor documentation to learn how to open an SSH connection to your NAS device.
     IMPORTANT: CVE-2021-44228 is a severe vulnerability. We strongly advise you to apply the manual fix as soon as you can; this is the best way to avoid the risk of a security breach. Please contact customer support if you require a custom build of NAKIVO Backup & Replication that includes the fix.
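     For reference, a minimal shell sketch of the Linux/NAS steps above (the path shown is the Synology example from the list; adjust it to your platform):
       # Example path for a Synology NAS; pick the right one for your device from the list above.
       cd /volume1/@appstore/NBR/libs
       # If the fixed jar is already present, nothing needs to be done.
       ls log4j-core-fixed-*.jar 2>/dev/null && echo "Already fixed - nothing to do"
       # Remove the vulnerable class from the jar (same command as in the post).
       zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
       # Restart NAKIVO Backup & Replication afterwards.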
    2 points
  10. Hi, @SALEEL! I am very sorry that we put you in such a position. The latest changes in DSM 7 were unexpected for everyone, and it makes us very sad that we can't provide you with our usual level of service. Our team has been doing everything to accelerate the process; however, at this point, there is nothing more we can do. I understand your disappointment, and I apologize for all the inconvenience it has caused you. One of our main values is customer satisfaction, which is why this situation is so frustrating. If only it depended on us! At the moment I can only offer you the following options:
     1. You can temporarily install NAKIVO Backup & Replication on some other host and use the NAS-based repository as a CIFS/NFS share. When DSM 7 is officially supported, it will be possible to install NAKIVO on the NAS again.
     2. Workaround for DSM 7: deploy a new NAKIVO installation somewhere else as a Virtual Appliance (VA) or install it on a supported VM, physical machine, or NAS; share the repository folder on the Synology NAS with DSM 7 as an NFS/CIFS (SMB) share; add this repository as an existing repository to the new NAKIVO installation; restore the configuration of the old NAKIVO installation from the Self-backup in the original repository.
     For now, we are working closely with Synology to prepare the release, and we are almost at the final stage. However, we expect that we will need a few more weeks until the final release. Again, I am very sorry for all the inconvenience!
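     To illustrate the NFS part of option 2, here is a minimal sketch of mounting the shared repository folder on the temporary host before adding it as an existing repository (the NAS address, export path, and mount point are hypothetical examples, not fixed values):
       # On the temporary Linux host: mount the NFS share exported from the Synology NAS.
       # 192.168.1.50 and the paths below are placeholders; use your own NAS address and export.
       sudo mkdir -p /mnt/nakivo-repo
       sudo mount -t nfs 192.168.1.50:/volume1/NakivoBackup /mnt/nakivo-repo
       # Then add /mnt/nakivo-repo as an existing repository in the new NAKIVO installation
       # and restore the configuration from the Self-backup.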
    2 points
  11. Great to get 2FA for the login. I just found the option "Store backups in separate files": select this option to enable this backup repository to store the data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance. Why is this more reliable? Does this have an effect on global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of NAKIVO; if there is an improvement here, it is very welcome!
    2 points
  12. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  13. Hi, I updated the VA to NAKIVO 10.6. I also have a Synology transporter with DSM 7.1-42661, but when I try to update the transporter, I get this error: "this packet require DSM 7.0-4000 or lower". Any solution? Thanks.
    1 point
  14. @Official Moderator Thank you for the fast response! This is exactly what I needed. I am already reading the blog post, and it is worthwhile.
    1 point
  15. 1 point
  16. Edit: we have an Enterprise Essentials license.
    1 point
  17. Please add Nakivo Transporter 10.6 to the Synology App Store
    1 point
  18. Hello, Thanks for the clarification. So when do you anticipate Synology will have the signed package deployed? Is there any hotfix or workaround for this?
    1 point
  19. I use NAKIVO in a multi-tenant configuration (MSP) for Office 365 backups only. I use the onboard transporter of the director VA (4 vCPU and 16 GB RAM) for all tenants. Each tenant has a SaaS repository configured on an NFS-backed VMDK. The backups run overnight and work fine. I noticed, however, that even when backups are not running, there is a transporter process that appears to do some housekeeping on the repo. Each thread that does this housekeeping generates ~3,000 IOPS over NFS, and more than one thread can run concurrently. As a result, my storage has a load of 3,000-25,000 IOPS (peaking at ~260 MB/s) continuously, 24/7. No space or inodes appear to be freed. I've only got 8 SaaS repos at the moment; what happens when I have 80? What is this background process doing that requires this much I/O? Is there some way I can tell if it is actually accomplishing anything? Is there a way to limit it to running outside business hours? -Mark
    1 point
  20. Hi, I have implemented a workaround. This works for me in the sense that it takes the load off my NAS and moves it to my flash storage. Perhaps it will be of help to someone else, perhaps not. Try this at your own risk.
     I stopped all tenants and the transporter service. I unmounted the existing ext4 filesystem that held all the SaaS repos and remounted it in another location. I upgraded the VA's virtual hardware to the current version (7.0 U2) and then added a new NFS-backed 10 TB VMDK (sda) and a 100 GB flash-backed VMDK (nvme0n1). I installed zfsutils-linux in the VA and created a zpool on the NFS-backed VMDK with metadata stored on the flash-backed VMDK, mounted in place of the original ext4 filesystem: zpool create nkv-saas -o ashift=13 -m /mnt/nkv-saas-repos /dev/sda special /dev/nvme0n1
     I then copied all the SaaS repos to the new filesystem using rsync and started the transporter service and the tenants. The difference is shown below.
     BEFORE (ext4 on NFS-backed VMDK): continuous 3K-25K IOPS over NFS. Get used space 181.0 GB: 14 min 36 sec 248 ms.
     AFTER (ZFS on NFS-backed VMDK with metadata on flash-backed VMDK): peaks at ~250 NFS IOPS during a backup, little to no load the rest of the time. Get used space 181.9 GB: 1 min 44 sec 278 ms.
     I'm using 23% of the 10 TB VMDK on my NAS and 29% of the 100 GB VMDK on my directly attached flash array, so the amount of flash storage required is trivial. I still think that the way NAKIVO tracks used space is naive at best. Walking the filesystem every 30 minutes to add up the space used by what could be hundreds of millions of files (I'm already up to 21M files) is a waste of system resources, even more so considering that the repo is a PostgreSQL database and all the files in the repo are managed by NAKIVO. A far more efficient approach would be to track the blob sizes in the database and, if needed, measure the sizes of the database and other files. -Mark
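     A minimal sketch of the migration sequence described above (device names and mount points mirror the post; the old-filesystem mount point, placeholder device /dev/sdX, and rsync flags are assumptions, and the zpool options are written in the standard option-first order):
       # Stop all tenants and the transporter service first.
       # Remount the old ext4 filesystem elsewhere (device name is a placeholder).
       umount /mnt/nkv-saas-repos
       mkdir -p /mnt/old-saas-repos && mount /dev/sdX /mnt/old-saas-repos
       # Create the pool: data on the NFS-backed VMDK (sda), metadata on the flash-backed VMDK (nvme0n1).
       zpool create -o ashift=13 -m /mnt/nkv-saas-repos nkv-saas /dev/sda special /dev/nvme0n1
       # Copy the SaaS repos onto the new filesystem, preserving ownership, permissions, and attributes.
       rsync -aHAX /mnt/old-saas-repos/ /mnt/nkv-saas-repos/
       # Restart the transporter service and the tenants once the copy completes.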
    1 point
  21. I have a very small VMware Essentials environment: 2 Dell R340 hosts, 1 socket each, backing up 3 Windows Server VMs running on VMware ESXi 7 U3 plus 1 Linux lab VM, ~3 TB footprint, and 2 Synology 4-bay RackStation RS820RP+ NAS units, 18 TB storage each, 36 TB total.
     I recently had a real-life disaster scenario. One of my older Synology NAS devices lost two drives out of 4 in less than 24 hours. A real-life lesson that a RAID array (or any redundant disk storage) is not a backup; it is a slightly more reliable place to store data that still needs to be backed up properly. Also, I had not been backing up properly. One of the Windows VMs serves a ~2 TB VHD, mostly video, pictures, and Office documents. This drive is trouble to back up. It averages about 3 GB of overall block change each day, and less than 50 MB a day in new bits. Backing this up at the block level using the VMware backend takes forever. By the time you compress it, dedup it, and write it, the full backup took 6-8 hours (again, not using the NAKIVO tools). The daily incrementals only took 6 minutes. Looking at the full backups, I thought, "that is slow, but not horrible." Restoring that file server took more than five full days, processing on brand-new hardware, at ~30 MB/s. During that time we were able to function as a company, but the pain was tremendous. I could restore individual files/folders upon request, but I could not launch that server until all of the blocks were in place. Needless to say, I am new to NAKIVO because I am looking for a better tool; I would like to avoid disasters in the future. I have only just installed NAKIVO Backup on my Synology devices.
     I think it makes sense to create a primary backup job that covers my 3 servers. I would exclude that one VHD from the primary backup job and back those blocks up at the file level from the Windows OS to a secondary target. I could then replicate the VHD to the other host as a live backup and produce a compressed, deduped daily copy that gets tucked away, safe from ransomware. Does that make sense, or am I missing something?
     Additional note: to save money initially, and because I was planning to use the free version of ESXi, I only bought one core per host. That makes fitting all of my active VMs on one host pretty hairy, and that is why I don't run a live replica of all 3 servers. If anyone would like to cure my ignorance, I am all ears. Thank you for your time. Greg
    1 point
  22. Greetings, in lieu of an explanation, I am posting, in part, my email to my sales agent at NAKIVO, Thomas Semerich: "Hi Thomas, I spent most of an hour with Support earlier today, and I must say, it was an entirely positive experience. I recently recovered from a serious device failure on our office network using a 'competing' tool. I set up the schema, but their technicians finished implementation and signed off on their side of our DR plan. That was important because I was in a project with them that included purchasing DR support ahead of time. Less than 60 days in their care, two out of four drives in a critical array dropped and I lost ~9 TB of shared storage. Mostly my fault, not theirs. I took responsibility for this network a year ago and had not checked on the age of those drives. It turns out they had been operational for 2+ years. I got unlucky, and 2 failed within 24 hours. On that datastore was a 3 TB VHD. Thankfully the rest of my data survived, but I was stuck with a backup of that partition that was 36 hours old. It took 134 hours, 5 full days, to stand up that Windows server because the data partition took that long to build. Thankfully that VM was only serving as a file server at the time, but I was building it out as our primary CAD SQL server, our Vault. If this had happened exactly the same way a month later, our company would have been without all of our engineering horsepower for a week. That would do serious harm to our company and would almost certainly result in my being fired. Within 15 minutes, the tech I was working with understood the story I outlined above. He then verbally simulated the event and, in real time, showed me three alternatives available through the NAKIVO platform that would have solved my issue. In one case, he used Universal Object Recovery to get my data back and shareable in less than 46 seconds. He was phenomenal at his job, and I wish I could call him out by name; unfortunately, I didn't write it down, and it does not appear in my email record with support, as far as I can tell. Having your platform available on Synology's OS is brilliant. The person who made that decision should be on a beach somewhere. Giving me a way to save some face with my team without downloading, installing, or paying for anything up front, without even rebooting one of my devices, is priceless. I will be writing up my recent experience in the coming days and will name NAKIVO's concept as the silver lining in an otherwise dismal story. While we might never be your most profitable client, I will volunteer to be a cheerleader for your tech team and your business model. I will be in touch in a few weeks to establish which license package to go with..." I am continuing my evaluation of NAKIVO's product, but I have nothing but positive input to go by so far.
    1 point
  23. Hi, I would like to create a dedicated backup user in vCenter with the minimal privileges needed to perform backups. Are there any recommendations?
    1 point
  24. Updating this thread for information. NAKIVO Support has kindly provided me with a 10.2 version of the Synology Director. I upgraded the Director over the weekend successfully but still have other transporters and offsite copy jobs to check.
    1 point
  25. Will check it out. Thanks!
    1 point
  26. @Garry.web I guess it is always worth a try in life, but in business, if you are satisfied with M365, why would you get into the drama of transferring all your data and work processes to Google? Makes no sense.
    1 point
  27. Hello. I've got two Synology SANs, with NAKIVO installed on both. One of them (SAN1) takes backups from vCenter and copies them to two repositories: the first one is local, and the second one is sent to SAN2 (the second Synology). In case SAN1 fails, I'd like to be able to recover backups from SAN2, but I'm not able to mount a local repository on SAN2 using the same folder that SAN1 is writing to. I get the error "The repository cannot be found within the specified location (/volume1/SecureIT/NakivoBackup)." I've tried changing the access rights on the folders, but no luck. Any suggestion or idea? Thank you very much!
    1 point
  28. @Official Moderator Hi there, it has been a while since your post. Could you please tell me what the status of this request is?
    1 point
  29. @Tayler Odling, it depends on what you need it for. But generally speaking, it's like playing with fire, as you get no encryption. It isn't that hard to switch; try to consider it.
    1 point
  30. @fio_fan, Thank you for the response and the blog post. It is very descriptive. I'll start learning more about VXLAN. Seems interesting.
    1 point
  31. 1. I have a backup repository on a NAS device that is mounted as an NFS share. 2. I have a Backup Copy Job with a retention policy of the last 3 recovery points. 3. However, in the backup repository you can see that the VM has 31 recovery points instead of the 3 that are configured. My problem is that all my repositories keep filling up.
    1 point
  32. Feeding back that changing "q" to "-q" worked, many thanks! It's probably worth mentioning that the email you sent round contained this error as well as the forum post - in case you've not been inundated with support calls about it already!
    1 point
  33. Hello, I just opened my Synology running DSM 7. In the installed packages, NAKIVO told me to apply an update to solve the issue, and NAKIVO is working again after the upgrade! (version 10.4.0.59271-3000) Michel
    1 point
  34. @Official Moderator Thanks for the update. I think we all understand that the change was unexpected from your side of things. We all just want some transparency on when we will be able to install NAKIVO on DSM 7. Thanks for giving us a rough timeline.
    1 point
  35. Hello, it looks like it's not possible to add more than one schedule per attach and detach operation. I know I can do this with the CLI, but that means a script with a readable password has to be stored somewhere. So it would be nice to have the same facility for adding schedules as we have for backup jobs. If there is really no way to do that, is it planned on the roadmap? Thanks.
    1 point
  36. Today we tested the 10.5 beta version and are still getting the "The VM cannot be processed. cannot find field: vmOpNotificationToAppEnabled (NoSuchFieldException)" issue - no backups at all.
    1 point
  37. No problem. Please use this reference to update a Windows-based installation: https://maglar.box.com/s/l34lsb0uuv1nc2cg1imzos8hg5depg5p If you have any additional questions, let me know.
    1 point
  38. I have to apologize, I wasn't clear enough: I'm on the Windows platform, NBR is 10.4 b58736... ...please post the relevant link! Kind regards
    1 point
  39. Hello to all of you. I've managed to install NAKIVO 10 in Docker on DSM 7, using the existing repository. I tested recovery and backup, and it's working fine. It could be a temporary solution. RJ
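     The post does not include the exact steps; below is a minimal sketch of one way such a setup can be approached, assuming the NAKIVO Linux installer has already been downloaded to the NAS. The image tag, container name, installer filename, and folder paths are hypothetical placeholders, not the poster's actual commands:
       # Run a plain Ubuntu container on the Synology, mapping the Director web UI port (4443)
       # and the existing repository folder into the container.
       docker run -d --name nakivo-bnr -p 4443:4443 \
         -v /volume1/NakivoBackup:/repository \
         -v /volume1/docker/nakivo:/installer \
         ubuntu:20.04 sleep infinity
       # Run the NAKIVO installer inside the container (filename is a placeholder).
       docker exec -it nakivo-bnr bash /installer/NAKIVO_Backup_Replication_Installer.sh
       # Afterwards, open https://<NAS-IP>:4443 and add /repository as an existing backup repository.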
    1 point
  40. Hi, everyone! When I open the vSphere Client, I keep getting a 503 Service Unavailable error. I don't understand what the problem is. Does anyone know a solution?
    1 point
  41. Is there any way to get on a notification list for when the updated product is released?
    1 point
  42. I got a new version from support, now it's working. Version: 10.4.0.r57249
    1 point
  43. @Mike Spragg, thanks a lot for your contribution to our blog! Sharing is caring
    1 point
  44. ok, thanks again Mike.
    1 point
  45. Thank you! I hit this problem pretty much straight away, as soon as 10.3 came out. Unfortunately, there is a downside insofar as you are weakening what was a hardened system, so hopefully this is fixed in 10.4 without the need to do this.
    1 point
  46. You may refer to the following NAKIVO article regarding this matter: https://helpcenter.nakivo.com/display/NH/File+Recovery
    1 point