NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 04/25/23 in all areas

  1. Yes, native support for Proxmox VMs is under investigation. However, you can already use agent-based backup for Proxmox. This should allow you to back up and recover Proxmox data the way you would back up your Windows or Linux machines. We’ll share the workflow here soon. Stay tuned!
    6 points
  2. Darn, was hoping to test it out already. The native backups in PVE are okay but lacking some key features.
    5 points
  3. Does anyone know if NAKIVO is planning to support Proxmox backup? I'm looking for an alternative to the built-in Proxmox Backup Server.
    4 points
  4. Dear @Argon, Thank you for your honest feedback. This helps us understand what our users need and how to improve our product moving forward. We have tried to be as clear as possible that this is only an agent-based approach to backing up Proxmox VE, and wanted to inform users who may not be aware of this approach in NAKIVO Backup & Replication. As for the native Backup Server Tool, it may have some advantages but it lacks several important capabilities such as multiple backup targets (for example, cloud) or recovery options (for example, granular recovery of app objects). We are continuously working on improving our software solution, and we are investigating native Proxmox support. Thank you once again for your input. Best regards
    4 points
  5. We’re excited to announce that NAKIVO has expanded the range of supported platforms to include Proxmox VE:
    4 points
  6. I asked the NAKIVO sales team about Proxmox support recently. They said native support is on their roadmap but no ETA yet.
    4 points
  7. Hi, I have a problem that appears to be a version incompatibility. I am running the most recent version of NAKIVO, 10.11, and I have two NAS devices, one with Transporter version 10.8 and another with version 10.5. When I try to add them as nodes, I get the following error. My planned solution is to downgrade to version 10.8 or similar, but I can't find the installers on the official website. Could you help me get past versions of NAKIVO? If you have any other solutions or suggestions, I'm open to trying them.
    2 points
  8. Hello @Ana Jaas, thanks for reaching out. To be able to provide the steps to take, we need more information about your NAS devices. Could you please provide us with the specifications of your NAS devices? We'll need details such as the model, operating system version, and CPU version. This information will enable us to guide you through updating the Transporter to match the version of the Director. Alternatively, you can navigate to Expert Mode: https://helpcenter.nakivo.com/User-Guide/Content/Settings/Expert-Mode.htm?Highlight=expert mode and enable system.transporter.allow.old to address the compatibility issue. If you find these steps challenging or would prefer direct assistance, we're more than willing to schedule a remote session to help you navigate through the process: support@nakivo.com. Looking forward to your response.
    2 points
  9. I think the developer made a fundamental mistake here. The repository should only be responsible for the storage type (disk or tape), where the data is stored (local, NAS, or cloud), and how the data is stored (compression and encryption). The backup job should determine how the data is structured: the "forever incremental" and "incremental with full" options should be in the job schedule section, and the retention policy should determine how long the data is kept. Deciding whether the data is "forever incremental" or "incremental with full" should be a function of the backup job itself, not of the repository.
    2 points
  10. Hi, the tape volume doesn't move back to its original slot when it is removed from the drive.
    2 points
  11. Also needing Proxmox support!
    1 point
  12. Hi guys, I'm having trouble exporting settings from NAKIVO 10.2 and importing them into version 10.11. I would like to reinstall version 10.2 to perform this action, but I can't find any repository of old versions. Is there no way to download previous versions of NAKIVO?
    1 point
  13. Hi guys. I've been searching for a few days and I can't find any reference to this error. I'm trying to import the configuration of a Nakivo 10.2 server to a new installation with Nakivo 10.11. Thanks.
    1 point
  14. Live Webinar: The Ultimate Guide to VM Backup and Recovery with NAKIVO. Special offer for attendees: *attend the webinar and get a chance to win a $50 Amazon eGift Card in one of three lucky draws during the webinar, or a $25 Amazon eGift Card for completing a quick post-webinar survey. Protecting data in virtual environments requires a thorough understanding of potential threats and vulnerabilities and a dedicated data protection solution for virtualized workloads. Register for our upcoming webinar, where we discuss the threat landscape and the best practices you need to protect your VMs, and discover how NAKIVO backup for virtualization platforms can provide the optimal solution for your data protection needs. What it covers: Overview: potential threats to virtual environments. Tools: NAKIVO backup for virtualization platforms. Tips: best practices for data protection in virtual environments. Real-life scenarios: case studies of customers achieving their VM data protection goals. Technical demo: showcasing NAKIVO Backup & Replication capabilities for virtual workload protection. Q&A session: insights from experts in virtual environment protection. When to attend: Tuesday, March 5, 2024. Americas: 11 AM – 12 PM EST. EMEA: 5 PM – 6 PM CET. REGISTER NOW. *Terms and conditions apply
    1 point
  15. Hello everyone. Like many others, we are currently looking for alternatives to VMware. Our current favorite, and that of many others we have seen, is XCP-ng in combination with XOA. Is there any support planned from NAKIVO? Kind regards, Alex
    1 point
  16. +100 for NAKIVO (on QNAP) to work with XCP-ng! Is any roadmap planned?
    1 point
  17. Hi, I'm new to this forum, so excuse me if this topic has already been discussed (I didn't find anything, though). My company hosts and supports multiple customers, both in our hosting and on-prem environments. I've worked with NAKIVO before, but only on on-prem installations with one setup per customer (local vCenter and ESXi access). What I would now like to do is manage multiple customers from one central Director. From what I can see in the docs, the setup should then consist of a Director with transporters and repositories centrally, a transporter at the customer site with Direct Connect enabled, and agents on the physical machines. But the main problem is that we need to open ports in the customer firewall to the transporter server. And if a customer doesn't allow access to vCenter and only allows agent-based backups, we still need to add a transporter and open the firewall to it. Is that correct? Is there no alternative that lets agents talk to our transporter and/or Director centrally without opening any inbound firewall ports on the customer side? I hope my question makes sense.
    1 point
  18. In the newest version, there are no retention tabs (for setting up the grandfather-father-son option) anymore. How can I switch to the legacy retention method?
    1 point
  19. Hello, are there any plans for supporting Proxmox in the future? Regards, Oliver
    1 point
  20. We have an issue where our transporters are not passing data at the expected 1 Gbps rate. I have reached out to NAKIVO with no response yet, but I wasn't sure if there is some sort of software hard limit. I have a copy job that runs at 1 Gbps, but anything using the VMware transporter is bottoming out at around 450 Mbps.
    1 point
  21. Hi, I'm currently backing up Exchange Online through NAKIVO, and I also want to keep a copy of these backups offsite for redundancy. I can't find an easy way of doing this (without extra licenses, that is). The only method I can use reliably is creating another backup job with a different transporter at my offsite location and running the backup again. While this works well, I'm duplicating my license costs. Is there a better way of doing this, please?
    1 point
  22. Good morning, I have problems with backups in nakivo22. I started working at a government office, and the previous technicians had nothing organized and were unreliable. After I changed the password of the VMware user, NAKIVO reported that it cannot access VMware to make backups and needs new credentials. Where can I reconfigure the credentials in NAKIVO so that it can access VMware again and complete the backups? nakivo22 is installed on Ubuntu Linux in console mode. Thanks!
    1 point
  23. The current software doesn't give us the option to control the number of synthetic full backups. The best option I can find is to create a synthetic full every month. Most of our clients are required to store backup data for at least one year, and some clients even require backups to be kept for 3-10 years. With the current options, one-year retention creates 12 synthetic full backups. Even though a synthetic full backup doesn't read from the protected server (which saves backup time), it still constructs a full backup from the previous incrementals and eats up storage space. If one full backup takes 1TB, one-year retention means we need at least 12TB to store the backups. Can NAKIVO add a feature to set the maximum number of synthetic full backups, so we can control how many full backups exist in the entire backup storage? We have clients with large file servers where a full backup is more than 3TB.
    1 point
  24. I think NAKIVO should change the way the software gathers data from the server. If the software allows you to exclude files from the file system, that means the software is looking at the data changes in each file. This is the slowest method of backup, because the backup process needs to walk through every file in the file system to determine the changes. I think that is the main reason the physical server backup process takes a very long time even if only a few files have changed: it spends most of its time walking files with no changes. The best way to back up the data on a disk is block-level backup. The software should take a snapshot of the data blocks at the disk level instead of the individual file level: slice the disk into blocks (millions or billions of them), look for changes in each block, and only back up the changed blocks. This method does not care which files have changed; every time a file changes, it changes data in the blocks where the file resides, and the backup software just backs up the changed blocks. This should be the best way for NAKIVO to speed up the backup process. I normally put the page file, the printer spooler folder, and any files I don't want to back up on a separate volume or disk in the Windows system, then skip that disk/volume in the backup job to reduce the changes that need to be backed up.
    1 point
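The block-level approach described in the post above can be sketched in a few lines. This is an illustrative example under assumed parameters (the 4 MiB block size and function names are my own), not NAKIVO's actual implementation: it splits a disk image into fixed-size blocks, hashes each block, and compares the hashes against the previous run to find which blocks need backing up.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB per block; an illustrative choice

def block_hashes(path):
    """Return one SHA-256 digest per fixed-size block of the image file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(old_hashes, new_hashes):
    """Indices of blocks that are new or whose hash differs since the last run."""
    changed = []
    for i, digest in enumerate(new_hashes):
        if i >= len(old_hashes) or old_hashes[i] != digest:
            changed.append(i)
    return changed
```

Only the block indices returned by `changed_blocks` would need to be read again and sent to the repository; a file-walking backup instead has to stat and compare every file, which is why the poster expects block-level tracking to be faster when few files change.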
  25. Hi, my NAKIVO jobs are working correctly. But I just realized that when I create a new job, or if I want to edit an existing job, I can no longer manually force a "Primary transporter" on the source and target hosts (the transporters are grayed out), and only the "Automatic" option can be selected. However, I need this manual selection mode to make things work, because for some reason, in my setup, a transporter has to run on the same machine as the ESX host for the backup to complete without errors. Can you let me know if there is a way to re-enable manual selection of the transporter? Lei
    1 point
  26. I have done those steps; unfortunately, the specific transporter items are still grayed out. EDIT: I found out that the items are grayed out if the transporter has been configured with the "Enable Direct Connect for this transporter" option. Would you mind explaining the benefits of using this option with automatic selection, versus unselecting this option and manually configuring the transporter at the backup job level?
    1 point
  27. Hi, this link doesn't work. Can you provide the correct link? I need this exact version for manual install: 1.0.9.0.75563. The latest version in the Synology Package Center is 10.8.0.73174. Thank you.
    1 point
  28. Hi all, we are thinking of replacing our current Microsoft DPM backup system with NAKIVO, and I have many questions about the storage configuration and the technology NAKIVO uses. The idea is to have a server with 40TB of SSD storage for disk backups; all the disks will be enterprise SSDs. To save space, we are interested in making synthetic full copies of our VMs, and this is where my questions begin, after my experience with Microsoft DPM: What file system does NAKIVO use for synthetic full backups? Is ReFS necessary to perform synthetic full backups (does NAKIVO use ReFS block cloning)? Is there any guidance for configuring the file system underlying the NAKIVO repository, such as interleave size, block size, and file system choice? Our idea is to set up the server with Windows using Storage Spaces so we can expand the pool if needed. My experience indicates that without an efficient storage configuration, random-write backup speed drops dramatically. A competitor's forum is full of posts about performance issues, and the cause is undoubtedly poorly configured storage. ReFS uses a large amount of memory when deleting large files, for example during a purge of old backups; I have seen it use up to 80GB of RAM just to perform the purge, due to storage performance. ReFS's uncontrolled metadata restructuring tasks and cache utilization also bring down performance. Thank you very much for clarifying my initial questions; surely after receiving your answer I will have new ones. Regards, Jorge Martin
    1 point
  29. NAKIVO B&R has the great option of backing up to Wasabi cloud storage. However, these backups are unencrypted. Is it possible to add an option to encrypt them?
    1 point
  30. Thank you for your answer. I'll try with the instruction reported in Section 2. Elia
    1 point
  31. When you deploy a new instance of NAKIVO or create a new tenant in multi-tenant mode, the item creation order is inventory, node, then repository. But that does not make sense if the only inventory I need to add is accessed through Direct Connect; in that case I need to add the nodes first. Please change the creation order during first start to node, repository, then inventory, or allow us to skip the inventory step.
    1 point
  32. As per another post, I am in the process of moving my Nakivo backups from older Data Domains to Synology NAS arrays. The process I plan to use is: Install Nakivo Transporter on the Synology NAS. Register Synology Transporter with Director. Create a new "local folder" repo on the Synology transporter. Configure a backup copy job to mirror the existing DD repo to the Synology Repo. Delete the backup copy job, as you can't edit the destination target of the backup job if the copy job exists, but tell it to "Keep existing backups". Edit the VM backup job to point the VMs to their new location (this is a pain when you have a lot of VMs, wish there was an easier way to do this or have Nakivo automatically discover existing backups). Run the VM backup job to start storing new backups on the Synology repo. I ran a test and this worked fine, except all of the old recovery points that were copied by the backup copy job in step #4 had their retention set to "Keep Forever". I'm guessing this occurred when I deleted the copy job but told it to keep the backups. I'm assuming this is operating as intended? With 200 VMs and each having 9-11 recovery points, I can't manually reset the retention on 2,200 recovery points to fix this. I understand why it might have gotten changed when the backup copy job was deleted, but the Director wouldn't let me re-point the backup job until those copied backups were no longer referenced by another job object. I can use the "delete backups in bulk" to manually enact my original retention policy, but that will take 3 months to run the cycle. Any ideas?
    1 point
  33. Hello, I just updated to VMware 8.0 U2. Will this be integrated in NAKIVO 10.10? I have no more connection to my VMware server from NAKIVO. Thanks, Julien
    1 point
  34. Hello, we are evaluating a switch from our existing M365 backup solution to NAKIVO. Are there any plans to support backup of Exchange Online mailboxes with modern authentication? From the release notes of 10.9, I saw that backup of Exchange Online mailboxes is not supported yet, and in the user guide I found that the backup user needs to be excluded from conditional access. I know that in the past there were several limitations on the Microsoft side, with APIs not supporting modern authentication, but as far as I know, those have been fixed. Is this perhaps already implemented for the upcoming 10.10, or are there any other plans? Regards, Thomas Kriener
    1 point
  35. I currently have a NAKIVO installation backing up to Dell Data Domains but am transitioning to Synology FS6400s. The transporters that control my backup repositories have the 'ddboostfs' plugin installed and mount the appropriate storage locations on the local Data Domain. I recently got my new Synology FS6400s online in my lab. After some web searching, I was able to find out how to manually update the NAKIVO Transporter service to 10.9; only 10.8 is available in the public channel, and it's incompatible with a 10.9 Director. The fact that I had to do this already makes me wary of running the Transporter service on the array itself, even though we specifically purchased the arrays with 128GB RAM to ensure adequate resources. Anyway, I created a forever-incremental repo on each array and testing was successful, though my Data Domains use incremental with full backup and rely on DDOS for dedupe. However, I am starting to question the way I was going to use my Synology to replace my Data Domains. I wasn't aware that the Synology arrays support data deduplication natively, which is why I loaded the NAKIVO Transporter service in order to reduce my storage footprint. Using NAKIVO's deduplication requires a period of space reclamation that is historically dangerous should the process be interrupted. Given what I now know, I'm wondering how others have used Synology arrays to store their backup data. I myself question the following: 1. Should I still use the NAKIVO Transporter service on the array, configure my repo(s) for incremental with full backup, and turn on Synology's data deduplication for the volume hosting my backups? 2. Should I consider forgoing the Transporter service on the array, configure NFS mount points to my virtual transporter appliances, and turn on Synology's data deduplication for the volume hosting my backups? 3. I created a folder (/volume1/nakivo) without using the Shared Folder wizard, and my repo stores its data there. I've seen prompts indicating that information stored outside of "Shared Folders" can be deleted by DSM updates. Will future patches potentially cause my NAKIVO data to be removed since I created the folder myself? I have no reason to share out the base /volume1/nakivo folder if the Transporter service is local to my array. Other advice on using Synology and NAKIVO together is much appreciated. P.S. After VM backups are taken, we perform backup copies across data center boundaries to ensure our backup data is available at an off-site location for DR/BCM purposes. By placing the Transporter service on the array itself, we figure this replication process will be mainly resource-driven at the array level (with network involvement to move the data), which will lighten our virtual transporter appliance footprint. Currently, our replication process must send data from the local DD to a local transporter VM, which then sends the data across an inter-DC network link to a remote transporter VM that finally stores the data on the remote DD. The replication process uses a good chunk of virtual cluster resources that could be better directed elsewhere.
    1 point
  36. Hello, I have a Synology DS1618+ with the latest DSM (DSM 7.2-64570 Update 3). I've tried the above procedure, but the update doesn't work, and the virtual appliance can't connect to the transporter because it keeps reporting that the Transporter is out of date. When will the update package be available, either for automatic or manual update? Can you help me, please? Thanks, Dalibor Barak
    1 point
  37. Ok, thank you for the information. That's disappointing, as now I need to find a download for the Windows version of NAKIVO, like https://d96i82q710b04.cloudfront.net/res/product/NAKIVO Backup_Replication v10.9.0.76010 Updater.exe, but one that matches the transporter version available on Synology. Alternatively, do you have a download link like the one above but for just the Synology Transporter version, so I can install it via SSH? Specifically, it's transporter version 10.8.0.n72947. Thanks in advance
    1 point
  38. I have now sent the Bundle to Support. The contact email is ti@memphisbr.com
    1 point
  39. Hello again. I spoke too soon: I restarted one final time and we are back to "Waiting" and 100% CPU load. I created a support package for you guys to check out. For now the quick fix will do: restarting the services.
    1 point
  40. Hello again. So I doubled the heap size parameters twice. The first change did not fix the problem, but the second seems to have fixed it. I changed the manual IP back from .204 to .203 for testing and restarted the VM, and I can now access the web UI. For anyone reading this in the future, here is how to change the heap size. Go to the VM console in ESXi:
1. Manage NAKIVO services -> Start/Stop Services -> stop both the Director and Transporter services for good measure.
2. Go back to the Main Menu -> Exit to system console -> log in with your user (nkvuser in my case).
3. Run sudo -i and enter your password again.
4. Install nano for text editing: sudo apt -y install nano
5. Run cd /etc/systemd/system, then: sudo nano nkv-dirsrvc.service
6. Edit the file. Move down with the arrow keys to the line beginning with Environment="SVC_OPTIONS..." and move right until you see the heap size parameters. Edit the parameters. When finished, press CTRL+X, then Y, then Enter.
Reboot the VM, done. I hope this can help whoever needs it in the future. Thanks for the help and have a great day.
    1 point
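For those who prefer to script the edit from the post above instead of using nano, here is a minimal sketch. It is a hypothetical helper, not an official NAKIVO tool: it doubles every -Xms/-Xmx megabyte value found on Environment= lines of a systemd unit file's text. The option format is assumed from typical JVM flags; check your own nkv-dirsrvc.service before applying anything.

```python
import re

def double_heap_options(unit_text):
    """Double every -Xms/-Xmx megabyte value on Environment= lines of a unit file."""
    def double(match):
        flag, size_mb = match.group(1), int(match.group(2))
        return f"{flag}{size_mb * 2}m"

    out_lines = []
    for line in unit_text.splitlines():
        if line.startswith("Environment="):
            # Rewrite e.g. -Xmx4096m to -Xmx8192m on the service options line
            line = re.sub(r"(-Xm[sx])(\d+)m", double, line)
        out_lines.append(line)
    return "\n".join(out_lines)
```

After writing the result back to /etc/systemd/system/nkv-dirsrvc.service (as root), you would still need to run systemctl daemon-reload and restart the Director service for the new limits to take effect.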
  41. Hello! My virtualized environment is Hyper-V on Windows Server 2022. I have a VM with 2TB of data that I have a problem with in my cloud backup strategy. My cloud provider is Wasabi S3 object storage. All the other VM backups, which are smaller, are working fine; my issue is with the 2TB VM. The local job is configured to do a synthetic full every Friday, and the job that saves the backups to the cloud is configured as a backup copy job. But when the local job does the synthetic full, the copy to the cloud takes too long, and it impacts the other jobs. Any suggestions for backing up large volumes of data to cloud servers? What is the most efficient way to back up to the cloud?
    1 point
  42. Hey everyone! I've been using Microsoft 365 for my business, and I want to make sure I have proper backups in place to protect my data. I recently came across the NAKIVO blog, and I was wondering if anyone here could help me understand how to set up backups in Microsoft 365. Any guidance would be appreciated!
    1 point
  43. Another approach to consider is Microsoft 365's native backup features. My recommendation is to watch this useful webinar. The NAKIVO blog might have more information about it.
    1 point
  44. Hello, when I try to install NAKIVO Transporter 10.8.0, previously downloaded from the NAKIVO partner website, the package in App Center on my QNAP TS-230 gives me an error: "Failed to install, the installation package is incompatible." But according to the documentation on NAKIVO and QNAP TS-230 compatibility, it should work... QNAP firmware version: QTS 5.0.1.2376
    1 point
  45. Hi, we have to periodically back up a few TBs of files (20) from a NAS to tape, and the amount is increasing considerably over time. What would be the proper licensing to get so that the pricing will not change based on the amount of data being backed up? Thanks
    1 point
  46. Hi, are there any updates on displaying tape cartridge capacity usage?
    1 point
  47. I fully agree with TonioRoffo and Jim. Deduplication efficiency and incremental forever have been, from the beginning, among the main reasons I chose NAKIVO as the number one backup solution for my customers. The obligation to periodically perform full backups was, for some of my customers, extremely tedious for obvious reasons of time and space consumption. Now I find myself having to destroy the repositories with all my backups and recreate them with the correct settings in order to use them in "incremental forever" mode, as I was always used to in previous versions of NAKIVO. I think the change was indeed a bad idea.
    1 point