Leaderboard
Popular Content
Showing content with the highest reputation since 08/30/23 in all areas
-
As per another post, I am in the process of moving my Nakivo backups from older Data Domains to Synology NAS arrays. The process I plan to use is:

1. Install the Nakivo Transporter on the Synology NAS.
2. Register the Synology Transporter with the Director.
3. Create a new "Local Folder" repo on the Synology Transporter.
4. Configure a backup copy job to mirror the existing DD repo to the Synology repo.
5. Delete the backup copy job (you can't edit the destination target of the backup job while the copy job exists), but tell it to "Keep existing backups".
6. Edit the VM backup job to point the VMs to their new location (this is a pain when you have a lot of VMs; I wish there were an easier way to do this, or that Nakivo could automatically discover existing backups).
7. Run the VM backup job to start storing new backups on the Synology repo.

I ran a test and this worked fine, except that all of the old recovery points copied by the backup copy job in step 4 had their retention set to "Keep Forever". I'm guessing this occurred when I deleted the copy job but told it to keep the backups. I'm assuming this is operating as intended?

With 200 VMs, each having 9-11 recovery points, I can't manually reset the retention on roughly 2,200 recovery points to fix this. I understand why the retention might have been changed when the backup copy job was deleted, but the Director wouldn't let me re-point the backup job until those copied backups were no longer referenced by another job object. I can use "delete backups in bulk" to manually enact my original retention policy, but that will take three months to run through the cycle. Any ideas?
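One hedged option for the 2,200-point problem is scripting against the Director's HTTP API instead of clicking through the UI. The login call below follows the JSON-RPC-style pattern from NAKIVO's API documentation, but the retention-related action/method names are hypothetical placeholders; check the API reference for your version before relying on them.

```python
# Hedged sketch: bulk-reset "Keep Forever" retention via the Director's
# HTTP API. The /c/router endpoint and login call follow the pattern in
# NAKIVO's API docs; the retention action/method names are HYPOTHETICAL
# placeholders -- verify against your version's API reference.
import requests

DIRECTOR = "https://director.example.local:4443"  # assumed hostname/port
session = requests.Session()
session.verify = False  # lab only; use a trusted certificate in production

def rpc(action, method, data):
    """POST one JSON-RPC-style call to the Director router."""
    resp = session.post(f"{DIRECTOR}/c/router", json={
        "action": action, "method": method,
        "data": data, "type": "rpc", "tid": 1,
    })
    resp.raise_for_status()
    return resp.json()

# Authenticate (action/method per NAKIVO's published API examples).
rpc("AuthenticationManagement", "login", ["admin", "password", False])

# HYPOTHETICAL calls: enumerate backups, clear the keep-forever flag.
backups = rpc("BackupManagement", "getBackups", [])  # hypothetical method
for backup in backups.get("data", []):
    for point in backup.get("recoveryPoints", []):
        if point.get("keepForever"):
            rpc("BackupManagement", "setRecoveryPointRetention",  # hypothetical
                [point["id"], {"keepForever": False}])
```

Even if the exact method names differ, looping over recovery points this way would beat resetting retention by hand.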
-
Hello, we are evaluating a switch from our existing M365 backup solution to Nakivo. Are there any plans to support the backup of Exchange Online mailboxes with Modern Authentication? From the release notes of 10.9 I saw that backup of Exchange Online mailboxes is not supported yet. In the user guide I found that the backup user needs to be excluded from conditional access. I know that in the past there were several limitations on the Microsoft side, with APIs not supporting modern authentication, but as far as I know those have since been fixed. Is this perhaps already implemented for the upcoming 10.10, or are there any other plans? Regards, Thomas Kriener
-
I currently have a Nakivo installation backing up to Dell Data Domains but am transitioning to Synology FS6400s. The transporters that control my backup repositories have the 'ddboostfs' plugin installed and mount the appropriate storage locations on the local Data Domain.

I recently got my new Synology FS6400s online in my lab. After some web searching, I was able to find out how to manually update the Nakivo Transporter service to 10.9; only 10.8 is available in the public channel, and it's incompatible with a 10.9 Director. The fact that I had to do this already makes me wary of running the Transporter service on the array itself, even though we specifically purchased the arrays with 128 GB RAM to ensure adequate resources. Anyway, I created a forever-incremental repo on each array and testing was successful, though my Data Domains use incremental with full backup and rely on DD OS for dedupe.

However, I am starting to question the method by which I was going to use my Synology arrays to replace my Data Domains. I wasn't aware that Synology arrays support data deduplication natively, which is why I had loaded the Nakivo Transporter service in the first place, to reduce my storage footprint. Using Nakivo's deduplication requires a period of space reclamation that is historically dangerous should the process be interrupted. Given what I now know, I'm wondering how others have used Synology arrays to store their backup data. I am weighing the following questions:

1. Should I still use the Nakivo Transporter service on the array, configure my repo(s) for incremental with full backup, and turn on Synology's data deduplication for the volume hosting my backups? (A rough capacity sketch follows at the end of this post.)
2. Should I consider forgoing the Transporter service on the array, configure NFS mount points to my virtual transporter appliances, and turn on Synology's data deduplication for the volume hosting my backups?
3. I created a folder (/volume1/nakivo) without using the Shared Folder wizard, and my repo stores its data there. I've seen prompts indicating that information stored outside of "Shared Folders" can be deleted by DSM updates. Will future patches potentially remove my Nakivo data since I created the folder myself? I have no reason to share out the base /volume1/nakivo folder if the Transporter service is local to my array.

Any other advice on using Synology and Nakivo together is much appreciated.

P.S. After VM backups are taken, we perform backup copies across data center boundaries to ensure our backup data is available at an off-site location for DR/BCM purposes. By placing the Transporter service on the array itself, we figure this replication process will be driven mainly by array-level resources (with network involvement to move the data), which will lighten our virtual transporter appliance footprint. Currently, our replication process must send data from the local DD to a local transporter VM, which then sends the data across an inter-DC network link to a remote transporter VM, which finally stores the data on the remote DD. The replication process uses a good chunk of virtual cluster resources that could be better directed elsewhere.
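Since question 1 is essentially "periodic fulls collapsed by array-side dedupe" versus "one full plus unique blocks in a Nakivo-deduped repo", a rough capacity model can make the comparison concrete. This is a hedged back-of-the-envelope sketch; every input (fleet size, change rate, dedupe ratio, cadence) is an assumption to replace with your own numbers.

```python
# Back-of-the-envelope capacity sketch for the two repo layouts.
# All inputs below are ASSUMED -- substitute your own fleet statistics.

full_size_tb = 40.0        # assumed total full-backup size across VMs
daily_change_rate = 0.03   # assumed 3% daily change -> incremental size
retention_days = 30        # assumed retention window (daily runs)
full_every_n_runs = 18     # one active full per 15-20 runs (vendor advice)
array_dedupe_ratio = 3.0   # assumed Synology volume dedupe across fulls

# Incremental-with-full: the periodic fulls are highly redundant, so
# array-side dedupe should collapse them; incrementals dedupe poorly.
fulls = -(-retention_days // full_every_n_runs)   # ceiling division
incrementals = retention_days - fulls
incr_tb = incrementals * full_size_tb * daily_change_rate
raw_tb = fulls * full_size_tb + incr_tb
option1_tb = (fulls * full_size_tb) / array_dedupe_ratio + incr_tb

# Forever-incremental (Nakivo dedupe): roughly one full plus unique
# blocks, at the cost of periodic space reclamation on the repo.
forever_tb = full_size_tb + incr_tb

print(f"raw, no dedupe:          {raw_tb:7.1f} TB")
print(f"option 1 (array dedupe): {option1_tb:7.1f} TB")
print(f"forever incremental:     {forever_tb:7.1f} TB")
```

With these made-up inputs the two approaches land in the same ballpark, which suggests the decision hinges more on operational risk (space reclamation interruptions) than on raw capacity.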
-
> Please add the screenshot with the detailed information about the newly created Repository.

Not sure why this is necessary, as the transporter/repo functions perfectly fine. My post is mainly concerned with the proper way to integrate Synology with Nakivo. As for my repo, it's configured as follows:

Type: "Local Folder"
Name & Location:
- Name: "DC local"
- Assigned transporter: "dc-bckp-003"
- Path to local folder: "/volume1/nakivo/dc-local"
Options:
- Data Size Reduction: Enabled
- Compression: Fast
- Store backups in separate files: Not checked
- Deduplication: Checked
- Encryption: Disabled
- Enable automatic repository self-healing: Checked
- Run repository self-healing on: Not checked
- Run full data verification on: Not checked
- Reclaim unused space on schedule: Checked (once a day, every day, 8:00 pm - 12:00 am, UTC+00:00)
- Enforce explicit file system sync: Checked
- Detach this repository on: Unchecked

> NAKIVO never tested Synology's data deduplication on the array, and we cannot predict the level of deduplication for this implementation. Based on our experience, deduplication at the hardware level is more effective and saves more space. We recommend using it.
> The correct configuration for this environment: NAKIVO Transporter installed on the Synology host and an incremental-with-full Repository configured as a local folder on the array, with Synology's data deduplication process turned on. The backup jobs must be configured to create a full active backup once per 15-20 runs.

I'm not too worried about the efficiency of the deduplication algorithm of the Synology vs. Data Domain vs. Nakivo; I'm satisfied as long as they are on par with each other. The low cost of a Synology NAS vs. a dedicated Data Domain device was more than enough to convince us to switch to the alternate solution. We do use Nakivo's deduplication at some of our customer sites where a dedicated backup storage array is not available (i.e., the repo is stored on the primary storage array and we need to save space). I will remove my current repo (mentioned above) and re-create it with the following options:

Options:
- Data Size Reduction: Enabled
- Compression: Fast
- Store backups in separate files: Checked
- Encryption: Disabled
- Enable automatic repository self-healing: Checked
- Run repository self-healing on: Not checked
- Run full data verification on: Not checked
- Enforce explicit file system sync: Checked
- Detach this repository on: Unchecked

I will then enable Synology deduplication on /volume1.

> For security reasons, we highly recommend manually creating a folder for the Repository without using the Shared Folder wizard. Please do not create this folder in the NAKIVO application home directory (it can be destroyed during the update process). We have no information about data damage during the DSM update action; it is better to contact Synology support with this question.

Sounds like I did it correctly, then. However, it would have been nice to have this documented somewhere so that a person like myself doesn't need to guess at the proper way to configure this. I couldn't find anything in the online Nakivo documentation on how to properly configure this type of solution (Nakivo + Synology). Luckily, I am versed enough in the technology to make somewhat intelligent decisions about how this should work... that, and Google is pretty good at pointing me to solutions for common problems/questions.
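Given the support warning that a repo folder inside the NAKIVO application home directory can be destroyed during updates, a quick pre-flight check before re-creating the repo might look like this hedged sketch. Both paths are assumptions drawn from the thread: the repo path comes from the config above, and the package home is a guess at the usual DSM layout (`/volume1/@appstore/<package>`), so verify it on your unit first.

```python
# Hedged sanity check: confirm the repo folder is NOT inside the NAKIVO
# package home. Both paths are ASSUMPTIONS -- adjust to what DSM shows.
from pathlib import Path

REPO_DIR = Path("/volume1/nakivo/dc-local")    # from the config above
NAKIVO_HOME = Path("/volume1/@appstore/NBR")   # assumed package home

repo = REPO_DIR.resolve()
home = NAKIVO_HOME.resolve()

if repo == home or home in repo.parents:
    raise SystemExit(f"Repo path {repo} lives under {home}; "
                     "move it out before an update can wipe it.")
print(f"OK: {repo} is outside {home}.")
```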
-
Hello, thanks for your explanation. I'm not able to provide a support bundle, because we are not yet using Nakivo's M365 backup. I understand from the following statement in the 10.9 release notes, section "Microsoft 365", that the backup of online archive mailboxes is not supported in 10.9:

> Backup and recovery of guest mailboxes, public folder mailboxes, online archive mailboxes, Group calendars, and Group planners, distribution groups, mail-enabled groups, security groups, resource mailboxes, read-only calendars, calendars added from Directory, event messages (that is, "#microsoft.graph.eventMessageRequest", "#microsoft.graph.eventMessage"), and outbox folders are not supported.

This is where I would like to know whether this is on the roadmap, or maybe already in the 10.10 version. Thanks, Thomas Kriener
-
Good morning. I have a problem: jobs which have expired are still on the physical storage. Expired jobs are not deleted, and my disk space is full. Thank you for your help.
-
Hello, I have a Synology DS1618+ with the latest DSM (DSM 7.2-64570 Update 3). I've tried the above procedure, but the update doesn't work, and the virtual appliance can't connect to the transporter because it keeps reporting that the Transporter is out of date. When will the update package be available, either for automatic or manual update? Can you help me, please? Thanks, Dalibor Barak
-
I am getting the error message below when I try to add some physical machines from the manager. Any ideas? An internal error has occurred. Reason: 1053 (SmbException).
-
I did verify that SMB 2 is running and that the C$ admin share and IPC$ are showing as shared.
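Since the SmbException points at SMB connectivity between the Transporter and the target machine, a quick reachability probe from the Transporter host can rule out firewalling before digging into credentials. A minimal sketch; the hostname is an assumed placeholder, and an open port only proves TCP-level reachability, so a failed authentication could still raise SmbException.

```python
# Minimal SMB reachability probe: confirms TCP 445 is open from this host.
# HOSTNAME is an assumed placeholder; this does not test credentials or
# the C$/IPC$ shares themselves.
import socket

HOSTNAME = "physical-host.example.local"  # assumed target machine

try:
    with socket.create_connection((HOSTNAME, 445), timeout=5):
        print(f"TCP 445 open on {HOSTNAME}; SMB is at least reachable.")
except OSError as exc:
    print(f"Cannot reach {HOSTNAME}:445 ({exc}); check firewall/routing.")
```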
-
Hi, I'm using 10.8 on Linux to back up VMs from Hyper-V. I have several VMs with a lot of RAM, like 32 or 48 GB. When I test my restores, I cannot use as a destination a host with less RAM than the original VM. For example, I want to restore a VM that has 48 GB of RAM to a host that has 32 GB. The process does not even start; it blocks the configuration procedure, saying the host needs more RAM. I find no way to resize the RAM or to continue anyway. This is a horrible design decision, for at least a couple of reasons:

- In my case I just restore the VMs and do NOT power them on; there's an option for that. Still, Nakivo says I cannot restore.
- In an emergency, this requires you to have a full set of working hosts with at least the same RAM as you had before, which again is an awful decision that hinders the ability to use Nakivo in real-world recovery scenarios.

I am in charge of the restore, and if I decide to restore to a host with less RAM than the original one, that should be my problem. I can understand a warning from Nakivo; I cannot understand why it blocks the restore. On Hyper-V you can create any number of VMs with any amount of RAM, then decide which ones to start or resize. Reducing the restore options for no practical benefit and imposing artificial limits is just silly. Is there a way to force Nakivo to continue the restore? Thanks
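One possible workaround, if you can get the recovered virtual disks onto the target host by another route (for example, recovering just the VHDX files rather than the full VM), is to recreate the VM yourself with whatever RAM the host can afford. A hedged sketch driving the stock Hyper-V cmdlets (New-VM) from Python; the VM name, disk path, and sizes are assumptions, and this bypasses Nakivo's check rather than disabling it.

```python
# Hedged workaround sketch: recreate the restored VM by hand with less RAM,
# driving the stock Hyper-V PowerShell cmdlets from Python. Run on the
# Hyper-V host as administrator. Name, path, and sizes are ASSUMED.
import subprocess

VM_NAME = "restored-app01"              # assumed VM name
VHDX = r"D:\Restores\app01\app01.vhdx"  # assumed recovered disk
RAM_BYTES = 16 * 1024**3                # 16 GB instead of the original 48 GB

ps = (
    f"New-VM -Name '{VM_NAME}' -Generation 2 "
    f"-MemoryStartupBytes {RAM_BYTES} -VHDPath '{VHDX}'"
)
subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)
# The VM is created but not started; power it on later with Start-VM
# once you are satisfied with the configuration.
```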
-
I got an answer from support: the backup copy to tape feature is planned for version 10.11, expected to be released around November. Grega