NAKIVO Community Forum

Deploying Nakivo with Synology (FS6400) Array


cmangiarelli

Recommended Posts

I currently have a Nakivo installation backing up to Dell Data Domains but am transitioning over to Synology FS6400s. The transporters that control my backup repositories have the 'ddboostfs' plugin installed and mount the appropriate storage locations on the local Data Domain.

I recently got my new Synology FS6400s online in my lab. After some web searching, I was able to figure out how to manually update the Nakivo Transporter service to 10.9; only 10.8 is available in the public channel, and it's incompatible with a 10.9 director. The fact that I had to do this already makes me wary of running the transporter service on the array itself, even though we specifically purchased the arrays with 128GB of RAM to ensure adequate resources. Anyway, I created a forever incremental repo on each array and testing was successful, though my Data Domains use incremental with full backup and rely on DDOS for dedupe.

However, I am starting to question the approach I was going to take in using my Synologys to replace my Data Domains. I wasn't aware that Synology arrays support data deduplication on the array itself, which is why I loaded the Nakivo Transporter service (and its deduplication) in order to reduce my storage footprint. Using Nakivo's deduplication requires periodic space reclamation, which is historically dangerous should the process be interrupted. Given what I now know, I'm wondering how others have used Synology arrays to store their backup data? I myself question the following:

1. Should I still use the Nakivo Transporter service on the array, configure my repo(s) for incremental with full backup, and turn on Synology's data deduplication process for the volume hosting my backups?

2. Should I consider forgoing the Transporter service on the array, configure NFS mount points to my virtual transporter appliances, and turn on Synology's data deduplication process for the volume hosting my backups?

3. I created a folder (/volume1/nakivo) without using the Shared folder wizard where my repo stores its data. I've seen prompts indicating that information stored outside of "Shared Folders" can be deleted by DSM updates. Will future patches potentially cause my nakivo data to be removed since I created the folder myself? I have no reason to share out the base /volume1/nakivo folder if the Transporter service is local to my array.

Other advice on using Synology and Nakivo together is much appreciated.

P.S. After VM backups are taken, we perform backup copies across data center boundaries to ensure our backup data is available at an off-site location for DR/BCM purposes. By placing the transporter service on the array itself, we figure this replication process will be mainly resource-driven at the array level (with network involvement to move the data), which will lighten our virtual transporter appliance footprint. Currently our replication process must send data from the local DD to a local transporter VM that then sends the data across an inter-DC network link to a remote transporter VM that finally stores the data on the remote DD. The replication process uses a good chunk of virtual cluster resources that could be better directed elsewhere.


@cmangiarelli Hello. To address your inquiry effectively, we recommend generating and submitting a support bundle to support@nakivo.com by following the instructions outlined here: https://helpcenter.nakivo.com/display/NH/Support+Bundles

This will allow our Technical Support team to examine your setup thoroughly, and it will also enable us to escalate your feature request to our Product Development team for potential inclusion in our future development roadmap. Your cooperation is greatly appreciated. In the meantime, I have requested this information so that we can share the findings with the community. Best regards


 
@cmangiarelli We appreciate your patience during our investigation.
 
>>> forever incremental repo on each array and testing was successful;
 
Please add the screenshot with the detailed information about the newly created Repository.
 
>>> Given what I now know, I'm wondering how others have used Synology arrays to store their backup data? I myself question the following:
>>> 1. Should I still use the Nakivo Transporter service on the array, configure my repo(s) for incremental with full backup, and turn on
>>> Synology's data deduplication process for the volume hosting my backups?
>>> 2. Should I consider forgoing the Transporter service on the array, configure NFS mount points to my virtual transporter appliances, and
>>> turn on Synology's data deduplication process for the volume hosting my backups?
 
NAKIVO has never tested Synology's data deduplication on the array, and we cannot predict the deduplication ratio for this implementation. In our experience, deduplication at the hardware level is more effective and saves more space, so we recommend using it.
 
The correct configuration for this environment is the NAKIVO Transporter installed on the Synology host and an incremental-with-full repository configured as a local folder on the array, with Synology's data deduplication turned on for the volume hosting it. The backup jobs should be configured to create a full active backup once per 15-20 runs.
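To put rough numbers on that recommendation, here is a minimal Python sketch estimating how the repository footprint behaves with a full active backup every 15-20 runs. Every input below (protected data size, daily change rate, retention, and the array-side data-reduction ratio) is a hypothetical placeholder rather than a measured value, so treat the output as an illustration of the arithmetic only:

import math

# Rough repository footprint estimate for an "incremental with full backups" repo,
# with a full active backup once per N runs as recommended above.
# Every figure below is a hypothetical assumption, not a measurement.
protected_tb = 20.0          # total protected VM data (TB) - assumption
daily_change_rate = 0.05     # fraction of data changed per run - assumption
full_every_n_runs = 15       # full active backup cadence (15-20 per the reply)
retention_points = 30        # recovery points kept - assumption
array_reduction_ratio = 3.0  # assumed combined Synology dedupe + compression

fulls_retained = math.ceil(retention_points / full_every_n_runs)
incrementals_retained = retention_points - fulls_retained

raw_tb = (fulls_retained * protected_tb
          + incrementals_retained * protected_tb * daily_change_rate)
on_disk_tb = raw_tb / array_reduction_ratio

print(f"Logical data retained in the repo: ~{raw_tb:.1f} TB")
print(f"Estimated on-disk after array-side reduction: ~{on_disk_tb:.1f} TB")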
 
>>> 3. I created a folder (/volume1/nakivo) without using the Shared folder wizard where my repo stores its data. I've seen prompts indicating
>>> that information stored outside of "Shared Folders" can be deleted by DSM updates. Will future patches potentially cause my nakivo data 
>>> to be removed since I created the folder myself? I have no reason to share out the base /volume1/nakivo folder if the Transporter service
>>> is local to my array.
 
For security reasons, we highly recommend manually creating a folder for the Repository without using the Shared folder wizard. Please do not create this folder in the NAKIVO application home directory (it can be destroyed during the update process). We have no information about data damage during the DSM update action; it is better to contact Synology support with this question.
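Along the same lines, a small sanity check like the sketch below could be used to confirm that a candidate repository path is not sitting inside the package's home directory or another DSM system directory. The /volume1/@appstore/NAKIVOBackupReplication path is only an assumption about where the Synology package installs itself; check what your own unit actually uses and adjust accordingly.

import os
import sys

# Assumed install location of the NAKIVO package on DSM -- verify on your own
# unit (e.g. look under /volume1/@appstore) and adjust; the directory name is a guess.
ASSUMED_APP_HOME = "/volume1/@appstore/NAKIVOBackupReplication"

def check_repo_path(repo_path: str) -> bool:
    """Warn if a repository folder risks being touched by package updates."""
    real = os.path.realpath(repo_path)
    app_home = os.path.realpath(ASSUMED_APP_HOME)
    if real == app_home or real.startswith(app_home + os.sep):
        print(f"WARNING: {real} is inside the application home directory and "
              "may be destroyed during a package update.")
        return False
    if "/@" in real:
        print(f"WARNING: {real} is inside a DSM system directory (@-prefixed); "
              "keep repository data out of these.")
        return False
    print(f"OK: {real} looks safe for repository data.")
    return True

if __name__ == "__main__":
    check_repo_path(sys.argv[1] if len(sys.argv) > 1 else "/volume1/nakivo/dc-local")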
 
We are looking forward to your feedback.
Please do not hesitate to contact us if you need any further information.

>>> Please add the screenshot with the detailed information about the newly created Repository.

Not sure why this is necessary, as the transporter/repo functions perfectly fine. My post is mainly concerned with the proper way to integrate Synology with Nakivo. As for my repo, it's configured as follows:
 
Type: "Local Folder"
Name & Location:
  • Name: "DC local"
  • Assigned transporter: "dc-bckp-003"
  • Path to local folder: "/volume1/nakivo/dc-local"
Options:
  • Data Size Reduction: Enabled
  • Compression: Fast
  • Store backups in separate files: Not checked
  • Deduplication: Checked
  • Encryption: Disabled
  • Enable automatic repository self-healing: Checked
  • Run repository self-healing on: Not checked
  • Run full data verification on: Not checked
  • Reclaim unused space on schedule: Checked
    Schedule: once a day on every day at 8:00 pm - 12:00 am (UTC+00:00, UTC)
  • Enforce explicit file system sync: Checked
  • Detach this repository on: Unchecked
 

>>>> NAKIVO has never tested Synology's data deduplication on the array, and we cannot predict the deduplication ratio for this
>>>> implementation. In our experience, deduplication at the hardware level is more effective and saves more space, so we
>>>> recommend using it.

>>>> The correct configuration for this environment is the NAKIVO Transporter installed on the Synology host and an incremental-with-full
>>>> repository configured as a local folder on the array, with Synology's data deduplication turned on for the volume hosting it. The
>>>> backup jobs should be configured to create a full active backup once per 15-20 runs.
 
I'm not too worried about the efficiency of the deduplication algorithm of Synology vs. Data Domain vs. Nakivo; I'm satisfied as long as they are on par with each other. The low cost of the Synology NAS vs. a dedicated Data Domain device was more than enough to convince us to switch to the alternate solution. We do use Nakivo's deduplication at some of our customer sites where a dedicated backup storage array is not available (i.e. the repo is stored on the primary storage array and we need to save space). I will remove my current repo (mentioned above) and re-create it with the following options:
 
Options:
  • Data Size Reduction: Enabled
  • Compression: Fast
  • Store backups in separate files: Checked
  • Encryption: Disabled
  • Enable automatic repository self-healing: Checked
  • Run repository self-healing on: Not checked
  • Run full data verification on: Not checked
  • Enforce explicit file system sync: Checked
  • Detach this repository on: Unchecked
I will then enable Synology deduplication on /volume1.
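Since nobody here has hard numbers on how well Synology's deduplication will do against NAKIVO backup files, a rough way to get a ballpark before committing is to hash fixed-size blocks across the repository folder and count duplicates. The sketch below is only an estimator under simplifying assumptions (fixed 4 KiB blocks, a cap on how much data is scanned, no compression modeled) and says nothing about what DSM's post-process deduplication will actually achieve:

import hashlib
import os

REPO_DIR = "/volume1/nakivo/dc-local"   # repository folder from the config above
BLOCK_SIZE = 4096                       # fixed block size; a simplification
MAX_SCAN_BYTES = 50 * 2**30             # cap the scan so the hash set stays small

def estimate_duplicate_blocks(root: str) -> None:
    seen = set()
    total_blocks = 0
    duplicate_blocks = 0
    scanned = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while scanned < MAX_SCAN_BYTES:
                        block = f.read(BLOCK_SIZE)
                        if not block:
                            break
                        scanned += len(block)
                        digest = hashlib.sha256(block).digest()
                        total_blocks += 1
                        if digest in seen:
                            duplicate_blocks += 1
                        else:
                            seen.add(digest)
            except OSError:
                continue  # skip files we cannot read
            if scanned >= MAX_SCAN_BYTES:
                break
        if scanned >= MAX_SCAN_BYTES:
            break
    if total_blocks:
        pct = 100.0 * duplicate_blocks / total_blocks
        print(f"Blocks scanned: {total_blocks}, duplicates: {duplicate_blocks} ({pct:.1f}%)")

if __name__ == "__main__":
    estimate_duplicate_blocks(REPO_DIR)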
 

>>>> For security reasons, we highly recommend manually creating a folder for the Repository without using the Shared folder wizard.
>>>> Please do not create this folder in the NAKIVO application home directory (it can be destroyed during the update process). We
>>>> have no information about data damage during the DSM update action; it is better to contact Synology support with this question.

Sounds like I did it correctly, then. However, it would have been nice to have this documented somewhere so that a person like myself doesn't need to guess at the proper way to configure this. I couldn't find anything in the online Nakivo documentation on how to properly configure this type of solution (Nakivo + Synology). Luckily, I am versed enough in the technology to make somewhat intelligent decisions on how this should work... that, and Google is pretty good at pointing me to solutions for common problems/questions.

