NAKIVO Community Forum

cmangiarelli

Members
  • Posts: 10
  • Joined
  • Last visited
  • Days Won: 6

Posts posted by cmangiarelli

  1. I figured out that if you detach the repo prior to deleting the backup copy job (in step #5 above), it will retain the original retention dates. I plan to use this same process when I move my production data over in a few weeks. If the retention dates get messed up, though, I'll just use the "Delete backups in bulk" functionality once a week to clean my repo storage. The new arrays have a much larger storage capacity, so leaving a few extraneous recovery points around for a few weeks shouldn't cause any issues.

    • Like 1
  2. As per another post, I am in the process of moving my Nakivo backups from older Data Domains to Synology NAS arrays. The process I plan to use is:

    1. Install Nakivo Transporter on the Synology NAS.
    2. Register Synology Transporter with Director.
    3. Create a new "local folder" repo on the Synology transporter.
    4. Configure a backup copy job to mirror the existing DD repo to the Synology Repo.
    5. Delete the backup copy job (you can't edit the destination target of the backup job while the copy job exists), but tell it to "Keep existing backups".
    6. Edit the VM backup job to point the VMs to their new location (this is a pain when you have a lot of VMs; I wish there were an easier way to do this, or that Nakivo would automatically discover existing backups).
    7. Run the VM backup job to start storing new backups on the Synology repo.

    I ran a test and this worked fine, except all of the old recovery points that were copied by the backup copy job in step #4 had their retention set to "Keep Forever". I'm guessing this occurred when I deleted the copy job but told it to keep the backups. I'm assuming this is operating as intended? With 200 VMs, each having 9-11 recovery points, I can't manually reset the retention on ~2,200 recovery points to fix this. I understand why it might have gotten changed when the backup copy job was deleted, but the Director wouldn't let me re-point the backup job until those copied backups were no longer referenced by another job object. I can use the "Delete backups in bulk" functionality to manually enact my original retention policy, but that will take 3 months to cycle through. Any ideas?

    • Like 1
  3. >>> Please add the screenshot with the detailed information about the newly created Repository.

    Not sure why this is necessary, as the transporter/repo functions perfectly fine. My post is mainly concerned with the proper way to integrate Synology with Nakivo. As for my repo, it's configured as follows:
     
    Type: "Local Folder"
    Name & Location:
    • Name: "DC local"
    • Assigned transporter: "dc-bckp-003"
    • Path to local folder: "/volume1/nakivo/dc-local"
    Options:
    • Data Size Reduction: Enabled
    • Compression: Fast
    • Store backups in separate files: Not checked
    • Deduplication: Checked
    • Encryption: Disabled
    • Enable automatic repository self-healing: Checked
    • Run repository self-healing on: Not checked
    • Run full data verification on: Not checked
    • Reclaim unused space on schedule: Checked
      Schedule: once a day on every day at 8:00 pm - 12:00 am (UTC+00:00, UTC)
    • Enforce explicit file system sync: Checked
    • Detach this repository on: Unchecked
     

    >>>> NAKIVO has never tested Synology's data deduplication on the array, and we cannot predict the level of deduplication for this
    >>>> implementation. Based on our experience, deduplication at the hardware level is more effective and saves more space. We
    >>>> recommend using it.

    >>>> The correct configuration for this environment: NAKIVO Transporter installed on the Synology host and an incremental-with-full
    >>>> Repository configured as a local folder on the array, with Synology's data deduplication process turned on. The Backup Jobs must be
    >>>> configured to create a full Active backup once per 15-20 runs.
     
    I'm not too worried about the efficiency of the deduplication algorithms of Synology vs. Data Domain vs. Nakivo; I'm satisfied as long as they are on par with each other. The low cost of the Synology NAS vs. a dedicated Data Domain device was more than enough to convince us to switch to the alternative solution. We do use Nakivo's deduplication at some of our customer sites where a dedicated backup storage array is not available (i.e., the repo is stored on the primary storage array and we need to save space). I will remove my current repo (mentioned above) and re-create it with the following options:
     
    Options:
    • Data Size Reduction: Enabled
    • Compression: Fast
    • Store backups in separate files: Checked
    • Encryption: Disabled
    • Enable automatic repository self-healing: Checked
    • Run repository self-healing on: Not checked
    • Run full data verification on: Not checked
    • Enforce explicit file system sync: Checked
    • Detach this repository on: Unchecked
    I will then enable Synology deduplication on /volume1.
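
    Once Synology deduplication has had a chance to run on /volume1, I plan to sanity-check the savings from the DSM command line, along these lines (a rough sketch; assumes SSH access to the NAS, a Btrfs volume, and that the btrfs tooling is present in your DSM build):

    # Overall volume usage as the OS sees it
    df -h /volume1

    # Btrfs-level allocation breakdown (run as root)
    btrfs filesystem df /volume1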
     

    >>>> For security reasons, we highly recommend manually creating a folder for the Repository without using the Shared folder wizard.
    >>>> Please do not create this folder in the NAKIVO application home directory (it can be destroyed during the update process). We
    >>>> have no information about data damage during the DSM update action; it is better to contact Synology support with this question.

    Sounds like I did it correctly, then. However, it would have been nice to have this documented somewhere so that a person like myself doesn't need to guess at the proper way to configure this. I couldn't find anything in the online Nakivo documentation on how to properly configure this type of solution (Nakivo + Synology). Luckily, I am versed enough in the technology to make somewhat intelligent decisions on how this should work... that, and Google is pretty good at pointing me to solutions to common problems/questions.
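
    For anyone else setting this up, here is roughly how I created the repository folder by hand over SSH instead of going through the Shared Folder wizard (just a sketch of what worked for me; the path matches my repo above, and you should adjust ownership/permissions to however the Transporter package runs on your DSM version):

    # SSH to the NAS with an administrator account, then elevate
    sudo -i

    # Create the repo directory outside of the NAKIVO package's home directory
    mkdir -p /volume1/nakivo/dc-local

    # Lock the directory down; loosen this if your Transporter runs as a non-root account
    chmod 700 /volume1/nakivo/dc-local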

    • Like 1
  4. I currently have a Nakivo installation backing up to Dell Data Domains but am transitioning over to Synology FS6400s. The transporters that control my backup repositories have the 'ddboostfs' plugin installed and mount the appropriate storage locations on the local Data Domain.

    I recently got my new Synology FS6400s online in my lab. After some web searching, I was able to figure out how to manually update the Nakivo Transporter service to version 10.9; only 10.8 is available in the public channel, and it's incompatible with a 10.9 Director. The fact that I had to do this already makes me wary of running the transporter service on the array itself, even though we specifically purchased the arrays with 128GB of RAM to ensure adequate resources. Anyway, I created a forever-incremental repo on each array and testing was successful, though my Data Domains use incremental with full backups and rely on DD OS for dedupe. However, I am starting to question the method by which I was going to use my Synology arrays to replace my Data Domains. I wasn't aware that Synology arrays support data deduplication natively, which is why I loaded the Nakivo Transporter service in order to reduce my storage footprint. Using Nakivo's deduplication requires a period of space reclamation that is historically dangerous should the process be interrupted. Given what I now know, I'm wondering how others have used Synology arrays to store their backup data. I myself question the following:

    1. Should I still use the Nakivo Transporter service on the array, configure my repo(s) for incremental with full backup, and turn on Synology's data deduplication process for the volume hosting my backups?

    2. Should I consider forgoing the Transporter service on the array, configuring NFS mount points on my virtual transporter appliances, and turning on Synology's data deduplication process for the volume hosting my backups? (See the sketch after this list.)

    3. I created a folder (/volume1/nakivo), where my repo stores its data, without using the Shared folder wizard. I've seen prompts indicating that information stored outside of "Shared Folders" can be deleted by DSM updates. Will future patches potentially cause my Nakivo data to be removed since I created the folder myself? I have no reason to share out the base /volume1/nakivo folder if the Transporter service is local to my array.
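
    If I went with option 2, I'd expect the client-side plumbing on a transporter appliance to look something like the sketch below (the hostname, export path, and mount point are placeholders for illustration; the export itself would be defined through DSM):

    # On the Ubuntu-based transporter appliance, install the NFS client if it isn't already there
    sudo apt update && sudo apt install -y nfs-common

    # Mount the Synology export ("syn-fs6400-01" and the paths are example names only)
    sudo mkdir -p /mnt/nakivo-repo
    sudo mount -t nfs syn-fs6400-01:/volume1/nakivo /mnt/nakivo-repo

    # Make the mount persistent across reboots
    echo 'syn-fs6400-01:/volume1/nakivo  /mnt/nakivo-repo  nfs  defaults,_netdev  0 0' | sudo tee -a /etc/fstab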

    Other advice on using Synology and Nakivo together is much appreciated.

    P.S. After VM backups are taken, we perform backup copies across data center boundaries to ensure our backup data is available at an off-site location for DR/BCM purposes. By placing the transporter service on the array itself, we figure this replication process will be mainly resource-driven at the array level (with network involvement to move the data), which will lighten our virtual transporter appliance footprint. Currently, our replication process must send data from the local DD to a local transporter VM, which then sends the data across an inter-DC network link to a remote transporter VM, which finally stores the data on the remote DD. The replication process uses a good chunk of virtual cluster resources that could be better directed elsewhere.

    • Like 1
  5. It's certainly nice to hear it is in development, but I'm hoping it will come sooner rather than later.

    Regarding the frequency of reclamation, as I said in my original post, we have space constraints to deal with. Our old backup application was scheduled to back up VMs weekly, but we would back up SQL databases nightly. Since Nakivo has no "agent", we have to back up the entire VM nightly; yeah, we could create a job for just the drives containing SQL, but that is where most of the change occurs anyway, so we keep it simple. We actually prefer the simplicity of the scheduling now and the nightly backups of entire VMs, but our lab testing of Nakivo didn't accurately predict the dedupe ratios we would see in production. Separate backup storage is not available in some of our smaller, shall we say fiscally challenged, environments, so the repo is placed on the local SAN and then replicated off-site. The amount of available SAN storage is limited, and with the slightly lower-than-expected dedupe ratios, we get close to running out of repo space after a few days if we don't run reclamation after our backups. We have SLAs around retention, so I'm already keeping the bare minimum.

    As was mentioned in a different thread, the reclamation process reclaims a good portion of the available space early on in the process. Thus, with my current scheduling, I get a very good return on my reclamation period even if it gets interrupted by the next day's backup scheduling.

    • Like 1
  6. Thanks. So please confirm what I am reading.

    • Running "apt update" followed by "apt upgrade" is safe.
    • Running "apt dist-upgrade", "apt full-upgrade", and/or "apt autoremove" is also safe?
    • Running "do-release-upgrade" followed by the associated apt commands is NOT safe.

    Sounds like you covered my 1st and 3rd points, but I'm still a little fuzzy about the second.
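
    To restate that in command form, this is how I'm reading the safe vs. unsafe split on the appliance (my own summary of the above, not an official statement):

    # Understood to be safe: refresh package lists and apply in-place package updates
    sudo apt update
    sudo apt upgrade

    # Still unclear to me: upgrades that can add or remove packages within the same release
    sudo apt dist-upgrade
    sudo apt full-upgrade
    sudo apt autoremove

    # NOT safe: jumping the appliance to a new Ubuntu release
    sudo do-release-upgrade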

    • Like 1
  7. A few months back, I was told by support that a new type of repository was being developed where space reclamation could run without blocking access to the repo. The process would be similar to how a Data Domain can run space reclamation while still accessible but at the expense of I/O performance; however, reclamation can be throttled to increase I/O performance at the expense of running a longer reclamation period. Is it possible to get an update as to the status of this technology?

    I run backups nightly and need to reclaim space after each run due to storage constraints. I've designed my job calendar to maximize the reclamation window, but it sometimes runs into the next day's schedule. While normal backup jobs will pause reclamation, the Director backup will fail if reclamation is running. While my systems often run without incident, the Director can be chatty with its alerting, which reduces the effectiveness of bringing important events to my attention.

    • Like 1
  8. Is it safe/supported to perform an (apt) update of the Ubuntu OS on the Nakivo Virtual Appliance? I've previously performed standard updates of the Nakivo software using the "scp sh file > run update from console" process as documented. However, I couldn't find any documentation about keeping the OS itself patched. We recently had a security scan point out flaws that are resolved by updates available in the public Ubuntu patch channel(s). Please advise.
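
    For context, the Nakivo software updates I mentioned were done roughly like this (the updater filename and appliance address below are placeholders; use the actual installer downloaded from NAKIVO):

    # Copy the updater to the appliance
    scp NAKIVO_Backup_Replication_Updater.sh user@appliance:/tmp/

    # From the appliance console, make it executable and run it
    chmod +x /tmp/NAKIVO_Backup_Replication_Updater.sh
    sudo /tmp/NAKIVO_Backup_Replication_Updater.sh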

    • Like 1