NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation on 08/20/21 in all areas

  1. Hi Team, I am new to this tool and need your help with making REST API calls: how to add a new server to backup jobs, and how to remove it. I want to automate all of these manual tasks. Thanks, Sumit
    1 point
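As a starting point for that kind of automation, here is a minimal Python sketch of building a JSON-RPC-style request body of the general shape NAKIVO's Director API uses. The endpoint path, action and method names, and the field layout below are assumptions for illustration only; verify every detail against the official NAKIVO API documentation before use.

```python
import json

def build_rpc_payload(action, method, data, tid=1):
    """Build a JSON-RPC-style request body (field layout is an assumption)."""
    return {
        "action": action,
        "method": method,
        "data": data,
        "type": "rpc",
        "tid": tid,
    }

def login_payload(username, password):
    # Hypothetical login call; action/method names and argument order
    # are assumptions -- check the official API reference.
    return build_rpc_payload("AuthenticationManagement", "login",
                             [username, password, False])

if __name__ == "__main__":
    payload = login_payload("admin", "secret")
    print(json.dumps(payload))
    # To actually send it (requires the third-party 'requests' package
    # and a reachable Director; host/port are placeholders):
    # import requests
    # r = requests.post("https://director-host:4443/c/router",
    #                   json=payload, verify=False)
```

Once a session is established the same payload builder can be reused for job-management calls, substituting the appropriate action and method names from the vendor documentation.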
  2. Thanks for the info. I have stuck with not enabling the storing of backups in separate files, because I'm a little worried about the increased space requirements, and because I rely on NFS, which I don't believe is supported for immutable recovery points. I am using a ZFS back-end for my NFS repository, and have good-quality SAS drives, ECC RAM, regular ZFS pool scrubs, and S.M.A.R.T. checks (outside of backup times). A key feature I've enabled is synchronous writes ("enforce explicit filesystem sync"), as I've read a lot about how important that is to keep data from corrupting. My recommendation would be for your product to set this on as the default, telling customers to disable it only if they have a battery-backed write cache for their storage, or if they suffer crippling speed issues (I would recommend different NAS hardware if that is the case!). My speeds are still fine with explicit filesystem sync enabled. I also have the NAS on two high-quality UPS units (one per power feed), which do double power conversion, active PFC, and pure sine wave output. My CPUs also have extremely good thermal paste. All firmware is up to date. I'm pretty sure I've done everything in my power to avoid any repository corruption. Will see how it goes after several months of service. If I ever get corruption, I'd be looking next towards storing backups in separate files. So far so good though.
    1 point
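For anyone replicating this setup, the synchronous-write behavior described above corresponds to the ZFS `sync` dataset property; a short sketch follows (the pool/dataset name `tank/backups` is a placeholder, not from the original post):

```shell
# Force synchronous writes on the dataset backing the NFS repository
# (dataset name is a placeholder; substitute your own).
zfs set sync=always tank/backups

# Confirm the property took effect.
zfs get sync tank/backups

# Kick off a periodic integrity scrub of the pool.
zpool scrub tank
```

Note that `sync=always` trades write throughput for durability, which is why a separate log device (SLOG) or battery-backed cache is often paired with it.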
  3. Ran into the same problem! Please put pressure on Synology to get this working again!
    1 point
  4. Hi, @Gavino! In a new release, we have updated the UI for the Repository screen. The repository type menu was removed. "Store backups in separate files" option was added (enabled by default). Backup repositories created in the previous versions of the product keep operating in the same way they were configured. Storing backups in separate files is recommended. Downside: it has higher space requirements, no built-in deduplication. Upsides: it is more resilient to network, power, and hardware failures; it does not require running space reclaim; it is suitable for enterprise-grade deduplication storage devices; it supports immutable recovery points. If you still want to use the "forever incremental" repository and built-in data deduplication, please uncheck the "Store backups in separate files" checkbox when creating the backup repository. Please let me know if you have further questions or concerns.
    1 point
  5. Thanks for the info. What's strange is that "Deduplication" and "Store backups in separate files" are both listed as repository parameters at the same time, when really they are the negation of each other, aren't they? That is, it's not possible to have both enabled, or both disabled, is it? It's as if you haven't introduced a new feature, but just added another name for an existing one, greatly confusing people in the process. I suspect "store backups as separate files" (i.e. no deduplication) would take up too much space. But I also find that global deduplication is risky, for the reliability reasons you've listed above. What I was hoping you were doing with this new feature was per-VM deduplication, rather than global. That way, if there were any corruption, it would/should only affect a single VM, and I could just start backing up that VM from scratch again, rather than blowing everything away, which takes a long time for that initial backup (I can only start it on a Saturday afternoon). To me, that's a great trade-off between global dedup and no dedup. I was really hoping this new feature would be "store VM backups separately" with the option to still have deduplication on (on a per-VM basis).
    1 point