NAKIVO Community Forum

NAKIVO Backup & Replication v10.4 Out with More Ransomware Protection Features!



We’re happy to announce that version 10.4 of NAKIVO Backup & Replication brings more ransomware protection and recovery features! Download the new version to try out the latest security-enhancing features and strengthen the resilience of your backup data to ransomware. The latest release boosts the security of backup data with Immutable Repository and Two-Factor Authentication and expands the recovery options available in Nutanix AHV environments.


Two-Factor Authentication prevents unauthorized access to your data management activities and Immutable Repository increases the safety of the locally stored backups against deletion, encryption and modification by ransomware. Together, these features provide an additional layer of protection in your incident response strategy. 
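For readers curious how the one-time codes behind two-factor authentication are typically generated, here is a minimal RFC 6238 (TOTP) sketch in Python. NAKIVO's actual 2FA internals are not described in this thread, so this is a generic illustration only.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6, now=None):
    """Minimal RFC 6238 TOTP: derive a one-time code from a shared secret
    and the current time window. Generic illustration only -- not
    NAKIVO's actual 2FA implementation."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"

# RFC 6238 test secret ("12345678901234567890" in Base32) at T=59 seconds:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082
```

The server and the authenticator app share the secret and derive the same code independently, so no code ever travels over the network ahead of time.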


With Instant Granular Recovery for Nutanix AHV, you don’t have to perform a full VM recovery just to restore that one missing or corrupted item. Just choose the backup, recovery point and item you need, and click recover!


Try the full-featured Free Trial of NAKIVO Backup & Replication and enjoy improved security of your data.


DOWNLOAD NOW!



Great to get 2FA for the login 👌

Just found this option:

Store backups in separate files: Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance.

Why is this more reliable? Does this have an effect on global dedup?

And can I convert the current backup to this option?

From my point of view, the repos are the current bottleneck of NAKIVO.
If there is an improvement with this, it is very welcome!

 


Hello, @Mario!

We are glad you like our new features 🙂 Concerning your questions:

1. Why is this option more reliable?

Each backup and each recovery point is stored in a separate file, so damage to a single file affects only a single backup. Furthermore, this repository structure does not require periodic space reclaim runs.
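The isolation benefit can be sketched like this. A toy model with hypothetical paths, not NAKIVO's actual on-disk format: when each recovery point is its own file, corrupting one file invalidates only that one recovery point.

```python
import hashlib

# Toy model: each recovery point lives in its own "file" (path -> bytes).
repo = {
    "vm-alpha/rp-2021-09-01.bak": b"alpha full backup",
    "vm-alpha/rp-2021-09-02.bak": b"alpha incremental",
    "vm-beta/rp-2021-09-01.bak":  b"beta full backup",
}
checksums = {path: hashlib.sha256(data).hexdigest() for path, data in repo.items()}

# Simulate on-disk corruption of a single file.
repo["vm-alpha/rp-2021-09-02.bak"] = b"\x00 corrupted \x00"

damaged = [p for p, d in repo.items()
           if hashlib.sha256(d).hexdigest() != checksums[p]]
print(damaged)  # → ['vm-alpha/rp-2021-09-02.bak']
```

In a deduplicated single-file store, by contrast, a corrupted shared block can invalidate every recovery point that references it.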

2. Does this affect global deduplication?

Yes. When backups are stored in separate files, global deduplication is not available, which can result in higher space requirements.
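To illustrate the trade-off, here is a toy sketch of content-addressed global deduplication (hypothetical data and a tiny fixed block size, not NAKIVO's actual chunking): identical blocks across VMs are stored once, so backups of similar machines take much less space than separate files would.

```python
import hashlib

def chunks(data, size=4):
    """Split data into fixed-size blocks (toy chunking for illustration)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

# Two hypothetical VM backups sharing most blocks (e.g. the same OS image).
vm1 = b"OS-BASE-BLOCKS__" + b"vm1-app-data"
vm2 = b"OS-BASE-BLOCKS__" + b"vm2-app-data"

# Without dedup (separate files): every block is stored for every backup.
plain_bytes = len(vm1) + len(vm2)

# With global dedup: identical blocks are stored once, keyed by content hash.
store = {}
for vm in (vm1, vm2):
    for block in chunks(vm):
        store[hashlib.sha256(block).hexdigest()] = block
dedup_bytes = sum(len(b) for b in store.values())

print(plain_bytes, dedup_bytes)  # → 56 32
```

The flip side is the reliability point above: every backup now depends on the shared block store staying intact.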

3. Can you convert the current backup to this option?

For now, the only way to convert existing backups to this format is to create a new repository and run a backup copy job from the old repository, copying its entire contents.

Let me know if you have any other questions!


Thanks for the info. What's strange is that "Deduplication" and "Store backups in separate files" are both listed as repository parameters at the same time, when really they are the negation of each other, aren't they? It's not possible to have both enabled, or both disabled, is it? It feels like you haven't introduced a new feature but just added another name for an existing one, greatly confusing people in the process.

I suspect "store backups as separate files" (i.e. no deduplication) would take up too much space. But I also find that global deduplication is risky, for the reliability reasons you've listed above. What I was hoping you were doing with this new feature is per-VM deduplication rather than global. That way, if there is any corruption, it should only affect a single VM, and I could just start backing up that VM from scratch again, rather than blowing everything away, which takes a long time for that initial backup (I can only start that on a Saturday afternoon).

To me, that's a great trade-off between global dedup and no dedup. I was really hoping this new feature would be "store VM backups separately" with the option to still have deduplication on (on a per-VM basis).
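The per-VM scope proposed here could be sketched like this. This is purely hypothetical, not an existing NAKIVO feature: each VM gets its own content-addressed chunk store, so duplicates within a VM's backup chain still collapse, while corruption of one store can only ever invalidate that one VM's chain.

```python
import hashlib

def chunks(data, size=4):
    """Split data into fixed-size blocks (toy chunking for illustration)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

# Hypothetical incremental chain per VM: day 2 repeats most of day 1's blocks.
backups = {
    "vm-alpha": [b"base-img-data-v1", b"base-img-data-v2"],
    "vm-beta":  [b"other-os-blocks!", b"other-os-blocksX"],
}

# Per-VM dedup: one chunk store per VM; failure domains stay isolated.
stores = {}
for vm, points in backups.items():
    store = {}
    for rp in points:
        for block in chunks(rp):
            store[hashlib.sha256(block).hexdigest()] = block
    stores[vm] = store

for vm, store in stores.items():
    raw = sum(len(rp) for rp in backups[vm])
    deduped = sum(len(b) for b in store.values())
    print(vm, raw, deduped)  # each VM's chain shrinks independently
```

Each VM's chain deduplicates against itself only, so losing one store means re-seeding one VM, not the whole repository.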


Hi, @Gavino!

In the new release, we have updated the UI of the Repository screen: the repository type menu was removed, and a "Store backups in separate files" option was added (enabled by default). Backup repositories created in previous versions of the product keep operating the way they were configured.

Storing backups in separate files is recommended. The downsides are higher space requirements and no built-in deduplication. The upsides: it is more resilient to network, power, and hardware failures; it does not require running space reclaim; it is suitable for enterprise-grade deduplication storage devices; and it supports immutable recovery points.

If you still want to use the "forever incremental" repository and built-in data deduplication, please uncheck the "Store backups in separate files" checkbox when creating the backup repository.

Please let me know if you have further questions or concerns.


Thanks for the info. I have stuck with not enabling the storing of backups in separate files because I'm a little worried about increased space requirements, and because I rely on NFS, which I don't believe is supported for immutable recovery points. I am using a ZFS back-end for my NFS repository and have good-quality SAS drives, ECC RAM, and regular ZFS pool scrubs and S.M.A.R.T. checks (outside of backup times). A key feature I've enabled is synchronous writes ("enforce explicit filesystem sync"), as I've read a lot about how important that is to keep data from corrupting. My recommendation would be for your product to set this on by default, telling customers to disable it only if they have a battery-backed write cache for their storage, or if they suffer crippling speed issues (I would recommend different NAS hardware if that is the case!). My speeds are still fine with explicit filesystem sync enabled.
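The reason explicit filesystem sync matters boils down to this: a plain write() can "succeed" while the data still sits in volatile caches, so a power loss can leave the file empty or partial. A minimal Python sketch of the idea (hypothetical file name, not NAKIVO code):

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and force it to stable storage before returning.
    Without the fsync, a power loss could lose the data even though
    write() already 'succeeded' against the page cache."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # push Python's userspace buffer to the kernel
        os.fsync(f.fileno())  # ask the kernel to flush down to the device

path = os.path.join(tempfile.mkdtemp(), "rp-0001.bak")
durable_write(path, b"recovery point payload")
print(os.path.getsize(path))  # → 22
```

On ZFS, honoring these sync requests is what routes writes through the ZIL, which is exactly where a battery-backed or SLOG device earns its keep.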

I also have the NAS on two high-quality UPS units (one per power feed), which do double power conversion, active PFC, and pure sine wave output. My CPUs also have extremely good thermal paste. All firmware is up to date. I'm pretty sure I've done everything in my power to avoid repository corruption. We'll see how it goes after several months of service. If I ever get corruption, I'd look next at storing backups in separate files. So far so good though 🙂

