NAKIVO Community Forum

Deduplication & Backup size


Recommended Posts

I'm experiencing something with NBR that I want to see if others are hitting as well. Whether I am backing up a VM or a physical machine, because the machine is on an SSD the files on the disk were never defragged. Over time this left the files fragmented, with many "holes" between them, so a new temporary file can be scattered across many pieces of the disk. As an NBR backup runs, it looks for changed blocks (each block, as I was told, is 4 MB). So even on a Windows machine that has sat idle for 24 hours (and only updated a few files, like antivirus definition files and similar), the backup can be as large as 100 GB when in reality almost nothing changed. To be specific: the few files that did change are spread across so many 4 MB blocks that a large number of blocks get read, and each one is unique because it contains a small changed piece.
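To make the effect concrete, here is a minimal sketch (not NAKIVO code; the block size and the change pattern are assumed purely for illustration) of how a 4 MB tracking granularity turns a few megabytes of scattered changes into gigabytes of incremental backup:

```python
# Illustrative sketch: estimate how block-level change tracking amplifies
# the incremental backup size when small changes are scattered across blocks.
BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB tracking blocks, as described above

def incremental_size(changed_ranges, block_size=BLOCK_SIZE):
    """Each changed (offset, length) range dirties whole blocks;
    return the total bytes the backup must read and store."""
    dirty_blocks = set()
    for offset, length in changed_ranges:
        first = offset // block_size
        last = (offset + length - 1) // block_size
        dirty_blocks.update(range(first, last + 1))
    return len(dirty_blocks) * block_size

# Hypothetical change pattern: 1,000 fragments of 4 KB each,
# scattered 100 MB apart on a fragmented disk.
changes = [(i * 100 * 1024 * 1024, 4 * 1024) for i in range(1000)]
print(incremental_size(changes) / 2**30, "GiB")  # ~3.9 GiB for ~4 MB of real change
```

The point of the sketch: the real change here is about 4 MB of data, but because every touched 4 MB block must be backed up whole, the incremental balloons by roughly 1000x.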

 

Since I am using an SSD, running a disk defrag is not really an option. I am already working with NAKIVO support on this (perhaps reducing the block size, or some other solution), but I was wondering: how do others experience backup size and deduplication performance?


Hi DaniAvni

I often see the same problem on VM servers. The answer from support is always that they rely on the Microsoft / VMware changed block tracking driver and that it's not a NAKIVO issue.

Please keep us up to date if you get feedback from support.

 

 


@Mario, that was also the answer I got, but I do not agree it is on the MS/VMware side. I have a VM that is basically a Windows logging server. It writes about 200 GB/day in new logs and deletes events from older files as they expire. The SSD this resides on is very heavily fragmented, with some log files split into over 60,000 fragments. I know there are about 200 GB a day in file changes because when I was using Arcserve with a 4 KB block size, it had to back up about 200 GB; since it was installed inside the Windows OS (not using CBT), it knew exactly which blocks to read from the disk. Now with NAKIVO, instead of a 200 GB backup I get a 1.2 TB backup: the changed files are split across so many blocks, and each touched block becomes unique because of the change, so despite deduplication NBR still needs to back up about 1.2 TB of data.
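A back-of-envelope check of that amplification, using the numbers from this post (60,000 fragments per file, 4 KB vs. 4 MB tracking blocks; the one-block-per-fragment floor is my own simplifying assumption):

```python
# Back-of-envelope check of the amplification described above (illustrative).
fragments = 60_000             # fragments in one heavily split log file, as cited
small_block = 4 * 1024         # 4 KB tracking granularity (Arcserve, per the post)
big_block = 4 * 1024 * 1024    # 4 MB tracking granularity (NBR, per the post)

# Assumption: each fragment dirties at least one whole tracking block, so the
# floor on the incremental is fragments * block_size, however little data changed.
floor_small = fragments * small_block / 2**30   # GiB
floor_big = fragments * big_block / 2**30       # GiB
print(f"4 KB blocks: >= {floor_small:.2f} GiB")   # ~0.23 GiB
print(f"4 MB blocks: >= {floor_big:.2f} GiB")     # ~234 GiB from one file alone
```

With 4 KB blocks the fragmentation overhead is negligible; with 4 MB blocks a single heavily fragmented file can force hundreds of gigabytes into the incremental, which is consistent with a 200 GB daily change turning into a 1.2 TB backup.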

 

Support initially suggested I defrag the disk, which is not an option since it is an SSD; and even if I did it once, the files keep changing, so it would quickly return to this fragmented state.

I suggested to support that NBR support different block sizes for backup (as Competitor and others do). This is now in the hands of the dev team, and hopefully it will be added sooner rather than later. Support also mentioned they were not aware of this issue and that I was the first customer to complain. While I do not know if that is accurate, I think pressure from other customers might help speed up the development of a solution.


Hello, @DaniAvni!

Due to the agentless architecture used for the VMware backups mentioned in this request, NAKIVO uses VMware CBT as the primary change-tracking source.

In some cases, VMware CBT can provide incorrect data, or, as described in this case, the impact of heavy disk fragmentation on the backed-up data can be more significant than expected. This behavior is expected for change tracking based on changed blocks. The backup size in this case is not related to the repository block size (4 MB) and does not affect the amount of transferred data. NAKIVO does not affect disk fragmentation inside the VM and gets the data "as is." If VMware CBT provides incorrect data, the "NAKIVO Proprietary CBT" or "Double-check changed blocks provided by CBT" options can be used. Proprietary change tracking is also based on tracking changed data blocks, so if many blocks are changed, it can affect the final backup size.

Currently, the simplest way to reduce the backup size is to reduce the number of changed blocks. Nevertheless, our Development team is working on improving change tracking to provide you with the best solution.

Should you need any further information, please do not hesitate to contact us.

