@Mario, this was also the answer I got, but I do not agree that the problem is on the MS/VMware side. I have a VM that is basically a Windows logging server. It writes about 200GB/day in new logs and deletes events from older files as they expire. The SSD this resides on is very heavily fragmented, with some log files split into over 60,000 fragments. I know there are about 200GB a day in file changes because when I was using Arcserve with a 4KB block size it had to back up about 200GB, and since it was installed inside the Windows OS (not using CBT), it knew exactly which blocks to read from the disk. Now with Nakivo, instead of a 200GB backup I get a 1.2TB backup: the changed files are split across so many fragments that each backup block they touch becomes unique, so NBR, despite deduplication, still needs to back up about 1.2TB of data.
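To make the amplification concrete, here is a minimal sketch of the effect (a deliberately simplified model, not Nakivo's actual change tracking; the volume size, write pattern, and block sizes are made-up illustrative numbers). It counts how much data a block-level backup has to read when the same amount of 4 KiB file writes lands scattered across fragments versus contiguously:

```
import random

def dirty_backup_bytes(write_offsets, write_size, block_size):
    # A block-level backup must read every block touched by any write.
    dirty_blocks = set()
    for off in write_offsets:
        first = off // block_size
        last = (off + write_size - 1) // block_size
        dirty_blocks.update(range(first, last + 1))
    return len(dirty_blocks) * block_size

random.seed(0)
disk_bytes = 256 * 1024**3   # 256 GiB volume (illustrative)
n_writes = 60_000            # one 4 KiB change per fragment
write_size = 4 * 1024

# Scattered: each 4 KiB write lands in a random (aligned) spot on the volume,
# mimicking a heavily fragmented file.
scattered = [random.randrange(disk_bytes // write_size) * write_size
             for _ in range(n_writes)]
# Contiguous: the same amount of changed data packed back to back.
contiguous = [i * write_size for i in range(n_writes)]

for label, offsets in (("scattered", scattered), ("contiguous", contiguous)):
    for block_kib in (4, 64, 1024):
        read = dirty_backup_bytes(offsets, write_size, block_kib * 1024)
        print(f"{label:10s} block={block_kib:>4d} KiB -> {read / 1024**3:6.2f} GiB read")
```

Under this toy model, roughly 0.23 GiB of actual writes dirties on the order of 50 GiB of 1 MiB blocks when scattered, versus about 0.23 GiB when contiguous, while a 4 KiB block size stays near the true changed amount in both cases. My real-world ratio is smaller (200GB ballooning to 1.2TB, about 6x) because the actual block size and fragment distribution differ, but the direction of the effect is the same.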
Support initially suggested I defrag the disk, which is not an option since it is an SSD, and even if I did it once, the files keep changing, so it would quickly return to this fragmented state.
I suggested to support that they add configurable block sizes for backups (as Competitor and others do). This is now in the hands of the dev team, and hopefully it will be added sooner rather than later. Support also mentioned that they were not aware of this issue and that I was the first customer to complain about it. While I do not know if that is accurate, I think pressure from other customers might help speed up the development of a solution.