NAKIVO Community Forum

DaniAvni


  1. @Mario, this was also the answer I got, but I do not agree that it is on the MS/VMware side. I have a VM that is basically a Windows logging server. It writes about 200GB/day in new logs and deletes events from older files as they expire. The SSD this resides on is very heavily fragmented, with some log files split into over 60,000 fragments. I know there are about 200GB a day in file changes because when I was using Arcserve with a 4KB block size, it had to back up about 200GB; and because it was installed inside the Windows OS (not using CBT), it knew exactly which blocks to read from the disk. With NAKIVO, instead of a 200GB backup I get a 1.2TB backup, because the changed files are split across so many blocks and each of those blocks is made unique by the change, so NBR, despite deduplication, still needs to back up about 1.2TB of data. Support initially suggested I defrag the disk, which is not an option since it is an SSD, and even if I did it once, the files keep changing, so it would quickly return to this fragmented state. I suggested to support that they add support for different backup block sizes (like Competitor and others do). This is now in the hands of the dev team and hopefully will be added sooner rather than later. Support also mentioned they were not aware of this issue and that I was the first customer to complain. While I do not know if that is accurate, I think pressure from other customers might help speed the development of a solution.
  2. I'm experiencing something with NBR that I want to see if others are hitting too. Regardless of whether I am backing up a VM or a physical machine, because the machine is on an SSD, the files on the disk were never defragmented. Over time this leaves the files fragmented, with many "holes" between them, and a new temporary file can be split into many pieces across the disk. As the NBR backup runs, it looks for changed blocks (each block, as I was told, is 4MB). So even on a Windows machine that has sat idle for 24 hours (and merely updated a few files such as antivirus definitions), the backup can be as large as 100GB when in reality almost nothing changed: the few files that did change are spread across so many 4MB blocks that a lot of 4MB blocks must be read, and each is unique because it contains a small changed piece. As I am using an SSD, running disk defrag is not really an option, and I am already working with NAKIVO support on this (perhaps reducing the block size, or some other solution), but I was wondering: how do others experience backup size and deduplication performance?
  3. I took a look at what Arcserve is doing: they do not back up the pagefile and hiberfil contents, but they do back up each file's location in the MFT so that the file returns to the same place (only the content is replaced). Any solution is valid as long as the contents of these files do not waste space in my backups. My machine, with almost no changes on disk, can still produce a 100GB backup just because the page file moves, the antivirus updates, and my SSD is fragmented (making a lot of disk blocks appear changed).
  4. @Mario, I thought about that (thanks!), but the problem is that once I start excluding drives, I need to do it across all physical machines, and if I add a drive, it will not be included automatically. A much better solution is for the backup software to exclude these files itself.
  5. @Official Moderator, I think that while it is true that the page file serves as additional memory, it should not be backed up on a physical machine. The file's contents are reset when the machine boots, and the file itself is not usable: you cannot restore it or use its contents in any way (Windows blocks access to pagefile.sys, so you could not restore it anyway). I would say the same applies to hiberfil.sys, the Windows hibernation file. On my machine (32GB RAM) the page file is about 13GB, which is a waste of backup space and time, as it is not usable; and even if you skip the file during restore, Windows will create it again.
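The amplification described in the posts above (roughly 200GB of changes becoming a 1.2TB backup) can be sketched with a scaled-down toy model. The disk size, fragment size, and random layout below are my own assumptions, not NAKIVO internals; the point is only that with a fixed 4MB block size, every block overlapping any changed fragment is read and stored in full, so scattered small changes inflate the backup far more than a 4KB block size would:

```python
# Toy model (my own sketch, not NAKIVO's actual algorithm) of backup-size
# amplification on a fragmented disk: any backup block that overlaps a
# changed fragment must be re-read and stored whole, and fragmentation
# makes nearly every such block unique, defeating deduplication.
import random

DISK_BYTES = 2 * 2**30        # toy 2 GiB volume (scaled-down disk)
CHANGED_BYTES = 200 * 2**20   # ~200 MiB of genuine changes per "day"
FRAG_SIZE = 256 * 2**10       # assumption: changes land in 256 KiB fragments
FRAGMENTS = CHANGED_BYTES // FRAG_SIZE

def backed_up_bytes(block_size, seed=1):
    rng = random.Random(seed)  # fixed seed: reproducible fragment layout
    touched = set()
    for _ in range(FRAGMENTS):
        start = rng.randrange(0, DISK_BYTES - FRAG_SIZE)
        first = start // block_size
        last = (start + FRAG_SIZE - 1) // block_size
        touched.update(range(first, last + 1))  # whole blocks are re-read
    return len(touched) * block_size

MB = 2**20
print(f"changed data : {CHANGED_BYTES / MB:.0f} MiB")
print(f"4 MB blocks  : {backed_up_bytes(4 * MB) / MB:.0f} MiB backed up")
print(f"4 KB blocks  : {backed_up_bytes(4 * 2**10) / MB:.0f} MiB backed up")
```

With these assumed numbers, the 4KB block size backs up roughly what actually changed, while the 4MB block size reads several times as much, mirroring the 200GB-to-1.2TB ratio reported above.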
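The automatic exclusion requested in posts 3-5 is small to express in code. This is a hypothetical sketch (my own illustration, not an NBR feature) of a filter that skips the volatile Windows system files at any volume root, on every machine, with no per-drive exclusion lists; swapfile.sys is included on the same reasoning, since Windows recreates it at boot:

```python
# Hypothetical backup-inclusion filter of the kind requested above.
# These files are recreated by Windows at boot, so their contents are
# dead weight in a backup and cannot usefully be restored.
from pathlib import PureWindowsPath

SKIP_NAMES = {"pagefile.sys", "hiberfil.sys", "swapfile.sys"}

def should_back_up(path: str) -> bool:
    p = PureWindowsPath(path)
    # Skip only when the file sits at a volume root, e.g. C:\pagefile.sys;
    # p.parts is ('C:\\', 'pagefile.sys') in that case.
    return not (len(p.parts) == 2 and p.name.lower() in SKIP_NAMES)

print(should_back_up(r"C:\pagefile.sys"))     # False
print(should_back_up(r"D:\hiberfil.sys"))     # False
print(should_back_up(r"C:\Users\log.txt"))    # True
```

A real implementation would still need to preserve each file's MFT record, as post 3 notes Arcserve does, so the file lands back in the same place on restore with only its contents dropped.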