NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation on 12/06/23 in all areas

  1. The current software doesn't give us an option to control the number of synthetic full backups; the best I can find is to create a synthetic full every month. Most of our clients are required to store backup data for at least one year, and some even require backups to be kept for 3-10 years. With the current options, one-year retention will create 12 synthetic full backups. Even though a synthetic full doesn't actually read data from the protected server (which is what saves backup time), it still constructs a full backup from the previous incrementals and eats up storage space. If one full backup takes 1 TB, one-year retention means we need at least 12 TB to store the backups (a rough storage calculation is sketched after this list). Can Nakivo add a feature to set the maximum number of synthetic full backups, so we can control how many full backups sit in the backup storage? We have clients with large file servers where a full backup is more than 3 TB.
    1 point
  2. We're using Nakivo 10.8.0. It's provided as a service by our hosting provider: they maintain the console, we have a dedicated transporter, and we colocate a Synology NAS for storage of our backups. I have a file server backup job that requires about 3.75 TB for a full backup and generates incrementals of a couple hundred GB per night. The job is set to run a full every weekend and incrementals during the week, and I have enough space on the NAS to keep about a month's worth of nightly recovery points. The retention settings currently defined on the job will eventually exceed my available space, but I haven't hit the wall yet.

     I'd like to start using a backup copy job to S3 to stash extra copies of backups and build a deeper archive of recovery points: about two weeks' worth of nightly recovery points in S3, then 2 more weeks of weekly recovery points, then 11 more monthly recovery points, and finally a couple of yearly ones. I built a backup copy job with those retention settings and ran it. The data copy to S3 ran at a reasonable fraction of the available bandwidth from my provider. However, once the first backup was copied up to S3 in 700+ 5 GB chunks, it took an additional 60+ HOURS to "move" the chunks from the transit folder to the folder that contained that first backup. All told, it was 100 hours of "work" for the transporter, and only the first 35 or so were spent actually moving data over the internet from our datacenter to S3.

     Either I've failed spectacularly at setting up the right job settings (a possibility I'm fully prepared to accept - I just want to know the RIGHT way), or the Nakivo software uses a non-optimal set of API calls when copying to S3 (there's a generic sketch of the S3 "move" behaviour after this list), or S3 is just slow. I've read enough case studies to believe that S3 can move hundreds of TB per hour if you ask it correctly, so 60 hours to move less than 5 TB of data seems like something is wrong.

     So, my questions are these: what's the best way to set the retention settings on both my backup job and my backup copy job so that I keep what I want to keep, and how do I optimize the backup copy settings to minimize the time spent moving data around? The bandwidth from my datacenter to S3 is what it is, and I know a full backup is going to take a while to push up to Uncle Jeff, but once the pieces are in S3, what can be done to make the "internal to AWS" stuff go faster?
    1 point
  3. Is there any way of having this detail added to the messages in the Director? I'd hate to leave debugging turned on permanently to have visibility of this.
    1 point
  4. I think Nakivo should change the way the software gathers data from the server. If the software lets you exclude files from the file system, that means it is looking at data changes in each individual file. This is the slowest method for backup, because the process has to walk through every file in the file system to determine what changed. I think that is the main reason a physical server backup takes so long even when only a few files have changed: most of the time is spent walking files that haven't changed at all. The better approach is block-level backup. The software should take a snapshot of the data blocks at the disk level instead of at the individual file level: divide the disk into many blocks (millions or billions of them), check each block for changes, and back up only the changed blocks (a simple sketch of this idea appears after this list). This method doesn't care which files have changed; whenever a file changes, the blocks it resides in change, and the backup software just backs up those blocks. This should be the best way for Nakivo to speed up the backup process. I normally put the page file, the printer spooler folder, and any other files I don't want to back up on a separate volume or disk in Windows, then skip that disk/volume in the backup job to reduce the amount of changed data that needs to be backed up.
    1 point
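
Regarding post #1 above: the storage math in that request can be made explicit. The following is a minimal sketch under the poster's own premise that each retained synthetic full occupies its full size on disk (i.e. ignoring any deduplication between backups); the function name and figures are illustrative and not taken from the NAKIVO product.

    def estimate_repo_size_tb(full_size_tb, fulls_retained,
                              incr_size_tb=0.0, incrementals_retained=0):
        """Rough repository size: retained fulls plus retained incrementals.
        Assumes each retained synthetic full costs its full size on disk and
        ignores deduplication between backups (the poster's premise)."""
        return (full_size_tb * fulls_retained
                + incr_size_tb * incrementals_retained)

    # Poster's example: 1 TB full, monthly synthetic full, one-year retention
    # -> 12 fulls kept, at least 12 TB for the fulls alone.
    print(estimate_repo_size_tb(full_size_tb=1.0, fulls_retained=12))   # 12.0

    # The 3 TB file servers mentioned at the end of the post:
    print(estimate_repo_size_tb(full_size_tb=3.0, fulls_retained=12))   # 36.0

    # With a hypothetical cap of two synthetic fulls per chain:
    print(estimate_repo_size_tb(full_size_tb=3.0, fulls_retained=2))    # 6.0

The point of the requested cap shows up in the last two lines: with 3 TB fulls, going from 12 retained fulls to 2 is the difference between 36 TB and 6 TB of full-backup footprint before incrementals are even counted.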
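
Regarding post #2: I can't speak to how the NAKIVO transporter actually implements the transit-folder step, but one relevant fact about S3 is that it has no rename or move operation. "Moving" an object is a server-side copy followed by a delete, and objects larger than 5 GB have to be copied in parts (multipart copy). The sketch below is generic boto3 with made-up bucket and prefix names; it shows what such a move looks like and why parallelism, rather than internet bandwidth, usually decides how long it takes, since server-side copies never leave AWS.

    # S3 has no rename/move: "moving" an object is a server-side copy
    # followed by a delete. Moving 700+ chunks one at a time means 700+
    # sequential copy calls, which is one plausible reason a "move" phase
    # can take far longer than the upload itself.
    import boto3
    from concurrent.futures import ThreadPoolExecutor
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")
    BUCKET = "my-backup-bucket"                 # hypothetical
    SRC_PREFIX = "transit/job-1234/"            # hypothetical transit folder
    DST_PREFIX = "backups/job-1234/"            # hypothetical final folder

    # Objects larger than 5 GB must use multipart copy; the managed copy()
    # call handles that automatically once a threshold is configured.
    cfg = TransferConfig(multipart_threshold=5 * 1024**3,
                         multipart_chunksize=512 * 1024**2,
                         max_concurrency=8)

    def move_object(key):
        dst_key = DST_PREFIX + key[len(SRC_PREFIX):]
        # Server-side copy: no data leaves AWS, but it is still one
        # (possibly multipart) API operation per object.
        s3.copy({"Bucket": BUCKET, "Key": key}, BUCKET, dst_key, Config=cfg)
        s3.delete_object(Bucket=BUCKET, Key=key)

    keys = []
    for page in s3.get_paginator("list_objects_v2").paginate(
            Bucket=BUCKET, Prefix=SRC_PREFIX):
        keys += [obj["Key"] for obj in page.get("Contents", [])]

    # Running the copies in parallel is where most of the speed-up comes from.
    with ThreadPoolExecutor(max_workers=16) as pool:
        list(pool.map(move_object, keys))

If copies like these are issued one at a time, 700+ chunks can easily stretch into many hours even though no data crosses the internet. Whether that is what happens inside the transporter is a question for NAKIVO support, as is the retention-settings half of the post.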
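
Regarding post #4: below is a toy illustration of the block-level idea described there, independent of any particular backup product. It hashes a disk image in fixed-size blocks and reports which blocks changed since the previous run. Real changed-block tracking is done by a hypervisor or a volume driver recording writes as they happen rather than by rescanning, but the sketch shows why the approach is indifferent to how many files changed. The paths and block size are made up.

    # Toy block-level change detection: hash fixed-size blocks of a disk
    # image and compare against the previous run's hashes.
    import hashlib
    import json
    import os

    BLOCK_SIZE = 1024 * 1024             # 1 MiB blocks (illustrative)
    IMAGE = "/tmp/disk.img"              # hypothetical disk image / device
    STATE = "/tmp/disk.img.hashes.json"  # previous run's block hashes

    def block_hashes(path):
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    current = block_hashes(IMAGE)
    previous = []
    if os.path.exists(STATE):
        with open(STATE) as f:
            previous = json.load(f)

    # Only blocks whose hash differs (or which are new) need to be backed up,
    # no matter how many or how few files those blocks belong to.
    changed = [i for i, h in enumerate(current)
               if i >= len(previous) or previous[i] != h]
    print(f"{len(changed)} of {len(current)} blocks changed")

    with open(STATE, "w") as f:
        json.dump(current, f)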