NAKIVO Community Forum

Mario

Members
  • Content Count: 8
  • Joined
  • Last visited

Community Reputation: 5 Neutral


  1. Thank you for the explanation.
  2. Hi. On an ESXi server I had trouble with the snapshot of one VM (the snapshots also fail when I take them from the VMware interface, so it is not a NAKIVO issue). Since I can't take the server offline, I removed the VM from the backup job but left its data in the repository. Then I added the VM directly as a "physical server" to my inventory and created a new backup job to the same repository. Before I started the new job I had 1.03 TB of free space; the VM has a disk size of 200 GB with about 150 GB used. After the new backup I had 900 GB of free space (about 100 GB used by the new backup; see the rough arithmetic after this list). According to your website, the product offers "Global deduplication for economical storage space utilization". So why does a second backup of the same VM take this much space? I'm just using a different way to back up the same data... Regards, Mario
  3. Yes, I do the full install (mostly Transporter-only, since I have a multi-tenant setup) and attach the storage from the NAS as a "Remote CIFS share".
  4. Hi Simon, I agree with Official Moderator; it also depends on the CPU power of the devices involved. Are you sending from a server or NAS to a server or NAS? At the beginning I was running the Transporter on the Synology NAS, but this was a bottleneck. Now I use the Hyper-V/ESXi host as Transporter whenever possible and attach the storage over SMB. This has increased the processing speed. Mario
  5. Hi, I don't like (or understand) the way NAKIVO rates some errors. For example: when I have set up the self-healing/space reclaim job on the repository and enabled the option "Stop backup and recovery to run space reclaim", why do I get an "alarm" if a backup job can't run during this time? I made the decision to prioritize reclaiming, so this should only be a notification. Likewise, if a periodic job fails once but succeeds at the next run, that should only be a notification. Does anyone agree with me?
  6. I'm in the middle of deployment right now (since it's multi-tenant, this is a lot of work). So far everything is OK. One strange thing I've discovered with remote replication: the upload speed looks strange in V9.1. In V9.0 it was this way... But I need to check later. Mario
  7. @Official Moderator: Do you have a release date for 9.1? I agree with Steve; it's not cool to get an update but no changelog. Thanks, Mario
  8. Hi Marco, I can confirm the very low replication speed with the new versions. I had a long-running support case with no real success. I got a new release (9.0.0.36029), and now Hyper-V is working as expected. But with VMware the speed is still extremely low. From support staff I also only get the "bottleneck" link, but that doesn't help at all! Did you have any success? Thanks for your feedback, Mario
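
For context on the deduplication question in item 2, here is the rough arithmetic behind the reported figures (a sketch only; it assumes "1.03 TB" means decimal terabytes, and the numbers in the post are approximate):

    free space before the new backup   ≈ 1.03 TB ≈ 1030 GB
    free space after the new backup    =  900 GB
    consumed by the new backup         ≈ 1030 GB - 900 GB ≈ 130 GB
    data actually used on the VM disk  ≈  150 GB

The new backup therefore consumed on the order of the VM's full used data set (the post cites about 100 GB), rather than the near-zero growth one would expect if its blocks had deduplicated against the existing backup of the same VM already in that repository, which is exactly what the question is about.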