NAKIVO Community Forum

Mario · Members · Posts: 40 · Days Won: 19
Everything posted by Mario

  1. Hello, I'm in the middle of migrating to "Tenant Resource Allocation". Now my 3 repos are offline and I can only remove them; I don't have the option to attach them again. How can I resolve this situation? Thanks, Mario
  2. Hi Nakivo, since I had a maintenance window on the backup system today, I went ahead with the update. The update is done and all systems are working. Well done! I'm running multi-tenant and I like the changes around "tenant allocation"! Just some questions: Can I use the new settings only for new tenants? Is there any way to migrate an existing tenant? And what is "Maintenance scheduling", as seen in the changelog? Thanks, Mario
  3. Hi Nakivo, would it be possible to add an MFA option for the web interface? I have some customers where MFA is required for the backup. Thanks, Mario
  4. Just another question about v10: the blog had a description of File System Indexing (https://www.nakivo.com/blog/nakivo-backup-replication-v10-beta-released/). Did this make it into the final v10 release? Mario
  5. Thank you for the information. In the release notes of v10 I found: "Critical Issue of NAKIVO Backup & Replication v9.4: The issue with the Skip unused blocks feature that in rare cases could potentially skip data blocks beyond unused space has been fixed in the current version. Users of NAKIVO Backup & Replication v9.3 and earlier versions are not affected by this issue. If you are using NAKIVO Backup & Replication v9.4, make sure you do the following: update NAKIVO Backup & Replication to v9.4.2 or v10.0, then re-run your jobs. Note that jobs with the Skip unused blocks feature enabled can take longer to complete." How do I know if I'm affected by this problem? Also, there is nothing about release v9.4.2 in the release notes. Currently I'm on v9.4.0.45189 and I don't want to move to v10 right away, so how do I get 9.4.2? Thanks, Mario
  6. @Leezy any result from support on this topic? I had this problem once, but it was a false positive from Defender on the Nakivo repo. Bye, Mario
  7. Hi, on an ESXi server I had trouble with the snapshot of one VM (it also happens when I take snapshots from the VMware interface, so it's not a Nakivo issue). Since I can't take the server offline, I removed the VM from the backup job but left its data on the repository. Then I added the VM directly as a "physical server" to my inventory and created a new backup job to the same repo. Before I started the new job I had 1.03 TB of free space; the VM has a disk size of 200 GB, with about 150 GB used. After the new backup I had 900 GB of free space (100 GB used by the new backup; a way to verify such numbers is sketched after this list). According to your website, the product offers "Global deduplication for economical storage space utilization". So why does a second backup of the same VM take this much space? I'm just using a different way to back up the same data... Regards, Mario
  8. Yes, I do the full install (mostly transporter-only, since I have a multi-tenant setup) and attach the storage from the NAS as a "Remote CIFS share".
  9. Hi Simon, I agree with Official Moderator; it also depends on the CPU power of the devices involved. Are you sending from a server or NAS to a server or NAS? In the beginning I was running the Transporter on the Synology NAS, but that was a bottleneck. Now I use the Hyper-V/ESXi host as the Transporter whenever possible and attach the storage over SMB. This has increased the processing speed (a quick way to test the storage path itself is sketched after this list). Mario
  10. Hi, I don't like or understand the way Nakivo rates some errors. For example, when I have set up the self-healing/reclaim job on the repo and enabled the option "Stop backup and recovery to run space reclaim", why do I get an "alarm" if a backup job can't run during that window? I made the decision to prioritize reclaiming, so this should only be a notification. Likewise, if a periodic job has an error but everything is OK at the next run, that should only be a notification. Does anyone agree with me?
  11. I'm in the middle of deployment right now (since it's multi-tenant, this is a lot of work). So far everything is OK. One strange thing I've discovered with remote replication: the upload speed looks strange in v9.1. In v9.0 it was this way... But I need to check later. Mario
  12. @Official Moderator: Do you have a release date for 9.1? I agree with Steve, it's not cool to get an update but no changelog. Thanks, Mario
  13. Hi Marco, I can confirm the very low replication speed with the new versions. I had a long-running support case with no real success. I got a new release (9.0.0.36029), and now Hyper-V is working as expected, but on VMware the speed is still extremely low. From support staff I also only get the "bottleneck" link, but that doesn't help at all! Did you have any success? Thanks for your feedback. Mario
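
Regarding the space numbers in post 7: here is a minimal sketch, assuming the repository lives on a filesystem mounted at a hypothetical path /mnt/nakivo-repo, of how to measure what a job actually consumed by sampling free space before and after the run. If global deduplication were kicking in, the difference should be far smaller than the VM's ~150 GB of used data.

```python
# Minimal sketch: measure how much repository space a backup job
# actually consumed by sampling filesystem free space before and
# after the run. The repo path below is a hypothetical example.
import shutil

REPO_PATH = "/mnt/nakivo-repo"  # hypothetical mount point of the repo

def free_gb(path: str) -> float:
    """Free space on the filesystem holding `path`, in GB (10^9 bytes)."""
    return shutil.disk_usage(path).free / 1e9

before = free_gb(REPO_PATH)   # e.g. ~1030 GB in the case above
input("Run the backup job, then press Enter...")
after = free_gb(REPO_PATH)    # e.g. ~900 GB in the case above

print(f"Space consumed by the job: {before - after:.1f} GB")
```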
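
Regarding the bottleneck discussion in post 9: a rough probe, assuming a Linux host with the SMB share mounted at a hypothetical /mnt/nas-share, to check what sequential write speed the storage path itself sustains, independent of Nakivo. If this number is already low, moving the Transporter around won't help.

```python
# Rough throughput probe: time a sequential write to a mounted share
# to see whether the storage path is the bottleneck. The target path
# is a hypothetical example; point it at your SMB mount.
import os
import time

TARGET = "/mnt/nas-share/throughput-test.bin"  # hypothetical mount
CHUNK = b"\0" * (4 * 1024 * 1024)              # 4 MiB per write
TOTAL_MB = 1024                                # write 1 GiB in total

start = time.monotonic()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                       # force data out to the share
elapsed = time.monotonic() - start

os.remove(TARGET)                              # clean up the test file
print(f"Sequential write: {TOTAL_MB / elapsed:.0f} MiB/s")
```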