NAKIVO Community Forum

All Activity


  1. Last week
  2. The Halloween season is upon us! We know how busy sysadmins are; no matter the time of year, you’re still on call. That’s why we want to help you choose a costume to wear this year. Relax, take the quiz, and find out your perfect costume match! https://quiz.tryinteract.com/#/5f858ef5443f5800149ffb20
  3. Hello Leezy, thank you for letting us know about your issue. Please perform the following: - Check that the physical disks on the server where the backup repository is located are healthy and not overloaded. - Run a file system check on the server where the backup repository is located. Please stop the NAKIVO services before running the file system check. - Make sure that no firewall/antivirus/Defender/third-party software blocks the connection to the repository during jobs (antivirus software can delete some files in the repository). - Please run Backup Verification to make sure that the repository...
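     As a rough illustration of the first two checks, here is a minimal Python sketch that reports the free space on the repository volume and times a small write/read probe; unusually long times can hint at an overloaded disk. The repository path is a placeholder and this is not a NAKIVO tool, just a sanity check you could run yourself.

```python
# Rough, hypothetical health probe for the repository volume (not a NAKIVO tool).
# REPO_PATH is a placeholder for the backup repository mount point.
import os
import shutil
import time

REPO_PATH = "/mnt/nakivo-repository"   # placeholder path

usage = shutil.disk_usage(REPO_PATH)
print(f"free space: {usage.free / 2**30:.1f} GiB of {usage.total / 2**30:.1f} GiB")

# Crude write/read latency probe: very long times can hint at an overloaded disk.
probe = os.path.join(REPO_PATH, ".nakivo_probe.tmp")
payload = os.urandom(8 * 1024 * 1024)  # 8 MiB test payload

start = time.monotonic()
with open(probe, "wb") as f:
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())
write_s = time.monotonic() - start

start = time.monotonic()
with open(probe, "rb") as f:
    f.read()
read_s = time.monotonic() - start

os.remove(probe)
print(f"8 MiB write: {write_s:.2f}s, read: {read_s:.2f}s")
```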
  4. Earlier
  5. Is there any proper KB article that shows how to fix an inconsistent repository? I am getting the following errors: [10/24 16:15:12.844] WRN: Backup repository with not finished consolidation... [10/24 16:16:07.801] LOG: Version 10.1.0.r47251 [10/24 16:16:07.935] LOG: Backup repository "bh-mount/a767cdfc-a844-4abe-97b5-ea2819c2be5a/NakivoBackup": [10/24 16:16:08.105] DBG: Missmatching repository allocated size 628502294336 vs. 628502294216... [10/24 16:16:08.177] WRN: Backup repository with not finished consolidation... [10/24 16:16:12.384] LOG: Version 10.1.0.r47251 [10/24 16:16
  6. Hi Bedders, thank you for your update. I hope you have a great experience with NAKIVO Backup & Replication.
  7. My apologies, I forgot to reply here. Updating for anyone in the future: we upgraded NAKIVO B&R to v10.0.0, which resolved the issue. The VMs are now all on Server 2019 hosts, and boot verification is working as expected.
  8. Hello Zhiyuan, NAKIVO has its own detailed logging (the log folder inside the product installation path). You can check the logs yourself, but it is better to send a support bundle to our Support team for further investigation (https://helpcenter.nakivo.com/display/NH/Support+Bundles). For bottleneck issues, you may use the following instructions: https://helpcenter.nakivo.com/display/KB/How+to+Identify+a+Bottleneck Thank you
  9. Hi NAKIVO, is there any way the NAKIVO console could provide more logging during the backup process? That would help us read the logs so that we can try troubleshooting ourselves first. It would be nice to see the logs or the progress in NAKIVO while a selected backup job is executing; this would show which step the backup has reached and where the issue is. Bottleneck monitoring would be great too... Regards, zhiyuan
  10. We need to analyze your log files to investigate why you encountered this issue. Please generate and send a support bundle (https://helpcenter.nakivo.com/display/NH/Support+Bundles) to support@nakivo.com so our 2nd Level Support team can further investigate the issue you've experienced and suggest a proper solution. Also, a networking diagram would be really useful.
  11. OK, I deployed the transporters but I see no change. NAKIVO is installed locally on storage1, which provides a datastore via NFS to the 2 ESXi hosts. I also have another storage2 that provides a datastore via NFS to the 2 ESXi hosts, but I'm not using it for this test. I have VMs on esxi1 with datastore1, and when I back them up, they run at very low speed and use the network. All NAKIVO has to do is copy the snapshots locally, as the datastore and NAKIVO are on the same server. What am I missing?
  12. "Self-healing" and "Verification" guarantee that all backup data in the repository is in the same consistent state as during the backup: NAKIVO compares the checksums of blocks that were calculated during the backup with the actual repository data. In this case, file-level recovery should work fine. The only possible issue is the bootability of the VM, for example if there is a problem with the boot sector. It could also happen if OS updates were installed during the backup. In this case, we would recommend using the automated Screenshot verification (or Boot verification) feature or manually running F
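     For anyone who wants a mental model of what such a verification pass does, here is a minimal, hypothetical Python sketch of block-level checksum verification. It is not NAKIVO's actual implementation; the block size, hash algorithm, and digest storage are assumptions for illustration only.

```python
# Hypothetical sketch of block-level backup verification (not NAKIVO internals).
# Assumptions: 4 MiB blocks, SHA-256 digests, and a dict of digests recorded at backup time.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # assumed block size

def block_digests(path):
    """Yield (block_index, sha256_hex) for every block of a backup data file."""
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            yield index, hashlib.sha256(block).hexdigest()
            index += 1

def verify(data_file, recorded_digests):
    """Return indexes of blocks whose current digest no longer matches the recorded one."""
    return [i for i, digest in block_digests(data_file)
            if recorded_digests.get(i) != digest]
```

     An empty result means the repository data still matches what was written during the backup; any listed blocks would need to be repaired, which is roughly what a self-healing pass addresses.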
  13. In this case, we would recommend the following NAKIVO layout: 1. Deploy a NAKIVO Transporter on both ESXi hosts to be able to use the Hot Add data transfer mode. 2. Install a NAKIVO Transporter on the NFS storage, if it is supported there. Both involved Transporters will then perform direct read and write operations on the source and target sides respectively and will also have a direct connection to each other. For a detailed analysis, please send a support bundle (http://www.nakivo.com/helpcenter/display/NH/Support+Bundles) and the following information to support@nakivo.com: - in w
  14. Do the "Self-healing" and "Verification" tasks guarantee that the backups are restorable?
  15. Hello! In this case, we suggest you check the following article on Physical machine requirements - https://helpcenter.nakivo.com/display/NH/Supported+Platforms#SupportedPlatforms-PhysicalMachineRequirements If nothing helps, please send us a support bundle (https://helpcenter.nakivo.com/display/NH/Support+Bundles) for further investigation.
  16. Alright, let me explain my scenario. I have a couple of ESXi hosts with 6 VMs on each. I have 2 NFS Linux storages (storage1 and storage2), and NAKIVO runs locally on one of those storages (storage2). Right now, if I want to back up a VM, it goes something like this: the VMs get snapshotted, and new files are created on their respective storages. NAKIVO starts copying the files to the repository, BUT it goes like this: storage1 -> esxi host -> nakivo repository (storage2); storage2 -> esxi host -> nakivo repository (storage2). Instead of going
  17. Hi, I installed NAKIVO on a CentOS 7 Linux machine. Next, I want to create a backup of this machine. Adding the transporter works, but after that: An internal error has occurred. Reason: There was an error when collecting the host info. Error: Internal error (PhysicalDiscoveryServiceException). Machine: CentOS 7, 8 GB DDR3 RAM, two RAID
  18. We’re happy to announce the release of NAKIVO Backup & Replication v10.1! The NAKIVO team has added two long-anticipated features, OneDrive for Business Backup and HPE StoreOnce Catalyst Integration. Download the Free Trial and see the new functionality in action! https://www.nakivo.com/resources/releases/v10.1/
  19. Based on the described error, we recommend checking whether ports 9445 and 9446 are open on the affected Linux machine: NAKIVO first tries to connect to the agent and then to the installed Transporter service on these ports. Also, please make sure that all requirements for a Linux physical machine are met - https://helpcenter.nakivo.com/display/NH/Supported+Platforms#SupportedPlatforms-PhysicalMachineRequirements In case it still fails, please generate a support bundle (http://www.nakivo.com/helpcenter/display/NH/Support+Bundles) and send it to support@nakivo.com for further i
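     As a quick way to confirm those ports are reachable before collecting logs, here is a small sketch using only the Python standard library; the host name is a placeholder for the affected Linux machine.

```python
# Quick reachability test for the agent/Transporter ports mentioned above.
# "linux-host.example.com" is a placeholder; use the affected machine's address.
import socket

HOST = "linux-host.example.com"
PORTS = (9445, 9446)

for port in PORTS:
    try:
        with socket.create_connection((HOST, port), timeout=5):
            print(f"{HOST}:{port} is reachable")
    except OSError as exc:
        print(f"{HOST}:{port} is NOT reachable: {exc}")
```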
  20. I cannot add a Linux physical server to the inventory; when adding, it gets to 35% and then I get the error below. XXX: Cannot add the machine. An internal error has occurred. Reason: The deployed agent is inaccessible. (TransporterServiceException). Any idea about this error? Is a root account that can log in via SSH required?
  21. Please generate and send a support bundle (https://helpcenter.nakivo.com/display/NH/Support+Bundles) to support@nakivo.com with the description you put in the comment, so we can create a feature request and send it to our Product Development team for possible implementation.
  22. There are 3 options for overwrite behavior: 1. Rename if the item exists. 2. Skip if the item exists. 3. Overwrite the original if the item exists. Can we request a few more options? 1. Skip files older than a specific date. Also, please fix "skip if item exists": the current option only compares the folder name; it does not go into the files inside to check whether they exist. There should be an option to choose between skipping a folder if it exists and skipping individual files if they exist. That would be helpful for us when doing recovery. Thanks.
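     To make the request concrete, here is a hypothetical Python sketch of the decision logic being asked for: a per-file check for "skip if exists" and a "skip files older than a specific date" option. The policy names, paths, and cut-off date are examples only, not NAKIVO functionality.

```python
# Hypothetical illustration of the requested per-file restore options.
# Not NAKIVO code: policy names, paths, and the cut-off date are examples only.
import os
import shutil
from datetime import datetime

def should_skip(src_mtime, dest_path, policy, cutoff=None):
    """Decide whether to skip restoring one file, checking files rather than folders."""
    if policy == "skip_if_older_than" and cutoff is not None:
        return src_mtime < cutoff              # backup copy predates the cut-off date
    if policy == "skip_if_exists":
        return os.path.exists(dest_path)       # per-file check, not just the folder name
    return False

def restore_file(src_path, dest_path, policy, cutoff=None):
    src_mtime = datetime.fromtimestamp(os.path.getmtime(src_path))
    if should_skip(src_mtime, dest_path, policy, cutoff):
        return "skipped"
    if os.path.dirname(dest_path):
        os.makedirs(os.path.dirname(dest_path), exist_ok=True)
    shutil.copy2(src_path, dest_path)
    return "restored"

# Example: skip any file whose backup copy is older than 1 October 2020.
# restore_file("/backup/docs/report.txt", "/restore/docs/report.txt",
#              "skip_if_older_than", cutoff=datetime(2020, 10, 1))
```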
  23. With the current NAKIVO workflow, the Director needs access to all ESXi hosts and Transporters for correct management. If the Transporter is deployed on the customer side, then yes, all the data will be transferred inside the customer's site (without using the WAN), but the Director will still need access to the ESXi host and the Transporter. Generally, it is possible to do this in two ways: - configure port forwarding on the customer's firewall; - configure a VPN to the customer's site to have full network access.
  24. Hi, we are an IT services company and we would like to deploy multi-tenant mode in our service to resell to our clients. The problem is that we need a direct connection from ESXi to the multi-tenant Director, and not all clients have a firewall, or they only allow this connection to our servers. For us, the correct implementation would be to install a "transporter/agent/whatever" on the client side; this software would connect to ESXi and then back up to their local infrastructure or upload to the multi-tenant repository that we can configure. With this, the client would not need to open ports, configure a VPN, etc. Only in
  25. Hello! Could you please clarify if you get any error in this case? Please specify the names of affected VMs and send us a new support bundle so that we could check this case further.
  27. Hello! It looks like the FUSE driver was used to mount the folder locally. In our experience, it performs poorly because that driver works in user space.