NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 08/01/20 in Posts

  1. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  2. Here is how to fix it for QNAP. Log in to the QNAP via SSH as admin:
     cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
     ./nkv-bhsvc stop
     (Stopping NAKIVO Backup & Replication Transporter service)
     ./bhsvc -b "<UsePassword>"
     ./nkv-bhsvc start
     (Starting NAKIVO Backup & Replication Transporter service)
     Then, in a Chrome browser, open https://<qnapip>:4443 and go to Settings / Transporters / Onboard transporter / Edit. Press F12, go to Console, and enter the following JavaScript commands (paste them one by one and press Enter):
     var a=Ext.ComponentQuery.query("transporterEditView")[0]
     a.masterPasswordContainer.show()
     a.connectBtn.show()
     You get two new fields in the browser: a "Master password" field and a "Connect" button. In "Master password", enter <ThePasswordFromTheSSHCommand> (the same one used in the bhsvc -b "<UsePassword>" step above), press the Connect button, then refresh the Transporter and Repository. by Boris
    2 points
  3. Anyone else have problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups work correctly and their retention is correct, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both of my sites with tape libraries. (They expire 11 years from now.) I opened a ticket, but they were not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned them on and they work, but they didn't affect my problem with yearly backups, so for now I'm turning them off to see what happens with just weekly and monthly backups. These are my settings:
     Retain one recovery point per day for 7 days
     Retain one recovery point per week for 4 weeks
     Retain one recovery point per month for 12 months
     Retain one recovery point per year for 11 years
     Tape Job Options:
     Create Full Backup: Every Saturday
     Tape appending: Start full backup with an empty tape**
     **I found that now that daily backups are turned on, Nakivo will start writing daily backups to my full-backup tape before I get a chance to eject it. This is not desirable, but there is no option to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, which is also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and offsite only my weekend full backups.
    2 points
  4. In a multi-tenant environment, if the VPN drops for any reason (ISP problem, customer's AC problem, etc.), will the scheduled jobs start on time, or will they wait for the Director's command to start? Thanks and best regards
    2 points
  5. Hi Nakivo, is it possible to add an MFA option for the web interface? I have some customers where MFA is required for the backup. Thanks, Mario
    2 points
  6. The shortest way to integrate this without going vendor-specific is adding SAML 2.0. This would allow logins based on Microsoft 365 or other MFA vendors (a hypothetical sketch of the service-provider side follows below).
    2 points
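     A minimal sketch of what the service-provider side of a SAML 2.0 integration would publish so that an IdP such as Microsoft 365 / Entra ID could be configured against it (all URLs are hypothetical placeholders, not actual NAKIVO endpoints):
        <EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                          entityID="https://backup.example.com/saml">
          <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
            <!-- Where the IdP posts the signed SAML assertion after MFA -->
            <AssertionConsumerService index="0"
                Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
                Location="https://backup.example.com/saml/acs"/>
          </SPSSODescriptor>
        </EntityDescriptor>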
  7. I added 4 workstations, which were shut down for the night, to my inventory. Is it possible to stop the errors for the non-responding workstations, and the errors for their transporters (which fail the same way)? Or can I turn them into alerts so I won't receive any emails? For the workstation errors I receive an email every 2 hours, plus another email for the transporters. Have you considered creating, in the future, a dedicated section where we can configure alerts and error messages? It would be useful to be able to check for the presence of a workstation and its transporter only when a backup is due. I understand that a server should always be on, but a workstation may be turned off for the night, the weekend, or the holidays. As currently designed, the management of physical machines is unmanageable for an inventory with more than 3 or 4 workstations. The error messages generated in this case risk drowning out the really important ones (real transporters not responding, storage not responding, vCenters or other always-on resources not responding). Loris
    1 point
  8. Hello, I tried to contact Nakivo support by chat, but the person did not want to answer my questions, so I'm trying here. Our company has just acquired 100 Nakivo B&R Enterprise workloads. I installed the Director from the installation file in trial version. The trial was 10.2, and I have since updated it to 10.3; I found the update download page by chance (via a Google search). Everything is working and I have no particular problems. I would like to know the following: Is there a customer portal
     - that allows us to download Nakivo without going through the download form for trial versions?
     - that allows us to access a history of our support requests?
     Do the support technicians in Europe speak French? Thank you in advance for your help. Roland Belistan
    1 point
  9. After a successful backup of our (virtual) file server, if I test a recovery task I cannot find the data partition (BitLocker-encrypted); only the OS partition is available for granular file selection. Where am I going wrong? The backup size seems correct (1 TB), but I can see only 3 partitions; the most important one (the data partition) seems to have been skipped for some reason. Has anybody experienced this issue? Alex
    1 point
  10. Thanks for the response. Is your roadmap accessible online?
    1 point
  11. @Mike Spragg, thanks a lot for your contribution to our blog! Sharing is caring
    1 point
  12. OK, thanks, it's good.
    1 point
  13. That's not the relevant part - the "file" (as I can't attach it) is shown above - you only need to modify these lines:
     KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
     HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,rsa-sha2-256,rsa-sha2-512,ssh-rsa,ssh-dss
     Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
     MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1-96,hmac-sha1,hmac-md5-96,hmac-md5
     This adds the right KexAlgorithms, HostKeyAlgorithms, and Ciphers (the original shows what they are by default in 7.0U1). The file you need to modify is /etc/ssh/sshd_config (a short sketch of applying the change follows below).
    1 point
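     A minimal sketch of applying those changes on an ESXi host, assuming root SSH access (note that ESXi updates may overwrite sshd_config, so keep a copy of your edits):
        # Back up the current config before editing
        cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
        # Add/extend the KexAlgorithms, HostKeyAlgorithms, Ciphers and MACs lines shown above
        vi /etc/ssh/sshd_config
        # Restart the SSH service so the new algorithm lists take effect
        /etc/init.d/SSH restart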
  14. In HotAdd mode, virtual disks from the virtual machines being backed up are automatically mounted to the proxy, so they can be accessed by the proxy as local disks. The ESX host the proxy is running on must have access to all datastores of the virtual machine. If the virtual machine and the proxy are not on the same host, all datastores must be shared between the hosts, and the hosts must be within the same datacenter. If SAN mode isn't available, HotAdd mode can achieve performance close to SAN mode. To enable incremental backups of virtual disks, Changed Block Tracking (CBT) must be used for the first full backup. (CBT is enabled for backups by default.)
    1 point
  15. Hi, I have been using Nakivo for over 5 years. Has anyone else had issues with the 10.3 update on Synology NAS units? First, after upgrading, the certificates for the transporters would not work and I had to rename them. Now the offsite transporter on my offsite NAS units will not accept connections on 9446, so I am stuck with no offsite backups. Yes, I have a ticket open, but I'm looking for feedback from the community. I do not want to go all weekend without offsite backups.
    1 point
  16. Thank you very much. I solved the problem, but I don't understand it. This problem shows up with the latest version of VMware (7.0.2), because with 7.0 I never had this problem.
    1 point
  17. You have to modify VMware itself: https://helpcenter.nakivo.com/display/KB/SSH+Requirements+for+NAKIVO+Backup+and+Replication - without the mods to sshd_config, it will never see it:
     KexAlgorithms
     HostKeyAlgorithms
     Ciphers
     I've included the original and replacement files.
     Changed:
     # Version 7.0.2.1
     # running from inetd
     # Port 22
     HostKey /etc/ssh/ssh_host_rsa_key
     HostKey /etc/ssh/ssh_host_ecdsa_key
     # Fips mode restricts ciphers to only FIPS-permitted ciphers
     FipsMode yes
     # vPP FCS_SSH_EXT.1.7: rekey after 1GB, 1H (instead of default 4GB for AES)
     RekeyLimit 1G, 1H
     SyslogFacility auth
     LogLevel info
     PermitRootLogin yes
     PrintMotd yes
     TCPKeepAlive yes
     # Key algorithms used in SSHv2 handshake
     # (ed25519 not allowed by current FIPS module)
     KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
     HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,rsa-sha2-256,rsa-sha2-512,ssh-rsa,ssh-dss
     Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
     MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1-96,hmac-sha1,hmac-md5-96,hmac-md5
     UsePAM yes
     # only use PAM challenge-response (keyboard-interactive)
     PasswordAuthentication no
     Banner /etc/issue
     Subsystem sftp /usr/lib/vmware/openssh/bin/sftp-server -f LOCAL5 -l INFO
     AuthorizedKeysFile /etc/ssh/keys-%u/authorized_keys
     # Timeout value of 10 mins. The default value of ClientAliveCountMax is 3.
     # Hence, we get a 3 * 200 = 600 seconds timeout if the client has been
     # unresponsive.
     ClientAliveCountMax 3
     ClientAliveInterval 200
     # sshd(8) will refuse connection attempts with a probability of "rate/100"
     # (30%) if there are currently "start" (10) unauthenticated connections. The
     # probability increases linearly and all connection attempts are refused if the
     # number of unauthenticated connections reaches "full" (100)
     MaxStartups 10:30:100
     # ESXi is not a proxy server
     AllowTcpForwarding no
     AllowStreamLocalForwarding no
     # The following settings are all default values. They are repeated
     # here to simplify auditing settings (for example, DoD STIG).
     IgnoreRhosts yes
     HostbasedAuthentication no
     PermitEmptyPasswords no
     PermitUserEnvironment no
     StrictModes yes
     Compression no
     GatewayPorts no
     X11Forwarding no
     AcceptEnv
     PermitTunnel no
     # The following settings are disabled during the OpenSSH build.
     # They are commented out to avoid spurious warnings in log files.
     #GSSAPIAuthentication no
     #KerberosAuthentication no
     Original:
     # Version 7.0.2.1
     # running from inetd
     # Port 22
     HostKey /etc/ssh/ssh_host_rsa_key
     HostKey /etc/ssh/ssh_host_ecdsa_key
     # Fips mode restricts ciphers to only FIPS-permitted ciphers
     FipsMode yes
     # vPP FCS_SSH_EXT.1.7: rekey after 1GB, 1H (instead of default 4GB for AES)
     RekeyLimit 1G, 1H
     SyslogFacility auth
     LogLevel info
     PermitRootLogin yes
     PrintMotd yes
     TCPKeepAlive yes
     # Key algorithms used in SSHv2 handshake
     # (ed25519 not allowed by current FIPS module)
     KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
     HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,rsa-sha2-256,rsa-sha2-512
     Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
     MACs hmac-sha2-256,hmac-sha2-512
     UsePAM yes
     # only use PAM challenge-response (keyboard-interactive)
     PasswordAuthentication no
     Banner /etc/issue
     Subsystem sftp /usr/lib/vmware/openssh/bin/sftp-server -f LOCAL5 -l INFO
     AuthorizedKeysFile /etc/ssh/keys-%u/authorized_keys
     # Timeout value of 10 mins. The default value of ClientAliveCountMax is 3.
     # Hence, we get a 3 * 200 = 600 seconds timeout if the client has been
     # unresponsive.
     ClientAliveCountMax 3
     ClientAliveInterval 200
     # sshd(8) will refuse connection attempts with a probability of "rate/100"
     # (30%) if there are currently "start" (10) unauthenticated connections. The
     # probability increases linearly and all connection attempts are refused if the
     # number of unauthenticated connections reaches "full" (100)
     MaxStartups 10:30:100
     # ESXi is not a proxy server
     AllowTcpForwarding no
     AllowStreamLocalForwarding no
     # The following settings are all default values. They are repeated
     # here to simplify auditing settings (for example, DoD STIG).
     IgnoreRhosts yes
     HostbasedAuthentication no
     PermitEmptyPasswords no
     PermitUserEnvironment no
     StrictModes yes
     Compression no
     GatewayPorts no
     X11Forwarding no
     AcceptEnv
     PermitTunnel no
     # The following settings are disabled during the OpenSSH build.
     # They are commented out to avoid spurious warnings in log files.
     #GSSAPIAuthentication no
     #KerberosAuthentication no
    1 point
  18. Hello, @Mr.Ed! I am very sorry to hear that you haven't resolved your issue with the Support Team. Please continue the discussion in the open ticket until your problem is fixed, and let me know when you find a solution for your case. As for appending all daily backups to the same tape and keeping it in the library while offsiting only your weekend full backups: unfortunately, this feature isn't currently implemented. However, a feature request will be submitted to the Product Team.
    1 point
  19. Hello, I'd like to inquire what the best way is to set up backup to the cloud - Amazon S3. I don't have much Amazon S3 experience, but I would like to keep the backups fast and the cost low, and I'm not sure how to proceed.
     * If I set up backup with incrementals and weekly full backups, my S3 costs will surely be low because I don't download anything. However, my weekend backups will be fulls and take considerable time. Correct?
     * If I set up backup with incrementals and synthetic fulls, does that work well with S3? Will my backups be faster, or will my local transporters merge the backups and create high costs due to downloading from S3? (See the rough cost sketch below.)
     * In a local test (not to S3) I saw that backup copy jobs ignore the synthetic setting and revert to full backups - in normal backup jobs it works. Is this true for S3 also?
     I'd love to read some advice from people currently running backups to S3 so I can make the right choice. Thanks!
    1 point
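     On the synthetic-fulls question above, a back-of-the-envelope sketch of the egress-cost risk if the merge were done by a local transporter that has to read the previous full back out of S3 (the 1 TB size and ~$0.09/GB rate are illustrative assumptions; check current S3 pricing):
        # Hypothetical numbers: 1 TB full backup, standard S3 internet egress tier
        FULL_GB=1024
        EGRESS_USD_PER_GB=0.09
        echo "$FULL_GB * $EGRESS_USD_PER_GB" | bc   # => ~92 USD per synthetic full, before per-request charges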
  20. Please update your page https://www.nakivo.com/vmware-backup/ regarding this - 7.0U1.
    1 point
  21. That was done this morning, with a link to the URL of this discussion in the summary, as well as the question that started this.
    1 point
  22. Hello, @lionking62! We are planning to support Ubuntu 20.04 in NBR v10.4. The release is planned for July 2021. Please find the extended Linux OS support list for it: Ubuntu 20.04, RHEL 7.6/7.7/8.0/8.1/8.2, SLES 15, CentOS 8. For now, you can upgrade Ubuntu to 18.04, which is currently supported as well (a short upgrade sketch follows below).
    1 point
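     A minimal sketch of that interim upgrade path, assuming a stock Ubuntu 16.04 install (snapshot the machine and confirm NBR supports the target release before upgrading):
        # Bring the current release fully up to date first
        sudo apt update && sudo apt upgrade -y
        # Then step up to the next LTS release (interactive prompt)
        sudo do-release-upgrade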
  23. The data sheet for VMware Backup and Replication 10.3 and the Supported Platforms section of the User Guide both list "Ubuntu 16.04–18.04 Server (x64)". That's not to say it won't work on the later version, but you should speak with them directly about your support options.
    1 point
  25. Good morning. I have a Nakivo installation with a single ESX server in the inventory. Currently I back up all VMs to a QNAP based on an Intel CPU with the Nakivo transporter app. I added a new QNAP to the network and I'm trying to use it as a second transporter/repository. No problem adding it as a transporter in Nakivo's dashboard, but I can't add the folder as a CIFS repository. The error is that "the folder doesn't exist", so it can't create the NakivoBackup folder. I now suspect that I chose the wrong NAS: it has an ARM processor. I opened a support ticket, but I'm also trying this forum. Thanks
    1 point
  26. However, there is still a missing link when it comes to physical backup or Proxmox VM backup: restoration is limited to VMware and Hyper-V, not other platforms. That is why I have been asking for physical server or bare-metal restore options to be made available; this would help us open up other markets.
    1 point
  27. Model is an RS2418+ and I currently have the full 10.2 installed.
    1 point
  28. I noticed that on Windows Server, I can only run one job at a time. When I back up to tape or Wasabi, I cannot run my scheduled backups. QNAP says that Nakivo does not have this limitation when running on an Intel- or AMD-based QNAP NAS. I assume that on QNAP the backup copy jobs run from a snapshot, freeing the primary backup to be used at the same time? What other repositories allow replication or backup copy jobs while source backups are running? Synology? FreeNAS? Linux? - Ed Hammond
    1 point
  29. Hello, I would also like the updater package for Synology. x86 Architecture, no other transporters. Please and thank you!
    1 point
  30. Hello, I think it would be great if you could add a reporting feature to the product. Nakivo is a key part of my DR strategy, and having reports that show, for example, how long it took to restore or perform backups would help when talking to business stakeholders.
    1 point
  31. Hi, do you maybe have a package for Synology as well (for version 7.0.2)? Best regards, Marko
    1 point
  33. I know it's possible to encrypt the backup data during transfer, but is it possible to encrypt the backups themselves?
    1 point
  34. I'm running 10.2.0.52518 and it seems to be incompatible with VMware 7.0.2. I get the error: An internal error has occurred. Reason: cannot find field: statelessNvdsMigrationReady (NoSuchFieldException). When will we have a release that is compatible with VMware 7.0.2?
    1 point
  35. It's a bug, and it will be fixed in an upcoming release.
    1 point
  36. Hello, just like in the title: can you consider adding the year to the file/directory modification date view in the "restore individual files" window? Having only the day and month makes it not so obvious when some files were last modified. Thanks
    1 point
  37. In Nakivo I see my Wasabi repo is 1.33 TB, but when logging into Wasabi, there is 12 TB there. Why?
    1 point
  38. Official reply from support: "All read/write operations to tape work with the cache switched off to guarantee that data will not be lost. That approach makes the read/write operation slower than usual. You can try to change the tape block size to find a more optimal one for high speed." Should I say something else, or are you kidding me? If this is true, Nakivo is ruining every tape drive over time, because everyone knows that shoe-shining is bad for tape drives. One example of the speed while copying to tape (and this is not an exception): over about 10 minutes, very little is written to tape, while there is almost no load on the local disk storage. (A sketch of experimenting with the block size follows below.)
    1 point
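     For anyone who wants to test the block-size suggestion from that support reply, a minimal sketch on a Linux transporter (the /dev/nst0 device path and 256 KB size are hypothetical examples; stop any jobs using the drive first):
        # Show the drive's current status, including the configured block size
        mt -f /dev/nst0 status
        # Set a fixed 256 KB block size (setblk 0 would select variable block size)
        mt -f /dev/nst0 setblk 262144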
  39. Hi, I've installed Backup & Replication on Synology. Now I want to add a Win 10 PC that uses a Microsoft account login. How do I find the correct username and password? I log in to the PC with a Microsoft email and PIN; what do I put in here? (See the hypothetical credential sketch below.) Please help. Thanks
    1 point
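     One possibility for the question above, assuming the PC profile is backed by a Microsoft account (the MicrosoftAccount\ logon prefix is standard Windows syntax, but the address here is a hypothetical placeholder; a PIN is device-local and won't work for network logon, so the full Microsoft account password is needed):
        Username: MicrosoftAccount\your-address@outlook.com
        Password: <the Microsoft account password, not the PIN>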
  40. I have a new vSphere 7.0U1 farm and a new Nakivo server setup. When I tried to back up a VM on an ESXi host, I got the following error message, and all backup jobs got the same error. The local reseller already helped upgrade Nakivo to 10.1.1 (build 49573 from 2020 Nov Sat at 12:11:00 PM), but the result is still the same: VMware vSphere Storage API error has occurred on the "Onboard transporter" transporter.
    1 point
  41. Hi everyone, has anyone successfully implemented backup from AWS/EC2? I'm not having much luck. I have a local VMware Nakivo Director appliance targeting an Ubuntu EC2 instance for backup. I followed these steps: https://helpcenter.nakivo.com/display/NH/Deploying+Transporters+in+Amazon+EC2 When I let the transporter deploy with Automatic settings, I get a new Nakivo VPC which my local Director can reach, but it cannot initiate a backup (I get an error message). However, when I create the transporter manually, the deployment fails partway through with an error that the transporter can't be reached. Since the automatically deployed transporter was at least reachable from my Director, I've focused on trying to get that working, but I've not had real success. With the separate Nakivo VPC, I tried setting up a VPC peer and adjusting my routing tables (roughly the commands sketched below). I can get the AWS transporter to ping my target EC2 box, but I don't know what else is needed. Has anyone had success with this kind of setup? Essentially... what is "X" below? [VMware Director appliance] ====> (internet) ==== (Nakivo VPC [Nakivo Txport])...X...(Target VPC [target EC2 Ubuntu]) Any help is greatly appreciated! --Mike
    1 point
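     For reference, a minimal sketch of the VPC-peering plumbing described above using the AWS CLI (all IDs and CIDR blocks are hypothetical placeholders; security groups on both sides must also allow the transporter ports):
        # Request and accept a peering connection between the Nakivo VPC and the target VPC
        aws ec2 create-vpc-peering-connection --vpc-id vpc-0nakivo111 --peer-vpc-id vpc-0target222
        aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0333
        # Route each VPC's traffic for the other VPC's CIDR over the peering connection
        aws ec2 create-route --route-table-id rtb-0nakivo444 --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-0333
        aws ec2 create-route --route-table-id rtb-0target555 --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-0333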
  42. I got an answer from Support which sparked a realization. I believe I have figured out the problem. The transporter instance size was much smaller than the target EC2 instance. I deployed a new transporter of a size that matched the target instance and kicked off the backup. It is running now and no errors so far, knock wood.
    1 point
  43. Hi Mod, Thanks for the info--just to clarify I did also follow your linked steps previously. Using my Transporter with Automatic settings, I deployed a new test Nano instance with a small (8g) disk into the same VPC as the Real target. That Test target *does* back up (using your EC2 job steps). However another Test Nano with a large (100g) disk fails with an error. I transmitted a support request, but haven't seen a reply yet. The Real target instance has a 500g data volume mounted to it, so I'm very interested in knowing if size is a limiting factor. My next trials will be to add a volume to my Test instance and see if there's some point where I can't fetch it.
    1 point
  44. Does NBR support the AHV Community Edition?
    1 point
  45. My apologies, I forgot to reply here. Updating for anyone in the future: we upgraded NAKIVO B&R to v10.0.0, which resolved the issue. VMs are now all on Server 2019 hosts, and boot verification is working as expected.
    1 point
  46. OK, I deployed the transporters, but I see no change. Nakivo is installed locally on storage1, which provides a datastore via NFS to the 2 ESXi hosts. I also have another storage2 that provides a datastore via NFS to the 2 ESXi hosts, but for this test I'm not using it. I have VMs on esxi1 with datastore1, and when I back them up they run at very low speed and use the network. All Nakivo has to do is copy the snapshots locally, as the datastore and Nakivo are on the same server. What am I missing?
    1 point
  47. Hello, thanks for the support, it works. Regards, Francesco
    1 point