NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 08/19/19 in all areas

  1. Hi NAKIVO, is it possible to add an MFA option for the web interface? I have some customers where MFA is required for backups. Thanks, Mario
    2 points
  2. Great to get 2FA for the login. I just found the option "Store backups in separate files: Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does it have an effect on global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of NAKIVO, so if this is an improvement, it is very welcome!
    2 points
  3. Thank you so much for your help. It worked for me using win SCP.
    2 points
  4. Here is how to fix it for QNAP. Log in to the QNAP via SSH as admin:
     cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
     # ./nkv-bhsvc stop
     Stopping NAKIVO Backup & Replication Transporter service:
     [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./bhsvc -b "<UsePassword>"
     [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./nkv-bhsvc start
     Starting NAKIVO Backup & Replication Transporter service:
     Then use the Chrome browser to open https://<qnapip>:4443 and go to Settings / Transporters / Onboard transporter / Edit. Press F12, go to the Console, and run the following JavaScript commands (paste them one at a time and press Enter):
     var a=Ext.ComponentQuery.query("transporterEditView")[0]
     a.masterPasswordContainer.show()
     a.connectBtn.show()
     You get new fields in the browser: "Master password" and a Connect button. In "Master password", enter the password from the SSH command (the same one used in bhsvc -b "<UsePassword>"). Press the Connect button, then refresh the transporter and the repository. By Boris
    2 points
  5. Anyone else having problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups work correctly and their retention is correct, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both of my sites with tape libraries. (They expire 11 years from now.) I opened a ticket, but they were not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned that on and they work, but they didn't affect my problem with yearly backups, so for now I'm turning it off to see what happens with just weekly and monthly backups. These are my settings:
     Retain one recovery point per day for 7 days
     Retain one recovery point per week for 4 weeks
     Retain one recovery point per month for 12 months
     Retain one recovery point per year for 11 years
     Tape job options: Create full backup: every Saturday. Tape appending: start full backup with an empty tape**
     **I found that now that daily backups are turned on, NAKIVO will start writing daily backups to my full-backup tape before I get a chance to eject it. This is not desirable, but there is no option to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, which is also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and only offsite my weekend full backups.
    2 points
  6. In a multi-tenant environment, if the VPN drops for any reason (ISP problem, customer's AC problem, etc.), will the scheduled jobs start on time, or will they wait for the Director's command to start? Thanks and best regards
    2 points
  7. The shortest way to integrate this without going vendor-specific is to add SAML 2.0. This would allow logins based on Microsoft 365 or other MFA vendors.
    2 points
  8. Could you please do a video for NAKIVO B&R to Microsoft Azure?
    2 points
  9. 2 points
  10. It is something that I may be interested in, but I would like to see a "how to" video on the process on your YouTube channel. I saw that you have a "how to" video on Amazon EC2 backups (which is old now, by the way).
    2 points
  11. Thank you. Someone from NAKIVO Support already helped me. This is what I did to increase the Java memory for the NAKIVO application:
     1. Stop the NAKIVO service from the App Center.
     2. Connect to the Synology via SSH as "root".
     3. Edit the nkv-dirsvc file (usually located at /volume1/@appstore/NBR/nkv-dirsvc) and replace:
     SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx320m"
     with:
     SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx640m"
     4. Start the NAKIVO service.
     Thank you. Juraj.
    2 points
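The edit in step 3 of the post above raises the Director's Java heap ceiling (-Xmx) from 320 MB to 640 MB. A minimal Python sketch of that substitution, run here against a throwaway temporary file rather than the real /volume1/@appstore/NBR/nkv-dirsvc (the path and the SVC_OPTIONS flags come from the post; the shortened flag list and the script itself are illustrative):

```python
import re
import tempfile

# The SVC_OPTIONS line quoted in the post (flag list shortened for brevity).
original = ('SVC_OPTIONS="-Dfile.encoding=UTF-8 -server '
            '-Xss320k -Xms128m -Xmx320m"\n')

# Work on a throwaway file; on a real Synology you would edit
# /volume1/@appstore/NBR/nkv-dirsvc with the NAKIVO service stopped.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(original)
    path = f.name

with open(path) as f:
    text = f.read()

# Double the maximum heap, exactly as in the support-provided fix.
patched = re.sub(r"-Xmx320m\b", "-Xmx640m", text)

with open(path, "w") as f:
    f.write(patched)
```

Restarting the NAKIVO service after saving the file is what makes the new heap limit take effect.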
  12. Hello, thanks for support, it works. Regards Francesco
    2 points
  13. Our team is excited to announce the official release of NAKIVO Backup & Replication v9.1! The most anticipated features are finally available: Backup to Tape, Linux Server Backup, Instant Verification and Windows Workstation Backup. The fully-functional Free Trial version is here https://www.nakivo.com/resources/releases/v9.1/
    2 points
  14. NAKIVO Backup & Replication v9.0 allows you to perform an incremental, application-aware backup of Windows Servers. With our solution, you can instantly recover files as well as application objects! Download our full-featured free trial to try out NAKIVO Backup & Replication for yourself: https://www.nakivo.com/resources/releases/v9.0/
    2 points
  15. Doesn't Microsoft 365 back up the data? Why do you even need another solution for that?
    1 point
  16. {"action":"AuthenticationManagement","method":"login","tid":"1","type":"exception","message":"The API access is not licensed. Please contact support.","where":null,"cause":null,"data":null} — this is the output when I log in using the API. Can you please tell me what I am doing wrong here?
    1 point
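For reference, the response body quoted above (with its leading "{" restored) is ordinary JSON, and the "type":"exception" field is what marks the call as rejected. A minimal Python sketch of detecting that condition (the JSON is copied from the post; the handling logic is illustrative, not the NAKIVO client API):

```python
import json

# Response body quoted in the post, with the leading "{" restored.
raw = ('{"action":"AuthenticationManagement","method":"login","tid":"1",'
       '"type":"exception","message":"The API access is not licensed. '
       'Please contact support.","where":null,"cause":null,"data":null}')

resp = json.loads(raw)

# A "type" of "exception" means the Director rejected the request; here the
# message says the licence in use does not include API access at all.
if resp.get("type") == "exception":
    print("Login failed:", resp["message"])
```

In other words, the request itself is well-formed; the licence edition simply does not cover API access, which is a licensing question rather than a coding one.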
  17. There were mistakes on my end - I didn't realise I was reading out-of-date documentation, as the documentation only seems to have a version number on the front page, and I arrived at the info via a direct link from a Google search. Even with the most current documentation, I think many long-term users will struggle to fully understand what is going on with these changes. Thanks for following up.
    1 point
  18. Hi Gavino, I can fully understand your annoyance with the documentation and the labelling in the GUI. I have already asked some questions about the new storage system; see the following link:
    1 point
  19. The support bundle link gives "Page Not Found". The URL has an extra ")" at the end.
    1 point
  20. Hi, I did the update to 10.4 and again had some trouble with certificates and passwords on Windows / Hyper-V hosts. After some tests I found the following way to get it running. Before you start the update, create a copy of your current NAKIVO directory at C:\Program Files\NAKIVO Backup & Replication. Do the update; if it's working, be happy. If not: stop the NAKIVO service, copy certificate.pem and bhsvc.id from the backup directory to your current directory, and start the service. Refresh the transporter; now it should work. Greetings, Mario
    1 point
  21. I added 4 workstations, which were shut down for the night, to my inventory. Is it possible to stop the errors for the non-responding workstations and for their transporters, or turn them into alerts so I won't receive any emails? For the workstation errors I receive an email every 2 hours, and another email for the transporters (the same ones as for the workstations). Have you thought about adding a dedicated section in the future where we can configure alerts and error messages? It would be useful to check for the presence of a workstation and its transporter only when a backup is due. I understand that a server should always be on, but a workstation may be turned off for the night, the weekend, or the holidays. As it is currently designed, physical machine management is unmanageable for an inventory with more than 3 or 4 workstations. The error messages generated in this case risk drawing focus away from the really important ones (real transporters not responding, storage not responding, vCenters or other resources that must always be present not responding). Loris
    1 point
  22. In HotAdd mode, virtual disks from the virtual machines being backed up are automatically mounted to the proxy, so the proxy can access them as local disks. The ESXi host the proxy is running on must have access to all datastores for the virtual machine. If the virtual machine and the proxy are not on the same host, all datastores must be shared between the hosts, and the hosts must be within the same datacenter. If SAN mode isn't available, HotAdd mode can achieve close to SAN mode performance. To enable incremental backups of virtual disks, Changed Block Tracking (CBT) must be enabled for the first full backup. (CBT is enabled for backups by default.)
    1 point
  23. Correct, you didn't. In 7.0 U2 they [VMware] hardened the security requirements for SSH. By making this change you've reverted that change by VMware.
    1 point
  24. I've got the following link from support: https://maglar.box.com/shared/static/ebzm01v0080ibtvdh2ubc11s7ops58wa.sh This is a newer build, and it should fix the problems. Since I was able to fix the problems with a reinstall, I won't upgrade, but maybe someone else can minimize their trouble. Mario
    1 point
  25. Well, something is strange. At version 12.2 all systems were working correctly; now, after today's update, I get the following error: "Certificate of this transporter has expired on 27 Dec 2020 at 16:31 (UTC +01:00). Replace the certificate to continue using this transporter." Why was a certificate working until today when it expired 6 months ago? @Official Moderator, can you share the information on the forum so that everyone can benefit?
    1 point
  26. Problem is fixed via remote connection by the great support team... thanks for the fast fix
    1 point
  27. However, there is still a missing link when it comes to physical backups or Proxmox VM backups. Restoration is limited to VMware and Hyper-V, not other platforms. That is why I have been asking for physical server or bare-metal restore options to be made available; this would help us open up other markets.
    1 point
  28. The difference is that VMware offers dynamic memory support for any guest OS, and Hyper-V has historically supported dynamic memory only for VMs that run Windows. However, Microsoft added dynamic memory support for Linux VMs in Windows Server 2012 R2 Hyper-V.
    1 point
  29. Hello. Sorry for the late reply. I already resolved my issue by downloading beta 10.3. It worked like a charm. My installation was on a Synology DS720+ (Celeron J4125). Best regards, Marko
    1 point
  30. Hello, just like in the title: can you consider adding the year to the file/directory modification-date view in the restore individual files window? Having only day and month makes it not so obvious when some files were last modified. Thanks
    1 point
  31. I can tell you how it works manually or how it works in R1Soft / Idera backup software (basically it is the same way). I can do it here or in some ticket.
    1 point
  32. Hello everyone. Is there anyone here who successfully uses Backup to Tape? For months we have had the problem that the backup-to-tape job takes far too long (about 200 VMs, about 10 TB of data). We need about 30 hours, which averages about 100 MB/s, while the tape drive is capable of 300 MB/s natively. Support could not help so far. It's not the hardware: I can copy the data from one repository to another within the storage at a constant rate of over 200 MB/s (Windows Server 2019, 64 GB, 8 * RAID 10 SAS drives). NAKIVO also uses several "threads" in parallel here. When copying to tape, only one thread is used (which is logical, as the tape can only work sequentially). Why NAKIVO does not write anything to tape for several minutes at a time, however, is incomprehensible (regardless of whether it copies from the deduplicated or undeduplicated repository). The tape drive is connected to its own SAS controller and has been proven to achieve this speed. NAKIVO is otherwise a great program, but unfortunately almost useless for us with a tape drive. If there is someone here who is successfully using it in a similar configuration, I would be grateful for help.
    1 point
  33. VPXA has been restarted; the result is the same. I also tried disconnecting the ESXi host from vCenter and backing up from the ESXi host directly; the result is the same.
    1 point
  34. Hi everyone, has anyone successfully implemented backup from AWS EC2? I'm not having much luck. I have a local VMware NAKIVO Director appliance targeting an Ubuntu EC2 instance for backup. I followed these steps: https://helpcenter.nakivo.com/display/NH/Deploying+Transporters+in+Amazon+EC2 When I let the transporter deploy with automatic settings, I get a new NAKIVO VPC which my local Director can reach, but it cannot initiate a backup (I get an error message). However, when I create the transporter manually, the deployment fails partway through with an error that the transporter can't be reached. Since the automatically deployed transporter was at least reachable from my Director, I've focused on trying to get that working, but I've not had real success. With the separate NAKIVO VPC, I tried setting up a VPC peering connection and adjusting my routing tables. I can get the AWS transporter to ping my target EC2 box, but I don't know what else is needed. Has anyone had success with this kind of setup? Essentially... what is "X" below? [VMware Director appliance] ====> (internet) ==== (Nakivo VPC [Nakivo Txport])...X...(Target VPC [target EC2 Ubuntu]) Any help is greatly appreciated! --Mike
    1 point
  35. I got an answer from Support which sparked a realization. I believe I have figured out the problem. The transporter instance size was much smaller than the target EC2 instance. I deployed a new transporter of a size that matched the target instance and kicked off the backup. It is running now and no errors so far, knock wood.
    1 point
  36. Hi everybody from France. I have installed the full solution on a workstation with Windows 10 Pro. When I want to add an inventory item, I get a message like "This machine already contains a transporter" and I can't go to the next step. Does anyone have the same problem? Help would be welcome.
    1 point
  37. After upgrading to 9.4, backup copy job takes forever, like 47 hours instead of 3 h.
    1 point
  38. Good news. Do you have the QNAP app version ready for testing please?
    1 point
  39. If any of you guys are planning to use dedupe on Windows Server 2019, here are some results I am getting during backup. I configured my server to back up without compression and with dedupe enabled, and with the dedupe appliance option enabled. Here are some interesting results to share. I'm not sure if this is the best result I can get with Windows Server deduplication compared to the built-in dedupe; both runs use post-processing deduplication.
    1 point
  40. This is because a forever-incremental repository is spread across 4096 files, and AWS S3 doesn't allow writing data to all of these files simultaneously, or works extremely slowly with this repository type.
    1 point
  41. Hi, it's Locas from Reviewsed. Thanks for your reply. Actually, I am completing my Cloudways review so that I can mention your point in it.
    1 point
  42. Hello. We are running NAKIVO on a Synology, and lately we have been getting a message to allocate more RAM: "System · 17 Apr at 18:00 (UTC +02:00): Application is about to run out of memory. Allocate more memory to the application. https://www.nakivo.com/helpcenter9.1/display/KB/How+to+Allocate+More+RAM+to+NAKIVO+Director+Service" Could someone please walk me through this? Thank you, Juraj.
    1 point
  43. Hello everyone, I downloaded and configured the trial version 9.0.0. I would like to connect to the console in order to remove the default storage. I read this KB: https://helpcenter.nakivo.com/display/NH/Removing+the+Disk+with+Backup+Repository+from+Virtual+Appliance In this article the default credentials are user: root, password: root. But when I try to log in I receive the error "login incorrect". Can you help me solve this issue? Thanks for the support. Regards, Francesco
    1 point
  44. Do we have any option to get file-level backup? For example, working files (Excel, Word, Outlook .pst)?
    1 point
  45. Like any other password, the VMware ESX Server root password can be lost or forgotten. Of course, this can cause problems with control over your hosts. Find out how to reset an ESXi root password without reinstalling ESXi on the server here: https://www.nakivo.com/blog/best-way-reset-esxi-default-password
    1 point
  46. I want to understand the differences between synchronous and asynchronous replication.
    1 point
  47. Ah, we are using Forever incremental. Thanks for the reply.
    1 point
  48. The inappropriate links will be deleted and the user will get a warning point.
    1 point
  49. Hi, I don't like or understand the way NAKIVO rates some errors. For example, when I have set up the self-healing/reclaim job on the repo and set the option "Stop backup and recovery to run space reclaim", why do I get an "alarm" if a backup job can't run during this time? I made the decision to prioritize reclaiming, so this should only be a notification. Likewise, if a periodic job has an error but everything is OK at the next runtime, this should only be a notification. Does anyone agree with me?
    1 point