NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 08/19/19 in Posts

  1. Hi. When is VMware 7 Update 3 support planned?
    2 points
  2. Hi NAKIVO, is it possible to add an MFA option for the web interface? I have some customers where MFA is required for the backup. Thanks, Mario
    2 points
  3. Great to get 2FA for the login. I just found the option "Store backups in separate files: Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does it have an effect on global dedup? And can I convert the current backups to this option? From my point of view, the repositories are the current bottleneck of NAKIVO, so any improvement here is very welcome!
    2 points
  4. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  5. Here is how to fix it for QNAP. Log in to the QNAP via SSH as admin and run:
        cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
        ./nkv-bhsvc stop        (output: Stopping NAKIVO Backup & Replication Transporter service)
        ./bhsvc -b "<UsePassword>"
        ./nkv-bhsvc start       (output: Starting NAKIVO Backup & Replication Transporter service)
     Then use the Chrome browser to open https://<qnapip>:4443 and go to Settings / Transporters / Onboard transporter / Edit. Press F12, go to the Console, and run the following JavaScript commands (paste them one by one and press Enter):
        var a=Ext.ComponentQuery.query("transporterEditView")[0]
        a.masterPasswordContainer.show()
        a.connectBtn.show()
     You get two new fields in the browser: a "Master password" field and a Connect button. In "Master password", enter the password from the SSH command (the same one used in bhsvc -b "<UsePassword>") and press Connect. Then refresh the transporter and the repository. by Boris
    2 points
  6. Has anyone else had problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups work correctly and their retention is correct, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both of my sites where I have tape libraries (they expire 11 years from now). I opened a ticket, but support was not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned them on and they work, but they didn't affect my problem with yearly backups, so for now I'm turning them off to see what happens with just weekly and monthly backups. These are my settings:
     Retain one recovery point per day for 7 days
     Retain one recovery point per week for 4 weeks
     Retain one recovery point per month for 12 months
     Retain one recovery point per year for 11 years
     Tape job options:
     Create full backup: every Saturday
     Tape appending: Start full backup with an empty tape**
     **I found that now that daily backups are turned on, NAKIVO will start writing daily backups to my full backup tape before I get a chance to eject it. This is not desirable, but there is no option to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, which is also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and only offsite my weekend full backups.
    2 points
  7. In a multi-tenant environment, if the VPN drops for any reason (ISP problem, a problem on the customer's side, etc.), will the scheduled jobs still start on time, or will they wait for the Director's command to start? Thanks and best regards
    2 points
  8. The shortest way to integrate this without going vendor-specific is adding SAML 2.0. This would allow logins based on Microsoft 365 or other MFA providers.
    2 points
  9. Could you please do a video on backing up with NAKIVO B&R to Microsoft Azure?
    2 points
  10. 2 points
  11. It is something that I may be interested in, but I would like to see a how-to video of the process on your YouTube channel. I saw that you have a how-to video on Amazon EC2 backups (which is old now, by the way).
    2 points
  12. Thank you. Someone from NAKIVO Support already helped me. This is what I did to increase the Java memory for the NAKIVO application:
     1. Stop the NAKIVO service from the App Center.
     2. Connect to the Synology via SSH as "root".
     3. Edit the nkv-dirsvc file (usually located at /volume1/@appstore/NBR/nkv-dirsvc) and replace:
        SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx320m"
        with:
        SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx640m"
     4. Start the NAKIVO service.
     Thank you. Juraj.
    2 points
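     For reference, the file edit in step 3 above can also be done in one command. This is a minimal sketch only, assuming the file path mentioned in the post and that nothing but the -Xmx value needs to change; the .bak filename is illustrative. Run it as root on the Synology after stopping the NAKIVO service:
        # Back up the service script, then raise the Director's Java heap limit from 320 MB to 640 MB
        cp /volume1/@appstore/NBR/nkv-dirsvc /volume1/@appstore/NBR/nkv-dirsvc.bak
        sed -i 's/-Xmx320m/-Xmx640m/' /volume1/@appstore/NBR/nkv-dirsvc
        # Confirm the change before starting the service again
        grep SVC_OPTIONS /volume1/@appstore/NBR/nkv-dirsvc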
  13. Hello, thanks for the support, it works. Regards, Francesco
    2 points
  14. Our team is excited to announce the official release of NAKIVO Backup & Replication v9.1! The most anticipated features are finally available: Backup to Tape, Linux Server Backup, Instant Verification and Windows Workstation Backup. The fully-functional Free Trial version is here https://www.nakivo.com/resources/releases/v9.1/
    2 points
  15. NAKIVO Backup & Replication v9.0 allows you to perform an incremental, application-aware backup of Windows Servers. With our solution, you can instantly recover files as well as application objects! Download our full-featured free trial to try out NAKIVO Backup & Replication for yourself: https://www.nakivo.com/resources/releases/v9.0/
    2 points
  16. Hello, I need the quick fix for the NAKIVO 10.4 Synology NAS package, please! Many thanks.
    1 point
  17. Is there any way to get on a notification list for when the updated product is released?
    1 point
  18. "action":"AuthenticationManagement","method":"login","tid":"1","type":"exception","message":"The API access is not licensed. Please contact support.","where":null,"cause":null,"data":null} this is the output when i do login using the API. Can you please tell me what is wrong I am doing here.
    1 point
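     For context, a login request against the Director's web API generally looks like the sketch below. This is a hedged example only: the /c/router endpoint, the request body fields, and the placeholder credentials are assumptions inferred from the fields visible in the error response above, so check the official API reference before relying on them. Note that the quoted error says API access is not licensed, so the request format itself is not the problem.
        # Hypothetical login call to the NAKIVO Director API; endpoint and body are assumptions
        # -k skips certificate verification for the Director's self-signed certificate
        curl -k -X POST "https://<director-ip>:4443/c/router" \
          -H "Content-Type: application/json" \
          -d '{"action":"AuthenticationManagement","method":"login","type":"rpc","tid":1,"data":["<username>","<password>",false]}'
        # A response containing "The API access is not licensed" indicates a licensing limitation,
        # not a malformed request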
  19. OK, I worked out that "Store backups in separate files" means exactly the same thing as "no deduplication". If I am wrong, please let me know. NAKIVO should have kept it simple by keeping the "deduplication" wording instead of confusing people with another option that means the same thing. It's a bit crazy to look at a repository after it's been created and see "Deduplication" and "Store backups in separate files" as completely separate line items (which are far apart); both refer to the same thing. I bet there is a product designer pointing his finger at a colleague right now going "see, I told you that customers would not like this!" And they would be right.
     It's pretty strange that for this feature the new docs now say "Enabling this option is highly recommended to ensure higher reliability and performance". WHAT?! Dedup was one of the main reasons I bought NAKIVO! https://www.nakivo.com/vmware-backup/#benefits talks up dedup as a benefit and cost saving. Nowhere in the sales pitch do you say that it's highly recommended not to use it! It's also interesting to note that deduplication and an immutable backup repository are incompatible with each other, as I found out from emailing tech support.
     Some feedback on documentation: NAKIVO should put 10.3 or 10.4 (or whatever the version is) at the top of each documentation page. That would be very handy. I thought I was reading the latest version when I posted above, because the Release Notes page (https://helpcenter.nakivo.com/display/RN/Release+Notes) mentioned 10.4, so it looked like the right place for the latest info. I didn't realise until after contacting tech support that it was the older documentation. Also, the images zoom in the 10.3 docs but not in the 10.4 docs. I prefer zoom, as sometimes the information is unreadable until it's zoomed in. Please fix this in the 10.4 docs. Thanks.
     An issue I found here: https://helpcenter.nakivo.com/User-Guide/Content/Deployment/System-Requirements/Feature-Requirements.htm#Ransomware
     Quote: "For Local Folder type of Backup Repository, the following conditions must be met: The Backup Repository data storage type must be set to Incremental with full backups."
     Whoever wrote that maybe doesn't realise that "Incremental with full backups" no longer exists as an option in 10.4, as it did in 10.3. I have been told in an email that "...there is the option 'store backups in separate files'. When this option is selected, the repository is 'Incremental with Full Backup' and Immutable is supported." It would be handy if you explained that in your documentation. The line above should probably say: the Backup Repository data storage type must be set to "store backups in separate files" (i.e. deduplication switched off); this implicitly sets the data storage policy to Incremental with full backups.
    1 point
  20. Hi, I did the update to 10.4 and again had some trouble with certificates and passwords on Windows / Hyper-V hosts. After some tests I found the following way to get it running.
     Before you start the update: create a copy of your current NAKIVO directory at C:\Program Files\NAKIVO Backup & Replication.
     Do the update; if it's working, be happy. If not:
     1. Stop the NAKIVO service.
     2. Copy certificate.pem and bhsvc.id from the backup directory to your current directory.
     3. Start the service.
     4. Refresh the transporter; now it should work.
     Greetings, Mario
    1 point
  21. After a successful backup of our (virtual) file server, when I test a recovery task I cannot find the data partition (BitLocker encrypted); only the OS partition is available for granular file selection. Where am I going wrong? The backup size seems correct (1 TB), but I can see only 3 partitions; the most important one (the data partition) seems to have been skipped for some reason. Has anybody experienced this issue? Alex
    1 point
  22. Dear Moderator, as stated above - it worked for me. I just moved the installer from root (/) to /home and there I could start the installer without problems. Thanks a lot.
    1 point
  23. Support has already answered me. In fact, NAKIVO does not deduplicate to Wasabi. The solution would be to make a local backup with deduplication plus a backup copy job.
    1 point
  24. Hello, the current version 10.3 has a feature called "Direct Connect", which doesn't need a VPN, but it is only for VMware. Direct Connect for physical servers will be available in 10.6; there is no official release date for it yet. This was said during today's webinar, so we're still looking forward to it. Regards, Jaroslav
    1 point
  25. Hello! I just wanted to know what differentiates NAKIVO from competing solutions? Because frankly, everything seems pretty similar.
    1 point
  26. Model is an RS2418+ and I currently have the full 10.2 installed.
    1 point
  27. Hello. Sorry for the late reply. I already resolved my issue by downloading the 10.3 beta. It worked like a charm. My installation was on a Synology DS720+ (Celeron J4125). Best regards, Marko
    1 point
  28. Hello, I think it would be great if you could add a reporting feature to the product. NAKIVO is a key part of my DR strategy, and having reports that show, for example, how long it took to restore or perform backups would help when talking to business stakeholders.
    1 point
  29. I'm running 10.2.0.52518 and it seems to be incompatible with VMware 7.0.2. I get the error: "An internal error has occurred. Reason: cannot find field: statelessNvdsMigrationReady (NoSuchFieldException)". When will we have a release that is compatible with VMware 7.0.2?
    1 point
  30. Hello, just like in the title: can you consider adding the year to the file/directory modification date shown in the individual file restore window? Having only the day and month makes it not so obvious when some files were last modified. Thanks
    1 point
  31. Hello everyone. Is there anyone here who successfully uses Backup to Tape? For months we have had the problem that the backup-to-tape job takes far too long (about 200 VMs, about 10 TB of data). We need about 30 hours, which is about 100 MB/s on average; the tape drive is capable of 300 MB/s natively. Support has not been able to help so far. It's not the hardware: I can copy the data from one repository to another within the storage at a constant rate of over 200 MB/s (Windows Server 2019, 64 GB, 8 * RAID 10 SAS drives). NAKIVO also uses several threads in parallel here. When copying to tape, only one thread is used (which is also logical, since the tape can only work sequentially). Why NAKIVO does not write anything to tape for several minutes at a time, however, is incomprehensible (regardless of whether it is copied from the deduplicated or non-deduplicated repository). The tape drive is connected to its own SAS controller and has been proven to achieve this speed. NAKIVO is otherwise a great program, but unfortunately almost useless for us with a tape drive. If there is someone here who is successfully using it in a similar configuration, I would be grateful for help.
    1 point
  32. This link shows that iSCSI, NFS and FTP are supported, but how do I configure FTP? https://helpcenter.nakivo.com/display/KB/Creating+a+Backup+Repository+on+NFS%2C+FTP%2C+or+iSCSI
    1 point
  33. I got an answer from Support which sparked a realization. I believe I have figured out the problem. The transporter instance size was much smaller than the target EC2 instance. I deployed a new transporter of a size that matched the target instance and kicked off the backup. It is running now and no errors so far, knock wood.
    1 point
  34. OK, I deployed the transporters but I see no change. NAKIVO is installed locally on storage1, which provides a datastore via NFS to the 2 ESXi hosts. I also have another storage2 that provides a datastore via NFS to the 2 ESXi hosts, but I'm not using it for this test. I have VMs on esxi1 with datastore1, and when I back them up they run at a very low speed and use the network. All NAKIVO has to do is copy the snapshots locally, as the datastore and NAKIVO are on the same server. What am I missing?
    1 point
  35. Can I do a Flash Boot from a repository whose backups are not compressed and deduplicated, or from a dedupe appliance?
    1 point
  36. The release notes for NAKIVO 9.4 have the following item: "Removed support for Microsoft Exchange 2010". Can you provide additional information about the change? Can we still back up Exchange 2010 and restore items?
    1 point
  37. 1 point
  38. If any of you guys are planning to use dedupe on Windows Server 2019, here are some results I am getting during the backup. I have configured my server to back up without compression, with dedupe enabled and with the dedupe appliance option enabled. Here are some interesting results to share. I'm not sure whether this is the best result I can get with Windows Server deduplication compared to the built-in dedupe; both runs use post-processing deduplication.
    1 point
  39. Hi there, when I try to back up my virtual machines the following error occurs: "Backup of the 'xxxxxx' VM has failed. Source host could not create the temporary checkpoint required to back up the 'xxxxxxx' VM." The host is Windows Hyper-V Server 2019. Only VMs which have been migrated from XEN throw that error. Does anybody have an idea? Thank you in advance.
    1 point
  40. I would like to know: if we configure AWS S3 as the target repository, does the transporter store a local copy before uploading to AWS during a backup or backup copy job, or does it upload directly to S3? Will there be a local cache copy during the S3 transmission?
    1 point
  41. I would like to seek your advice on the following. Is there any reason why the S3 storage option only allows incremental with full backups, but not forever-incremental? This way it removes the dedupe capability.
    1 point
  42. Hi, we would like a feature for marking a backup from a specific date so that it is not overwritten, for example for long-term retention. Or is there any other way to do this than using retention in the existing NBR? Regards, leezy
    1 point
  43. Hello, I have ESXi version 7.0 and can't back up VMs. When do you plan to support version 7.0?
    1 point
  44. I appreciate your taking the time to answer my question
    1 point
  45. I want to understand the differences between synchronous and asynchronous replication.
    1 point
  46. I need to recover a physical server from a backup, but the source physical server is damaged and there is no free physical machine that can be used for recovery.
    1 point
  47. You are right: the Director can work with a newer version of the transporter, but it cannot work with an older version. We recommend running the update with the following steps:
     1. Manually update the remote transporters:
        - NAS transporters
        - Linux-based transporters that were added as an "installed service"
        - manually installed and added Windows-based transporters
        In this case there will not be any downtime, as the Director will be able to work with the newer transporter version.
     2. Update the Director. In this case all remaining transporters will be updated automatically:
        - Hyper-V transporters
        - physical transporters
        - Linux-based transporters that were added as a Virtual Appliance
        - AWS transporters
    1 point
  48. I have already approached NAKIVO about this. This was the response: "That is not implemented yet. You could use this backup only for file recovery (https://helpcenter.nakivo.com/display/NH/Granular+Recovery) or backup export (https://helpcenter.nakivo.com/display/NH/Performing+Cross-Platform+Recovery) purposes and attach the physical machine disk(s) in vmdk/vhdx format to an existing VM manually. It's just the first phase of physical server support. To be continued in further releases."
    1 point
  49. We are interested in backing up about 25 of our critical servers to Azure. Can you point me in the right direction as to what I will need to build in Azure and how many transporters I may require?
    1 point