NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 08/19/19 in Posts

  1. I've just uploaded almost 20 GB of very important data to OneDrive. Documents, xlsx, PDFs mostly... Now I wonder about OneDrive's security and reliability. What is your point of view? Should I keep one more copy in another place?
    3 points
  2. Hello, community! Let's gather here general and more specific tips on how to speed up backups. Any suggestions and comments will be helpful. Thank you.
    3 points
  3. Hello, @Loki Rodriguez, great idea! From my side, I can recommend reading these worthwhile blog posts and references:
    - How to Increase VM Backup Speed
    - Increasing VM Backup and Replication Speed with Network Acceleration
    - Network Acceleration Best Practices for Efficient Hyper-V VM Backup
    - Advanced Bandwidth Throttling
    - VMware Backup: Top 10 Best Practices
    - Hyper-V Best Practices for Administration
    - An Overview of Microsoft Office 365 Backup Best Practices
    3 points
  4. Hi NAKIVO, is it possible to add an MFA option for the web interface? I have some customers where MFA is required for the backup. Thanks, Mario
    2 points
  5. It is not available in the PRO license!
    2 points
  6. @Official Moderator Nice tips. One more from me: use two-factor verification. It is a must.
    2 points
  7. Hello, @Tayler Odling. Of course, if your data is important to you, keep it in multiple places: I recommend having at least three copies of it, saving it to two different types of media, and keeping at least one copy offsite. As for Microsoft OneDrive, my point of view is that it is as safe as any other storage, but you should not rely on only one spot, even though it provides encryption. Please find great tips on OneDrive data safety and a list of the common mistakes that can affect your cloud security here: https://www.nakivo.com/blog/microsoft-onedrive-security/ Let me know if you need any additional assistance from me. I look forward to hearing from you, Tayler.
    2 points
  8. When I created this thread, I thought we would mostly be speaking about network load and its critical impact on backups. (Update: yes, I have found this information in one of the references by @Official Moderator.) Reducing the network load is the first thing I would do in a situation where you want to increase backup server performance.
    2 points
  9. Hello, @Hakim, I think you should find out more about NAKIVO's ransomware protection: https://www.nakivo.com/ransomware-protection/ Anti-ransomware software alone is not sufficient to protect your organization's data. I also recommend reading the best practices for ransomware protection and recovery: https://www.nakivo.com/ransomware-protection/white-paper/best-practices-for-ransomware-protection-and-recovery/ Sorry for going off-topic; I just wanted to support your remark.
    2 points
  10. Hello guys, two remarks from me: on one hand, anti-virus solutions can slow down saving and reading data from a disk, but on the other hand, ransomware issues can cause performance degradation. So you should always keep a balance.
    2 points
  11. Hello, @tommy.cash, thank you for your post. Here is a list of questions you should ask yourself before creating the backup policy for your university:
    1) Frequency of system/application backups.
    2) Frequency of network file backups.
    3) Frequency of email backups.
    4) Frequency of desktop backups.
    5) Storage.
    6) Recovery testing.
    About Microsoft 365 backup policies: there are built-in configuration options in your account that can help prevent data loss. Due to Microsoft's Shared Responsibility Model, it is also necessary to have a backup of your Microsoft 365 data using a third-party solution such as NAKIVO Backup & Replication. Please check this useful how-to guide on setting up Microsoft 365 backup policies: https://www.nakivo.com/blog/setting-up-microsoft-office-365-backup-policies/
    2 points
  12. @Bedders, Please try replacing "q" with "-q": zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class Let me know if it works for you!
    2 points
  13. Hi, @JurajZ and @Bedders! NAKIVO Backup & Replication uses the Apache Log4j library, which is part of Apache Logging Services. You can manually fix the CVE-2021-44228 vulnerability by removing JndiLookup.class, located in libs\log4j-core-2.2.jar.
    Note: If the libs folder contains log4j-core-fixed-2.2.jar instead of log4j-core-2.2.jar, the issue has already been fixed for your version of NAKIVO Backup & Replication.
    For Linux:
    - Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
    - To remove JndiLookup.class from the jar file, run the following command: zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
    For Windows:
    - Ensure you have the 7z tool installed.
    - Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
    - Use 7z to open log4j-core-2.2.jar and remove JndiLookup.class from the jar file.
    - Restart NAKIVO Backup & Replication.
    For NAS devices: If you are using a NAS, open an SSH connection to your device and locate the NAKIVO Backup & Replication installation folder here:
    - ASUSTOR NAS: /usr/local/AppCentral/NBR
    - FreeNAS/TrueNAS (inside the jail): /usr/local/nakivo/director
    - NETGEAR NAS: /apps/nbr
    - QNAP NAS: /share/CACHEDEV1_DATA/.qpkg/NBR
    - Raspberry Pi: /opt/nakivo/director
    - Synology NAS: /volume1/@appstore/NBR
    - Western Digital NAS: /mnt/HD/HD_a2/Nas_Prog/NBR
    Note: Refer to the NAS vendor documentation to learn how to open an SSH connection to your NAS device.
    IMPORTANT: CVE-2021-44228 is a severe vulnerability. We strongly advise you to apply the manual fix as soon as you can; this is the best way to avoid the risk of security breaches. Please contact customer support if you require a custom build of NAKIVO Backup & Replication that includes the fix.
    2 points
  14. Hi, @SALEEL! I am very sorry that we put you in such a position. The latest changes in DSM 7 were unexpected for everyone, and it makes us very sad that we can't provide you with our best service as usual. Our team has been doing everything to accelerate the process; however, at this point, there is nothing we can do. I understand your disappointment, and I apologize for all the inconvenience this has caused you. One of our main values is customer satisfaction, which is why this situation is frustrating. If only it depended on us! At the moment I can only offer you the following options:
    1. You can temporarily install NAKIVO Backup & Replication on some other host and use the NAS-based repository as a CIFS/NFS share. When DSM 7 is officially supported, it will be possible to install NAKIVO on the NAS again.
    2. Workaround for DSM 7:
    - Deploy a new NAKIVO installation somewhere else as a Virtual Appliance (VA), or install it on a supported VM, physical machine, or NAS;
    - Share the repository folder on the Synology NAS with DSM 7 as an NFS / CIFS (SMB) share;
    - Add this repository as an existing repository to the new NAKIVO installation;
    - Restore the configuration of the old NAKIVO installation from the Self-Backup in the original repository.
    For now, we are working closely with Synology to prepare the release, and we are almost at the final stage. However, we expect to need a few more weeks until the final release. Again, I am very sorry for all the inconvenience!
    2 points
  15. Hi, when is VMware 7 Update 3 support planned?
    2 points
  16. Great to get 2FA for the login. I just found the option "Store backups in separate files: Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does it have an effect on global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of NAKIVO; if there is an improvement here, it is very welcome!
    2 points
  17. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  18. Here is how to fix it... for QNAP:
    Log in to QNAP via SSH as admin:
    cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
    # ./nkv-bhsvc stop
    Stopping NAKIVO Backup & Replication Transporter service:
    [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./bhsvc -b "<UsePassword>"
    [/share/CACHEDEV1_DATA/.qpkg/NBR/transporter] # ./nkv-bhsvc start
    Starting NAKIVO Backup & Replication Transporter service:
    Use the Chrome browser: https://<qnapip>:4443
    Go to Settings / Transporters / Onboard transporter / Edit.
    Press F12, go to Console, and type the following JavaScript commands (paste the commands one by one and press Enter):
    var a=Ext.ComponentQuery.query("transporterEditView")[0]
    a.masterPasswordContainer.show()
    a.connectBtn.show()
    You get new fields in the browser: a Master password field and a Connect button.
    In Master password, enter <ThePasswordFromTheSSHCommand> (the same one used in the bhsvc -b "<UsePassword>" command above).
    Press the Connect button, then refresh the Transporter and the Repository.
    by Boris
    2 points
  19. Anyone else have problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups work correctly and their retention is correct, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both of my sites where I have tape libraries. (They expire 11 years from now.) I opened a ticket, but they were not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned that on and they work, but they didn't affect my problem with yearly backups, so for now I'm turning it off to see what happens with just weekly and monthly backups. These are my settings:
    Retain one recovery point per day for 7 days
    Retain one recovery point per week for 4 weeks
    Retain one recovery point per month for 12 months
    Retain one recovery point per year for 11 years
    Tape Job Options:
    Create Full Backup: Every Saturday
    Tape appending: Start full backup with an empty tape**
    **I found that now that daily backups are turned on, NAKIVO will start writing daily backups to my full backup tape before I get a chance to eject it. This is not desirable, but there is no option to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, which is also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and only offsite my weekend full backups.
    2 points
  20. In a multi-tenant environment, if the VPN drops for any reason (ISP problem, customer's AC problem, etc.), will the scheduled jobs start on time, or will they wait for the Director's command to start? Thanks and best regards
    2 points
  21. The shortest way to integrate this without going vendor-specific is adding SAML 2.0. This would allow logins based on Microsoft 365 or other MFA vendors.
    2 points
  23. Could you please do a video for NAKIVO B&R to Microsoft Azure?
    2 points
  24. It is something that I may be interested in, but I would like to see a "how to" video on the process on your YouTube channel. I saw that you have a "how to" video on Amazon EC2 backups (which is old now, by the way).
    2 points
  25. Thank you. Someone from NAKIVO Support helped me already. This is what I did to increase Java memory for the NAKIVO application:
    1. Stop the NAKIVO service from the AppCenter.
    2. Connect to the Synology via SSH as "root".
    3. Edit the nkv-dirsvc file (usually located at /volume1/@appstore/NBR/nkv-dirsvc) and replace:
    SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx320m"
    with:
    SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx640m"
    4. Start the NAKIVO service.
    Thank you. Juraj.
    2 points
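If you prefer not to hand-edit the file over SSH, the replacement in step 3 can be done with a single sed command. This is a hedged sketch: the helper name is mine, and -i.bak keeps a backup copy of the original script next to it; the path and the 320m/640m values are the ones from the post.

```shell
# Sketch of step 3 above as a sed replacement: bump the -Xmx heap limit in the
# NAKIVO service script. The helper name is illustrative; -i.bak writes a
# backup copy of the original file before the in-place edit.
bump_xmx() {
  svc_file="$1"   # e.g. /volume1/@appstore/NBR/nkv-dirsvc
  old="$2"        # current limit, e.g. 320m
  new="$3"        # new limit, e.g. 640m
  sed -i.bak "s/-Xmx$old/-Xmx$new/" "$svc_file"
  grep -- '-Xmx' "$svc_file"   # print the line so you can verify the new limit
}
```

For the case in the post: `bump_xmx /volume1/@appstore/NBR/nkv-dirsvc 320m 640m`, then start the NAKIVO service again.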
  26. Hello, thanks for support, it works. Regards Francesco
    2 points
  27. Our team is excited to announce the official release of NAKIVO Backup & Replication v9.1! The most anticipated features are finally available: Backup to Tape, Linux Server Backup, Instant Verification and Windows Workstation Backup. The fully-functional Free Trial version is here https://www.nakivo.com/resources/releases/v9.1/
    2 points
  28. NAKIVO Backup & Replication v9.0 allows you to perform an incremental, application-aware backup of Windows Servers. With our solution, you can instantly recover files as well as application objects! Download our full-featured free trial to try out NAKIVO Backup & Replication for yourself: https://www.nakivo.com/resources/releases/v9.0/
    2 points
  29. Hello, @Michael.du. Check the Deleted Items folder; if you can't find the needed emails in it, please see the Recoverable Items folder (it is hidden). If you want to recover emails from Outlook, don't forget to inspect the Trash folder. By the way, we have a great article on this topic; I think this is what you need right now: https://www.nakivo.com/blog/how-to-recover-deleted-emails-office-365-guide Thank you for your post; please let me know if any further assistance is needed.
    1 point
  30. I had 10.6 installed before I updated to DSM 7.1 (on my x86_64 Synology). The units that had transporters showed a "REPAIR" option in the Package Center that worked great, but the main unit with the NAKIVO software just shows that it is incompatible. Is there any updater, or a way to downgrade to 10.51 as it did for the transporters? Thank you!
    1 point
  31. Hi, @Connor0967! You can try restarting the ESXi management agents; most likely, this resolves the issue. Otherwise, you can restart the ESXi host, but you will have to shut down your VMs or migrate them to another host, which might be problematic for a production environment. So, start by restarting the ESXi management agents. You can find more information on the procedure in our blog post: https://www.nakivo.com/blog/how-to-restart-management-agents-on-a-vmware-esxi-host/ Don't hesitate to contact me if you have any questions.
    1 point
  32. Thank you for the response and the blog post. It is very descriptive. I'll start learning more about VXLAN. Seems interesting.
    1 point
  33. Hello, I just opened my Synology running DSM 7. For the installed package, NAKIVO told me to do an update to solve the issue, and NAKIVO is working again after the upgrade! (version 10.4.0.59271-3000) Michel
    1 point
  34. Hello, I need the quick fix for the NAKIVO 10.4 Synology NAS package. Please! Many thanks.
    1 point
  35. Hi, can you (moderator) kindly post the path to download the above-mentioned quick fix? Thanks!
    1 point
  36. {"action":"AuthenticationManagement","method":"login","tid":"1","type":"exception","message":"The API access is not licensed. Please contact support.","where":null,"cause":null,"data":null} — this is the output when I log in using the API. Can you please tell me what I am doing wrong here?
    1 point
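For comparison, a login request that produces a response in this shape can be made with curl. This is a hedged sketch, not a confirmed recipe: the /c/router endpoint, port 4443, and the [username, password, rememberMe] argument order are assumptions inferred from the error message and typical Director setups.

```shell
# Hedged sketch of a Director API login call. The /c/router endpoint, port
# 4443, and the [username, password, rememberMe] data array are assumptions,
# not confirmed by the post; -k skips TLS verification for self-signed certs.
nakivo_login() {
  host="$1"; user="$2"; pass="$3"
  payload=$(printf '{"action":"AuthenticationManagement","method":"login","tid":1,"type":"rpc","data":["%s","%s",false]}' "$user" "$pass")
  curl -k -s -c /tmp/nakivo-cookies.txt \
    -H 'Content-Type: application/json' \
    -d "$payload" "https://$host:4443/c/router"
}
```

Note that the error pasted above ("The API access is not licensed") suggests the request itself reached the Director and was parsed; the edition in use simply does not include API access, so the fix is a licensing question rather than a request-format one.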
  37. I've got the following link from support: https://maglar.box.com/shared/static/ebzm01v0080ibtvdh2ubc11s7ops58wa.sh This is a newer build, and it should fix the problems. Since I was able to fix the problems with a reinstall, I won't upgrade, but maybe someone else can minimize their trouble. Mario
    1 point
  38. Hello, in the current version 10.3 there is a feature called "direct connect", which doesn't need a VPN, but it is just for VMware. Direct Connect for physical servers will be available in 10.6; there is no official release date for it yet. This was said during today's webinar, so we're still looking forward to it. Regards, Jaroslav
    1 point
  39. @Official Moderator Everything works now. Had multiple versions installed. Thanks a bunch!
    1 point
  40. Hello, I'm on the way to migrating to "Tenant Resource Allocation". Now my 3 repos are offline, and I can only remove them; I don't have the option to attach them again. How can I resolve this situation? Thanks, Mario
    1 point
  41. Hi, I've installed Backup & Replication on a Synology. Now I want to add a Windows 10 PC that is connected to a Microsoft account login. How do I find the correct username and password? I log in to the PC with a Microsoft email and PIN; what do I put in here? Please help. Thanks!
    1 point
  42. A few months back, I was told by support that a new type of repository was being developed where space reclamation could run without blocking access to the repo. The process would be similar to how a Data Domain can run space reclamation while still accessible but at the expense of I/O performance; however, reclamation can be throttled to increase I/O performance at the expense of running a longer reclamation period. Is it possible to get an update as to the status of this technology? I run backups nightly and need to reclaim space after each run due to storage space constraints. I've designed my job calendar to maximize the reclamation time period, but it sometimes runs into the next day's schedule. While normal backup jobs will pause reclamation, the director backup will fail if it is running. While my systems often run without incident, the Director can be chatty with its alerting which reduces the effectiveness of bringing important events to my attention.
    1 point
  43. I cannot add a Linux physical server to the inventory; adding stops at 35%, then I get the error below. XXX: Cannot add the machine. An internal error has occurred. Reason: The deployed agent is inaccessible. (TransporterServiceException). Any idea about this error? Is a root account with SSH login required?
    1 point
  44. Hello, for one reason or another the transporter fails to install on a Debian server. How can I install it manually?
    1 point
  45. We actually failed the restore after exporting and importing back into the datastore; the disk refuses to boot. We are testing with a Hyper-V image: export to VMDK, create a new VM server, upload the disk to the VMware datastore, mount the disk, and attempt to boot.
    1 point
  46. Would it be possible to schedule the stop of space reclaim? That way it handles the bigger blocks, which account for the majority of the space. The reason I ask is that we will have around 350 separate repositories, and if the power goes out at any one of them, it corrupts the backups; multiplied by the number of repositories, that makes it a nightmare to manage.
    1 point
  47. I would like to know: if we configure AWS S3 as the target repository, does the transporter store a local copy before uploading to AWS during a backup or backup copy job, or does it upload directly to S3? Will there be a local cache copy during S3 transmission?
    1 point
  48. May I know the release date for physical server restore? Physical server backup is getting risky, as there is currently no way to do a full server restore to a physical server.
    1 point
  49. I need to patch ESXi with VMware Update Manager. Please help
    1 point
  50. Thanks a lot for your reply! 1) Full in both. 2) Yes. 3) Perhaps I did the config wrong on site A then. * Assigned transporter: the one from site B, correct? * Location: I have the following options: (1) local folder on remote transporter, (2) remote CIFS folder, (3) remote NFS folder. Which option is the correct one if the transporter on site A doesn't have direct access to the NAS on site B? 4) That works fine; that's how backup is working on site B. The reason why I didn't open a support request with NAKIVO is that I am using the free version. Should I do it? Thank you for your time!
    1 point