Leaderboard
Popular Content
Showing content with the highest reputation since 08/19/19 in Posts
-
I've just uploaded almost 20 GB of very important data to OneDrive (documents, xlsx, and PDFs mostly). Now I wonder about OneDrive's security and reliability. What is your point of view? Should I keep one more copy in another place? 3 points
-
Hello, community, let's gather here general and more specific tips on how to speed up backups. Any suggestions and comments will be helpful. Thank you. 3 points
-
Hello, @Loki Rodriguez, great idea! From my side, I can recommend reading these worthwhile blog posts and references: "How to Increase VM Backup Speed", "Increasing VM Backup and Replication Speed with Network Acceleration", "Network Acceleration", "Best Practices for Efficient Hyper-V VM Backup", "Advanced Bandwidth Throttling", "VMware Backup: Top 10 Best Practices", "Hyper-V Best Practices for Administration", and "An Overview of Microsoft Office 365 Backup Best Practices". 3 points
-
I think the developer made a fundamental mistake here. The repository should only be responsible for the storage type (disk or tape), where the data is stored (local, NAS, or cloud), and how the data is stored (compression and encryption). The backup job should determine how the data is structured. The "Forever incremental" and "Incremental with full" options should be in the job schedule section, and the retention policy should determine how long the data is kept. Deciding whether the data is "forever incremental" or "incremental with full" should be a function of the backup job itself, not of the repository. 2 points
-
Hi NAKIVO, is it possible to add an MFA option for the web interface? I have some customers where MFA is required for the backup. Thanks, Mario. 2 points
-
Hi, a tape volume doesn't move back to its original slot when it is removed from the drive. 2 points
-
I fully agree with TonioRoffo and Jim. Deduplication efficiency and forever-incremental backups have been, from the beginning, among the main reasons I chose NAKIVO as the number one backup solution for my customers. The obligation to periodically perform full backups was, for some of my customers, extremely tedious for obvious reasons of time and space consumption. Now I find myself having to destroy the repositories with all my backups and recreate them with the correct settings in order to use them in "incremental forever" mode, as I always did on previous NAKIVO versions. I think the change was indeed a bad idea. 2 points
-
Could you please do a video on NAKIVO B&R to Microsoft Azure? 2 points
-
Could I suggest you include these instructions for manual download and installation on DSM 7, for the current and future versions, on the main Updates page (with suitable caveats/warnings)? The Synology approval process is clearly far too slow (it can take months for an update to appear, and even then it tends to lag behind the current version), so I suspect most of us using NAKIVO on Synology are now updating manually (and how to do so is becoming a FAQ). Matthew. 2 points
-
@Official Moderator Nice tips. One more from me: use two-factor verification. It is a must. 2 points
-
Hello, @Tayler Odling. Of course, if your data is important to you, keep it in multiple places. I recommend having at least three copies of it, saving it to two different types of media, and keeping at least one copy offsite. As for Microsoft OneDrive, in my view it is as safe as any other storage, but you should not rely on only one location, even though it provides encryption. Please find great tips on OneDrive data safety and a list of the common mistakes that can affect your cloud security here: https://www.nakivo.com/blog/microsoft-onedrive-security/ Let me know if you need any additional assistance from me. I look forward to hearing from you, Tayler. 2 points
-
When I created this thread, I thought we would mostly be speaking about network load and its critical impact on backups. (Update: yes, I have found this information in one of the references by @Official Moderator.) Reducing the network load is the first thing I would do when trying to increase backup server performance. 2 points
-
Hello, @Hakim, I think you should find more information about NAKIVO Backup for Ransomware Protection: https://www.nakivo.com/ransomware-protection/ Anti-ransomware software alone is not sufficient to protect your organization's data. I also recommend reading the best practices for ransomware protection and recovery: https://www.nakivo.com/ransomware-protection/white-paper/best-practices-for-ransomware-protection-and-recovery/ Sorry for going off-topic; I just wanted to support your remark. 2 points
-
Hello, guys, two remarks from me. On the one hand, anti-virus solutions can slow down saving and reading data from a disk; on the other hand, a ransomware infection can cause performance degradation. So you should always keep a balance. 2 points
-
Hello, @tommy.cash, thank you for your post. Here is a list of questions you should ask yourself before creating the backup policy for your university:
1) Frequency of system/application backups.
2) Frequency of network file backups.
3) Frequency of email backups.
4) Frequency of desktop backups.
5) Storage.
6) Recovery testing.
As for Microsoft 365 backup policies, there are built-in configuration options in your account that can help prevent data loss. Because of Microsoft's Shared Responsibility Model, it is also necessary to have a backup of your Microsoft 365 data, using a third-party solution such as NAKIVO Backup & Replication. Please check this useful how-to guide on setting up Microsoft 365 backup policies: https://www.nakivo.com/blog/setting-up-microsoft-office-365-backup-policies/ 2 points
-
@Bedders, please try replacing "q" with "-q":
zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
Let me know if it works for you! 2 points
-
Hi, @JurajZ and @Bedders! NAKIVO Backup & Replication uses the Apache Log4j library, which is part of Apache Logging Services. You can manually fix the CVE-2021-44228 vulnerability by removing JndiLookup.class from libs\log4j-core-2.2.jar.
Note: If the libs folder contains log4j-core-fixed-2.2.jar instead of log4j-core-2.2.jar, the issue has already been fixed for your version of NAKIVO Backup & Replication.
For Linux:
1. Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
2. Remove JndiLookup.class from the jar file by running the following command:
zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
For Windows:
1. Ensure you have the 7z tool installed.
2. Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
3. Use 7z to open log4j-core-2.2.jar and remove JndiLookup.class from the jar file.
4. Restart NAKIVO Backup & Replication.
For NAS devices: if you are using a NAS, open an SSH connection to your device and locate the NAKIVO Backup & Replication installation folder here:
- ASUSTOR NAS: /usr/local/AppCentral/NBR
- FreeNAS/TrueNAS (inside the jail): /usr/local/nakivo/director
- NETGEAR NAS: /apps/nbr
- QNAP NAS: /share/CACHEDEV1_DATA/.qpkg/NBR
- Raspberry Pi: /opt/nakivo/director
- Synology NAS: /volume1/@appstore/NBR
- Western Digital NAS: /mnt/HD/HD_a2/Nas_Prog/NBR
Note: Refer to your NAS vendor's documentation to learn how to open an SSH connection to your NAS device.
IMPORTANT: CVE-2021-44228 is a severe vulnerability. We strongly advise you to apply the manual fix as soon as you can; this is the best way to avoid the risk of security breaches. Please contact customer support if you require a custom build of NAKIVO Backup & Replication that includes the fix. 2 points
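To confirm the fix worked, a quick check like the following can help. This is my own sketch, not from NAKIVO's documentation: it assumes `unzip` is available on the system, and the `check_log4j` helper name is made up for illustration.

```shell
# check_log4j: print "vulnerable" if the given jar still contains
# JndiLookup.class, or "fixed" once the class has been removed.
check_log4j() {
  if unzip -l "$1" 2>/dev/null | grep -q 'JndiLookup\.class'; then
    echo "vulnerable"
  else
    echo "fixed"
  fi
}

# Example usage (run from the libs folder; jar name is illustrative):
# check_log4j log4j-core-2.2.jar
```

Running it before and after the `zip -q -d` command should flip the output from "vulnerable" to "fixed".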
-
Hi, @SALEEL! I am very sorry that we put you in such a position. The latest changes in DSM 7 were unexpected for everyone, and it makes us very sad that we can't provide you with our best service as usual. Our team has been doing everything to accelerate the process; however, at this point, there is nothing we can do. I understand your disappointment, and I apologize for all the inconvenience this has caused you. One of our main values is customer satisfaction, which is why this situation is frustrating. If only it depended on us! At the moment, I can only offer you the following options:
1. You can temporarily install NAKIVO Backup & Replication on some other host and use the NAS-based repository as a CIFS/NFS share. When DSM 7 is officially supported, it will be possible to install NAKIVO on the NAS again.
2. Workaround for DSM 7:
- Deploy a new NAKIVO installation somewhere else as a Virtual Appliance (VA) or install it on a supported VM, physical machine, or NAS;
- Share the repository folder on the Synology NAS with DSM 7 as an NFS / CIFS (SMB) share;
- Add this repository as an existing one to the new NAKIVO installation;
- Restore the configuration of the old NAKIVO installation from the Self-Backup in the original repository.
For now, we are working closely with Synology to prepare the release, and we are almost at the final stage. However, we expect to need a few more weeks until the final release. Again, I am very sorry for all the inconvenience! 2 points
-
Great to get 2FA for the login. I just found the option "Store backups in separate files": "Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does it have an effect on global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of NAKIVO; if there is an improvement here, it is very welcome! 2 points
-
Here is how to fix it for QNAP:
1. Log in to the QNAP via SSH as admin and run:
cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
./nkv-bhsvc stop
(Stopping NAKIVO Backup & Replication Transporter service)
./bhsvc -b "<UsePassword>"
./nkv-bhsvc start
(Starting NAKIVO Backup & Replication Transporter service)
2. Using the Chrome browser, open https://<qnapip>:4443 and go to Settings / Transporters / Onboard transporter / Edit.
3. Press F12, go to the Console, and run the following JavaScript (paste the commands one by one and press Enter):
var a=Ext.ComponentQuery.query("transporterEditView")[0]
a.masterPasswordContainer.show()
a.connectBtn.show()
4. You get new fields in the browser: a "Master password" field and a "Connect" button.
5. In "Master password", enter the password from the SSH command (the same one used with bhsvc -b "<UsePassword>") and press the Connect button.
6. Then refresh the Transporter and the Repository.
by Boris 2 points
-
Has anyone else had problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups work correctly and their retention is correct, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both of my sites with tape libraries (they expire 11 years from now). I opened a ticket, but support was not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned them on and they work, but they didn't affect my problem with yearly backups, so for now I'm turning them off to see what happens with just weekly and monthly backups. These are my settings:
Retain one recovery point per day for 7 days
Retain one recovery point per week for 4 weeks
Retain one recovery point per month for 12 months
Retain one recovery point per year for 11 years
Tape job options:
Create full backup: every Saturday
Tape appending: start full backup with an empty tape**
**I found that now that daily backups are turned on, NAKIVO will start writing daily backups to my full-backup tape before I get a chance to eject it. This is not desirable, but there is no option to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, which is also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and only offsite my weekend full backups. 2 points
-
In a multi-tenant environment, if the VPN drops for any reason (ISP problem, customer's AC problem, etc.), will the scheduled jobs start on time, or will they wait for the Director's command to start? Thanks and best regards. 2 points
-
The shortest way to integrate this without going vendor-specific is to add SAML 2.0. That would allow logins based on Microsoft 365 or other MFA vendors. 2 points
-
It is something that I may be interested in, but I would like to see a "how to" video on the process on your YouTube channel. I saw that you have a "how to" video on Amazon EC2 backups (which is old now, by the way). 2 points
-
Thank you. Someone from NAKIVO Support has already helped me. This is what I did to increase Java memory for the NAKIVO application:
1. Stop the NAKIVO service from the AppCenter.
2. Connect to the Synology via SSH as "root".
3. Edit the nkv-dirsvc file (usually located at /volume1/@appstore/NBR/nkv-dirsvc) and replace:
SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx320m"
with:
SVC_OPTIONS="-Dfile.encoding=UTF-8 -server -XX:+AggressiveOpts -XX:-OmitStackTraceInFastThrow -Xnoagent -Xss320k -Xms128m -Xmx640m"
4. Start the NAKIVO service.
Thank you. Juraj. 2 points
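The same edit can be done non-interactively instead of hand-editing the file over SSH. A minimal sketch, assuming GNU sed; the `bump_xmx` helper name is made up for illustration, and the .bak copy lets you roll back if the service fails to start.

```shell
# bump_xmx: raise the JVM max heap in a service file from 320m to 640m,
# keeping a .bak copy of the original alongside it.
bump_xmx() {
  cp "$1" "$1.bak"                      # backup before touching the file
  sed -i 's/-Xmx320m/-Xmx640m/' "$1"    # edit the SVC_OPTIONS line in place
}

# Example usage (path from the post above):
# bump_xmx /volume1/@appstore/NBR/nkv-dirsvc
```

Note the pattern only matches `-Xmx320m`, so the `-Xss320k` stack-size option on the same line is left untouched.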
-
Our team is excited to announce the official release of NAKIVO Backup & Replication v9.1! The most anticipated features are finally available: Backup to Tape, Linux Server Backup, Instant Verification, and Windows Workstation Backup. The fully functional Free Trial version is here: https://www.nakivo.com/resources/releases/v9.1/ 2 points
-
NAKIVO Backup & Replication v9.0 allows you to perform an incremental, application-aware backup of Windows Servers. With our solution, you can instantly recover files as well as application objects! Download our full-featured free trial to try out NAKIVO Backup & Replication for yourself: https://www.nakivo.com/resources/releases/v9.0/2 points
-
Hi there, I hope everyone is doing fine. I have a very unusual situation here preventing me from configuring AD integration inside NAKIVO. My domain was created with a single label; there is no dot with a second part right after. Where it should be "Contoso.com", it is only "Contoso", for example. NAKIVO won't let me click Apply; it marks the box in red and says "The value in this field is invalid". Do any of you have a workaround for this? Or can a fix be implemented? Thank you and regards, Luguta. 1 point
-
OK, I expected that there would be a simple command to integrate a new repository. It should be included in the NAKIVO documentation. Fortunately, I have good Linux knowledge, so I did it. What do I have to do to remove the old onboard repository, which is unneeded now? I would remove it from /etc/fstab, remove the logical volume, delete the partition /dev/sdb, and finally remove the 500 GB disk from the VM. Is that enough? Or are additional steps needed to remove it properly? 1 point
-
Hi, I would like to know if it is possible to change the storage class of S3 repos, for example, transitioning files to Infrequent Access one week after they are created, or something like that. Thank you. 1 point
-
Hello again, any news on when NAKIVO 10.6 will be available in the Synology Package Center? Another month is passing by, and even though there is a beta for 10.7, we still can't update even to 10.6 on Synology. 1 point
-
@AalHai Thank you for getting back to us! In this case, we can recommend the following workaround:
1. Create a script on the NAKIVO Director's side with the following actions (please note that there should be a separate script for each physical machine to be backed up):
- Refresh the transporter assigned to the physical machine.
- Start the backup job for that physical machine.
2. Create a task on the physical machine's side that initiates the above script remotely on some trigger (boot, wake-up, etc.).
At the same time, this workflow has not been tested by our QA Team, and we cannot guarantee its reliability. Thank you so much for your attention and participation. We look forward to your feedback. 1 point
-
Hi all, I installed NAKIVO on FMCSPC. However, when I want to back up the machine itself by adding it to the inventory, a warning occurred saying the physical machine agent cannot be installed on machine X because the machine contains a transporter. I googled: https://helpcenter.nakivo.com/display/KB/Physical+Machine+Agent+Installation+Failed
Solution: To solve the problem, take the following actions for Windows physical servers: uninstall the Transporter that was previously installed on the Windows machine by running the corresponding uninstaller, accessed from Programs and Features in the Windows OS; then add the Windows physical server to the product inventory again.
I have backed up my other computers to repositories using the onboard transporter (FMCSPC). (1) Will the other backup files be corrupted if I delete the onboard transporter to back up the FMCSPC machine itself? (2) How do I uninstall the transporter in Programs and Features? I only saw the newly added software NAKIVO Backup & Replication and Microsoft Visual C++ 2013 Redistributable (x64) in the Control Panel. 1 point
-
1. I have a backup repository on a NAS device that is mounted as an NFS share. 2. I have a backup copy job with a retention policy of the 3 last recovery points. 3. On the backup repository itself, you can see that the VM has 31 restore points instead of the 3 that are set up. My problem is that all my repositories always fill up. 1 point
-
Hi, @eric_76! One of the options for you would be to save emails to a desktop. It is a versatile option: you can save one or multiple emails to single or multiple files, and you can also export emails to PDF or other file formats (e.g., CSV or PST). This method works for both Outlook Online and Microsoft Office Outlook. Check the complete step-by-step guide on how to export your emails to the desktop on our blog: https://www.nakivo.com/blog/saving-microsoft-365-emails-to-desktop/ If you have any questions along the way, let me know. 1 point
-
Yes, I understand there is only a single licence file, but can that licence file contain a mix of versions? 1 point
-
Solved this problem by successfully downgrading DSM 7 to DSM 6.2 using the following procedure, based on info from: https://emby.media/community/index.php?/topic/89848-tutorialdowngrade-from-dsm-70-to-dsm-623/
1. Ensure you have a telnet client available on your PC.
2. In DSM 7.0, enable Telnet and SSH access.
3. Using telnet, connect to your Synology NAS from your PC (x.x.x.x is the IP address of your NAS): telnet x.x.x.x
4. Run the following command on your NAS: sudo -i vi /etc.defaults/VERSION
5. Replace the entire contents of that file with the following:
majorversion="6"
minorversion="2"
productversion="6.2.3"
buildphase="GM"
buildnumber="25426"
smallfixnumber="0"
builddate="2020/07/01"
buildtime="06:24:39"
6. Save the file (:wq) and reboot your NAS.
7. Use the Synology Assistant or any IP scanner to get the IP of your NAS (it has changed back to DHCP).
8. Install DSM 6.2.3 using the .pat file downloaded from the Synology download center (Synology Archive Download Site - Index of /download/Os/DSM/6.2.3-25426). The upgrade will start but then quickly show an error stating you are unable to downgrade and must install a version of DSM 7. Don't worry: at this point, Telnet will be available and you will be able to connect to your NAS.
9. Telnet to your NAS as in step 3, but instead of using your account credentials from before, use the following: username: root, password: 101-0101
10. Repeat steps 4 and 5, replacing the VERSION file with the new contents (no sudo, as you are root now).
11. Install 6.2.3 again. Your NAS should successfully downgrade to 6.2.3. I haven't verified that this will retain the data on your disks.
12. Now you may upgrade to the latest DSM 6.2.x from the NAS GUI.
13. If your NAS had a static IP, you have to reconfigure your network settings, as the downgrade changed it back to DHCP. 1 point
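Steps 5 and 10 above can be done in one shot with a heredoc instead of editing in vi. A sketch only, using the exact values from the procedure; the `write_dsm6_version` helper name is made up for illustration, and on the NAS you would run it as root and double-check the target path before overwriting.

```shell
# write_dsm6_version: overwrite a DSM VERSION file with the DSM 6.2.3
# values from the downgrade procedure. Pass the target path as $1
# (on the NAS it would be /etc.defaults/VERSION).
write_dsm6_version() {
  cat > "$1" <<'EOF'
majorversion="6"
minorversion="2"
productversion="6.2.3"
buildphase="GM"
buildnumber="25426"
smallfixnumber="0"
builddate="2020/07/01"
buildtime="06:24:39"
EOF
}
```

The quoted `'EOF'` delimiter keeps the contents literal, so nothing in the version strings is expanded by the shell.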
-
Is there any way to get on a notification list for when the updated product is released? 1 point
-
OK, I worked out that "Store backups in separate files" means exactly the same as "no deduplication". If I am wrong, then please let me know. NAKIVO should have just kept it simple by keeping the "deduplication" wording and not confused people with another option that means the same thing. It's a bit crazy to look at a repository after it's been created and see "Deduplication" and "Store backups in separate files" as completely separate line items (which are far apart). Both refer to the same thing. I bet there is a product designer pointing a finger at a colleague right now going "see, I told you that customers would not like this!". And they would be right. It's pretty strange that for this feature, the new docs now say "Enabling this option is highly recommended to ensure higher reliability and performance". WHAT?! Dedup was one of the main reasons I bought NAKIVO! https://www.nakivo.com/vmware-backup/#benefits - this talks up dedup as a benefit and cost saving. Nowhere in the sales pitch do you say that it's highly recommended not to use it!! It's also interesting to note that deduplication and an immutable backup repository are incompatible with each other, as I found out from emailing tech support.
Some feedback on documentation. NAKIVO should put 10.3 or 10.4 (or whatever the version is) at the top of each documentation page. That would be very handy. I thought I was reading the latest version when I posted above, as the Release Notes page mentioned 10.4, so it looked like the right place for the latest info: https://helpcenter.nakivo.com/display/RN/Release+Notes I didn't realise until after contacting tech support that it was the older documentation. Also, the images zoom on the 10.3 docs but not the 10.4 ones. I prefer zoom, as sometimes the info is unreadable until it's zoomed in. Please fix this in the 10.4 docs. Thanks.
An issue I found here: https://helpcenter.nakivo.com/User-Guide/Content/Deployment/System-Requirements/Feature-Requirements.htm#Ransomware
------ quote
For Local Folder type of Backup Repository, the following conditions must be met: The Backup Repository data storage type must be set to Incremental with full backups.
----- end quote
Whoever wrote that maybe doesn't realise that "incremental with full backups" no longer exists as an option in 10.4, as it did in 10.3. I have been told in an email that "...there is the option "store backups in separate files". When this option is selected, the repository is "Incremental with Full Backup" and Immutable is supported." It would be handy if you explained that in your documentation. The line above should probably say: The Backup Repository data storage type must be set to "store backups in separate files" (i.e., deduplication switched off). This implicitly sets the data storage policy to incremental with full backups. 1 point
-
Alright, let me explain my scenario. I have a couple of ESXi hosts with 6 VMs on each. I have 2 NFS Linux storages (storage1 and storage2), and NAKIVO runs locally on one of those storages (storage2). Right now, if I want to back up a VM, it goes something like this: the VMs get snapshotted, and new files are created on their respective storages. NAKIVO then starts copying the files to the repository, but it goes like this:
storage1 -> ESXi host -> NAKIVO repository (storage2)
storage2 -> ESXi host -> NAKIVO repository (storage2)
Instead of going directly like this:
storage1 -> NAKIVO repository
storage2 -> local NAKIVO repository (without using the network at all)
So, right now, NAKIVO is creating backups at around 50-60 MB/s, but they are using all the bandwidth available on the ESXi hosts, completely wrecking their performance while backups are running. Wouldn't it make sense for NAKIVO to pull the snapshot files directly from the ESXi storages? Is that possible? Maybe by adding the storages directly in the NAKIVO admin panel and indicating where each VM is stored? 1 point
-
Can I do a Flash VM Boot from a repository whose backups are not compressed and deduplicated? Or from a dedupe appliance? 1 point