NAKIVO Community Forum

All Activity


  1. Yesterday
  2. Last week
  3. @DaniAvni and @Mario, hi! Currently, excluding specific files from a physical machine backup is not yet supported. However, this feature is in our backlog; we do not have an ETA yet. We would recommend sending us a support bundle with a detailed description of the feature request so we can pass it to our Product Team. Thank you for your suggestions! They help us improve.
  4. I took a look at what Arcserve is doing: they do not back up the pagefile and hiberfil contents, but they do back up their location in the MFT so that each file returns to the same place (only the content is replaced). Any solution is valid as long as the contents of these files do not waste space in my backups. My machine, with almost nothing changing on disk, can still produce a 100 GB backup just because the page file moves, the antivirus changes files, and my SSD is fragmented (making a lot of disk blocks appear as changed).
  5. @Mario, I thought about it (thanks!), but the problem is that once I start excluding drives I need to do it across all physical machines, and if I add a drive it will not be included automatically. A much better solution is for the backup software to exclude these files.
  6. Hi again, I've tried to restore Individual Files or Universal Object Recovery from a remote Linux VM server through a transporter, but it seems it cannot establish an iSCSI connection. I have a Linux VM with XFS that the QNAP cannot read directly, so I have to use a transporter with iSCSI for Individual File recovery. I'm currently using an Ubuntu VM with a demo NAKIVO to restore, but I need to know if it's possible through the QNAP. If I enable iSCSI in the QNAP services, the transporter sees the QNAP targets, so iSCSI must be OK, but the connection from NAKIVO Backup doesn't work. It always shows this error in the transporter logs (flr.log):
     [09/16 14:16:25.482] ERR: Failed to mount iSCSI target "iqn.2011-09.com.nakivo:backup:dba********" (14)...
     [09/16 14:16:30.497] ERR: Failed to mount iSCSI targets (14)...
     I've tried from two different Linux VMs, a VM NBR appliance, and one Windows and one Linux physical transporter, and get the same issue. open-iscsi is installed and working on all of them (a manual check is sketched after this list). I just wonder if NAKIVO for QNAP allows iSCSI connections, and if it's necessary to enable iSCSI services on the QNAP. Is there any other configuration I have to do on the transporter to connect? Thanks a lot.
  7. Just an idea for an ugly hack: put the pagefile on a different partition and exclude that partition from the backup? Not nice, but maybe effective. I think the change tracking in NAKIVO is not very effective; I've seen much better solutions from other vendors (but I still like NAKIVO).
  8. @Official Moderator, I think that while it is true that the page file serves as additional memory, it should not be backed up on a physical machine. The file contents are reset when the physical machine powers on, and the file itself is not usable: you cannot restore it or use its contents in any way (Windows prevents access to pagefile.sys, so you cannot restore it anyway). I would say the same applies to hiberfil.sys, which is the Windows hibernation file. On my machine (32 GB RAM) the page file is about 13 GB, which is a waste of backup space and time, as it is not usable; and even if you skip the file during restore, Windows will create it again.
  9. Hi, @admin inforent! Our Product Team is working on this now, but we don't have an exact ETA. As soon as it's out, we will announce the release. Stay tuned!
  10. @Kubeos GmbH, Synology assures us that it is one of their priority tasks. However, they don't provide any ETA. We also didn't expect it to take this long.
  11. It's been almost five weeks since this problem surfaced. I thought that NAKIVO was a bigger player in the market and had more influence on Synology than it obviously has...
  12. Hi, @Jason! In DSM version 7.0, Synology changed the installation policy, and it now requires additional signing for each application. As soon as the NAKIVO app is signed and approved by Synology, installing NAKIVO on DSM 7.0 will be possible. We're sorry for all the inconvenience this has caused you.
  13. When will NAKIVO support backup of online archives?
  14. May I ask why 10.4 was released without certification from Synology? That seems like something that should be done before letting us upgrade. I don't have the ability to use the workaround suggested in this thread and am considering moving to a different backup system (a lot of work), as I do not have off-site backup for one of my sites...
  15. Hi @RFD, You can follow this thread, and I will inform you as soon as the update is out.
  16. Is there any way to get on a notification list, for when the updated product is released?
  17. Earlier
  18. Solved! I've found the problem: I think there is a config issue in the factory-default TCP config on the Intel QNAP.
     Default receive TCP window on the Intel QNAP: net.ipv4.tcp_rmem = "4096 1638400 1638400"
     Default receive TCP window on the ARM QNAP: net.ipv4.tcp_rmem = "32768 87380 16777216"
     I think the Intel's config had too low a minimum window. As soon as I changed the values on the Intel to the values from the ARM, SSH data transfer doubled and beyond. So I tweaked the TCP windows (send and receive) and got my beloved 1 Gbps per job (ssh as root on the QNAP):
     sysctl -w net.ipv4.tcp_rmem="131072 524288 16777216"
     sysctl -w net.ipv4.tcp_wmem="131072 524288 16777216"
     Also, I think it's active by default, but it's not a bad idea to force autotuning:
     sysctl -w net.ipv4.tcp_window_scaling=1
     I hope it's useful if anyone has the same issue. Just be sure to include the tweaks in autorun.sh to survive reboots (a persistence sketch follows after this list). Thanks
  19. @Manuel Rivera, no worries! I hope you will find a solution to your issue!
  20. OK, my apologies... It seems to be a QNAP issue... or a network issue. SSH (SCP or rsync) transfers from any remote server to the Intel QNAP top out at 200 Mbps. The weird thing is that the same SCP from the internal network has no cap. My bets were wrong; the NAKIVO Transporter must use some kind of SSH transfer method, so it is not at fault. Maybe an internal firewall is throttling connections from outside the network? (I have all "visible" firewalls offline.) I totally reset the QNAP, still have not installed NAKIVO, and the cap is still there. The other QNAP right beside this one (the ARM one) is performing nicely. Any clue...? I will send a support ticket to the QNAP guys (a quick throughput check is sketched after this list). Sorry again for all the misunderstanding.
  21. Just to give the developers an extra clue, since the Transporters have separate configs: the origin VMs are on ESXi 7. I can't test any other version.
  22. Another revealing test:
     - Fully removed NAKIVO Backup from the Intel QNAP.
     - Installed the NAKIVO Transporter only on the Intel QNAP (and updated to 10.4).
     - Configured the ARM QNAP job to use the same origin transporter as always, but changed the CIFS repo to use the Transporter installed above. Remember, the repo is the one on the Intel via CIFS.
     Well... 200 Mbps top again, all the time, without using the native NAKIVO on the Intel QNAP, just the Transporter alone. If I change only the destination transporter to the ARM QNAP, I get full speed again. So I bet there is an issue with the NAKIVO Transporter for the Intel QNAP. Let's wait for the testing on another independent Intel QNAP.
  23. I did the test. I virtualized a Win10 on QNAP's Virtualization Station. Guess what: virtualized Windows 10 inside the QNAP, with only 2 CPUs and 4 GB, runs NAKIVO and delivers more than 3x the speed. There is a lot of latency and IOWAIT due to Windows 10 swapping, but it still finishes the same job (a 50 GB full backup) much faster in total time, at >300 Mbps. Also note that the native NAKIVO on the same NAS never gets more than 25 MB/s (~200 Mbps) running a single job. More interesting is the fact that it runs at that speed all the time: running 2 simultaneous jobs delivers 400 Mbps, 3 = 600 Mbps, etc. I wonder if I'm completely wrong, but I bet that something is throttling the onboard repository on the native QNAP Intel NAKIVO v10.4. Next week we will test on another borrowed Intel QNAP with the same NAKIVO version to see if we can replicate the issue. Thanks a lot
  24. It's the same config, not the same repo. As both are QNAP NAS, they have internal repos. Compression and deduplication are active on both. I also tried using the Intel's repo from the ARM via CIFS, and it runs fine at high speeds (a little slower than with its own repo due to the traffic between the NAS, but much faster than the 200 Mbps the Intel manages by itself). If you mean CPU usage, it's below 20% even with 3 backup jobs at a time. Those NAS aren't doing anything but backups with NAKIVO. And once again, note that the limit is per job: if I run 5 simultaneous jobs on the Intel, the network saturates at almost 1 Gbps with less than 30% CPU load. I will do a little test today. As the Intel QNAP allows virtualization, I will try to set up a minimal virtualized Windows 10 with just the Windows version of NAKIVO inside the Intel NAS. I know there will be much more CPU load, but I want to check whether the issue comes from the NAKIVO Intel QNAP package by seeing if I get speeds over 200 Mbps on a single job.
  25. The only thing to check is the compression level and deduplication on the repo, because this can also have a big impact. Is it the same for both QNAPs? And try checking "top" during the backup. Update: the Mod always points to the "find the bottleneck" KB, no matter how you analyze your setup.
  26. Hi Moderator, "the NAKIVO Job speed depends on the speed of the source data read action, the speed of the data transfer, and the speed of the data write action. All those stages are affected by the NAKIVO Job workflow." Yes, I know, but this same job runs on an ARM QNAP on the same network and delivers >700 Mbps on a single job. Same origin VMs, same network, same transporters (except the onboard one), and the Intel QNAP has higher CPU and storage specs. Also note that running 4 jobs at a time delivers 800 Mbps on both QNAPs without high loads. The issue seems to be a per-job throttle of some kind in the Intel QNAP version of NAKIVO. We will test further with Qloudea against another Intel QNAP doing VMware backups to see if the fault is replicable.
  27. Hi, @Manuel Rivera! I see that you've done quite a lot of research by yourself. However, the NAKIVO software itself has no speed limitation for backup, replication, or recovery actions that is set by the license. Please keep in mind that the NAKIVO job speed depends on the speed of the source data read, the speed of the data transfer, and the speed of the data write. All those stages are affected by the NAKIVO job workflow. Also, remember that NAKIVO Backup & Replication uses the routing table of the system where it is hosted to transfer the data. You may try using the instructions in this article to identify the bottleneck of your backup infrastructure: https://helpcenter.nakivo.com/display/KB/How+to+Identify+a+Bottleneck
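
A manual check for the iSCSI question in item 6 above. This is a minimal sketch, assuming the open-iscsi tools are installed on the transporter host and that 192.168.1.50 is the QNAP's portal address (a hypothetical IP; substitute your own). If discovery or login also fails here, the problem is between the transporter and the QNAP rather than inside NAKIVO:

    # List the targets the QNAP advertises on the default iSCSI port
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
    # Try to log in to the NAKIVO target reported in flr.log (IQN left truncated, as in the log)
    iscsiadm -m node -T iqn.2011-09.com.nakivo:backup:dba******** -p 192.168.1.50:3260 --login
    # Show active sessions, then log out again when done
    iscsiadm -m session
    iscsiadm -m node -T iqn.2011-09.com.nakivo:backup:dba******** -p 192.168.1.50:3260 --logout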
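
To make the TCP window tweaks from item 18 survive reboots, here is a minimal sketch of the lines to append to the QNAP's autorun.sh, assuming autorun.sh execution is enabled on your model; the values are the ones quoted in the post:

    # Re-apply the TCP window sizes on every boot
    sysctl -w net.ipv4.tcp_rmem="131072 524288 16777216"
    sysctl -w net.ipv4.tcp_wmem="131072 524288 16777216"
    # Window scaling is usually on by default, but force it just in case
    sysctl -w net.ipv4.tcp_window_scaling=1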
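
A quick way to separate a raw network cap from an SSH/TCP cap for the test in item 20. This is only a sketch, assuming iperf3 can be installed on both ends (for example via Entware on the QNAP) and using 192.168.1.50 as a hypothetical QNAP address:

    # On the Intel QNAP: start an iperf3 server
    iperf3 -s
    # On the remote server: measure raw TCP throughput for 30 seconds
    iperf3 -c 192.168.1.50 -t 30
    # Then compare with an SSH-bound transfer of roughly 1 GB of zeros;
    # if iperf3 is fast but this is capped near 200 Mbps, the limit is in the SSH/TCP path
    dd if=/dev/zero bs=1M count=1000 | ssh admin@192.168.1.50 "cat > /dev/null"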