NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 08/18/21 in Posts

  1. Is there any way to get on a notification list for when the updated product is released?
    1 point
  2. Solved! I've found the problem: there seems to be a config issue in the factory-default TCP settings of the Intel QNAP.
     Default receive TCP window on the Intel QNAP: net.ipv4.tcp_rmem = "4096 1638400 1638400"
     Default receive TCP window on the ARM QNAP: net.ipv4.tcp_rmem = "32768 87380 16777216"
     I think the Intel config had too low a minimum window. As soon as I changed the values on the Intel to the ones from the ARM, SSH data transfer doubled and kept climbing. So I tweaked the TCP windows (send and receive) and got my beloved 1 Gbps per job. The commands (as root over SSH on the QNAP):
     sysctl -w net.ipv4.tcp_rmem="131072 524288 16777216"
     sysctl -w net.ipv4.tcp_wmem="131072 524288 16777216"
     Also, I think it's enabled by default, but it's not a bad idea to force window scaling (autotuning):
     sysctl -w net.ipv4.tcp_window_scaling=1
     I hope it's useful if anyone has the same issue. Just be sure to include the tweaks in autorun.sh so they survive reboots. Thanks
    1 point
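     For reference, a minimal sketch of the autorun.sh persistence mentioned above. Whether autorun.sh is enabled, and where it lives, varies by QNAP model and QTS version, so treat the mechanism as an assumption to verify; the values themselves are copied from the post.
       # re-apply the TCP window tweaks from the post on every boot (place in autorun.sh)
       sysctl -w net.ipv4.tcp_rmem="131072 524288 16777216"
       sysctl -w net.ipv4.tcp_wmem="131072 524288 16777216"
       sysctl -w net.ipv4.tcp_window_scaling=1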
  3. OK, my apologies... it seems to be a QNAP issue, or a network issue. SSH (SCP or rsync) transfers from any remote server to the Intel QNAP top out at 200 Mbps, and the weird thing is that the same SCP from the internal network has no cap. My bets were wrong: the Nakivo Transporter must use some kind of SSH transfer method, but it's not at fault. Maybe an internal firewall is throttling connections from outside the network? (I have all "visible" firewalls offline.) I completely reset the QNAP, still haven't installed Nakivo, and the cap is still there. The other QNAP right beside this one (the ARM one) is performing nicely. Any clue...? I will send a support ticket to the QNAP guys... Sorry again for all the misunderstanding.
    1 point
  4. Hi. I have a VMware server with six different applications installed. I am looking for free software that I can use to back up these applications in OVA format to a remote desktop computer. Is there any free software available? Do I have to shut down the applications before taking the backup?
    1 point
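     For reference, one free option is VMware's ovftool, which can export a VM to an OVA file. A minimal sketch, assuming a standalone ESXi host; the host name, VM name, and output path below are placeholders, not taken from the post, and the VM should be powered off (or at least quiesced) for a consistent export:
       # export a VM from an ESXi host straight to an OVA on the machine running ovftool
       ovftool vi://root@esxi-host/MyAppVM /backups/MyAppVM.ova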
  5. Hi, I'm setting up a new Ubuntu Server 20.04 as a repo to test the "Immutable Repository" feature. I can create repos on this system without encryption, but I'm not able to create encrypted repos. The error: repo-test: Cannot create the backup repository. An unknown error has occurred during "repo-test" backup repository processing. I installed the following additional packages: ecryptfs-utils, cryptsetup. How can I get encryption working? Thanks, Mario
    1 point
  6. I got a new version from support; now it's working. Version: 10.4.0.r57249
    1 point
  7. Hello everyone. I have a Microsoft Hyper-V Server 2019 host, and I set up a backup of two virtual machines (one Domain Controller 2019 and one Exchange Server 2019). I noticed that some folders are missing in the domain controller backup, and only the main Exchange database is shown in the Exchange server backup, even though there should be other databases in it too. I noticed this while doing the recovery procedure. How can I solve the problem? Thanks.
    1 point
  8. Hi @Moderator, I don't want to encrypt the partition; I'd like to set up encryption for folders as described in the Help Center: https://helpcenter.nakivo.com/display/KB/How+Backup+Repository+Encryption+Works
     The ecryptfs package provides encryption of folders on Ubuntu. Quoting the article: "When you create a Backup Repository on an Ubuntu machine, you can enable encryption for that Backup Repository. The cryptsetup approach will be used for devices and partitions. The file system of the encrypted device/partition will be transformed to ext4. At the same time, the ecryptfs approach will be used for encrypting folders."
     Thanks, Mario
    1 point
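     As a hedged troubleshooting sketch (paths below are examples, not from the post): confirming that ecryptfs itself works on the Ubuntu 20.04 repository host can help separate an OS-level problem from a NAKIVO-level one:
       # load the kernel module and try an interactive ecryptfs mount of a scratch folder
       sudo modprobe ecryptfs
       mkdir -p /srv/ecryptfs-test
       sudo mount -t ecryptfs /srv/ecryptfs-test /srv/ecryptfs-test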
  9. Hi team, I am new to this tool and need your help making REST API calls: how to add a new server to backup jobs, and how to remove it. I want to automate all of the manual tasks. Thanks, Sumit
    1 point
  10. If I have a Basic license and want to explore the Nakivo Backup tool using the API to integrate and automate its daily tasks, can I do that?
    1 point
  11. Hello, I've deployed Nakivo on a Synology NAS. I have a physical server running Windows Server 2012 R2 with several roles, and inside this machine a VM is running under Hyper-V. My plan was to add two backup jobs in Nakivo:
     1) A backup job for the Hyper-V VM (this way I have the flexibility to restore it separately if needed)
     2) A backup job for the physical server (excluding the VM inside it, if possible)
     Step 1 completed with no problem, but when I attempt to add the backup job for the physical server, it detects that a Hyper-V job has already been configured for that physical machine and I get this error: Physical Machine Agent cannot be installed on the "192.168.0.250" machine. This machine already contains a Transporter.
     Is the only available option a full physical machine backup that includes the VM in the same job? Thanks in advance!
    1 point
  12. "action":"AuthenticationManagement","method":"login","tid":"1","type":"exception","message":"The API access is not licensed. Please contact support.","where":null,"cause":null,"data":null} this is the output when i do login using the API. Can you please tell me what is wrong I am doing here.
    1 point
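     For context, the error above already shows the request envelope (action, method, tid, type), and it indicates the license does not include API access. A minimal login sketch with curl, hedged: the /c/router endpoint, the default Director port 4443, and the [username, password, remember-me] layout of the data field are assumptions to verify against the NAKIVO API documentation:
       # log in and store the session cookie for later calls
       curl -k -c cookies.txt -H "Content-Type: application/json" \
         -d '{"action":"AuthenticationManagement","method":"login","data":["admin","password",false],"type":"rpc","tid":1}' \
         https://director-host:4443/c/router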
  13. Thanks for the info. I have stuck with not enabling the storing of backups in separate files because I'm a little worried about the increased space requirements, and because I rely on NFS, which I don't believe is supported for immutable recovery points. I am using a ZFS back-end for my NFS repository and have good-quality SAS drives, ECC RAM, as well as regular ZFS pool scrubs and S.M.A.R.T. checks (outside of backup times). A key feature I've enabled is synchronous writes ("enforce explicit filesystem sync"), as I've read a lot about how important that is to keep data from corrupting. My recommendation would be for your product to enable this by default and tell customers to disable it only if they have a battery-backed write cache for their storage, or if they suffer crippling speed issues (I would recommend different NAS hardware if that is the case!). My speeds are still fine with explicit filesystem sync enabled. I also have the NAS on two high-quality UPS units (one per power feed) with double power conversion, active PFC, and pure sine wave output. My CPUs also have extremely good thermal paste. All firmware is up to date. I'm pretty sure I've done everything in my power to avoid repository corruption. We'll see how it goes after several months of service. If I ever get corruption, I'd look next at storing backups in separate files. So far so good, though.
    1 point
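     As a hedged companion to the setup described above (the pool and dataset names below are examples, not from the post), the ZFS-side settings the poster relies on can be checked and enforced like this:
       # confirm how synchronous writes are handled by the dataset backing the NFS export
       zfs get sync tank/nakivo-repo
       # force all writes to be synchronous regardless of what the NFS client requests
       zfs set sync=always tank/nakivo-repo
       # start a scrub and check its progress
       zpool scrub tank
       zpool status tank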
  14. I ran into the same problem! Please put pressure on Synology to get this working again!
    1 point
  15. Hi, @Gavino! In the new release, we have updated the UI of the Repository screen: the repository type menu was removed, and a "Store backups in separate files" option was added (enabled by default). Backup repositories created in previous versions of the product keep operating the way they were configured. Storing backups in separate files is recommended.
     Downside: higher space requirements and no built-in deduplication.
     Upsides: it is more resilient to network, power, and hardware failures; it does not require running space reclaim; it is suitable for enterprise-grade deduplication storage devices; and it supports immutable recovery points.
     If you still want to use the "forever incremental" repository with built-in data deduplication, please uncheck the "Store backups in separate files" checkbox when creating the backup repository. Please let me know if you have further questions or concerns.
    1 point
  16. Many thanks for your reply! However, sorry if I was unclear, but this is exactly what I did several times, and it apparently does NOT fully work: the remote iSCSI drive was already excluded from the backup several times. The remote drive is persistent, so its UUID should not change (to my knowledge; I admittedly did not check). Still, NBR sometimes detects it as new and adds it to the backup automatically, which I do not want.
    1 point
  17. Thanks for the info. What's strange is that "Deduplication" and "Store backups in separate files" are both listed as repository parameters at the same time, when really they are the negation of each other, aren't they? It's not possible to have both enabled, or both disabled, is it? It's as if you haven't introduced a new feature, but just added another name for an existing one, greatly confusing people in the process. I suspect "store backups as separate files" (i.e. no deduplication) would take up too much space. But I also find that global deduplication is risky, for the reliability reasons you've listed above. What I was hoping you were doing with this new feature was per-VM deduplication rather than global. That way, if there is any corruption, it would/should only affect a single VM, and I could just start backing up that VM from scratch again, rather than blowing everything away, which takes a long time for the initial backup (I can only start that on a Saturday afternoon). To me, that's a great trade-off between global dedup and no dedup. I was really hoping this new feature would be "store VM backups separately" with the option to still have deduplication on (on a per-VM basis).
    1 point