NAKIVO Community Forum

Leaderboard

Popular Content

Showing content with the highest reputation since 05/18/21 in Posts

  1. Hi. When is VMware 7 Update 3 support planned?
    2 points
  2. @Bedders, Please try replacing "q" with "-q": zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class Let me know if it works for you!
    2 points
  3. Hi, @JurajZ and @Bedders! NAKIVO Backup & Replication uses the Apache Log4j library, which is part of Apache Logging Services. You can manually fix the CVE-2021-44228 vulnerability by removing JndiLookup.class from libs\log4j-core-2.2.jar.
     Note: If the libs folder contains log4j-core-fixed-2.2.jar instead of log4j-core-2.2.jar, the issue has already been fixed for your version of NAKIVO Backup & Replication.
     For Linux:
     1. Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
     2. Remove JndiLookup.class from the jar file by running the following command: zip -q -d log4j-core*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
     For Windows:
     1. Ensure you have the 7z tool installed.
     2. Go to the libs folder inside the NAKIVO Backup & Replication installation folder.
     3. Use 7z to open log4j-core-2.2.jar and remove JndiLookup.class from the jar file.
     4. Restart NAKIVO Backup & Replication.
     For NAS devices: If you are using a NAS, open an SSH connection to your device and locate the NAKIVO Backup & Replication installation folder here:
     - ASUSTOR NAS: /usr/local/AppCentral/NBR
     - FreeNAS/TrueNAS (inside the jail): /usr/local/nakivo/director
     - NETGEAR NAS: /apps/nbr
     - QNAP NAS: /share/CACHEDEV1_DATA/.qpkg/NBR
     - Raspberry Pi: /opt/nakivo/director
     - Synology NAS: /volume1/@appstore/NBR
     - Western Digital NAS: /mnt/HD/HD_a2/Nas_Prog/NBR
     Note: Refer to the NAS vendor documentation to learn how to open an SSH connection to your NAS device.
     IMPORTANT: CVE-2021-44228 is a severe vulnerability. We strongly advise you to apply the manual fix as soon as you can; this is the best way to avoid the risk of a security breach. Please contact customer support if you require a custom build of NAKIVO Backup & Replication that includes the fix.
    2 points
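[Editor's note] For readers on a platform without the `zip` utility, the removal step above can be approximated with Python's standard `zipfile` module, which must rewrite the archive because entries cannot be deleted in place. This is a sketch, not an official NAKIVO procedure; the jar path in the usage comment is illustrative:

```python
import shutil
import zipfile

VULN_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def remove_jndi_lookup(jar_path, target=VULN_CLASS):
    """Rewrite the jar without the vulnerable class (zipfile cannot delete in place)."""
    tmp_path = jar_path + ".tmp"
    with zipfile.ZipFile(jar_path, "r") as src, \
         zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename != target:
                # Copy every other entry unchanged, preserving its metadata
                dst.writestr(item, src.read(item.filename))
    shutil.move(tmp_path, jar_path)  # replace the original jar

# Example usage (hypothetical path):
# remove_jndi_lookup("libs/log4j-core-2.2.jar")
```

After removal, `unzip -l log4j-core-2.2.jar | grep JndiLookup` printing nothing confirms the fix took effect.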
  4. Hi, @SALEEL! I am very sorry that we put you in such a position. The latest changes in DSM 7 were unexpected for everyone, and it makes us very sad that we can't provide you with our best service as usual. Our team has been doing everything to accelerate the process, but at this point there is nothing more we can do. I understand your disappointment, and I apologize for all the inconvenience this has caused you. Customer satisfaction is one of our main values, which is why this situation is so frustrating. If only it depended on us!
     At the moment I can only offer you the following options:
     1. Temporarily install NAKIVO Backup & Replication on some other host and use the NAS-based repository as a CIFS/NFS share. When DSM 7 is officially supported, it will be possible to install NAKIVO on the NAS again.
     2. Workaround for DSM 7:
     - Deploy a new NAKIVO installation somewhere else as a Virtual Appliance (VA), or install it on a supported VM, physical machine, or NAS;
     - Share the repository folder on the Synology NAS running DSM 7 as an NFS / CIFS (SMB) share;
     - Add this repository as an existing repository to the new NAKIVO installation;
     - Restore the configuration of the old NAKIVO installation from the Self-Backup in the original repository.
     For now, we are working closely with Synology to prepare the release, and we are almost at the final stage. However, we expect to need a few more weeks until the final release. Again, I am very sorry for all the inconvenience!
    2 points
  5. Great to get 2FA for the login. I just found the option "Store backups in separate files": "Select this option to enable this backup repository to store data of every machine in separate backup files. Enabling this option is highly recommended to ensure higher reliability and performance." Why is this more reliable? Does this have an effect on global dedup? And can I convert the current backups to this option? From my point of view, the repos are the current bottleneck of NAKIVO; if there is an improvement here, it is very welcome!
    2 points
  6. Thank you so much for your help. It worked for me using WinSCP.
    2 points
  7. Here is how to fix it for QNAP:
     1. Log in to the QNAP via SSH as admin and run:
     cd /share/CACHEDEV1_DATA/.qpkg/NBR/transporter
     ./nkv-bhsvc stop    (stops the NAKIVO Backup & Replication Transporter service)
     ./bhsvc -b "<UsePassword>"
     ./nkv-bhsvc start   (starts the NAKIVO Backup & Replication Transporter service)
     2. In the Chrome browser, open https://<qnapip>:4443 and go to Settings / Transporters / Onboard transporter / Edit.
     3. Press F12, go to the Console, and enter the following JavaScript commands (paste them one by one and press Enter):
     var a=Ext.ComponentQuery.query("transporterEditView")[0]
     a.masterPasswordContainer.show()
     a.connectBtn.show()
     You get new fields in the browser: a "Master password" field and a "Connect" button.
     4. In "Master password", enter the password from the SSH command (the same one used in ./bhsvc -b "<UsePassword>") and press the Connect button.
     5. Then refresh the Transporter and the Repository.
     by Boris
    2 points
  8. Anyone else have problems with the Grandfather-Father-Son (GFS) retention scheme not working as expected? Daily backups run correctly with the correct retention, but instead of getting weekly and monthly backups, all my full backups are set to yearly, week after week, at both of my sites with tape libraries (they expire 11 years from now). I opened a ticket, but support was not able to tell me anything and claimed that everything was working fine. At the time I was not doing daily backups, so I turned them on and they work, but they didn't affect my problem with yearly backups, so for now I'm turning them off to see what happens with just weekly and monthly backups.
     These are my settings:
     - Retain one recovery point per day for 7 days
     - Retain one recovery point per week for 4 weeks
     - Retain one recovery point per month for 12 months
     - Retain one recovery point per year for 11 years
     Tape job options:
     - Create full backup: every Saturday
     - Tape appending: Start full backup with an empty tape**
     ** I found that now that daily backups are turned on, NAKIVO starts writing daily backups to my full-backup tape before I get a chance to eject it. This is not desirable, but there is no option to segregate GFS tapes. Setting this to "Always start with an empty tape" would burn a tape each day, which is also not desirable, but I may have no choice. I would like to append all daily backups to the same tape, keep it in the library, and offsite only my weekend full backups.
    2 points
  9. Our company is already using NAKIVO as a backup solution for our servers. Now I would like to add a backup job for our physical clients (notebooks in our case). The problem is that, due to the nature of our workflows, a lot of our work is done on the customer's premises, and it is not predictable when the notebooks in question are connected to our internal network. This makes a regularly scheduled backup job very impractical, since at any given point in time the devices may or may not be connected to our internal network. Is there a way to automatically initiate a backup of a notebook when it connects to our internal network?
    1 point
  10. @3STYLER I am very glad to help you. Please let me know if you have any more questions in the future.
    1 point
  11. I was excited to see this announcement. I've been waiting for NAKIVO to support Wasabi Cloud in the Canadian region. However, once I followed the directions to apply the update, I got an error telling me that my DSM is too new to install the package. How can this be, when you just released this version of NAKIVO? Did you build a release that doesn't support the latest version of DSM for Synology? Yet v10.5.x runs fine on this version of DSM??? See the attached screenshots for my DSM version, the NAKIVO version currently installed, and the error message I'm receiving. Confused.
    1 point
  12. I use NAKIVO in a multi-tenant configuration (MSP) for Office 365 backups only. I use the onboard transporter of the Director VA (4 vCPU and 16 GB RAM) for all tenants. Each tenant has a SaaS repository configured on an NFS-backed VMDK. The backups run overnight and work fine. I noticed, however, that even when backups are not running, there is a transporter process that appears to do some housekeeping on the repo. Each thread doing this housekeeping generates ~3,000 IOPS over NFS, and more than one thread can run concurrently. As a result, my storage has a continuous load of 3,000-25,000 IOPS (peaking at ~260 MB/s), 24/7. No space or inodes appear to be freed. I've only got 8 SaaS repos at the moment; what happens when I have 80? What is this background process doing that requires this much I/O? Is there some way I can tell if it is actually accomplishing anything? Is there a way to limit it to running out of hours? -Mark
    1 point
  13. Hello, I've deployed NAKIVO on a Synology NAS. I have a physical server running Windows Server 2012 R2 with several roles; inside this machine, a VM is running under Hyper-V. My plan was to add two backup jobs in NAKIVO:
     1) A backup job for the Hyper-V VM (this way, I have the flexibility to restore it separately if needed)
     2) A backup job for the physical server (excluding the VM inside it, if possible)
     After completing step 1 with no problem, when I attempt to add the backup job for the physical server, I get this error because it detects that a Hyper-V job has already been configured for that physical machine: Physical Machine Agent cannot be installed on the "192.168.0.250" machine. This machine already contains a Transporter. Is the only available way to make a full physical machine backup to include the VM in the same batch? Thanks in advance!
    1 point
  14. Well, I have the same problem, and this article doesn't solve anything. It shows what the nkvuser password is, but when you want to execute anything in the shell, it points out that you need to be root. The su - command doesn't work. Logging in as root doesn't work due to the unknown password, and resetting the password is only for users, not for root. So what is the solution? How do I remove storage, or add a new local drive as a repository? Everything needs to be done from the command line as root. NAKIVO version: 10.5.1, Ubuntu 20.04.
    1 point
  15. Yes thanks, I have a 10.2 SPK file now.
    1 point
  16. Hi, @mark007! There are various ways to do that. If you're an end user, the best solution for you is to use the Outlook web interface. To configure forwarding, follow these steps:
     1. In a web browser, open https://outlook.office.com
     2. Click the Settings (gear) icon to open the Office 365 email settings.
     3. At the bottom of the Settings menu, select View all Outlook settings.
     4. Click Mail > Forwarding.
     5. Check the Enable forwarding checkbox.
     6. Enter the destination email address to forward your emails to.
     Note: It is recommended to check the Keep a copy of forwarded messages checkbox. This way, your emails are kept in your current mailbox, so if something goes wrong and the destination address doesn't receive your messages, you won't lose them.
     7. Click Save.
     If you want to use other options (an email client, the Office 365 admin center, etc.) for moving your Microsoft Office 365 emails, check this blog post: https://www.nakivo.com/blog/how-to-move-office-365-emails-to-other-email-accounts/
    1 point
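[Editor's note] For admins who would rather script this than click through the web UI, Microsoft Graph exposes inbox rules at `POST /me/mailFolders/inbox/messageRules`. The sketch below only builds the request body for such a forwarding rule; it is an illustration under assumptions (the rule name and address are placeholders), not part of the walkthrough above:

```python
def build_forwarding_rule(destination, keep_copy=True):
    """Build a Microsoft Graph messageRule body that forwards all incoming mail."""
    rule = {
        "displayName": "Forward all mail",  # placeholder rule name
        "sequence": 1,
        "isEnabled": True,
        "actions": {
            # forwardTo mirrors the "Enable forwarding" setting in the web UI
            "forwardTo": [{"emailAddress": {"address": destination}}],
            "stopProcessingRules": False,
        },
    }
    if not keep_copy:
        # Without "Keep a copy of forwarded messages", delete after forwarding
        rule["actions"]["delete"] = True
    return rule

payload = build_forwarding_rule("destination@example.com")
```

The payload would then be sent with an authenticated Graph client; authentication and the HTTP call are omitted here.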
  17. Hi, @Martijn! The feature request is created. However, there is no ETA for it yet. I will inform you when our team implements it.
    1 point
  18. @fio_fan, Thank you for the response and the blog post. It is very descriptive. I'll start learning more about VXLAN. Seems interesting.
    1 point
  19. @Official Moderator, Thank you for the walkthrough. It was really helpful!
    1 point
  20. @darkhorse Hi! We have always used Microsoft 365 plans because they offer a lot of functionality. Even the cheapest options include many different services and apps, while the pricing is much lower than buying them separately. If you are on a budget, check the most affordable plan that includes Exchange Online.
    1 point
  21. I have a question mostly to sysadmins or anyone related. How do you monitor your environment? What tools do you like? I feel that my company needs an upgrade in terms of monitoring potential issues. Thanks for sharing your experience and for any relevant suggestions.
    1 point
  22. Hello, does anybody know if I can mix licence levels? I have 3 x Basic already; can I add an Essentials Pro? Thank you, Daren
    1 point
  23. Thanks. I usually look into the report.
    1 point
  24. Hi, many thanks. The inventory works now, but the warning about the unsupported vCenter version is still shown. I hope the backup works tonight. Thank you very much.
    1 point
  25. Hello, I need the quick fix for the NAKIVO 10.4 Synology NAS package, please! Many thanks.
    1 point
  26. I have to apologize, I wasn't clear enough: I'm on the Windows platform, NBR is 10.4 b58736... Please post the relevant link! Kind regards
    1 point
  27. Hi guys, for a few weeks I have not been able to back up my shared mailboxes from 365 because of the error "mailbox not licensed". How can I solve this? I don't want to buy a 365 license for the shared mailboxes. Thanks.
    1 point
  28. @fio_fan, I've checked it already; it's not my case. Any other suggestions? Thank you anyway!
    1 point
  29. Hi. I have a VMware server with six different applications installed. I am looking for free software that I can use to back up these applications in OVA format to a remote desktop computer. Is there any free software available? Do I have to shut down the applications before taking the backup?
    1 point
  30. I got a new version from support, now it's working. Version: 10.4.0.r57249
    1 point
  31. Hello everyone, I have a Microsoft Hyper-V Server 2019, and I implemented a backup of two virtual machines (one Domain Controller 2019 and one Exchange Server 2019). I noticed that some folders are missing in the domain controller backup, and only the main Exchange database is shown in the Exchange server backup, though there should be other databases in it, too. I noticed this while doing the recovery procedure. How can I solve the problem? Thanks.
    1 point
  32. If I have a Basic license and want to explore the NAKIVO backup tool using the API to integrate and automate its daily tasks, can I do that?
    1 point
  33. Many thanks for your reply! However, sorry if I was unclear, but this is exactly what I did several times and what apparently does NOT fully work: the remote iSCSI drive was already excluded from the backup several times. The remote drive is persistent, so its UUID should not change (to my knowledge; I admittedly did not check). Still, NBR sometimes detects it as new and automatically adds it to the backup, which I do not want.
    1 point
  34. Thanks for the link, Mario. For my use I still really like global deduplication for the space/cost savings, and indeed it might be necessary given my storage space requirements.
     I've recently upgraded from an older Dell R710 6-disk setup to a newer Dell R730 8-disk setup, and switched from SATA to SAS drives. I definitely recommend SAS over SATA, especially for VM Flash Boot and Screenshot Verification. It's worth the extra $$ IMHO. The firmware quality is usually better too, as is the SMART reporting. Previously, with SATA, I found it too slow to have "Enforce explicit file system sync" on; the SATA bus is only half-duplex and only 6 Gbps, vs. the full-duplex 12 Gbps of SAS3. Now, with my new setup, I have explicit filesystem sync turned on and still get about 1.2-1.5 Gbps, which I'm happy with. It was terrible on the old system, so I had it off there.
     With my old setup (without filesystem sync) I did have one corruption on a single VM with about 2TB of data: a Flash VM Boot would always come up with disk errors and would take forever trying to fix them on boot with checkdisk, which meant Screenshot Verification wouldn't work for that one, but my other 8 VMs were fine.
     My main repo is now a high-quality TrueNAS Core 12.0-U5 ZFS setup with enterprise SAS drives and ECC buffered RDIMMs, and the ZFS dataset, shared via NFS to my NAKIVO Director on Ubuntu 18.04, respects the explicit filesystem sync setting set by NAKIVO. I have everything on a power-conditioning UPS, so I'm hoping I won't have any further repo corruptions. I put a 2TB NVMe drive as L2ARC for my ZFS pool, hoping that gives a nice speed boost to VM Flash Boot and Screenshot Verification, even after months of backups (and dedup complexity). Another thing I did was turn off compression in NAKIVO, as my NFS datastore uses LZ4 ZFS compression natively, thus avoiding double-compression issues.
     Now that NAKIVO only has to worry about dedup, and not dedup + compression, I am hoping that my new setup will be super reliable, even with global dedup on. I really think global dedup is still fine to use from a reliability standpoint, but you just have to manage the setup with more care (quality ECC RAM, a quality UPS, high-quality (SAS) drives, a high-quality drive controller, filesystem sync, optimal compression settings, keeping the BIOS and controller/drive firmware up to date, etc.). Anyway, I'll see how I go.
    1 point
  35. Thanks for the info. What's strange is that "Deduplication" and "Store backups in separate files" are both listed as repository parameters at the same time, when really they are the negation of each other, aren't they? It's not possible to have both enabled, or both disabled, is it? It's as if you haven't introduced a new feature, but just added another name for an existing one, greatly confusing people in the process. I suspect "store backups in separate files" (i.e. no deduplication) would take up too much space. But I also find that global deduplication is risky, for the reliability reasons you've listed above. What I was hoping you were doing with this new feature was per-VM deduplication, rather than global. That way, if there is any corruption, it would/should only affect a single VM, and I could just start backing up that VM from scratch again, rather than blowing everything away, which takes a long time for that initial backup (I can only start that on a Saturday afternoon). To me, that's a great trade-off between global dedup and no dedup. I was really hoping this new feature would be "store VM backups separately" with the option to still have deduplication on (on a per-VM basis).
    1 point
  36. Hi Gavino, I can fully understand your annoyance with the documentation and the labelling in the GUI. I have already asked some questions about the new storage system; see the following link:
    1 point
  37. OK, I worked out that "Store backups in separate files" means exactly the same thing as "no deduplication". If I am wrong, then please let me know. NAKIVO should have kept it simple by keeping the "deduplication" wording instead of confusing people with another option that means the same thing. It's a bit crazy to look at a repository after it's been created and see "Deduplication" and "Store backups in separate files" as completely separate line items (which are far apart); both refer to the same thing. I bet there is a product designer pointing his finger at a colleague right now going "see, I told you that customers would not like this!" And they would be right.
     It's pretty strange that for this feature the new docs now say "Enabling this option is highly recommended to ensure higher reliability and performance". WHAT?! Dedup was one of the main reasons I bought NAKIVO! https://www.nakivo.com/vmware-backup/#benefits - this talks up dedup as a benefit and cost saving. Nowhere in the sales pitch do you say that it's highly recommended not to use it!! It's also interesting to note that deduplication and an immutable backup repository are incompatible with each other, as I found out from emailing tech support.
     Some feedback on documentation. NAKIVO should put 10.3 or 10.4 (or whatever the version is) at the top of each documentation page. That would be very handy. I thought I was reading the latest version when I posted above, as the Release Notes page mentioned 10.4, so it looked like the right place for the latest info: https://helpcenter.nakivo.com/display/RN/Release+Notes I didn't realise until after contacting tech support that it was the older documentation. Also, the images zoom on the 10.3 docs but not on the 10.4 ones. I prefer zoom, as sometimes the info is unreadable until it's zoomed in. Please fix this in the 10.4 docs. Thanks.
     An issue I found here: https://helpcenter.nakivo.com/User-Guide/Content/Deployment/System-Requirements/Feature-Requirements.htm#Ransomware
     ------ quote
     For Local Folder type of Backup Repository, the following conditions must be met: The Backup Repository data storage type must be set to Incremental with full backups.
     ----- end quote
     Whoever wrote that maybe doesn't realise that "incremental with full backups" no longer exists as an option in 10.4, as it did in 10.3. I have been told in an email that "...there is the option "store backups in separate files". When this option is selected, the repository is "Incremental with Full Backup" and Immutable is supported." It would be handy if you explained that in your documentation. The line above should probably say: The Backup Repository data storage type must be set to "store backups in separate files" (i.e. deduplication switched off). This implicitly sets the data storage policy to Incremental with full backups.
    1 point
  38. Hello, We need to be able to encrypt the backups repositories that are stored on windows servers. Is this something that is considered for future Nakivo versions? We would like to have safe backups, even if the Nakivo console would be compromised by hackers.
    1 point
  39. Hi,
     # running from inetd
     # Port 22
     What is inetd, a program? I do it via PuTTY. Thanks
    1 point
  40. Same problem with the NAKIVO appliance and a Transporter + Repository on Synology. The same trick worked like a charm, BUT: first, on the Synology:
     /volume1/@appstore/NBR-Transporter# ./nkv-bhsvc stop
     ./bhsvc -b "<UsePassword>"
     After these, restart the service from the Synology console, connect to the NAKIVO console, edit the Transporter, Connect... et voilà :))
    1 point
  41. You have to modify VMware itself: https://helpcenter.nakivo.com/display/KB/SSH+Requirements+for+NAKIVO+Backup+and+Replication Without the mods to sshd_config, it will never see it:
     KexAlgorithms
     HostKeyAlgorithms
     Ciphers
     I've included the original and replacement files.
     Changed:
     # Version 7.0.2.1
     # running from inetd
     # Port 22
     HostKey /etc/ssh/ssh_host_rsa_key
     HostKey /etc/ssh/ssh_host_ecdsa_key
     # Fips mode restricts ciphers to only FIPS-permitted ciphers
     FipsMode yes
     # vPP FCS_SSH_EXT.1.7: rekey after 1GB, 1H (instead of default 4GB for AES)
     RekeyLimit 1G, 1H
     SyslogFacility auth
     LogLevel info
     PermitRootLogin yes
     PrintMotd yes
     TCPKeepAlive yes
     # Key algorithms used in SSHv2 handshake
     # (ed25519 not allowed by current FIPS module)
     KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
     HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,rsa-sha2-256,rsa-sha2-512,ssh-rsa,ssh-dss
     Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
     MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1-96,hmac-sha1,hmac-md5-96,hmac-md5
     UsePAM yes
     # only use PAM challenge-response (keyboard-interactive)
     PasswordAuthentication no
     Banner /etc/issue
     Subsystem sftp /usr/lib/vmware/openssh/bin/sftp-server -f LOCAL5 -l INFO
     AuthorizedKeysFile /etc/ssh/keys-%u/authorized_keys
     # Timeout value of 10 mins. The default value of ClientAliveCountMax is 3.
     # Hence, we get a 3 * 200 = 600 seconds timeout if the client has been
     # unresponsive.
     ClientAliveCountMax 3
     ClientAliveInterval 200
     # sshd(8) will refuse connection attempts with a probability of "rate/100"
     # (30%) if there are currently "start" (10) unauthenticated connections. The
     # probability increases linearly and all connection attempts are refused if the
     # number of unauthenticated connections reaches "full" (100)
     MaxStartups 10:30:100
     # ESXi is not a proxy server
     AllowTcpForwarding no
     AllowStreamLocalForwarding no
     # The following settings are all default values. They are repeated
     # here to simplify auditing settings (for example, DoD STIG).
     IgnoreRhosts yes
     HostbasedAuthentication no
     PermitEmptyPasswords no
     PermitUserEnvironment no
     StrictModes yes
     Compression no
     GatewayPorts no
     X11Forwarding no
     AcceptEnv
     PermitTunnel no
     # The following settings are disabled during the OpenSSH build.
     # They are commented out to avoid spurious warnings in log files.
     #GSSAPIAuthentication no
     #KerberosAuthentication no
     Original:
     # Version 7.0.2.1
     # running from inetd
     # Port 22
     HostKey /etc/ssh/ssh_host_rsa_key
     HostKey /etc/ssh/ssh_host_ecdsa_key
     # Fips mode restricts ciphers to only FIPS-permitted ciphers
     FipsMode yes
     # vPP FCS_SSH_EXT.1.7: rekey after 1GB, 1H (instead of default 4GB for AES)
     RekeyLimit 1G, 1H
     SyslogFacility auth
     LogLevel info
     PermitRootLogin yes
     PrintMotd yes
     TCPKeepAlive yes
     # Key algorithms used in SSHv2 handshake
     # (ed25519 not allowed by current FIPS module)
     KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
     HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,rsa-sha2-256,rsa-sha2-512
     Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
     MACs hmac-sha2-256,hmac-sha2-512
     UsePAM yes
     # only use PAM challenge-response (keyboard-interactive)
     PasswordAuthentication no
     Banner /etc/issue
     Subsystem sftp /usr/lib/vmware/openssh/bin/sftp-server -f LOCAL5 -l INFO
     AuthorizedKeysFile /etc/ssh/keys-%u/authorized_keys
     # Timeout value of 10 mins. The default value of ClientAliveCountMax is 3.
     # Hence, we get a 3 * 200 = 600 seconds timeout if the client has been
     # unresponsive.
     ClientAliveCountMax 3
     ClientAliveInterval 200
     # sshd(8) will refuse connection attempts with a probability of "rate/100"
     # (30%) if there are currently "start" (10) unauthenticated connections. The
     # probability increases linearly and all connection attempts are refused if the
     # number of unauthenticated connections reaches "full" (100)
     MaxStartups 10:30:100
     # ESXi is not a proxy server
     AllowTcpForwarding no
     AllowStreamLocalForwarding no
     # The following settings are all default values. They are repeated
     # here to simplify auditing settings (for example, DoD STIG).
     IgnoreRhosts yes
     HostbasedAuthentication no
     PermitEmptyPasswords no
     PermitUserEnvironment no
     StrictModes yes
     Compression no
     GatewayPorts no
     X11Forwarding no
     AcceptEnv
     PermitTunnel no
     # The following settings are disabled during the OpenSSH build.
     # They are commented out to avoid spurious warnings in log files.
     #GSSAPIAuthentication no
     #KerberosAuthentication no
    1 point
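[Editor's note] Since the two files in the post above differ only in their algorithm lists, a quick way to see exactly what changed is to parse out those directives and compare them. A minimal sketch (the shortened config strings here stand in for the real files):

```python
def changed_directives(original, modified,
                       keys=("KexAlgorithms", "HostKeyAlgorithms", "Ciphers", "MACs")):
    """Return {directive: (old_value, new_value)} for directives whose values differ."""
    def parse(text):
        values = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            parts = line.split(None, 1)
            if len(parts) == 2 and parts[0] in keys:
                values[parts[0]] = parts[1]
        return values
    old, new = parse(original), parse(modified)
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

# Shortened stand-ins for the two sshd_config files above
original = "Ciphers aes256-ctr\nMACs hmac-sha2-256\n"
modified = "Ciphers aes256-ctr,aes128-ctr\nMACs hmac-sha2-256\n"
diff = changed_directives(original, modified)  # only Ciphers differs here
```

Running it over the full files would show the legacy entries appended to KexAlgorithms, HostKeyAlgorithms, and MACs, which is what lets the NAKIVO transporter negotiate a connection.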
  42. I was wondering if we could choose Amazon S3 for the repository but use any S3-compatible service. Backblaze B2, for example, is supposed to be S3-compatible; could I use it for a repository? We are trying to find a good solution for offsite copies of customer backups.
    1 point
  43. Thanks, we looked at Wasabi before and the cost was adding up quickly. That is the issue we keep running into: local backups are great, but offsite storage of the backups is so expensive that clients don't want to pay.
    1 point
  44. @Boris Works like a charm!
    1 point
  45. Dear all, I have a problem when trying to install NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh under Ubuntu 20.04. I know that it isn't fully supported at the moment, but the problem occurs at a very early step of the installation: I can't even see the EULA; it stops before that step. I tried ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh, then sudo ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh, and finally sudo su followed by ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh. Every time it's the same:
     Support for current OS was not tested.
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
     To see the list of supported systems, please visit https://www.nakivo.com/helpcenter/display/NH/System+Requirements
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
     Successful operation of all features and good performance are not guaranteed.
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
     Proceed with installation at your own risk [y/N]? y
     *******************************************************
     *                                                     *
     * Nakivo Backup & Replication Installation and Update *
     *                                                     *
     *******************************************************
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 793: cannot create /tmp/nkv-install.log: Permission denied
     Verifying Installer...
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
     Extracting NAKIVO Backup & Replication...
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 807: cannot create /tmp/nkv-install.log: Permission denied
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 808: cannot create /tmp/nkv-install.log: Permission denied
     NAKIVO Backup & Replication extraction failed.
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
     Aborting...
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 38: cannot create /tmp/nkv-install.log: Permission denied
     ./NAKIVO_Backup_Replication_v10.3.0.55051_Installer-FREE.sh: 812: cannot create /tmp/nkv-install.log: Permission denied
    1 point
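[Editor's note] The repeated `cannot create /tmp/nkv-install.log: Permission denied` lines in the post above suggest the problem is `/tmp` itself rather than the installer: on a standard Linux system `/tmp` should be mode 1777 (world-writable with the sticky bit). A small diagnostic sketch, assuming nothing about the installer:

```python
import os
import stat

def tmp_is_usable(path="/tmp"):
    """True if the directory is world-writable and has the sticky bit (mode 1777)."""
    mode = os.stat(path).st_mode
    world_writable = bool(mode & stat.S_IWOTH)
    sticky = bool(mode & stat.S_ISVTX)
    return world_writable and sticky

if not tmp_is_usable():
    # Typical fix, as root: chmod 1777 /tmp
    print("/tmp looks misconfigured; try: chmod 1777 /tmp")
```

If the permissions check out, a restrictive mount option on `/tmp` (such as `noexec`) is another common culprit worth ruling out.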
  46. Enabling screenshot verification returns a black screen no matter what RTO and screenshot settings I use. NAKIVO considers the job successful and discards the VM before the login screen appears. Support has not been able to help so far. Is anyone else experiencing this issue? Any top tips?
    1 point
  47. Seeing identical issues; I tried a new 10.3 OVA and it has problems with generic transporter errors that are useless for troubleshooting.
    1 point