Leezy Posted May 5, 2020
Hi Team, I am excited about the AWS S3 storage integration coming in v9.4. However, I was wondering whether there is any way we could use an S3-compatible service instead of AWS. I would like to test it out with S3-compatible storage such as MinIO or Wasabi, if there are options for it. I am also interested in exploring whether there is any way to change the AWS S3 configuration on NBR to use other S3 storage. Thanks.
Regards, Leezy
Official Moderator Posted May 6, 2020
Currently NAKIVO supports AWS S3 only, so this is impossible as of now. Support for S3-compatible storage such as Wasabi is planned for future releases; however, there is no ETA for it yet.
Leezy Posted May 6, 2020 (Author)
I would like to seek your advice on the following: is there a reason why the S3 storage option only allows incremental backups with periodic fulls, but not forever-incremental? Forcing periodic full backups removes the deduplication capability.
Official Moderator Posted May 6, 2020
This is because a forever-incremental repository is spread across 4,096 files, and AWS S3 either doesn't allow writing data to all of these files simultaneously or works extremely slowly with this repository type.
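A back-of-the-envelope illustration of why touching thousands of S3 objects is expensive (the 50 ms round-trip latency below is an assumption for illustration, not a NAKIVO measurement):

```python
# Hypothetical illustration (not NAKIVO internals): S3 has no partial-object
# writes, so updating data inside a repository file means a whole-object
# request, and each request pays per-call latency.

FILES = 4096               # files a forever-incremental repository is spread across
REQUEST_LATENCY_S = 0.05   # assumed ~50 ms per S3 request round-trip

# Touching every repository file just once costs at least:
overhead_s = FILES * REQUEST_LATENCY_S
print(f"Minimum request overhead: {overhead_s:.0f} s (~{overhead_s / 60:.1f} min)")
```

That overhead applies per pass over the repository, before any actual data transfer, which is one plausible reason the forever-incremental layout is a poor fit for S3.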
Leezy Posted May 6, 2020 (Author)
I think I have just managed to mount Tardigrade decentralized storage on the NBR VA with s3fs-fuse, and backup to the cloud seems to work too. I got a whopping 256 TB of storage space. This is S3-compatible storage hosted on the Tardigrade network.
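For anyone curious, mounting an S3-compatible bucket with s3fs-fuse generally looks like the sketch below. The bucket name, credentials, and gateway endpoint are placeholders, not the actual Tardigrade values used above:

```shell
# Hypothetical sketch: mount an S3-compatible bucket on the NBR VA with s3fs-fuse.
# Substitute your own access key, secret key, bucket, and gateway URL.
echo "ACCESS_KEY:SECRET_KEY" > "$HOME/.passwd-s3fs"
chmod 600 "$HOME/.passwd-s3fs"

sudo mkdir -p /mnt/s3repo
# -o url points at the S3-compatible gateway instead of AWS;
# path-style requests are often required by non-AWS endpoints.
sudo s3fs my-backup-bucket /mnt/s3repo \
    -o passwd_file="$HOME/.passwd-s3fs" \
    -o url=https://gateway.example.com \
    -o use_path_request_style
```

The mounted folder can then be added to NBR as an ordinary local repository path, though (as noted later in this thread) FUSE performance may be a limiting factor.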
Leezy Posted May 6, 2020 (Author)
I would like to know: if we configure AWS S3 as the target repository, does the transporter store a local copy before uploading to AWS during a backup or backup copy job, or does it upload directly to S3? Is there a local cache copy during the S3 transmission?
Official Moderator Posted May 7, 2020
No, the transporter uploads the data directly to AWS S3. The product doesn't use a cache.
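A simplified sketch of what cache-less, direct-to-S3 uploading can look like (this is an illustration, not NAKIVO's actual code; `send_part` is a stand-in for an S3 UploadPart-style call):

```python
# Stream a backup source to object storage in fixed-size chunks.
# Only one chunk is held in memory at a time; nothing is written to disk.
import io

CHUNK_SIZE = 4 * 1024 * 1024  # e.g. 4 MiB parts, as in S3 multipart uploads

def stream_upload(source, send_part):
    """Read `source` in chunks and hand each chunk to `send_part`.
    Returns the total number of bytes streamed."""
    total = 0
    while True:
        chunk = source.read(CHUNK_SIZE)
        if not chunk:
            break
        send_part(chunk)   # in real code: an S3 multipart-upload request
        total += len(chunk)
    return total

# Demo with an in-memory source and a stub sender:
parts = []
sent = stream_upload(io.BytesIO(b"x" * (9 * 1024 * 1024)), parts.append)
```

Streaming this way is why no local cache copy is needed: the transporter never has to materialize the whole backup before the upload starts.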
kdc415 Posted September 3, 2020
On 5/6/2020 at 1:36 PM, Leezy said: "I think I have just managed to mount Tardigrade decentralized storage on the NBR VA with s3fs-fuse, and backup to the cloud seems to work too. I got a whopping 256 TB of storage space. This is S3-compatible storage hosted on the Tardigrade network."
How did you do this? What was your throughput?
Official Moderator Posted September 9, 2020
Hello! It looks like the FUSE driver was used to mount the bucket locally. In our experience, this runs at poor speed because the driver works in user space.