Replies: 3 comments 4 replies
borg2 currently stores each chunk separately. That's very simple to deal with (especially for "compacting"), but it also does a lot of API calls and can be slow. Implementing packs is one of the bigger changes still needing to be done before the borg2 release. There is no concurrency. Packs need to be done first, and then we'll see whether we need concurrency.
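As a rough sketch of the difference described above (hypothetical names, not borg's actual storage code): storing each chunk as its own object costs one API call per chunk, while a pack concatenates many chunks into a single object and keeps an offset index, so one upload covers them all.

```python
# Sketch: per-chunk storage vs. packed storage (hypothetical, not borg's code).
# With a dict standing in for a remote object store, every assignment
# represents one remote API request.

def store_chunks_individually(store, chunks):
    """One API call per chunk -> slow on high-latency remotes."""
    for chunk_id, data in chunks:
        store[chunk_id] = data  # each assignment = one remote request

def store_chunks_packed(store, chunks, pack_id):
    """One API call for the whole pack, plus an in-pack offset index."""
    index, blob, offset = {}, bytearray(), 0
    for chunk_id, data in chunks:
        index[chunk_id] = (offset, len(data))  # where the chunk lives in the pack
        blob.extend(data)
        offset += len(data)
    store[pack_id] = bytes(blob)  # single remote request
    return index

chunks = [(f"chunk{i}", bytes([i]) * 100) for i in range(100)]
store = {}
index = store_chunks_packed(store, chunks, "pack0")
print(len(store))       # 1 object instead of 100
print(index["chunk7"])  # (700, 100)
```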
[I didn't see TW's response above before writing this.] I've been running a backup of a ~8 GB directory structure for a while, and after ~200 minutes borg had managed to push around 5 GB to Backblaze. This is over a residential fiber connection (400-500 MB/s), and Backblaze is ~18 ms away. I need to back up several hundred GB, so this isn't going to work. I assume there just isn't enough (or any) concurrency. [Please take this as beta feedback; I know v2 isn't ready for prime time yet, but I'm curious what the plans are to deal with the tiny-objects / lack-of-concurrency problem.]
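A back-of-envelope check of the numbers above (5 GB in ~200 minutes, and extrapolating to a hypothetical 300 GB backup) shows why this rate is a problem:

```python
# Back-of-envelope throughput from the figures reported above.
gb_pushed = 5
minutes = 200
mbytes_per_s = gb_pushed * 1024 / (minutes * 60)
print(f"{mbytes_per_s:.2f} MB/s")  # ~0.43 MB/s

# Extrapolated to a hypothetical 300 GB backup at the same rate:
hours_for_300gb = 300 / gb_pushed * minutes / 60
print(f"{hours_for_300gb:.0f} h")  # 200 h, i.e. over 8 days
```

That is well under 1% of the stated link capacity, which is consistent with per-object latency, not bandwidth, being the bottleneck.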
Just as points of comparison:
Has anyone done performance measurements or made observations when using borg 2.0.0b20 with an S3 backend? (Not necessarily AWS.)
I'm starting to try Backblaze, and one issue I've hit is that it has a quota for "class B" operations, which include HeadObject (see the docs: https://www.backblaze.com/cloud-storage/transaction-pricing). Apparently, borg performs a lot of HeadObject operations while performing a backup. Are the HeadObject calls expected?
In terms of performance, I'm also wondering how much concurrency there is around S3 operations...
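Since the concurrency question keeps coming up: here is a stdlib-only sketch (hypothetical `put_object`, with a sleep simulating one round trip) of why parallel requests matter on a ~18 ms link, as in the Backblaze numbers above. With N objects and W workers, wall time drops from roughly N x RTT to N x RTT / W.

```python
# Sketch: why request concurrency matters on a high-latency object store.
# `put_object` is hypothetical; the sleep simulates one HTTP round trip.
import time
from concurrent.futures import ThreadPoolExecutor

RTT = 0.018  # ~18 ms, as reported for Backblaze above

def put_object(key):
    time.sleep(RTT)  # stand-in for one request/response round trip
    return key

keys = [f"chunk{i}" for i in range(100)]

start = time.monotonic()
for k in keys:
    put_object(k)                      # serial: ~100 x 18 ms
serial = time.monotonic() - start

start = time.monotonic()
with ThreadPoolExecutor(max_workers=20) as pool:
    list(pool.map(put_object, keys))   # concurrent: ~5 x 18 ms
concurrent = time.monotonic() - start

print(f"serial {serial:.2f}s, concurrent {concurrent:.2f}s")
```

The same reasoning applies whether the per-request cost is a PutObject upload or a HeadObject metadata check.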