We have more than 10k tables, which we know is an abnormal design. However, it has worked for our use case, and backups worked fine up until clickhouse-backup v2.6.42. We run 2 nodes with all tables replicated, treating one node (with more CPU/memory) as a read/write node and the other purely as a reader. We perform alternating backups on both nodes, and starting with v2.6.42 the backup on the reader node fails 100% of the time due to running out of memory. We are wondering if this could be related to this change: #1194
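For context, the alternating schedule on each node boils down to something like the sketch below. The node label and backup-name format are illustrative assumptions, not our exact script; only the `clickhouse-backup create` subcommand is the stock CLI.

```python
import datetime
import subprocess

# Hypothetical label for the node running this script; the other replica
# uses "writer" and a cron slot on alternate days (assumption for illustration).
NODE_NAME = "reader"

def run_backup() -> None:
    # Name format is an assumption; `clickhouse-backup create` is the
    # standard subcommand that fails with OOM on the reader since v2.6.42.
    backup_name = f"{NODE_NAME}-{datetime.date.today().isoformat()}"
    subprocess.run(
        ["clickhouse-backup", "create", backup_name],
        check=True,  # surfaces the non-zero exit when the process is killed
    )

if __name__ == "__main__":
    run_backup()
```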