On Tue, 3 Feb 2026 at 13:10, Simon Avery via discuss <[email protected]> wrote:
> I am trying to draw a concept of a MariaDB Backup Strategy.
> I am aware of full Backups (via Restic as outlined here:
> https://archive.fosdem.org/2022/schedule/event/mariadb_backup_restic/attachments/slides/5135/export/events/attachments/mariadb_backup_restic/slides/5135/mariabackup_restic.pdf)

If your database is beyond a certain size, in my experience the overheads
and inefficiencies add up to the point where the only workable solution is
snapshot-based backups.

- mysqldump: backup/restore times make it unusable in any reasonable time
  once your data gets beyond maybe 100GB, even on a top-of-the-line
  current-generation bare metal server optimised for single-thread
  performance.
- mydumper: a massively parallel mysqldump; workable if disk I/O isn't the
  bottleneck (~2-3x the writes relative to the size of the data).
- mariabackup: a good option, but only usable up to the point where you can
  complete the backup before the redo log rolls over.
- LVM snapshot-based backups: the optimal option, provided you have enough
  snapshot space reserved to complete the backup.
- ZFS snapshot-based backups: none of the limitations of any of the above -
  but you have to run on ZFS. As a bonus, with a ZFS backup target you get
  hyper-efficient incrementals as well. For ZFS servers with object storage
  targets you don't get hyper-efficient incrementals, but it is usually
  reasonably workable, as long as you have enough disk I/O and network
  bandwidth. We use this tool that we developed in-house, as a counterpart
  to sanoid: https://github.com/shatteredsilicon/backoid

> I am searching through the net and could not find a good solution for
> Binlogs. Postgres f.e. has WAL Archiving to a remote Storage or onto a
> Master. How can I do something like this with MariaDB (without MaxScale
> Binlog Router).

You can set log_slave_updates on the slave, which will make the slave
generate binlogs of its own. The file/offset coordinates won't match the
master, but that isn't a problem if you use GTID.

> Can in a Master-Slave Replica the Binlogs be synced or are they only
> synced and deleted after replication? Is there a remote solution (S3?)

If this is your key requirement, you can rig up something like lsyncd on
your binlog directory and make it trigger a custom script that copies each
closed-after-write file to a target of your choosing, e.g. using rclone.
You can tune the shipping frequency by adjusting max_binlog_size. lsyncd
triggers on close-after-write, so it should fire as soon as a binlog is
finalised. This should work provided you don't have an unrealistically
tight total-loss disaster RPO (for anything short of total loss, you have
the binlogs on the slave, which you can promote and then re-seed a new
slave from).

> I suspect there has to be some solutions deployed in companies,
> universities ...). The max. loss is usually Last commit (Full backup +
> Binlog).

I seem to recall mariabackup has an incremental binlog updates option. In
the end it comes down to the tradeoff you are willing to accept between a
"standard" solution and an optimal one, and the amount of work you are
willing to put in to get to the optimal solution. :-)

Unfortunately, the "standard" ways are skewed toward easy rather than
optimal - and that means they fall apart when your data is large or
resources are limited.
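To make a couple of the points above more concrete: the ZFS snapshot +
incremental send cycle I mean looks roughly like this (pool, dataset and
host names are just examples, and this assumes the whole datadir lives on
one dataset; a snapshot of a running InnoDB instance is crash-consistent,
so a restore behaves like InnoDB recovery after a power loss):

    # take a point-in-time snapshot of the dataset holding the datadir
    zfs snapshot tank/mysql@2026-02-03

    # initial full send to the backup box
    zfs send tank/mysql@2026-02-03 | ssh backup-host zfs receive backup/mysql

    # subsequent runs only ship the blocks changed since the last snapshot
    zfs snapshot tank/mysql@2026-02-04
    zfs send -i tank/mysql@2026-02-03 tank/mysql@2026-02-04 | \
        ssh backup-host zfs receive -F backup/mysql

sanoid/syncoid automate this loop for ZFS-to-ZFS targets; backoid (above)
is what we use when the target is object storage instead.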
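On the slave side, log_slave_updates is just a couple of my.cnf settings
(the values here are only examples), plus pointing replication at GTID with
CHANGE MASTER TO master_use_gtid=slave_pos:

    [mysqld]
    server_id         = 2
    log_bin           = /var/lib/mysql/binlog/mysql-bin
    log_slave_updates = ON
    # smaller binlogs rotate more often, i.e. more frequent shipping
    max_binlog_size   = 100M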
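And to illustrate the shipping side, here is a rough sketch of the same
close-after-write idea using inotifywait in place of lsyncd (the directory,
binlog naming and the rclone remote are made-up examples; treat it as a
starting point, not a finished tool):

    #!/bin/bash
    # watch the binlog directory and copy each binlog to remote storage
    # as soon as the server closes it (i.e. on rotation)
    BINLOG_DIR=/var/lib/mysql/binlog

    inotifywait -m -e close_write --format '%f' "$BINLOG_DIR" |
    while read -r f; do
        case "$f" in
            mysql-bin.[0-9]*)
                rclone copy "$BINLOG_DIR/$f" s3remote:binlog-archive/ ;;
        esac
    done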
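For completeness, since mariabackup came up: its regular incremental mode
(page-level deltas against a previous backup, not binlog-based) looks
roughly like this, with paths and credentials as placeholders:

    # full base backup
    mariabackup --backup --user=backup --password=... \
        --target-dir=/backup/full

    # incremental on top of the base
    mariabackup --backup --user=backup --password=... \
        --target-dir=/backup/inc1 --incremental-basedir=/backup/full

    # at restore time, prepare the base and apply the incremental to it
    mariabackup --prepare --target-dir=/backup/full
    mariabackup --prepare --target-dir=/backup/full --incremental-dir=/backup/inc1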
--
Gordan Bobic
Database Specialist, Shattered Silicon Ltd.
https://shatteredsilicon.net

Follow us:
LinkedIn: https://www.linkedin.com/company/shatteredsilicon
X: https://x.com/ssiliconbg

_______________________________________________
discuss mailing list -- [email protected]
To unsubscribe send an email to [email protected]
