quoting Alan Williamson <[EMAIL PROTECTED]> ..
>> This recipe is intended to minimize the impact on ongoing database
>> operations by inhibiting writes only during a relatively speedy
>> operation (creating a snapshot). The long dump operation can ...
>
> This seems to be a rather long winded way of doing this. Why not
> replicate the database and th ...
I believe a common backup strategy (works for myisam) is the following:

1. flush tables with read lock
2. lvcreate -s (snapshot)
3. unlock tables
4. mount the snapshot
5. load a second database server daemon accessing the db within the
snapshot (with a suitable alternate my.cnf file)
6. perform mysqldump operation on the snapshot-db
7. cleanup (unload second db server, unmount and delete snapshot)

So what monsters lurk within this backup strategy?

..jim

+ FC4 (linux 2.6.11 (or .12, maybe))
kludges: used debug build (--with-debug=full) variation of the rpm
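The recipe above can be sketched as a shell script. Everything system-specific here is an assumption, not taken from the thread: the LVM volume /dev/vg0/mysql, the mount point, the second server's config file /etc/my-snap.cnf (own port, socket, and datadir), and the socket path. By default the script only records and prints what it would run; set DRY_RUN=0 to actually execute.

```shell
#!/bin/sh
# Sketch of the snapshot-backup recipe above. Volume, mount point, socket,
# and config paths are assumptions; adjust before use.
set -e

SNAP=mysql_snap                 # assumed snapshot name
VOL=/dev/vg0/mysql              # assumed LV holding the MySQL datadir
MNT=/mnt/mysql_snap
SOCK=/tmp/mysql-snap.sock       # socket of the second (snapshot) server

# Record each command; execute only when DRY_RUN=0.
CMDS=""
run() {
    CMDS="$CMDS$*
"
    if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; else echo "would run: $*"; fi
}

# Steps 1-3: the read lock must stay held while the snapshot is created,
# so lvcreate runs from inside the same mysql client session ("system").
run mysql -e "FLUSH TABLES WITH READ LOCK;
system lvcreate -s -L 1G -n $SNAP $VOL
UNLOCK TABLES;"

run mount "/dev/vg0/$SNAP" "$MNT"                         # step 4
run mysqld_safe --defaults-file=/etc/my-snap.cnf          # step 5 (background it in real use)
run mysqldump --socket="$SOCK" --all-databases -r /backup/full.sql  # step 6
run mysqladmin --socket="$SOCK" shutdown                  # step 7: cleanup
run umount "$MNT"
run lvremove -f "/dev/vg0/$SNAP"
```

The dry-run wrapper is only there so the sequence can be read and checked safely; the real script would also wait for the snapshot server's socket to appear before step 6.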
Scott Purcell wrote:
Hello,

After many months of preparation, I am finally going to go live with a project
I have created. It is your basic e-commerce site, where I need to make sure I
have a current backup, specifically on the orders placed, etc.

I am going to run the mysql server on a PC possibly running XP. (Sma ...
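For a small site like the one described, the usual starting point is a routine dump with a date-stamped filename. A minimal sketch, where the database name "shop", the user, and the backup directory are assumptions (on XP the same mysqldump command can go into a Scheduled Task):

```shell
#!/bin/sh
# Dated-dump sketch; "shop", the backup user, and /backup are assumptions.
STAMP=$(date +%Y%m%d-%H%M)
OUT="/backup/shop-$STAMP.sql"

# --single-transaction gives a consistent dump of InnoDB tables (the
# orders table should be InnoDB); for MyISAM tables use --lock-tables.
CMD="mysqldump --user=backup --single-transaction shop"

echo "$CMD > $OUT"   # shown as a dry run; drop the echo to execute
```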
On Tue, May 04, 2004 at 02:44:26PM -0700, Ron Gilbert wrote:
>
> I am wondering what the best backup strategy is for my database.
>
> The database is used to store a very large number of binary files,
> ranging from a few KB to 20 MB. The database stores thousands of these ...
> You may wish to also look into replication, which is a cinch to set up
> with MySQL.

Unfortunately, replication does not handle point-in-time recovery. This
is usually needed when someone accidentally drops a table or deletes
too many rows from the database. Under replica ...
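Point-in-time recovery instead relies on the binary log (the server must run with --log-bin) plus a full dump to roll forward from. A hedged sketch of the two recovery steps; the dump file, binlog name, and the cut-off timestamp just before the accidental DROP are all assumptions:

```shell
#!/bin/sh
# Point-in-time recovery sketch. The dump file, binlog number, and the
# timestamp just before the accident are assumptions.
RESTORE="mysql < /backup/full.sql"
REPLAY="mysqlbinlog --stop-datetime='2005-08-01 14:00:00' /var/log/mysql/mysql-bin.000042 | mysql"

# Shown as a dry run: step 1 restores the last full dump, step 2 replays
# binlog events up to (but not including) the accidental statement.
echo "$RESTORE"
echo "$REPLAY"
```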
On Tue, 04 May 2004 14:44:26 -0700
Ron Gilbert <[EMAIL PROTECTED]> wrote:
> Is there a better way to be doing this given the huge amount of
> binary data I have?
You may wish to also look into replication, which is a cinch to set up with MySQL.
Josh
--
MySQL General Mailing List
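Setting up that replication is indeed mostly configuration: the master needs log-bin and a unique server-id in my.cnf plus a replication user, and the slave is then pointed at the master. A hedged sketch; the host, credentials, and binlog coordinates (taken from SHOW MASTER STATUS on the master) are assumptions:

```shell
#!/bin/sh
# Replication bootstrap sketch; host, user, password, and the binlog
# coordinates (from SHOW MASTER STATUS) are assumptions.
SQL="CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;"

# Dry run: in real use, feed $SQL to the slave's mysql client.
echo "mysql -e \"$SQL\""
```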
I am wondering what the best backup strategy is for my database.

The database is used to store a very large number of binary files,
ranging from a few KB to 20 MB. The database stores thousands of these
files. I cannot put this data on the file server; it needs to be in
the database ...
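With thousands of blobs up to 20 MB, one monolithic dump gets unwieldy; dumping each table to its own file keeps both the dump and any restore granular. A minimal sketch, where the database name "media" and its table list are assumptions:

```shell
#!/bin/sh
# Per-table dump sketch; the database and table names are assumptions.
DB=media
TABLES="images documents"

CMDS=""
for t in $TABLES; do
    # One file per table, so a single table can be restored on its own.
    CMDS="$CMDS mysqldump $DB $t > /backup/$DB-$t.sql;"
done

echo "$CMDS"   # dry run; in real use execute each command instead
```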