Ronan McGlue writes:
> Hi Olivier,
>
> On 28/11/2018 8:00 pm, Olivier wrote:
>> Hello,
>>
>> Is there a way that gives an estimate of the size of a mysqldump such a
>> way that it would always be larger than the real size?
>>
>> So far, I have fou
On 28.11.18 at 10:00, Olivier wrote:
> Is there a way that gives an estimate of the size of a mysqldump such a
> way that it would always be larger than the real size?
keep in mind that a dump contains tons of SQL statements that do not exist
that way in the raw data
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
Hi Olivier,
On 28/11/2018 8:00 pm, Olivier wrote:
Hello,
Is there a way that gives an estimate of the size of a mysqldump such a
way that it would always be larger than the real size?
So far, I have found:
mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
information_schema.tables WHERE table_s
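The query in the thread is truncated here, but the idea it sketches can be turned into a rough upper-bound estimate. A hedged sketch, assuming you pad the `SUM(data_length)` total by a safety factor (the factor of 2 and the idea of filtering system schemas in the full WHERE clause are assumptions, not guarantees; a dump adds INSERT syntax, quoting and escaping on top of the raw bytes, but "always larger" cannot be strictly promised):

```shell
# Stand-in value for what the information_schema query would return;
# in practice you would capture it via mysql -s -e "SELECT SUM(data_length) ...".
data_bytes=1073741824          # example: 1 GiB of raw data_length
# The multiplier is an assumption to cover per-row SQL statement overhead.
estimate=$((data_bytes * 2))
echo "budget at least $estimate bytes for the dump file"
```

In practice, checking the estimate against a few real dumps of your own data is the only way to calibrate the factor.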
will also use mydumper instead of mysqldump due to its
compression and encryption features. Mysqldump stops being useful on full|large
datasets due to its single-threadedness.
On Tue, Oct 7, 2014 at 8:35 AM, yoku ts. yoku0...@gmail.com wrote:
Maybe no, as you knew.
It means that after lock
Hello,
If you use any *NOT InnoDB* storage engine, you're right.
mysqldump with --single-transaction doesn't give any consistency, as you say.
If you use InnoDB for all databases and tables, your dumping process is
protected by the transaction isolation level REPEATABLE-READ.
http://dev.mysql.com/doc/refman/5.6/en
Maybe no, as you knew.
It means that after the lock is released, the dump is made while read and
write activity is going on. This dump, then, would be inconsistent.
Not only the binary logs: each table in your dump is based on the time when
mysqldump began to dump *each* table.
It means, for example
Hello Geetanjali,
On 9/23/2014 7:14 AM, geetanjali mehra wrote:
Can anybody please explain the internals at work when we use mysqldump
as follows:
*mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql*
MySQL manual says:
This backup operation acquires a global read lock on all tables at the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK
http://dev.mysql.com/doc/refman/5.6/en
2014/04/07 08:02 -0800, Tim Johnson
2) mysqldump forces all database names to lower case in the CREATE
DATABASE statement. I know, one shouldn't use upper case in
database names, but :) tell that to my clients.
Why not? That is not mentioned in the section devoted to mapping such names
* Tim Johnson t...@akwebsoft.com [140404 17:46]:
Currently I'm running mysql on a Mac OSX partition.
I have installed an ubuntu dual-booted partition and put mysql on
it. I have already set up a mysql user on the ubuntu OS.
In the past I have used mysqldump with just the --all-databases
option to transfer data across different linux partitions.
I'm wondering if I should explicitly exclude some of the tables from
the mysql database. If so, which? Perhaps mysql.user?
Thoughts? Opinions?
thanks
I should add the following:
1
--databases, methinks.
- Original Message -
From: Daevid Vincent dae...@daevid.com
To: mysql@lists.mysql.com
Sent: Thursday, 21 November, 2013 10:44:39 PM
Subject: How do I mysqldump different database tables to the same .sql file?
I'm working on some code where I am trying
#!/bin/sh
echo "USE \`database1\`;" > outfile.sql
mysqldump (firstsetofoptions) >> outfile.sql
echo "USE \`database2\`;" >> outfile.sql
mysqldump (secondsetofoptions) >> outfile.sql
On Thu, Nov 21, 2013 at 4:44 PM, Daevid Vincent dae...@daevid.com wrote:
I'm working
do I mysqldump different database tables to the same .sql
file?
There is a good reason that the USE database is not output in those
dumps..
it would make the tool very difficult to use for moving data around.
If I might suggest, a simple workaround is to create a shell script along
`COLUMN_NAME` = 'customer_id' ORDER BY `TABLE_SCHEMA`, `TABLE_NAME`
Then I crafted this, but it pukes on the db name portion. :-(
mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
--complete-insert --insert-ignore --create-options --quick --force
--set-charset --disable-keys --quote
Hat Enterprise Linux Server release 6.3 (Santiago)
I have a backup script which at some point calls:
mysqldump --default-character-set=utf8 --routines --no-data
--no-create-info --skip-triggers -S /mysql/database.sock -u backup
-pxxx database
and I have error:
mysqldump: Got error: 1045: Access
Do not try to dump or reload information_schema. It is derived meta
information, not real tables.
-Original Message-
From: Rafał Radecki [mailto:radecki.ra...@gmail.com]
Sent: Monday, February 04, 2013 12:17 AM
To: mysql@lists.mysql.com
Subject: Mysqldump routines dump, problem
On Tue, 20 Nov 2012, Ricardo Barbosa wrote:
Hi all.
I'm trying to do a recovery of a table for a client, with the following message
root@falcon:~# mysqldump -u root -pXXX database
-- MySQL dump 10.13 Distrib 5.1.30, for pc-linux-gnu (i686)
--
-- Host: localhost Database: database
an algorithm that analyses
some patterns in the dump file to recognize that it is correct,
starting maybe from one that is known to work as a 'valid' sample.
Cheers
Claudio
2012/11/7 Gary listgj-my...@yahoo.co.uk
Can anyone suggest how I could verify that the files created by
mysqldump are okay? They are being created for backup purposes, and
the last thing I want to do is find out that the backups themselves are
in some way corrupt
2012/11/7 Ananda Kumar anan...@gmail.com
you can use checksum to make sure there is no corruption in the file
That would work for the file integrity itself, not for the data integrity
_in_ the file.
As Claudio suggested, probably going thru the whole recovery process from
time to time is
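The checksum idea above can be sketched in a couple of lines. This catches file-level corruption only, not logical problems inside the dump (the tiny `dump.sql` here is a stand-in for a real backup file):

```shell
# Create a stand-in dump file, record its checksum at backup time,
# then verify the checksum before trusting or restoring the file.
printf 'INSERT INTO t VALUES (1);\n' > dump.sql
sha256sum dump.sql > dump.sql.sha256
sha256sum -c dump.sql.sha256   # reports whether the file still matches
```

A periodic test restore, as described in the thread, remains the only real proof the backup is usable.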
In the past when I used mysqldump, I used a slave database for backups and
periodically testing restores.
My process for testing:
- Stop the slave process (so the db doesn't get updated).
- Run the backup.
- Create restore_test database.
- Restore the backup to the restore_test database.
- Use
, 2012 7:09 AM
To: 'Gary'; mysql@lists.mysql.com
Subject: RE: How to verify mysqldump files
Hello everybody.
I'm trying to create a backup of mysql database:
mysqldump --all-databases --routines --master-data=2 all_databases_`date
+'%y%m%d-%H%M'`.sql
It looks like backup has been created but I've got this Warning:
Warning: mysqldump: ignoring option '--databases' due to invalid value
mysqldump --databases test --tables ananda > test.dmp
mysql> show create table ananda\G
*** 1. row ***
Table: ananda
Create Table: CREATE TABLE `ananda` (
`id` int(11) DEFAULT NULL,
`name` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT
My backups from a mysqldump process are useless, because the dump files are not
escaping single quotes in the data in the fields.
So, O'Brien kills it - instead of spitting out
'O\'Brien'
it spits out
'O'Brien'
I don't see anywhere in the documentation about mysqldump where you can tweak
I have mysql 5.5.
I am able to use mysqldump to export data with quotes and the dump had
escape character as seen below
LOCK TABLES `ananda` WRITE;
/*!4 ALTER TABLE `ananda` DISABLE KEYS */;
INSERT INTO `ananda` VALUES
(1,'ananda'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi
Are you using an abnormal CHARACTER SET or COLLATION?
SHOW CREATE TABLE
Show us the args to mysqldump.
-Original Message-
From: James W. McNeely [mailto:j...@newcenturydata.com]
Sent: Friday, June 15, 2012 10:19 AM
To: mysql@lists.mysql.com
Subject: mysqldump not escaping single
Is it safe to kill a mysqldump while it's in process?
I mean, aside from losing the dumped file, would it affect the running DB
being dumped?
Yes, killing a mysqldump is perfectly safe. Caveat being that the dump file
produced may be pretty useless.
Singer
On Thu, May 31, 2012 at 7:41 AM, Roland Roland r_o_l_a_...@hotmail.comwrote:
Is it safe to kill a mysqldump while it's in process?
I mean, aside from losing the dumped file, would
Today I needed to split a mysqldump -A into its several databases.
I didn't have access to the original source, so I only had the text file to
work with.
It was a webhosting server dump, so there were a LOT of databases...
I split the file with this little script I made:
file=myqdl dump file
nextTable
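The script in this snippet is truncated in the archive. A hedged, self-contained alternative uses GNU csplit, splitting at the "-- Current Database:" header that mysqldump conventionally writes before each database section (verify the marker against your own dump first; the two-database file below is a stand-in):

```shell
# Build a tiny stand-in for an --all-databases dump.
cat > all.sql <<'EOF'
-- Current Database: `db1`
CREATE DATABASE `db1`;
-- Current Database: `db2`
CREATE DATABASE `db2`;
EOF
# Split into db_NN files, one per "-- Current Database:" marker.
# -z drops the empty leading piece; '{*}' repeats the pattern (GNU csplit).
csplit -z -f db_ all.sql '/^-- Current Database:/' '{*}'
ls db_*
```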
; 2012/01/03 11:52 -0500, Govinda
...which strikes me as odd (again, showing how new I am to driving from the
CL), because I do NOT see any entry like this:
/usr/local/mysql/bin/mysqldump
Is mysql a symbolic link?
..which I just (earlier this morning) changed to this:
export
PATH=/usr/local
..which I just (earlier this morning) changed to this:
export
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/usr/local/mysql/bin/mysqldump:$PATH
You are missing a point: the proper thing for PATH is a directory (or
effective directory), not a runfile inside the directory
as well that I let it beat into my head how the
CL is actually working (working out the full paths)
You should fix the $PATH, as you'll need it for utilities (such as mysqldump)
and such.
well, yes, it will be nice to know how to manipulate the $PATH ... and meanwhile
using full paths when
(working out the full paths)
You should fix the $PATH, as you'll need it for utilities (such as mysqldump)
and such.
You need to edit your shell startup file. For bash, it's .bash_profile in
your home directory. Other shells will have their own startup script. My
.bash_profile includes:
export
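The export line above is cut off in the archive, but a hedged example of the kind of line a .bash_profile might contain is below. The MySQL install path is an assumption; point it at wherever your mysql/mysqldump binaries actually live:

```shell
# Prepend the MySQL bin directory (assumed location) so that plain
# `mysqldump` resolves without typing the full path every time.
export PATH="/usr/local/mysql/bin:$PATH"
echo "PATH now starts with: ${PATH%%:*}"
```

After re-sourcing the profile, `which mysqldump` should report the binary.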
So then I try (in Mac OS X Terminal, while logged in as me (not root)):
mysqldump -uroot -p myDBname myTableName > ~/myTestDumpedTable.sql
...and again it produces:
sh: mysqldump: command not found..
that is because Mac OSX is missing a package manager, and so you need
a little knowledge
On 31.12.2011 23:53, Jan Steinman wrote:
And for the record, there are at least two excellent package managers
available for Mac OS, and either MacPorts or Fink
if you call this package management from the view of an operating
system you have never seen a real one - these are ADDITIONAL
[snip]
that is because Mac OSX is missing a package manager, and so you need
a little knowledge about your OS to fix the PATH, or you have to use
fully-qualified calls, or configure/install your software to locations
which are already in the path
which mysqldump as a normal user will tell you
On 29.12.2011 19:21, Govinda wrote:
Just a side note, that:
Govind% which mysqldump
mysqldump: Command not found.
Govind% which /usr/local/mysql/bin/mysqldump
/usr/local/mysql/bin/mysqldump
kind of defeats the purpose of having to know the path in advance in order
to use
. First step for me is just
to dump the tables, one at a time.
I successfully login to my local MySQL like so:
Govind% /usr/local/mysql/bin/mysql -uroot
but while in this dir (and NOT logged into MySQL):
/usr/local/mysql/bin
...when I try this:
mysqldump -uroot -p myDBname myTableName
I would suggest trying:
mysqldump -uroot -p myDBname myTableName > /tmp/myTestDumpedTable.sql
Maybe you don't have permission (or space) to write into /usr/local/mysql/bin.
That would
be an unusual place for such files.
On 12/29/11 9:15 AM, Govinda wrote:
Hi Everyone
This should be quick
On 29.12.2011 18:15, Govinda wrote:
...when I try this:
mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql
..then I keep getting this:
myTestDumpedTable.sql: Permission denied.
your unix-user has no write permissions to myTestDumpedTable.sql
this has nothing to do with
Hi Reindl,
what do you delete by
rm -f /Volumes/dune/mysql_data/bin*
and why?
Many thanks.
On 24.12.2011 23:13, Igor Shevtsov wrote:
Hi Reindl,
what do you delete by
rm -f /Volumes/dune/mysql_data/bin*
and why?
this should be /mysql_data/bin* to match the rest of the sample
why?
because this is my script to make a new backup of a mysqld master to
re-init the slave and in
On 23.12.2011 21:14, Jim McNeely wrote:
Hello all, happy holidays!
What is the best way to run a mysqldump to get the tables, the data, the
triggers, the views, the procedures, the privileges and users, everything? It
seems confusing in the online documentation, or is that just me
Hi Jim, happy holidays to you!
actually you just need to add the --routines flag:
mysqldump --all-databases --routines > fulldump.sql
with this you get all databases, including the system one with the privileges
(mysql); triggers are on by default, and you enable routines with the flag
--routines
On Fri, December 23, 2011 12:27, Reindl Harald wrote:
On 23.12.2011 21:14, Jim McNeely wrote:
Hello all, happy holidays!
What is the best way to run a mysqldump to get the tables, the data, the
triggers, the views, the procedures, the privileges and users,
everything? It seems confusing
On 23.12.2011 22:42, Wm Mussatto wrote:
so you have a REALLY consistent backup with minimal downtime you can restore
on any machine and pull dumps of whatever you really need, instead of
braindead huge dumps with long locking time while they are done, or
inconsistent state without locking
On 2011/10/21 10:26 AM, Johan De Meersman wrote:
- Original Message -
From: Alex Schaftal...@quicksoftware.co.za
Got my app reading in a dump created with extended-inserts off, and
lumping all of the insert statements together. Works like a charm
Just for laughs, would you mind
On 2011/10/20 03:43 PM, Johan De Meersman wrote:
- Original Message -
From: Alex Schaftal...@quicksoftware.co.za
I realize that, I'm just trying to stop the phone calls saying I
started a restore, and my pc just froze
I might just read all the single insert lines, and get a whole
- Original Message -
From: Alex Schaft al...@quicksoftware.co.za
Got my app reading in a dump created with extended-inserts off, and
lumping all of the insert statements together. Works like a charm
Just for laughs, would you mind posting the on-disk size of your database, and
the
Hi,
I'm monitoring a mysqldump via stdout, catching the create table
commands prior to flushing them to my own text file. Then on the restore
side, I'm trying to feed these to mysql via the c api so I can monitor
progress (no of lines in the dump file vs no of lines sent to mysql
On 2011/10/20 10:53 AM, Alex Schaft wrote:
What can I pass to mysqldump to get more sane statement lengths?
+1 for extended-inserts...
- Original Message -
From: Alex Schaft al...@quicksoftware.co.za
I'm monitoring a mysqldump via stdout, catching the create table
commands prior to flushing them to my own text file. Then on the
restore side, I'm trying to feed these to mysql via the c api so I can
monitor progress
On 2011/10/20 11:54 AM, Johan De Meersman wrote:
- Original Message -
From: Alex Schaftal...@quicksoftware.co.za
I'm monitoring a mysqldump via stdout, catching the create table
commands prior to flushing them to my own text file. Then on the
restore side, I'm trying to feed
- Original Message -
From: Alex Schaft al...@quicksoftware.co.za
I realize that, I'm just trying to stop the phone calls saying I
started a restore, and my pc just froze
I might just read all the single insert lines, and get a whole lot of
values clauses together before
I remain convinced that users simply need to learn patience, though.
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHA!!!
Good one!
Sent from my iPad
On Oct 20, 2011, at 8:44 AM, Johan De Meersman vegiv...@tuxera.be wrote:
I remain convinced that users simply need to learn patience, though.
--
MySQL General
@lists.mysql.com
Subject: Re: mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
Hello Shafi,
Adding to Prabhat's alternatives, you can use --force on the mysqldump command
to ignore the errors and continue taking the backup.
Regarding the error, we need to check whether the table is present or not,
and the engine type specifically.
Thanks
Suresh Kuna
On Sat, Sep 24, 2011 at 3:31
Folks
I have a mysql database of 200G size and the backup fails due to the foll.
Issue.
mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
Can someone assist pls.?
Best Rgs,
Shafi AHMED
mysqld to
see if it works.
- Original Message -
From: Shafi AHMED shafi.ah...@sifycorp.com
To: mysql@lists.mysql.com
Sent: Friday, 23 September, 2011 1:42:26 PM
Subject: mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK
In the last episode (Sep 23), Shafi AHMED said:
I have a mysql database of 200G size and the backup fails due to the foll.
Issue.
mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
Can someone assist pls.?
$ perror 24
OS
correct. mysqldump by default has --lock-tables enabled, which means it
tries to lock all tables to be dumped before starting the dump. And doing
LOCK TABLES t1, t2, ... for a really big number of tables will inevitably
exhaust all available file descriptors, as LOCK needs all tables to be
opened
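Errno 24 is the OS "Too many open files" error, so besides --single-transaction (for InnoDB), a common remedy is raising the server's file-descriptor limits. A hedged my.cnf fragment; the values are illustrative only and must also be permitted by the OS (`ulimit -n`):

```ini
[mysqld]
open_files_limit = 8192    # illustrative; size to your table count
table_open_cache = 4096    # illustrative; tune to your workload
```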
Adarsh,
1)
When restoring a mysqldump you have the option of which database to restore:
mysql database1 < backup.sql
2)
You might be able to use the --ignore-table command. I'm not sure if
this would work
mysqldump --all-databases -q --single-transaction
--ignore-table=databasetoignore
On 15-09-2011 10:31, Chris Tate-Davies wrote:
Adarsh,
1)
When restoring a mysqldump you have the option of which database to
restore.
mysql database1 < backup.sql
Admittedly, it's been a few years since I last used mysqldump, but I
suspect that it will contain USE commands
or you can use a for loop: have only the databases to be exported in a
variable, and use that variable in --databases to do a mysqldump of each database.
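A hedged sketch of that per-database loop. The database names are assumptions, and the commands are echoed (a dry run) rather than executed, so you can inspect them before pointing them at a real server:

```shell
# Assumed list of databases to export; replace with your own,
# or generate it from `mysql -N -e "SHOW DATABASES"`.
databases="shop blog wiki"
for db in $databases; do
  # Print the command that would be run for each database.
  echo "mysqldump -q --single-transaction --databases $db > $db.sql"
done
```

Dropping the `echo` turns the dry run into real per-database dumps.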
On Thu, Sep 15, 2011 at 6:27 PM, Carsten Pedersen cars...@bitbybit.dkwrote:
On 15-09-2011 10:31, Chris Tate-Davies wrote:
Adarsh,
1)
When restoring
Dear all,
Today I backed up all my databases (25) by using the below command:
mysqldump --all-databases -q --single-transaction | gzip >
/media/disk-1/Server11_MysqlBackup_15September2011/mysql_15sep2011backup.sql.gz
Now I have some doubts or problems that I need to handle in future :
1
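One quick sanity check on a gzipped dump like the one above: `gzip -t` verifies the compressed stream is intact, though it says nothing about the SQL inside. The tiny file below is a stand-in for a real backup:

```shell
# Build a stand-in gzipped dump, then test-decompress it without
# writing anything out; -t only checks the stream's integrity.
printf 'CREATE TABLE t (id INT);\n' | gzip > backup.sql.gz
gzip -t backup.sql.gz && echo "gzip stream OK"
```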
Dear all,
Today I got stuck on a strange problem.
Don't know why my linux rebooted automatically.
*Server Info :
*Linux Server-5 2.6.16.46-0.12-smp #1 SMP Thu May 17 14:00:09 UTC 2007
x86_64 x86_64 x86_64 GNU/Linux
Welcome to SUSE Linux Enterprise Server 10 SP1 (x86_64) - Kernel \r (\l).
Thanks Jon,
I couldn't locate my error.log.
But I found one clue:
My server reboots at:
reboot system boot 2.6.16.46-0.12-s Thu Jul 28 10:32 (00:24)
reboot system boot 2.6.16.46-0.12-s Wed Jul 27 18:47 (16:10)
reboot system boot 2.6.16.46-0.12-s Wed Jul 27
Dear all,
I am currently trying to figure out how I could ignore multiple tables in
mysql using a simple regex. For example I have multiple tables which have
the following structure: mytable1, mytable2, ..., mytable100. And I
would like these tables to be ignored when doing mysqldump
Hi Daniel,
you can use a workaround from the shell,
cd /path/to/your/database (e.g.: cd /var/lib/mysql/mydb)
ls -al mytable* | awk '{print $8}' | awk -F. '{print "--ignore-table=mydb."$1}' | xargs mysqldump -uroot -ptoor --your-flags mydb
It's not that beautiful but it should work
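A hedged alternative to reading the datadir with ls/awk: since the question gives the name pattern, the --ignore-table flags can be generated directly. "mydb" and the mytableN pattern mirror the question; the final command is echoed (dry run) rather than executed:

```shell
# Build one --ignore-table flag per table name mytable1..mytable100.
flags=""
for i in $(seq 1 100); do
  flags="$flags --ignore-table=mydb.mytable$i"
done
# Print the resulting mysqldump invocation for inspection.
echo "mysqldump -u root -p mydb$flags > mydb_partial.sql"
```

This avoids depending on filesystem access to /var/lib/mysql, which the backup user often doesn't have.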
I haven't bothered to look for the bug, but it seems to me to be quite
reasonable default behaviour to lock the whole lot when you're dumping
transactional tables - it ensures you dump all tables from the same consistent
view.
I would rather take this up with the ZRM people - it should just
On Mon, 06 Jun 2011 12:44 +0200, Johan De Meersman
vegiv...@tuxera.be wrote:
I haven't bothered to look for the bug, but it seems to me to be quite
reasonable default behaviour to lock the whole lot when you're dumping
transactional tables - it ensures you dump all tables from the same
- Original Message -
From: ag...@airpost.net
Excluding 'performance_schema' appears to eliminate the error. And it
seems it does NOT cause a reliability-of-the-backup problem.
Hah, no, backing that up is utterly pointless. Never noticed it doing that.
It's basically a virtual schema
On Mon, 06 Jun 2011 18:54 +0200, Johan De Meersman
vegiv...@tuxera.be wrote:
Excluding 'performance_schema' appears to eliminate the error. And it
seems it does NOT cause a reliability-of-the-backup problem.
Hah, no, backing that up is utterly pointless.
that's a useful/final confirmation.
logical backup
manual:backup:WARNING: The database(s) drupal6
performance_schema will be backed up in logical mode since they
contain tables that use a transactional engine.
manual:backup:INFO: Command used for logical backup is
/usr/bin/mysqldump --opt --extended-insert --create-options
--default-character-set=utf8 --routines --host=localhost
--port=3306 --socket=/var
hi,
On Sun, 05 Jun 2011 22:24 +0200, Reindl Harald
h.rei...@thelounge.net wrote:
have you checked your permissions table, if all privileges are active for root
i've got,
mysql show grants for 'root'@'localhost';
nobody is interested in the grant statements
maybe use phpmyadmin for a clearer display
mysql select * from mysql.user where user='root' limit 1;
fwiw, others are seeing this. e.g., in addition to the two bugs i'd
already referenced,
http://www.directadmin.com/forum/showthread.php?p=202053
and one
http://qa.lampcms.com/q122897/Can-t-backup-mysql-table-with-mysqldump-SELECT-LOCK-TABL-command
claims a solution
Add --skip-add-locks to your mysqldump command
which, having added as i mentioned above, to the [mysqldump] section of
/etc/my.cnf, does
On Sun, 05 Jun 2011 23:30 +0200, Reindl Harald
h.rei...@thelounge.net wrote:
BTW
WHY is everybody answering to the list AND the author of the last post?
this results in getting every message twice :-(
Reply - sends to ONLY the From == h.rei...@thelounge.net
Reply to all sends to BOTH the From ==
unfortunately, i have no idea what that means.
something's apparently broken with mysqldump -- enough so that lots of
people are seeing and reporting this same error after the 5.1 - 5.5
upgrade.
why would setting up a replication slave be necessary or a good solution
to the problem?
because
i still have no idea why this is necessary.
there seems to be a bug, problem, misconfiguration, etc.
wouldn't it make some sense to try to FIX it, rather than setting up a
completely different server?
perhaps someone with an idea of the problem and its solution will be
able to chime in.
--
On 05.06.2011 23:55, ag...@airpost.net wrote:
i still have no idea why this is necessary.
take it or not,
it is a professional solution which works here every day for
databases of 20 GB, with rsync,
without interrupting/locking mysqld for a second,
and it is much faster
there seems to be a bug,
) shawn.l.gr...@oracle.com
On 3/29/2011 19:09, John G. Heim wrote:
I would like to use mysqldump to get a copy of the code for a stored
procedure in a format that is similar to the code I used to create it.
The problem is that I'm blind and I have to listen to the code to debug
it. I think I have
) shawn.l.gr...@oracle.com
Cc: John G. Heim jh...@math.wisc.edu, mysql@lists.mysql.com
Sent: Wednesday, 30 March, 2011 9:01:06 AM
Subject: Re: getting procedure code via mysqldump
In case you use a linux or unix system, to strip off the comments in
linux
bash is very easy, you can use this simple
From: Claudio Nanni claudio.na...@gmail.com
To: Shawn Green (MySQL) shawn.l.gr...@oracle.com
Cc: John G. Heim jh...@math.wisc.edu; my...@lists.mysql.com
Sent: Wednesday, March 30, 2011 2:01 AM
Subject: Re: getting procedure code via mysqldump
In case you use a linux or unix system
Hi all!
John G. Heim wrote:
From: Claudio Nanni claudio.na...@gmail.com
[[...]]
In case you use a linux or unix system, to strip off the comments in
linux bash is very easy, you can use this simple bash command:
grep -v "^/\*" yourdumpfile.sql > yourdumpfilewithoutcomments.sql
That
That
I would like to use mysqldump to get a copy of the code for a stored
procedure in a format that is similar to the code I used to create it. The
problem is that I'm blind and I have to listen to the code to debug it. I
think I have a file containing the code that I used to create the stored