Re: Estimate mysqldump size

2018-11-28 Thread Olivier
Ronan McGlue  writes:

> Hi Olivier,
>
> On 28/11/2018 8:00 pm, Olivier wrote:
>> Hello,
>>
>> Is there a way to get an estimate of the size of a mysqldump in such a
>> way that it is always larger than the real size?
>>
>> So far, I have found:
>>
>> mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
>>    information_schema.tables WHERE table_schema NOT IN
>>    ('information_schema','performance_schema','mysql');"
>>
>> but the result may be smaller than the real size.
>
> In the above example, you also need to account for index_length, eg
>
> mysql>  select round(SUM(data_length+index_length)/POWER(1024,2),1) 
> Total_MB,round(SUM(data_length)/POWER(1024,2),1) 
> data_MB,round(SUM(index_length)/POWER(1024,2),1) index_MB  FROM 
> information_schema.tables where TABLE_SCHEMA not in ( 
> "information_schema", "performance_schema", "mysql") ;
> +--+-+--+
> | Total_MB | data_MB | index_MB |
> +--+-+--+
> |   4546.0 |  4093.7 |    452.2 |
> +--+-+--+
> 1 row in set (0.00 sec)

Thanks.

> However, this doesn't 100% map to OS file size ( if using innodb file 
> per table ) and will likely never be 100% accurate to what the OS 
> reports, due to fragmentation etc.
>
>>
>> I am writing a program that takes the output of mysqldump and pipes it
>> into a tar file.
>
> A typical global mysqldump ( ie taken with -A ) will be a single file.  
> Why are you then wanting to pipe this to a tar archive?

The tar file will be part of an Amanda backup. A full backup should
contain the mysqldump output, and incremental backups should contain the
binary logs.

Having everything in a tar file makes it very consistent and easy to
deal with in case of catastrophic failure (if everything is lost
except the tape, the backup can still be extracted by hand on a live
CD/single-user system, since it is all tar).

Amanda will also take care of the compression.
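
For what it's worth, one way to sidestep the size-in-the-header problem
entirely, when there is enough scratch space, would be to spool the dump to
a temporary file first so that tar sees the exact size. A rough sketch only
(paths are placeholders and credentials are assumed to come from ~/.my.cnf):

SPOOL=$(mktemp -d)
mysqldump --single-transaction --all-databases > "$SPOOL/dump.sql"
tar -cf - -C "$SPOOL" dump.sql    # tar now knows the exact size; stream the archive out
rm -rf "$SPOOL"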

> It's also common for mysqldump output to be compressed via a pipe, due to
> the nature of the file created ( e.g. text files compress *very* well ),
> and then sent across the network, e.g. via ssh:
>
> mysqldump -u.. -p -A | gzip > schema.sql.gz
>
>
> Aside from your stated goal of piping to tar, if we can step back a 
> level briefly - what are you trying to achieve here?

A plugin for Amanda. I think a commercial solution exists, but I don't need
anything very fancy, so I am trying to come up with my own solution.

Best regards,

Olivier

>
>> The tar file format has the size in the header, before the
>> data, and if the size of the dump is bigger than the size declared in the
>> header, tar does not like that (if the size of the dump is smaller than
>> the declared size, it can be padded with spaces).
>>
>> So, the estimate must be larger than the actual dump; how do I achieve
>> that?
>
> It won't be anything other than an estimate; however, it should still be
> reasonably close if you aren't doing a *lot* of DML on it.
>
> You could artificially inflate the expected size by, e.g., multiplying by
> 1.1x or 1.2x; however, there will always be an edge-case table which
> will be greater still..
>
>
> Regards
>
> Ronan McGlue
>
> MySQL Support
>
>
>
>>
>> Thanks in advance,
>>
>> Olivier
>>
>>
>

-- 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Estimate mysqldump size

2018-11-28 Thread Olivier
Ronan McGlue  writes:

> Hi Olivier,
>
> On 28/11/2018 8:00 pm, Olivier wrote:
>> Hello,
>>
>> Is there a way to get an estimate of the size of a mysqldump in such a
>> way that it is always larger than the real size?
>>
>> So far, I have found:
>>
>> mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
>>    information_schema.tables WHERE table_schema NOT IN
>>    ('information_schema','performance_schema','mysql');"
>>
>> but the result may be smaller than the real size.
>
> In the above example, you also need to account for index_length, eg

But I thought I had read that indexes are not saved by mysqldump, but
recreated on a restore?
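
As far as I can tell, the dump carries an index only as DDL inside the
CREATE TABLE statement, and the index is rebuilt when the dump is imported.
A quick way to check (mydb and mytable are placeholders):

mysqldump --no-data mydb mytable | grep 'KEY '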

Thanks in advance,

Olivier

>
> mysql>  select round(SUM(data_length+index_length)/POWER(1024,2),1) 
> Total_MB,round(SUM(data_length)/POWER(1024,2),1) 
> data_MB,round(SUM(index_length)/POWER(1024,2),1) index_MB  FROM 
> information_schema.tables where TABLE_SCHEMA not in ( 
> "information_schema", "performance_schema", "mysql") ;
> +--+-+--+
> | Total_MB | data_MB | index_MB |
> +--+-+--+
> |   4546.0 |  4093.7 |    452.2 |
> +--+-+--+
> 1 row in set (0.00 sec)
>
> However, this doesn't 100% map to OS file size ( if using innodb file 
> per table ) and will likely never be 100% accurate to what the OS 
> reports, due to fragmentation etc.
>
>>
>> I am writing a program that takes the output of mysqldump and pipes it
>> into a tar file.
>
> A typical global mysqldump ( ie taken with -A ) will be a single file.  
> Why are you then wanting to pipe this to a tar archive?
>
> It's also common for mysqldump output to be compressed via a pipe, due to
> the nature of the file created ( e.g. text files compress *very* well ),
> and then sent across the network, e.g. via ssh:
>
> mysqldump -u.. -p -A | gzip > schema.sql.gz
>
>
> Aside from your stated goal of piping to tar, if we can step back a 
> level briefly - what are you trying to achieve here?
>
>> The tar file format has the size in the header, before the
>> data, and if the size of the dump is bigger than the size declared in the
>> header, tar does not like that (if the size of the dump is smaller than
>> the declared size, it can be padded with spaces).
>>
>> So, the estimate must be larger than the actual dump; how do I achieve
>> that?
>
> It won't be anything other than an estimate; however, it should still be
> reasonably close if you aren't doing a *lot* of DML on it.
>
> You could artificially inflate the expected size by, e.g., multiplying by
> 1.1x or 1.2x; however, there will always be an edge-case table which
> will be greater still..
>
>
> Regards
>
> Ronan McGlue
>
> MySQL Support
>
>
>
>>
>> Thanks in advance,
>>
>> Olivier
>>
>>
>




Re: Estimate mysqldump size

2018-11-28 Thread Reindl Harald



On 28.11.18 at 10:00, Olivier wrote:
> Is there a way to get an estimate of the size of a mysqldump in such a
> way that it is always larger than the real size?
Keep in mind that a dump contains tons of SQL statements that do not exist
in that form in the data files.
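
A quick way to see how far apart the two can be for a single table (mydb
and t1 are placeholders, credentials assumed to come from ~/.my.cnf):

mysql -N -e "SELECT data_length + index_length FROM information_schema.tables
             WHERE table_schema = 'mydb' AND table_name = 't1'"
mysqldump mydb t1 | wc -c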




Re: Estimate mysqldump size

2018-11-28 Thread Ronan McGlue

Hi Olivier,

On 28/11/2018 8:00 pm, Olivier wrote:

Hello,

Is there a way to get an estimate of the size of a mysqldump in such a
way that it is always larger than the real size?

So far, I have found:

mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
   information_schema.tables WHERE table_schema NOT IN
   ('information_schema','performance_schema','mysql');"

but the result may be smaller than the real size.


In the above example, you also need to account for index_length, eg

mysql>  select round(SUM(data_length+index_length)/POWER(1024,2),1) 
Total_MB,round(SUM(data_length)/POWER(1024,2),1) 
data_MB,round(SUM(index_length)/POWER(1024,2),1) index_MB  FROM 
information_schema.tables where TABLE_SCHEMA not in ( 
"information_schema", "performance_schema", "mysql") ;

+--+-+--+
| Total_MB | data_MB | index_MB |
+--+-+--+
|   4546.0 |  4093.7 |    452.2 |
+--+-+--+
1 row in set (0.00 sec)

However, this doesn't 100% map to OS file size ( if using innodb file 
per table ) and will likely never be 100% accurate to what the OS 
reports, due to fragmentation etc.




I am writing a program that takes the output of mysqldump and pipes it
into a tar file.


A typical global mysqldump ( ie taken with -A ) will be a single file.  
Why are you then wanting to pipe this to a tar archive?


It's also common for mysqldump output to be compressed via a pipe, due to
the nature of the file created ( e.g. text files compress *very* well ),
and then sent across the network, e.g. via ssh:


mysqldump -u.. -p -A | gzip > schema.sql.gz


Aside from your stated goal of piping to tar, if we can step back a 
level briefly - what are you trying to achieve here?



The tar file format has the size in the header, before the
data, and if the size of the dump is bigger than the size declared in the
header, tar does not like that (if the size of the dump is smaller than
the declared size, it can be padded with spaces).

So, the estimate must be larger than the actual dump; how do I achieve
that?


It won't be anything other than an estimate; however, it should still be
reasonably close if you aren't doing a *lot* of DML on it.


You could artificially inflate the expected size by, e.g., multiplying by
1.1x or 1.2x; however, there will always be an edge-case table which
will be greater still..
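
One way to turn that into a single number with the head-room built in (a
sketch; the 1.2 factor is just the arbitrary fudge mentioned above, and
credentials are assumed to come from ~/.my.cnf):

EST_BYTES=$(mysql -N -s -e "SELECT CEIL(1.2 * SUM(data_length + index_length))
    FROM information_schema.tables
    WHERE table_schema NOT IN ('information_schema','performance_schema','mysql')")
echo "estimated upper bound for the dump: $EST_BYTES bytes"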



Regards

Ronan McGlue

MySQL Support





Thanks in advance,

Olivier







Estimate mysqldump size

2018-11-28 Thread Olivier
Hello,

Is there a way to get an estimate of the size of a mysqldump in such a
way that it is always larger than the real size?

So far, I have found:

   mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
      information_schema.tables WHERE table_schema NOT IN
      ('information_schema','performance_schema','mysql');"

but the result may be smaller than the real size.

I am writing a program that takes the output of mysqldump and pipes it
into a tar file. The tar file format has the size in the header, before the
data, and if the size of the dump is bigger than the size declared in the
header, tar does not like that (if the size of the dump is smaller than
the declared size, it can be padded with spaces).

So, the estimate must be larger than the actual dump; how do I achieve
that?

Thanks in advance,

Olivier





Re: mysqldump with single-transaction option.

2014-10-08 Thread Andrew Moore
We tend to use binary backups (Xtrabackup) for full, consistent dataset
restores (think slave provisioning and disaster recovery), and logical
backups to perform single-table restores in the event that a rollback is
needed because someone drops a table or carries out an insane update. We
also use mydumper instead of mysqldump for its compression and encryption
features. mysqldump stops being useful on full/large datasets due to its
single-threadedness.
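
For example, something along these lines (a sketch only; shop and
/backups/shop are placeholders, and option spellings have changed between
mydumper releases, so check --help on your build):

mydumper --database shop --threads 4 --compress --outputdir /backups/shop
myloader --directory /backups/shop --threads 4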



On Tue, Oct 7, 2014 at 8:35 AM, yoku ts. yoku0...@gmail.com wrote:

 Maybe no, as you knew.

  It means that after lock is released, dump is made while the read and
 write
  activity is going on.   This dump then, would be inconsistent.

 Not only binary logs, each tables in your dump is based the time when
 mysqldump began to dump *each* tables.
 It means, for example, table1 in your dump is based 2014-10-07 00:00:00,
 and next table2 is based 2014-10-07 00:00:01, and next table3 is ..

 I don't have a motivation for restoring its consistency..


 Regards,


 2014-10-07 15:44 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

  So, in case of * NOT Innodb * storage engine, say MyISAM, will this dump
 be
  of any useful?
 
  Best Regards,
  Geetanjali Mehra
  Senior Oracle and MySQL DBA Corporate Consultant and Database Security
  Specialist
 
 
  On Tue, Oct 7, 2014 at 11:55 AM, yoku ts. yoku0...@gmail.com wrote:
 
   Hello,
  
   If you use any *NOT InnoDB* storage engine, you're right.
   mysqldump with --single-transaction doesn't have any consistent as you
  say.
  
   If you use InnoDB all databases and tables, your dumping process is
   protected by transaction isolation level REPEATABLE-READ.
  
  
  
 
 http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction
  
   Regards,
  
  
   2014-10-07 12:52 GMT+09:00 geetanjali mehra 
 mailtogeetanj...@gmail.com
  :
  
   It seems to me that once the read lock is acquired, only the binary
 log
   coordinates are read. Soon after binary log coordinates are read, lock
  is
   released.  Is there anything else that happens here?
  
   It means that after lock is released, dump is made while the read and
   write
   activity is going on.   This dump then, would be inconsistent.  So, to
   make
   this dump a consistent one when restoring it, binary log will be
 applied
   starting from the binary log  coordinates that has been read earlier.
  
   This is what I understand. Please correct me if my understanding is
  wrong.
  
   Best Regards,
   Geetanjali Mehra
   Senior Oracle and MySQL DBA Corporate Consultant and Database Security
   Specialist
  
  
   On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green 
 shawn.l.gr...@oracle.com
  
   wrote:
  
Hello Geetanjali,
   
On 9/23/2014 7:14 AM, geetanjali mehra wrote:
   
Can anybody please mention the internals that works when we use
   mysqldump
as follows:
   
   
*mysqldump --single-transaction --all-databases 
   backup_sunday_1_PM.sql*
   
MySQL manual says:
   
This backup operation acquires a global read lock on all tables at
  the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK
http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as
  this
lock
has been acquired, the binary log coordinates are read and the lock
  is
released. If long updating statements are running when the FLUSH
http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
   issued,
the backup operation may stall until those statements finish. After
   that,
the dump becomes lock-free and does not disturb reads and writes on
  the
tables.
   
Can anyone explain it more? Please.
   
   
Which part would you like to address first?
   
I have a feeling it's more about how FLUSH TABLES WITH READ LOCK
 works
   but
I want to be certain before answering.
   
Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work
 Together.
Office: Blountville, TN
   
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql
   
   
  
  
  
 



Re: mysqldump with single-transaction option.

2014-10-07 Thread yoku ts.
Hello,

If you use any *NOT InnoDB* storage engine, you're right:
mysqldump with --single-transaction doesn't give you any consistency, as you say.

If you use InnoDB for all databases and tables, your dumping process is
protected by the REPEATABLE-READ transaction isolation level.

http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction
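
A minimal sketch of that, assuming every table is InnoDB and binary logging
is enabled (--master-data=2 records the binary log coordinates as a comment
near the top of the dump):

mysqldump --single-transaction --master-data=2 --all-databases > backup.sql
grep 'CHANGE MASTER' backup.sql    # the recorded coordinates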

Regards,


2014-10-07 12:52 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

 It seems to me that once the read lock is acquired, only the binary log
 coordinates are read. Soon after binary log coordinates are read, lock is
 released.  Is there anything else that happens here?

 It means that after lock is released, dump is made while the read and write
 activity is going on.   This dump then, would be inconsistent.  So, to make
 this dump a consistent one when restoring it, binary log will be applied
 starting from the binary log  coordinates that has been read earlier.

 This is what I understand. Please correct me if my understanding is wrong.

 Best Regards,
 Geetanjali Mehra
 Senior Oracle and MySQL DBA Corporate Consultant and Database Security
 Specialist


 On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green shawn.l.gr...@oracle.com
 wrote:

  Hello Geetanjali,
 
  On 9/23/2014 7:14 AM, geetanjali mehra wrote:
 
  Can anybody please mention the internals that works when we use
 mysqldump
  as follows:
 
 
  *mysqldump --single-transaction --all-databases 
 backup_sunday_1_PM.sql*
 
  MySQL manual says:
 
  This backup operation acquires a global read lock on all tables at the
  beginning of the dump (using *FLUSH TABLES WITH READ LOCK
  http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this
  lock
  has been acquired, the binary log coordinates are read and the lock is
  released. If long updating statements are running when the FLUSH
  http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
 issued,
  the backup operation may stall until those statements finish. After
 that,
  the dump becomes lock-free and does not disturb reads and writes on the
  tables.
 
  Can anyone explain it more? Please.
 
 
  Which part would you like to address first?
 
  I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works
 but
  I want to be certain before answering.
 
  Yours,
  --
  Shawn Green
  MySQL Senior Principal Technical Support Engineer
  Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
  Office: Blountville, TN
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 



Re: mysqldump with single-transaction option.

2014-10-07 Thread geetanjali mehra
So, in the case of a *not InnoDB* storage engine, say MyISAM, will this dump be
of any use?

Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Consultant and Database Security
Specialist


On Tue, Oct 7, 2014 at 11:55 AM, yoku ts. yoku0...@gmail.com wrote:

 Hello,

 If you use any *NOT InnoDB* storage engine, you're right.
 mysqldump with --single-transaction doesn't have any consistent as you say.

 If you use InnoDB all databases and tables, your dumping process is
 protected by transaction isolation level REPEATABLE-READ.


 http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction

 Regards,


 2014-10-07 12:52 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

 It seems to me that once the read lock is acquired, only the binary log
 coordinates are read. Soon after binary log coordinates are read, lock is
 released.  Is there anything else that happens here?

 It means that after lock is released, dump is made while the read and
 write
 activity is going on.   This dump then, would be inconsistent.  So, to
 make
 this dump a consistent one when restoring it, binary log will be applied
 starting from the binary log  coordinates that has been read earlier.

 This is what I understand. Please correct me if my understanding is wrong.

 Best Regards,
 Geetanjali Mehra
 Senior Oracle and MySQL DBA Corporate Consultant and Database Security
 Specialist


 On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green shawn.l.gr...@oracle.com
 wrote:

  Hello Geetanjali,
 
  On 9/23/2014 7:14 AM, geetanjali mehra wrote:
 
  Can anybody please mention the internals that works when we use
 mysqldump
  as follows:
 
 
  *mysqldump --single-transaction --all-databases 
 backup_sunday_1_PM.sql*
 
  MySQL manual says:
 
  This backup operation acquires a global read lock on all tables at the
  beginning of the dump (using *FLUSH TABLES WITH READ LOCK
  http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this
  lock
  has been acquired, the binary log coordinates are read and the lock is
  released. If long updating statements are running when the FLUSH
  http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
 issued,
  the backup operation may stall until those statements finish. After
 that,
  the dump becomes lock-free and does not disturb reads and writes on the
  tables.
 
  Can anyone explain it more? Please.
 
 
  Which part would you like to address first?
 
  I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works
 but
  I want to be certain before answering.
 
  Yours,
  --
  Shawn Green
  MySQL Senior Principal Technical Support Engineer
  Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
  Office: Blountville, TN
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 





Re: mysqldump with single-transaction option.

2014-10-07 Thread yoku ts.
Maybe not, as you knew.

 It means that after lock is released, dump is made while the read and
write
 activity is going on.   This dump then, would be inconsistent.

It's not only the binary logs: each table in your dump is based on the time
when mysqldump began to dump *each* table.
That means, for example, table1 in your dump is based on 2014-10-07 00:00:00,
the next table2 is based on 2014-10-07 00:00:01, the next table3 is ..

I wouldn't be motivated to try to restore its consistency..


Regards,


2014-10-07 15:44 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

 So, in case of * NOT Innodb * storage engine, say MyISAM, will this dump be
 of any useful?

 Best Regards,
 Geetanjali Mehra
 Senior Oracle and MySQL DBA Corporate Consultant and Database Security
 Specialist


 On Tue, Oct 7, 2014 at 11:55 AM, yoku ts. yoku0...@gmail.com wrote:

  Hello,
 
  If you use any *NOT InnoDB* storage engine, you're right.
  mysqldump with --single-transaction doesn't have any consistent as you
 say.
 
  If you use InnoDB all databases and tables, your dumping process is
  protected by transaction isolation level REPEATABLE-READ.
 
 
 
 http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction
 
  Regards,
 
 
  2014-10-07 12:52 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com
 :
 
  It seems to me that once the read lock is acquired, only the binary log
  coordinates are read. Soon after binary log coordinates are read, lock
 is
  released.  Is there anything else that happens here?
 
  It means that after lock is released, dump is made while the read and
  write
  activity is going on.   This dump then, would be inconsistent.  So, to
  make
  this dump a consistent one when restoring it, binary log will be applied
  starting from the binary log  coordinates that has been read earlier.
 
  This is what I understand. Please correct me if my understanding is
 wrong.
 
  Best Regards,
  Geetanjali Mehra
  Senior Oracle and MySQL DBA Corporate Consultant and Database Security
  Specialist
 
 
  On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green shawn.l.gr...@oracle.com
 
  wrote:
 
   Hello Geetanjali,
  
   On 9/23/2014 7:14 AM, geetanjali mehra wrote:
  
   Can anybody please mention the internals that works when we use
  mysqldump
   as follows:
  
  
   *mysqldump --single-transaction --all-databases 
  backup_sunday_1_PM.sql*
  
   MySQL manual says:
  
   This backup operation acquires a global read lock on all tables at
 the
   beginning of the dump (using *FLUSH TABLES WITH READ LOCK
   http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as
 this
   lock
   has been acquired, the binary log coordinates are read and the lock
 is
   released. If long updating statements are running when the FLUSH
   http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
  issued,
   the backup operation may stall until those statements finish. After
  that,
   the dump becomes lock-free and does not disturb reads and writes on
 the
   tables.
  
   Can anyone explain it more? Please.
  
  
   Which part would you like to address first?
  
   I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works
  but
   I want to be certain before answering.
  
   Yours,
   --
   Shawn Green
   MySQL Senior Principal Technical Support Engineer
   Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
   Office: Blountville, TN
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
 
 
 



Re: mysqldump with single-transaction option.

2014-10-06 Thread shawn l.green

Hello Geetanjali,

On 9/23/2014 7:14 AM, geetanjali mehra wrote:

Can anybody please explain the internals of what happens when we use mysqldump
as follows:


*mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql*

MySQL manual says:

This backup operation acquires a global read lock on all tables at the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK
http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this lock
has been acquired, the binary log coordinates are read and the lock is
released. If long updating statements are running when the FLUSH
http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is issued,
the backup operation may stall until those statements finish. After that,
the dump becomes lock-free and does not disturb reads and writes on the
tables.

Can anyone explain it more? Please.



Which part would you like to address first?

I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works 
but I want to be certain before answering.
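
For reference, one way to watch the sequence for yourself is to use the
general query log on a test server. A sketch only: it assumes a privileged
account, log_output=FILE (the default), and binary logging enabled so that
--master-data works.

mysql -e "SET GLOBAL general_log = 'ON'"
mysqldump --single-transaction --master-data=2 --all-databases > /dev/null
grep -E 'FLUSH TABLES|CONSISTENT SNAPSHOT|SHOW MASTER STATUS|UNLOCK TABLES' \
    "$(mysql -N -e 'SELECT @@general_log_file')"
mysql -e "SET GLOBAL general_log = 'OFF'"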


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: mysqldump with single-transaction option.

2014-10-06 Thread geetanjali mehra
It seems to me that once the read lock is acquired, only the binary log
coordinates are read. Soon after the binary log coordinates are read, the
lock is released.  Is there anything else that happens here?

That means that after the lock is released, the dump is made while read and
write activity is going on.  This dump, then, would be inconsistent.  So, to
make this dump consistent when restoring it, the binary log will be applied
starting from the binary log coordinates that were read earlier.

This is what I understand. Please correct me if my understanding is wrong.

Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Consultant and Database Security
Specialist


On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green shawn.l.gr...@oracle.com
wrote:

 Hello Geetanjali,

 On 9/23/2014 7:14 AM, geetanjali mehra wrote:

 Can anybody please mention the internals that works when we use mysqldump
 as follows:


 *mysqldump --single-transaction --all-databases  backup_sunday_1_PM.sql*

 MySQL manual says:

 This backup operation acquires a global read lock on all tables at the
 beginning of the dump (using *FLUSH TABLES WITH READ LOCK
 http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this
 lock
 has been acquired, the binary log coordinates are read and the lock is
 released. If long updating statements are running when the FLUSH
 http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is issued,
 the backup operation may stall until those statements finish. After that,
 the dump becomes lock-free and does not disturb reads and writes on the
 tables.

 Can anyone explain it more? Please.


 Which part would you like to address first?

 I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works but
 I want to be certain before answering.

 Yours,
 --
 Shawn Green
 MySQL Senior Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




mysqldump with single-transaction option.

2014-09-23 Thread geetanjali mehra
Can anybody please explain the internals of what happens when we use mysqldump
as follows:


*mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql*

MySQL manual says:

This backup operation acquires a global read lock on all tables at the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK*,
http://dev.mysql.com/doc/refman/5.6/en/flush.html). As soon as this lock
has been acquired, the binary log coordinates are read and the lock is
released. If long updating statements are running when the FLUSH statement
is issued, the backup operation may stall until those statements finish.
After that, the dump becomes lock-free and does not disturb reads and
writes on the tables.

Can anyone explain it more? Please.



Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Consultant and Database Security
Specialist


Re: Excluding MySQL database tables from mysqldump

2014-04-08 Thread hsv
 2014/04/07 08:02 -0800, Tim Johnson 
  2)mysqldump forces all database names to lower case in the CREATE
  DATABASE statement. I know, one shouldn't use upper case in
  database names, but :) tell that to my clients. 

Why not? That is not mentioned in the section devoted to mapping such names to 
the file-system.





Re: Excluding MySQL database tables from mysqldump

2014-04-08 Thread Tim Johnson
* h...@tbbs.net h...@tbbs.net [140407 23:09]:
  2014/04/07 08:02 -0800, Tim Johnson 
   2)mysqldump forces all database names to lower case in the CREATE
   DATABASE statement. I know, one shouldn't use upper case in
   database names, but :) tell that to my clients. 
 
 Why not? That is not mentioned in the section devoted to mapping such names 
 to the file-system.
  I found 'official' documentation here regarding Mac OS X:
  https://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html

  I had also found some reference to this having been a side effect
  of migrating files from older Mac filesystems (with which I am not
  familiar).

  I don't find any reference in the mysqldump documentation at
  https://dev.mysql.com/doc/refman/5.1/en/mysqldump.html to any
  mechanism for overriding this.

  The incompatibility kicks in when trying to restore the databases
  on Linux - and I presume FreeBSD, SunOS and other POSIX systems
  would show the same problem.

  Live and learn ...
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com




Re: Excluding MySQL database tables from mysqldump

2014-04-07 Thread shawn l.green

Hello Tim,

On 4/4/2014 10:27 PM, Tim Johnson wrote:

* Tim Johnson t...@akwebsoft.com [140404 17:46]:

Currently I'm running mysql on a Mac OSX partition.

I have installed an ubuntu dual-booted partition and put mysql on
it. I have already set up a mysql user on the ubuntu OS.

In the past I have used mysqldump with just the --all-databases
option to transfer data across different linux partitions.

I'm wondering if I should explicitly exclude some of the tables from
the mysql database. If so, which? perhaps mysql.user?

thoughts? Opinions?
thanks

   I should add the following:

   1)the only user added to the new partition is the same as the
   primary non-root user on the Mac partition. Same credentials

   2)this is a workstation - it is closed to the outside world.

   FYI: ...



There are several ways to select which data you want in the backup. You
can back up per table, per database, by object type per database (routines,
triggers), or by global objects (events). What level of detail you want to
copy from your old instance into your new instance is completely up to you.
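
A few illustrative invocations (a sketch; shop, blog and the table names
are placeholders):

mysqldump --all-databases --ignore-table=mysql.user     > all_but_mysql_user.sql
mysqldump --databases shop blog                         > two_schemas.sql
mysqldump shop orders customers                         > two_tables_from_shop.sql
mysqldump --no-data --routines --triggers --events shop > shop_schema_only.sql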


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Excluding MySQL database tables from mysqldump

2014-04-07 Thread Tim Johnson
* shawn l.green shawn.l.gr...@oracle.com [140407 07:05]:
 Hello Tim,
 
 On 4/4/2014 10:27 PM, Tim Johnson wrote:
 * Tim Johnson t...@akwebsoft.com [140404 17:46]:
 Currently I'm running mysql on a Mac OSX partition.
 
 I have installed an ubuntu dual-booted partition and put mysql on
 it. I have already set up a mysql user on the ubuntu OS.
 
 In the past I have used mysqldump with just the --all-databases
 option to transfer data across different linux partitions.
 
 I'm wondering if I should explicitly exclude some of the tables from
 the mysql database. If so, which? perhaps mysql.user?
 
 thoughts? Opinions?
 thanks
I should add the following:
 
1)the only user added to the new partition is the same as the
primary non-root user on the Mac partition. Same credentials
 
2)this is a workstation - it is closed to the outside world.
 
FYI: ...
 
 
 There are several ways to select which data you want in the backup. You 
 can backup per-table, per-database, object type per database (routines, 
 triggers), or global objects (events). What level of detail you want to 
 copy from your old instance into your new instance is completely up to you.

  I've run into other problems, such as:

  1) running mysqldump exactly as I would have on Linux and not
  getting all databases. Dunno why, but keep on reading.

  2) mysqldump forces all database names to lower case in the CREATE
  DATABASE statement. I know, one shouldn't use upper case in
  database names, but :) tell that to my clients.

  It turns out '2)' is a known problem on the Mac, but I just didn't
  know it...

  My workaround was to write a Python app that uses the MySQLdb
  module to get the names of all databases, iterate through the list,
  selectively operate on them, and ensure that proper case is
  used in the CREATE DATABASE command..
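
  The same idea works as a plain shell loop, for anyone who wants to skip
  the Python layer (a sketch; the skip list and output file are placeholders,
  and credentials are assumed to come from ~/.my.cnf):

  for db in $(mysql -N -s -e "SHOW DATABASES"); do
      case "$db" in
          information_schema|performance_schema|mysql) continue ;;
      esac
      echo "CREATE DATABASE IF NOT EXISTS \`$db\`;" >> dump_all.sql
      echo "USE \`$db\`;"                           >> dump_all.sql
      mysqldump "$db"                               >> dump_all.sql
  done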

  So I'm good here, I think.
  Thanks much for the reply.

-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com




Excluding MySQL database tables from mysqldump

2014-04-04 Thread Tim Johnson
Currently I'm running mysql on a Mac OSX partition. 

I have installed an ubuntu dual-booted partition and put mysql on
it. I have already set up a mysql user on the ubuntu OS. 

In the past I have used mysqldump with just the --all-databases
option to transfer data across different linux partitions.

I'm wondering if I should explicitly exclude some of the tables from
the mysql database. If so, which? perhaps mysql.user?

thoughts? Opinions?
thanks
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com




Re: Excluding MySQL database tables from mysqldump

2014-04-04 Thread Tim Johnson
* Tim Johnson t...@akwebsoft.com [140404 17:46]:
 Currently I'm running mysql on a Mac OSX partition. 
 
 I have installed an ubuntu dual-booted partition and put mysql on
 it. I have already set up a mysql user on the ubuntu OS. 
 
 In the past I have used mysqldump with just the --all-databases
 option to transfer data across different linux partitions.
 
 I'm wondering if I should explicitly exclude some of the tables from
 the mysql database. If so, which? perhaps mysql.user?
 
 thoughts? Opinions?
 thanks
  I should add the following:

  1)the only user added to the new partition is the same as the
  primary non-root user on the Mac partition. Same credentials

  2)this is a workstation - it is closed to the outside world.

  FYI: ...
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com




Re: How do I mysqldump different database tables to the same .sql file?

2013-11-22 Thread Johan De Meersman
--databases, methinks.
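
Something along these lines (a sketch, reusing a couple of the schema names
from the quoted message; note that --databases takes whole schemas, not
db.table pairs):

# emits CREATE DATABASE / USE for every schema listed
mysqldump -uroot -p --databases member_sessions support mydb1 > merge_backup.sql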

- Original Message -
 From: Daevid Vincent dae...@daevid.com
 To: mysql@lists.mysql.com
 Sent: Thursday, 21 November, 2013 10:44:39 PM
 Subject: How do I mysqldump different database tables to the same .sql file?
 
 I'm working on some code where I am trying to merge two customer accounts
 (we get people signing up under different usernames, emails, or just create
 a new account sometimes). I want to test it, and so I need a way to restore
 the data in the particular tables. Taking a dump of all the DBs and tables
 is not feasible as it's massive, and importing (with indexes) takes HOURS. I
 just want only the tables that are relevant. I can find all the tables that
 have `customer_id` in them with this magic incantation:
  
 SELECT `TABLE_NAME`,`TABLE_SCHEMA` FROM `information_schema`.`COLUMNS` WHERE
 `COLUMN_NAME` = 'customer_id' ORDER BY `TABLE_SCHEMA`, `TABLE_NAME`
  
 Then I crafted this, but it pukes on the db name portion. :-(
  
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --tables
 member_sessions.users_last_login support.tickets mydb1.clear_passwords
 mydb1.crak_subscriptions mydb1.customers mydb1.customers_free
 mydb1.customers_free_tracking mydb1.customers_log
 mydb1.customers_subscriptions mydb1.customers_transactions mydb1.players
 mydb1content.actors_comments mydb1content.actor_collections
 mydb1content.actor_likes_users mydb1content.collections
 mydb1content.dvd_likes_users mydb1content.free_videos
 mydb1content.genre_collections mydb1content.playlists
 mydb1content.poll_votes mydb1content.scenes_comments
 mydb1content.scenes_ratings_users_new2 mydb1content.scene_collections
 mydb1content.scene_likes_users mydb1content.videos_downloaded
 mydb1content.videos_viewed  merge_backup.sql
  
 -- Connecting to localhost...
 mysqldump: Got error: 1049: Unknown database
 'member_sessions.users_last_login' when selecting the database
 -- Disconnecting from localhost...
  
 I searched a bit and found that it seems I have to split this into multiple
 statements and append like I'm back in 1980. *sigh*
  
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 member_sessions --tables users_last_login  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 support --tables tickets  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 mydb1 --tables clear_passwords customers customers_free
 customers_free_tracking customers_log customers_subscriptions
 customers_transactions players  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 content --tables actors_comments actor_collections actor_likes_users
 collections dvd_likes_users free_videos genre_collections playlists
 poll_votes scenes_comments scenes_ratings_users_new2 scene_collections
 scene_likes_users videos_downloaded videos_viewed  merge_backup.sql
  
 The critical flaw here is that the mysqldump program does NOT put the
 necessary USE DATABASE statement in each of these dumps since there is
 only one DB after the -database apparently. UGH. Nor do I see a command line
 option to force it to output this seemingly obvious statement.
  
 It's a pretty significant shortcoming of mysqldump if you ask me that I
 can't do it the way I had it in the first example since that's pretty much
 standard SQL convetion of db.table.column format. And even more baffling is
 why it wouldn't dump out the USE statement always even if there is only
 one DB. It's a few characters and would save a lot of headaches in case
 someone tried to dump their .sql file into the wrong DB on accident.
  
 Plus it's not easy to edit a 2.6GB file to manually insert these USE
 lines.
  
 Is there a way to do this with some command line option I'm not seeing in
 the man page?
 

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: How do I mysqldump different database tables to the same .sql file?

2013-11-21 Thread Michael Dykman
There is a good reason that the USE database statement is not output in
those dumps.. it would make the tool very difficult to use for moving data
around.

If I might suggest, a simple workaround is to create a shell script along
these lines..  you might want to do something a little more sophisticated.

#!/bin/sh

echo 'USE `database1`;'          >> outfile.sql
mysqldump -(firstsetofoptions)   >> outfile.sql
echo 'USE `database2`;'          >> outfile.sql
mysqldump -(secondsetofoptions)  >> outfile.sql




On Thu, Nov 21, 2013 at 4:44 PM, Daevid Vincent dae...@daevid.com wrote:

 I'm working on some code where I am trying to merge two customer accounts
 (we get people signing up under different usernames, emails, or just create
 a new account sometimes). I want to test it, and so I need a way to restore
 the data in the particular tables. Taking a dump of all the DBs and tables
 is not feasible as it's massive, and importing (with indexes) takes HOURS.
 I
 just want only the tables that are relevant. I can find all the tables that
 have `customer_id` in them with this magic incantation:

 SELECT `TABLE_NAME`,`TABLE_SCHEMA` FROM `information_schema`.`COLUMNS`
 WHERE
 `COLUMN_NAME` = 'customer_id' ORDER BY `TABLE_SCHEMA`, `TABLE_NAME`

 Then I crafted this, but it pukes on the db name portion. :-(

 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --tables
 member_sessions.users_last_login support.tickets mydb1.clear_passwords
 mydb1.crak_subscriptions mydb1.customers mydb1.customers_free
 mydb1.customers_free_tracking mydb1.customers_log
 mydb1.customers_subscriptions mydb1.customers_transactions mydb1.players
 mydb1content.actors_comments mydb1content.actor_collections
 mydb1content.actor_likes_users mydb1content.collections
 mydb1content.dvd_likes_users mydb1content.free_videos
 mydb1content.genre_collections mydb1content.playlists
 mydb1content.poll_votes mydb1content.scenes_comments
 mydb1content.scenes_ratings_users_new2 mydb1content.scene_collections
 mydb1content.scene_likes_users mydb1content.videos_downloaded
 mydb1content.videos_viewed  merge_backup.sql

 -- Connecting to localhost...
 mysqldump: Got error: 1049: Unknown database
 'member_sessions.users_last_login' when selecting the database
 -- Disconnecting from localhost...

 I searched a bit and found that it seems I have to split this into multiple
 statements and append like I'm back in 1980. *sigh*

 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 member_sessions --tables users_last_login  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 support --tables tickets  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 mydb1 --tables clear_passwords customers customers_free
 customers_free_tracking customers_log customers_subscriptions
 customers_transactions players  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 content --tables actors_comments actor_collections actor_likes_users
 collections dvd_likes_users free_videos genre_collections playlists
 poll_votes scenes_comments scenes_ratings_users_new2 scene_collections
 scene_likes_users videos_downloaded videos_viewed  merge_backup.sql

 The critical flaw here is that the mysqldump program does NOT put the
 necessary USE DATABASE statement in each of these dumps since there is
 only one DB after the -database apparently. UGH. Nor do I see a command
 line
 option to force it to output this seemingly obvious statement.

 It's a pretty significant shortcoming of mysqldump if you ask me that I
 can't do it the way I had it in the first example since that's pretty much
 standard SQL convetion of db.table.column format. And even more baffling is
 why it wouldn't dump out the USE statement always even if there is only
 one DB. It's a few characters and would save a lot of headaches in case
 someone tried to dump their .sql file into the wrong DB on accident.

 Plus it's not easy to edit a 2.6GB file to manually insert these USE
 lines.

 Is there a way to do this with some command line option I'm not seeing in
 the man page?




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


RE: How do I mysqldump different database tables to the same .sql file?

2013-11-21 Thread Daevid Vincent
Except that it outputs the USE statement if you have more than one
database, so your theory doesn't hold a lot of water IMHO. Not to mention
it's near the very top of the output so it's pretty easy to trim it off if
you REALLY needed to move the DB (which I presume is not as frequently as
simply wanting a backup/dump of a database to restore).

Thanks for the shell script suggestion, that is what I've done already to
work around this silliness.

 -Original Message-
 From: Michael Dykman [mailto:mdyk...@gmail.com]
 Sent: Thursday, November 21, 2013 1:59 PM
 To: MySql
 Subject: Re: How do I mysqldump different database tables to the same .sql
 file?
 
 There is a good reason that the USE database is not output in those
dumps..
  it would make the tool very difficult to use for moving data around.
 
 If I might suggest, a simple workaround is to create a shell script along
 these lines..  you might to do something a little more sophisticated.
 
 #
 #!/bin/sh
 
 echo  USE `database1`;  outflfile.sql
 mysqldump -(firstsetofoptions)  outfile.sql
 echo  USE `database2`;  outflfile.sql
 mysqldump -(secondsetofoptions)  outfile.sql
 
 
 
 
 On Thu, Nov 21, 2013 at 4:44 PM, Daevid Vincent dae...@daevid.com wrote:
 
  I'm working on some code where I am trying to merge two customer
accounts
  (we get people signing up under different usernames, emails, or just
 create
  a new account sometimes). I want to test it, and so I need a way to
 restore
  the data in the particular tables. Taking a dump of all the DBs and
tables
  is not feasible as it's massive, and importing (with indexes) takes
HOURS.
  I
  just want only the tables that are relevant. I can find all the tables
 that
  have `customer_id` in them with this magic incantation:
 
  SELECT `TABLE_NAME`,`TABLE_SCHEMA` FROM `information_schema`.`COLUMNS`
  WHERE
  `COLUMN_NAME` = 'customer_id' ORDER BY `TABLE_SCHEMA`, `TABLE_NAME`
 
  Then I crafted this, but it pukes on the db name portion. :-(
 
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose --tables
  member_sessions.users_last_login support.tickets mydb1.clear_passwords
  mydb1.crak_subscriptions mydb1.customers mydb1.customers_free
  mydb1.customers_free_tracking mydb1.customers_log
  mydb1.customers_subscriptions mydb1.customers_transactions mydb1.players
  mydb1content.actors_comments mydb1content.actor_collections
  mydb1content.actor_likes_users mydb1content.collections
  mydb1content.dvd_likes_users mydb1content.free_videos
  mydb1content.genre_collections mydb1content.playlists
  mydb1content.poll_votes mydb1content.scenes_comments
  mydb1content.scenes_ratings_users_new2 mydb1content.scene_collections
  mydb1content.scene_likes_users mydb1content.videos_downloaded
  mydb1content.videos_viewed  merge_backup.sql
 
  -- Connecting to localhost...
  mysqldump: Got error: 1049: Unknown database
  'member_sessions.users_last_login' when selecting the database
  -- Disconnecting from localhost...
 
  I searched a bit and found that it seems I have to split this into
 multiple
  statements and append like I'm back in 1980. *sigh*
 
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose
--database
  member_sessions --tables users_last_login  merge_backup.sql
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose
--database
  support --tables tickets  merge_backup.sql
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose
--database
  mydb1 --tables clear_passwords customers customers_free
  customers_free_tracking customers_log customers_subscriptions
  customers_transactions players  merge_backup.sql
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose
--database
  content --tables actors_comments actor_collections actor_likes_users
  collections dvd_likes_users free_videos genre_collections playlists
  poll_votes scenes_comments scenes_ratings_users_new2 scene_collections
  scene_likes_users videos_downloaded videos_viewed  merge_backup.sql
 
  The critical flaw here is that the mysqldump program does NOT put the
  necessary USE DATABASE statement in each of these dumps since there is
  only one DB after the -database apparently. UGH. Nor do I see a command
  line
  option to force it to output this seemingly

How do I mysqldump different database tables to the same .sql file?

2013-11-21 Thread Daevid Vincent
I'm working on some code where I am trying to merge two customer accounts
(we get people signing up under different usernames, emails, or just create
a new account sometimes). I want to test it, and so I need a way to restore
the data in the particular tables. Taking a dump of all the DBs and tables
is not feasible as it's massive, and importing (with indexes) takes HOURS. I
just want only the tables that are relevant. I can find all the tables that
have `customer_id` in them with this magic incantation:
 
SELECT `TABLE_NAME`,`TABLE_SCHEMA` FROM `information_schema`.`COLUMNS` WHERE
`COLUMN_NAME` = 'customer_id' ORDER BY `TABLE_SCHEMA`, `TABLE_NAME`
 
Then I crafted this, but it pukes on the db name portion. :-(
 
mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
--complete-insert --insert-ignore --create-options --quick --force
--set-charset --disable-keys --quote-names --comments --verbose --tables
member_sessions.users_last_login support.tickets mydb1.clear_passwords
mydb1.crak_subscriptions mydb1.customers mydb1.customers_free
mydb1.customers_free_tracking mydb1.customers_log
mydb1.customers_subscriptions mydb1.customers_transactions mydb1.players
mydb1content.actors_comments mydb1content.actor_collections
mydb1content.actor_likes_users mydb1content.collections
mydb1content.dvd_likes_users mydb1content.free_videos
mydb1content.genre_collections mydb1content.playlists
mydb1content.poll_votes mydb1content.scenes_comments
mydb1content.scenes_ratings_users_new2 mydb1content.scene_collections
mydb1content.scene_likes_users mydb1content.videos_downloaded
mydb1content.videos_viewed > merge_backup.sql
 
-- Connecting to localhost...
mysqldump: Got error: 1049: Unknown database
'member_sessions.users_last_login' when selecting the database
-- Disconnecting from localhost...
 
I searched a bit and found that it seems I have to split this into multiple
statements and append like I'm back in 1980. *sigh*
 
mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
--complete-insert --insert-ignore --create-options --quick --force
--set-charset --disable-keys --quote-names --comments --verbose --database
member_sessions --tables users_last_login >> merge_backup.sql
mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
--complete-insert --insert-ignore --create-options --quick --force
--set-charset --disable-keys --quote-names --comments --verbose --database
support --tables tickets >> merge_backup.sql
mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
--complete-insert --insert-ignore --create-options --quick --force
--set-charset --disable-keys --quote-names --comments --verbose --database
mydb1 --tables clear_passwords customers customers_free
customers_free_tracking customers_log customers_subscriptions
customers_transactions players >> merge_backup.sql
mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
--complete-insert --insert-ignore --create-options --quick --force
--set-charset --disable-keys --quote-names --comments --verbose --database
content --tables actors_comments actor_collections actor_likes_users
collections dvd_likes_users free_videos genre_collections playlists
poll_votes scenes_comments scenes_ratings_users_new2 scene_collections
scene_likes_users videos_downloaded videos_viewed >> merge_backup.sql
 
The critical flaw here is that the mysqldump program does NOT put the
necessary USE DATABASE statement in each of these dumps since there is
only one DB after the --database apparently. UGH. Nor do I see a command-line
option to force it to output this seemingly obvious statement.
 
It's a pretty significant shortcoming of mysqldump, if you ask me, that I
can't do it the way I had it in the first example, since that's pretty much
the standard SQL convention of db.table.column format. And even more baffling
is why it wouldn't always dump out the USE statement even if there is only
one DB. It's a few characters and would save a lot of headaches in case
someone tried to load their .sql file into the wrong DB by accident.
 
Plus it's not easy to edit a 2.6GB file to manually insert these USE
lines.
 
Is there a way to do this with some command line option I'm not seeing in
the man page?


Mysqldump routines dump, problem with lock tables.

2013-02-04 Thread Rafał Radecki
Hi All.

I use:

# rpm -qa | grep -i percona-server-server
Percona-Server-server-55-5.5.28-rel29.3.388.rhel6.x86_64

My system:

# uname -a;cat /etc/redhat-release
Linux prbc01.mg.local 2.6.32-279.19.1.el6.centos.plus.x86_64 #1 SMP
Wed Dec 19 06:20:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 6.3 (Santiago)

I have a backup script which at some point calls:

mysqldump --default-character-set=utf8 --routines --no-data
--no-create-info --skip-triggers -S /mysql/database.sock -u backup
-pxxx database

and I have error:

mysqldump: Got error: 1045: Access denied for user 'yyy'@'zzz' (using
password: YES) when using LOCK TABLES

So I think that mysqldump locks the tables (--lock-tables) by default.

But for this user:

mysql> show grants for yyy@'zzz';
+------------------------------------------------------------------------------------------------+
| Grants for backup@localhost                                                                     |
+------------------------------------------------------------------------------------------------+
| GRANT SELECT, RELOAD, LOCK TABLES, SHOW VIEW ON *.* TO 'yyy'@'zzz' IDENTIFIED BY PASSWORD ...   |
| ...                                                                                             |
+------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

So why is this error showing?
When I add --single-transaction to mysqldump everything is OK. But I
would like to have this table locked because:

mysql> SELECT ENGINE
    -> FROM information_schema.TABLES
    -> WHERE TABLE_SCHEMA = 'information_schema'
    -> AND TABLE_NAME = 'routines';
+--------+
| ENGINE |
+--------+
| MyISAM |
+--------+

so information_schema.routines is MyISAM.

So why do I get the error about LOCK TABLES?

Best regards,
Rafal Radecki.




RE: Mysqldump routines dump, problem with lock tables.

2013-02-04 Thread Rick James
Do not try to dump or reload information_schema.  It is derived meta 
information, not real tables.
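
If the goal is just the stored routines, a variant that avoids LOCK TABLES
altogether may be worth a try (a sketch based on the quoted command;
--single-transaction, as you found, works as well):

mysqldump --routines --no-data --no-create-info --skip-triggers \
          --skip-lock-tables -S /mysql/database.sock -u backup -p database > routines.sql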

 -Original Message-
 From: Rafał Radecki [mailto:radecki.ra...@gmail.com]
 Sent: Monday, February 04, 2013 12:17 AM
 To: mysql@lists.mysql.com
 Subject: Mysqldump routines dump, problem with lock tables.
 
 Hi All.
 
 I use:
 
 # rpm -qa | grep -i percona-server-server
 Percona-Server-server-55-5.5.28-rel29.3.388.rhel6.x86_64
 
 My system:
 
 # uname -a;cat /etc/redhat-release
 Linux prbc01.mg.local 2.6.32-279.19.1.el6.centos.plus.x86_64 #1 SMP Wed
 Dec 19 06:20:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux Red Hat
 Enterprise Linux Server release 6.3 (Santiago)
 
 I have a backup script which at some point calls:
 
 mysqldump --default-character-set=utf8 --routines --no-data --no-
 create-info --skip-triggers -S /mysql/database.sock -u backup -pxxx
 database
 
 and I have error:
 
 mysqldump: Got error: 1045: Access denied for user 'yyy'@'zzz' (using
 password: YES) when using LOCK TABLES
 
 So I thinke that mysqldump locks the table (--add-locks) by default.
 
 But for this user:
 
 mysql> show grants for yyy@'zzz';
 +----------------------------------------------------------------------------------------------+
 | Grants for backup@localhost                                                                   |
 +----------------------------------------------------------------------------------------------+
 | GRANT SELECT, RELOAD, LOCK TABLES, SHOW VIEW ON *.* TO 'yyy'@'zzz' IDENTIFIED BY PASSWORD ... |
 +----------------------------------------------------------------------------------------------+
 2 rows in set (0.00 sec)
 
 So why is this error showing?
 When I add --single-transaction to mysqldump everything is ok. But I
 would like to have this table locked because:
 
 mysql> SELECT ENGINE
     -> FROM information_schema.TABLES
     -> WHERE TABLE_SCHEMA = 'information_schema'
     -> AND TABLE_NAME = 'routines';
 +--------+
 | ENGINE |
 +--------+
 | MyISAM |
 +--------+
 
 so information_schema.tables is myisam.
 
 So why do I get the error about LOCK TABLES?
 
 Best regards,
 Rafal Radecki.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysqldump Got error 1034 Incorrect key file for table

2012-11-20 Thread Inigo Medina


On Tue, 20 Nov 2012, Ricardo Barbosa wrote:


Hi all.

I'm trying to recover a table for a client, and get the following message:

root@falcon:~# mysqldump -u root -pXXX database
-- MySQL dump 10.13  Distrib 5.1.30, for pc-linux-gnu (i686)
--
-- Host: localhost    Database: database
-- --
-- Server version   5.1.30

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 
*/;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
mysqldump: Got error: 1034: Incorrect key file for table 'table1'; try to 
repair it when using LOCK TABLES
root@falcon:~#

I'm trying recover with

mysql> check table table1;
+-----------------+-------+----------+----------------------------------------------------------+
| Table           | Op    | Msg_type | Msg_text                                                 |
+-----------------+-------+----------+----------------------------------------------------------+
| database.table1 | check | Error    | Incorrect key file for table 'table1'; try to repair it  |
| database.table1 | check | error    | Corrupt                                                  |
+-----------------+-------+----------+----------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> repair table table1;
+-----------------+--------+----------+----------------------------------------------------------+
| Table           | Op     | Msg_type | Msg_text                                                 |
+-----------------+--------+----------+----------------------------------------------------------+
| database.table1 | repair | Error    | Incorrect key file for table 'table1'; try to repair it  |
| database.table1 | repair | error    | Corrupt                                                  |
+-----------------+--------+----------+----------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> lock table table1 write;
ERROR 1034 (HY000): Incorrect key file for table 'table1'; try to repair it
mysql>


Trying repair with myisamchk and mysqlcheck

root@Falcon:~# mysqlcheck -r database table1 -u root -p database.table1
Error    : Incorrect key file for table 'table1'; try to repair it
error    : Corrupt
root@falcon:~#

root@Falcon:~# cd /data/mysql/database
root@Falcon:/data/mysql/database# myisamchk -r *.MYI
- recovering (with sort) MyISAM-table 'table1.MYI'
Data records: 0
- Fixing index 1

-

Any ideas?


Try starting with the MySQL advice for such cases:
http://dev.mysql.com/doc/refman/5.6/en/myisam-repair.html
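
For the archives, a minimal offline-repair sketch along the lines of that page
(assuming a MyISAM table and that mysqld is stopped, or the table is otherwise
not in use):

# try the normal recover first, then the slower but more careful mode
myisamchk --recover /data/mysql/database/table1.MYI
myisamchk --safe-recover --force /data/mysql/database/table1.MYI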

iñ



Regards.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql

Re: How to verify mysqldump files

2012-11-07 Thread Claudio Nanni
Gary,

It is always a good practice to test the whole solution backup/restore.
So nothing is better than testing a restore, actually it should be a
periodic procedure.
As for the validity of the file, that is usually delegated to the operating
system.
If you want to check it yourself you may create a check that looks for
some known patterns in the dump file to recognize that it is correct,
starting perhaps from a file you already know is a 'valid' sample.
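
A minimal sketch of such a check (assuming the default --comments output,
whose last line starts with '-- Dump completed'):

#!/bin/sh
# crude sanity check: the dump is non-empty and ends with mysqldump's completion marker
dump="$1"
test -s "$dump" || { echo "$dump: empty or missing"; exit 1; }
tail -n 1 "$dump" | grep -q '^-- Dump completed' || { echo "$dump: looks truncated"; exit 1; }
echo "$dump: looks complete"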

Cheers

Claudio



2012/11/7 Gary listgj-my...@yahoo.co.uk

 Can anyone suggest how I could verify that the files created by
 mysqldump are okay? They are being created for backup purposes, and
 the last thing I want to do is find out that the backups themselves are
 in some way corrupt.

 I know I can check the output of the command itself, but what if.. I
 don't know... if there are problems with the disc it writes to, or
 something like that. Is there any way to check whether the output file
 is valid in the sense that it is complete and syntactically correct?

 --
 GaryPlease do NOT send me 'courtesy' replies off-list.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
Claudio


Re: How to verify mysqldump files

2012-11-07 Thread Ananda Kumar
you can use a checksum to make sure there is no corruption in the file
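
A sketch of that (assuming GNU coreutils; the file name is hypothetical):

# record a checksum right after the dump is written ...
sha1sum all_databases.sql > all_databases.sql.sha1
# ... and verify it later, e.g. after the file has been copied elsewhere
sha1sum -c all_databases.sql.sha1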

On Wed, Nov 7, 2012 at 6:39 PM, Claudio Nanni claudio.na...@gmail.comwrote:

 Gary,

 It is always a good practice to test the whole solution backup/restore.
 So nothing is better than testing a restore, actually it should be a
 periodic procedure.
 As for the validity of the file usually is delegated to the operating
 system.
 If you want to check it yourself you may create an algorithm that analyses
 some patterns in the dump file to recognize that it is correct,
 starting may be from one that is working as 'valid' sample.

 Cheers

 Claudio



 2012/11/7 Gary listgj-my...@yahoo.co.uk

  Can anyone suggest how I could verify that the files created by
  mysqldump are okay? They are being created for backup purposes, and
  the last thing I want to do is find out that the backups themselves are
  in some way corrupt.
 
  I know I can check the output of the command itself, but what if.. I
  don't know... if there are problems with the disc it writes to, or
  something like that. Is there any way to check whether the output file
  is valid in the sense that it is complete and syntactically correct?
 
  --
  GaryPlease do NOT send me 'courtesy' replies off-list.
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 


 --
 Claudio



Re: How to verify mysqldump files

2012-11-07 Thread Manuel Arostegui
2012/11/7 Ananda Kumar anan...@gmail.com

 you can use checksum to make sure there are not corruption in the file



That would work for the file integrity itself, not for the data integrity
_in_ the file.

As Claudio suggested, probably going through the whole recovery process from
time to time is the best way to make sure the backed-up data is correct.

Manuel.


RE: How to verify mysqldump files

2012-11-07 Thread Stillman, Benjamin
In the past when I used mysqldump, I used a slave database for backups and 
periodically testing restores.

My process for testing:
- Stop the slave process (so the db doesn't get updated).
- Run the backup.
- Create restore_test database.
- Restore the backup to the restore_test database.
- Use mysqldbcompare to compare the two databases.
- Drop restore_test database.
- Start the slave process.

I have this scripted so it just runs and emails me the results.

Useful link:
http://dev.mysql.com/doc/workbench//en/mysqldbcompare.html
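
A rough sketch of how such a script can look (host, credentials, database name
and paths here are hypothetical; the compare step uses mysqldbcompare from
MySQL Utilities):

#!/bin/sh
mysql -u backup -pXXX -e "STOP SLAVE"

mysqldump -u backup -pXXX --single-transaction mydb > /backups/mydb.sql

mysql -u backup -pXXX -e "CREATE DATABASE restore_test"
mysql -u backup -pXXX restore_test < /backups/mydb.sql

# compare the live database against the restored copy
mysqldbcompare --server1=backup:XXX@localhost mydb:restore_test --run-all-tests

mysql -u backup -pXXX -e "DROP DATABASE restore_test"
mysql -u backup -pXXX -e "START SLAVE"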




-Original Message-
From: Gary [mailto:listgj-my...@yahoo.co.uk]
Sent: Wednesday, November 07, 2012 7:52 AM
To: mysql@lists.mysql.com
Subject: How to verify mysqldump files

Can anyone suggest how I could verify that the files created by mysqldump are 
okay? They are being created for backup purposes, and the last thing I want 
to do is find out that the backups themselves are in some way corrupt.

I know I can check the output of the command itself, but what if.. I don't 
know... if there are problems with the disc it writes to, or something like 
that. Is there any way to check whether the output file is valid in the sense 
that it is complete and syntactically correct?

--
GaryPlease do NOT send me 'courtesy' replies off-list.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql




Notice: This communication may contain privileged and/or confidential 
information. If you are not the intended recipient, please notify the sender by 
email, and immediately delete the message and any attachments without copying 
or disclosing them. LBI may, for any reason, intercept, access, use, and 
disclose any information that is communicated by or through, or which is stored 
on, its networks, applications, services, and devices.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: How to verify mysqldump files

2012-11-07 Thread Rick James
A variant on that...
1. pre-validate slave's consistency using pt-table-checksum
2. dump slave, wipe clean, restore
3. RE-validate slave's consistency using pt-table-checksum
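
A sketch of steps 1 and 3 (exact options vary by pt-table-checksum version;
host and user here are hypothetical):

# run on the master: checksums go into percona.checksums and replicate to the
# slave, where any difference shows up in the tool's DIFFS column
pt-table-checksum --replicate=percona.checksums h=master.example.com,u=checksum_user,p=XXX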

 -Original Message-
 From: Stillman, Benjamin [mailto:bstill...@limitedbrands.com]
 Sent: Wednesday, November 07, 2012 7:09 AM
 To: 'Gary'; mysql@lists.mysql.com
 Subject: RE: How to verify mysqldump files
 
 In the past when I used mysqldump, I used a slave database for backups and
 periodically testing restores.
 
 My process for testing:
 - Stop the slave process (so the db doesn't get updated).
 - Run the backup.
 - Create restore_test database.
 - Restore the backup to the restore_test database.
 - Use mysqldbcompare to compare the two databases.
 - Drop restore_test database.
 - Start the slave process.
 
 I have this scripted so it just runs and emails me the results.
 
 Useful link:
 http://dev.mysql.com/doc/workbench//en/mysqldbcompare.html
 
 
 
 
 -Original Message-
 From: Gary [mailto:listgj-my...@yahoo.co.uk]
 Sent: Wednesday, November 07, 2012 7:52 AM
 To: mysql@lists.mysql.com
 Subject: How to verify mysqldump files
 
 Can anyone suggest how I could verify that the files created by mysqldump
 are okay? They are being created for backup purposes, and the last thing
 I want to do is find out that the backups themselves are in some way
 corrupt.
 
 I know I can check the output of the command itself, but what if.. I don't
 know... if there are problems with the disc it writes to, or something like
 that. Is there any way to check whether the output file is valid in the
 sense that it is complete and syntactically correct?
 
 --
 GaryPlease do NOT send me 'courtesy' replies off-list.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 Notice: This communication may contain privileged and/or confidential
 information. If you are not the intended recipient, please notify the
 sender by email, and immediately delete the message and any attachments
 without copying or disclosing them. LBI may, for any reason, intercept,
 access, use, and disclose any information that is communicated by or
 through, or which is stored on, its networks, applications, services, and
 devices.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



mysqldump warning

2012-09-18 Thread nixofortune
Hello everybody.
I'm trying to create a backup of mysql database:

mysqldump --all-databases --routines --master-data=2 > all_databases_`date
+'%y%m%d-%H%M'`.sql

It looks like the backup has been created, but I've got this warning:

Warning: mysqldump: ignoring option '--databases' due to invalid value
'temp_fwd'

Nothing in the error logs; I'm just curious what this warning means.
Has anybody had a similar thing?

Many thanks.
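
For the archives: since --databases is not on the command line shown, the bad
value presumably comes from an option file. A quick way to see what mysqldump
is picking up (just a sketch):

# show every option mysqldump reads from my.cnf / ~/.my.cnf; a stray line like
# "databases = temp_fwd" under [mysqldump] or [client] would explain the warning
mysqldump --print-defaults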


Re: mysqldump not escaping single quotes in field data

2012-06-16 Thread Ananda Kumar
 mysqldump --databases test --tables ananda > test.dmp
mysql> show create table ananda\G
*** 1. row ***
   Table: ananda
Create Table: CREATE TABLE `ananda` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)


On Sat, Jun 16, 2012 at 3:36 AM, Rick James rja...@yahoo-inc.com wrote:

 Are you using an abnormal CHARACTER SET or COLLATION?
 SHOW CREATE TABLE
 Show us the args to mysqldump.

  -Original Message-
  From: James W. McNeely [mailto:j...@newcenturydata.com]
  Sent: Friday, June 15, 2012 10:19 AM
  To: mysql@lists.mysql.com
  Subject: mysqldump not escaping single quotes in field data
 
  My backups from a mysqldump process are useless, because the dump files
  are not escaping single quotes in the data in the fields.
 
  So, O'Brien kills it - instead of spitting out 'O\'Brien'
  it spits out
  'O'Brien'
 
  I don't see anywhere in the documentation about mysqldump where you can
  tweak this kind of thing.
 
  We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.
 
  I tried to enter this into the Oracle support center but all of the
  navigational and SR tabs are gone. Maybe our accounting dept. forgot to
  pay the bill or something.
 
  Thanks,
 
  Jim McNeely
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




mysqldump not escaping single quotes in field data

2012-06-15 Thread James W. McNeely
My backups from a mysqldump process are useless, because the dump files are not 
escaping single quotes in the data in the fields. 

So, O'Brien kills it - instead of spitting out 
'O\'Brien' 
it spits out 
'O'Brien'

I don't see anywhere in the documentation about mysqldump where you can tweak 
this kind of thing. 

We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.

I tried to enter this into the Oracle support center but all of the 
navigational and SR tabs are gone. Maybe our accounting dept. forgot to pay the 
bill or something.

Thanks,

Jim McNeely
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysqldump not escaping single quotes in field data

2012-06-15 Thread Ananda Kumar
I have mysql 5.5.
I am able to use mysqldump to export data with quotes and the dump had
escape character as seen below

LOCK TABLES `ananda` WRITE;
/*!40000 ALTER TABLE `ananda` DISABLE KEYS */;
INSERT INTO `ananda` VALUES
(1,'ananda'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(5,'O\'Brien');
/*!40000 ALTER TABLE `ananda` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;


Import it back to database

Database changed
mysql> drop table ananda;
Query OK, 0 rows affected (0.00 sec)

mysql --database test < test.dmp

mysql> select * from ananda;
+--+-+
| id   | name|
+--+-+
|1 | ananda  |
|2 | aditi   |
|3 | thims   |
|2 | aditi   |
|3 | thims   |
|2 | aditi   |
|3 | thims   |
|2 | aditi   |
|3 | thims   |
|2 | aditi   |
|3 | thims   |
|5 | O'Brien |
+--+-+

Maybe you want to upgrade your database.

On Fri, Jun 15, 2012 at 10:48 PM, James W. McNeely
j...@newcenturydata.comwrote:

 My backups from a mysqldump process are useless, because the dump files
 are not escaping single quotes in the data in the fields.

 So, O'Brien kills it - instead of spitting out
 'O\'Brien'
 it spits out
 'O'Brien'

 I don't see anywhere in the documentation about mysqldump where you can
 tweak this kind of thing.

 We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.

 I tried to enter this into the Oracle support center but all of the
 navigational and SR tabs are gone. Maybe our accounting dept. forgot to pay
 the bill or something.

 Thanks,

 Jim McNeely
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




RE: mysqldump not escaping single quotes in field data

2012-06-15 Thread Rick James
Are you using an abnormal CHARACTER SET or COLLATION?
SHOW CREATE TABLE
Show us the args to mysqldump.

 -Original Message-
 From: James W. McNeely [mailto:j...@newcenturydata.com]
 Sent: Friday, June 15, 2012 10:19 AM
 To: mysql@lists.mysql.com
 Subject: mysqldump not escaping single quotes in field data
 
 My backups from a mysqldump process are useless, because the dump files
 are not escaping single quotes in the data in the fields.
 
 So, O'Brien kills it - instead of spitting out 'O\'Brien'
 it spits out
 'O'Brien'
 
 I don't see anywhere in the documentation about mysqldump where you can
 tweak this kind of thing.
 
 We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.
 
 I tried to enter this into the Oracle support center but all of the
 navigational and SR tabs are gone. Maybe our accounting dept. forgot to
 pay the bill or something.
 
 Thanks,
 
 Jim McNeely
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



killing mysqldump

2012-05-31 Thread Roland Roland

Is it safe to kill a mysqldump while it's in progress?
I mean, aside from losing the dumped file, would it affect the running DB
being dumped?




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: killing mysqldump

2012-05-31 Thread Singer X.J. Wang
Yes, killing a mysqldump is perfectly safe. Caveat being that the dump file
produced may be pretty useless.
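
For the archives, the two usual ways to do it (process and connection ids here
are hypothetical):

# from the shell: kill the mysqldump client; the server notices the broken
# connection and aborts that thread on its own
kill 4321

# or, from another mysql session, find and kill the dump's server-side thread
mysql -uroot -p -e "SHOW FULL PROCESSLIST"
mysql -uroot -p -e "KILL 12345"    # 12345 = Id column from the processlist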

Singer

On Thu, May 31, 2012 at 7:41 AM, Roland Roland r_o_l_a_...@hotmail.comwrote:

 Is it safe to kill a mysqldump while it's in process ?
 i mean aside loosing the dumped file, would it affect the running  DB
 being dumped?



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



-- 


--

Try Pythian managed services risk-free for operational support, 
upgrades/migrations, special projects or increased performance.


How to split a mysqldump file of multiple databases to singlefile databases... SOLVED with script

2012-02-20 Thread Andrés Tello
Today I needed to split a mysqldump -A into its several databases.
I didn't have access to the original source, so I only had the text file to
work with.
It was a webhosting server dump, so there were a LOT of databases...

I split the file with this little script I made:

file=dumpfile.sql          # path to the mysqldump -A output file
nextTable=""
nextStart=0
nextEnd=0
lastTable=""
lastStart=0

for i in `grep -n "^CREATE DATABASE" $file | awk '{print $1":"$7}' | sed s/CREATE://g`
do
i=`echo $i | sed s/\\\`//g`
nextTable=`echo $i | cut -d : -f 2`
nextStart=`echo $i | cut -d : -f 1`

nextEnd=$(( $nextStart - 1 ))


if [ "$lastTable" != "" ]
then
  echo "Tabla: $lastTable from: $lastStart to $nextEnd"
  sed -n "${lastStart},${nextEnd}p" $file > new/$lastTable.portabla.sql
fi

lastTable=$nextTable
lastStart=$(( $nextStart ))

done
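
For the archives, a shorter alternative sketch: if the dump was taken with the
default --comments, mysqldump -A writes a '-- Current Database:' marker before
each database, so GNU csplit can do the splitting:

# writes xx00, xx01, ... one chunk per database
csplit -z all_databases.sql '/^-- Current Database: /' '{*}'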


Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2012-01-04 Thread Halász Sándor
; 2012/01/03 11:52 -0500, Govinda 
...which strikes me as odd (again, showing how new I am to driving from the 
CL), because I do NOT see any entry like this:
/usr/local/mysql/bin/mysqldump

Is mysql a symbolic link?


..which I just (earlier this morning) changed to this:
export 
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/usr/local/mysql/bin/mysqldump:$PATH

You are missing a point: the proper thing for PATH is a directory (or
effective directory), not a runfile within a directory. This, therefore, is more right:

PATH=/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/usr/local/mysql/bin:$PATH

After this, surely, you can run mysqldump or mysql or mysqlbinlog or 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2012-01-04 Thread Govinda
 ..which I just (earlier this morning) changed to this:
 export 
 PATH=/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/usr/local/mysql/bin/mysqldump:$PATH

 You are missing a point, that the proper thing for PATH is directory (or 
 effective directory), not runfile in directory. This, therefore, is more 
 right:
 
 PATH=/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/usr/local/mysql/bin:$PATH
 
 After this, surely, you can run mysqldump or mysql or mysqlbinlog or 
 

OK, yes, that makes sense.  Thanks.  Note, though, that the
'/usr/local/mysql/bin' path is redundant; it was already there (the preceding 
path).

I also discovered from researching the '/etc/paths.d' dir...  where one can 
store files named after commands one wants to run, whose contents are the paths 
to those commands.

I am now able to run mysqldump or mysql directly.  
Thanks to everyone who replied!

For the archives, should any newbie actually find this thread on topic for 
where he/she is stuck, please note this (another great resource for driving 
mysql, and everything else CL related) - 
http://superuser.com/

-Govinda
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2012-01-03 Thread Govinda
Jan, thanks so much for taking the time with me. 
I hesitate to spend much of everyone's inbox space on this as it is getting off 
topic, and I am newbie enough with all things CL that folks will tire of me 
before I am near ready to complete the thread.

Like it took me a bit of time to think/research/experiment ... to even discover 
to the point of responding, that:
- (see below, intermingling text)


 If you're using MacOS X Server, it should be in /usr/bin, which should be 
 in your default $PATH, or else you couldn't do ANYTHING, including ls.
 
 I have notes somewhere in my stuff about how to get $PATH to include where 
 mysql actually lives, but once I realized what the issue was (in my OP this 
 thread) then I was fine with just using a full path for now.  The 
 convenience of a 'fixed' $PATH will be nice, sooner or later (when I get to 
 it), but for now it is just as well that I let it beat into my head how the 
 CL is actually working (working out the full paths)
 
 You should fix the $PATH, as you'll need it for utilities (such as mysqldump) 
 and such.

Well, yes, it will be nice to know how to manipulate the $PATH ... and meanwhile 
using full paths when invoking a command does work.. and forces me to at least 
remember that is how any command works, right?  I mean the shell is always 
(AFAIK) resolving full paths.

 
 You need to edit your shell startup file. For bash, it's .bash_profile in 
 your home directory. Other shells will have their own startup script.

I am using tcsh.  
I found that my $PATH (apparently) lives here:
~/.profile

 My .bash_profile includes:
 
 export 
 PATH=$HOME/bin:/Developer/Tools:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/opt/local/sbin

mine was this:
export PATH=/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:$PATH

..which I just (earlier this morning) changed to this:
export 
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/usr/local/mysql/bin/mysqldump:$PATH

 
 Do echo $SHELL to see which shell you're using.

/bin/tcsh

 
 Do printenv to see all your global shell variables, including $SHELL and 
 $PATH.

Govind% printenv
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin
[snip]
SHELL=/bin/tcsh
HOME=/Users/Govind
USER=Govind
LOGNAME=Govind
[snip]


PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin
I don't understand why this ^^^ is different than what is shown in the 
'~/.profile' file
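
(For the archives: tcsh does not read ~/.profile, which is for sh/bash-style
login shells; a tcsh user would normally put the change in ~/.tcshrc instead,
for example:)

# ~/.tcshrc -- prepend the MySQL bin directory for tcsh sessions
setenv PATH /usr/local/mysql/bin:${PATH}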

 
 What does locate mysqldump tell you?
 
 Govind% locate mysqldump
 
 WARNING: The locate database (/var/db/locate.database) does not exist.
 To create the database, run the following command:
 
 sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist
 
 [message repeated after running the suggested command]
 
 What that does is tells the system launcher to index your disks in the 
 background, so it's no surprise that it would not immediately create a 
 working database.
 
 It should have finished by now, and you should now be able to run the locate 
 command.

right, yes.  Now it works:

Govind% locate mysqldump
[snip]
/usr/local/mysql-5.5.15-osx10.6-x86_64/bin/mysqldump
/usr/local/mysql-5.5.15-osx10.6-x86_64/bin/mysqldumpslow
/usr/local/mysql-5.5.15-osx10.6-x86_64/man/man1/mysqldump.1
/usr/local/mysql-5.5.15-osx10.6-x86_64/man/man1/mysqldumpslow.1
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/include/mysqldump.inc
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/r/mysqldump-compat.result
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/r/mysqldump-max.result
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/r/mysqldump-no-binlog.result
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/r/mysqldump.result
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/r/mysqldump_restore.result
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/r/rpl_mysqldump_slave.result
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/mysqldump-compat.opt
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/mysqldump-compat.test
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/mysqldump-max-master.opt
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/mysqldump-max.test
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/mysqldump-no-binlog-master.opt
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/mysqldump-no-binlog.test
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/mysqldump.test
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/mysqldump_restore.test
/usr/local/mysql-5.5.15-osx10.6-x86_64/mysql-test/t/rpl_mysqldump_slave.test

...which strikes me as odd (again, showing how new I am to driving from the 
CL), because I do NOT see any entry like this:
/usr/local/mysql/bin/mysqldump

...which I know is here (and is what I use (AFAICT), when I successfully use 
the full path to call mysqldump, like so:
Govind% /usr/local/mysql/bin/mysqldump -uroot -p myDBname myTableName  
/Users/Govind/myTestDumpedTable.sql
):

Govind% pwd
/usr/local/mysql

Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2012-01-02 Thread Jan Steinman

On 31 Dec 11, at 20:36, Govinda wrote:
 If you're using MacOS X Server, it should be in /usr/bin, which should be in 
 your default $PATH, or else you couldn't do ANYTHING, including ls.
 
 I have notes somewhere in my stuff about how to get $PATH to include where 
 mysql actually lives, but once I realized what the issue was (in my OP this 
 thread) then I was fine with just using a full path for now.  The convenience 
 of a 'fixed' $PATH will be nice, sooner or later (when I get to it), but for 
 now it is just as well that I let it beat into my head how the CL is actually 
 working (working out the full paths)

You should fix the $PATH, as you'll need it for utilities (such as mysqldump) 
and such.

You need to edit your shell startup file. For bash, it's .bash_profile in 
your home directory. Other shells will have their own startup script. My 
.bash_profile includes:

export 
PATH=$HOME/bin:/Developer/Tools:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/opt/local/sbin

Do echo $SHELL to see which shell you're using.

Do printenv to see all your global shell variables, including $SHELL and 
$PATH.

 What does locate mysqldump tell you?
 
 Govind% locate mysqldump
 
 WARNING: The locate database (/var/db/locate.database) does not exist.
 To create the database, run the following command:
 
  sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist

[message repeated after running the suggested command]

What that does is tells the system launcher to index your disks in the 
background, so it's no surprise that it would not immediately create a working 
database.

It should have finished by now, and you should now be able to run the locate 
command.

 How about echo $PATH?
 
 Govind% echo $PATH
 /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin

I don't have the official binary distribution in front of me, but once you get 
locate working, you can add the path of your MySQL binaries to the $PATH 
variable by appending it (preceded by a colon) to the $PATH declaration in your 
shell's startup script.


Do you often think about difficulties, failure and disasters? Do you keep 
thinking about the negative news you have seen on the TV or read in the 
newspapers? Do you see yourself stuck and unable to improve your life or your 
health? Do you frequently think that you do not deserve happiness or money, or 
that it is too difficult to get them? If you do, then you will close your mind, 
see no opportunities, and behave and react in such ways as to repel people and 
opportunities. You let the power of negative thinking rule your life. -- Ramon 
Sasson
 Jan Steinman, EcoReality Co-op 





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2011-12-31 Thread Jan Steinman
 So then I try (in Mac OS X Terminal, while logged in as me (not root)):
 mysqldump -uroot -p myDBname myTableName > ~/myTestDumpedTable.sql
 ...and again it produces:
 sh: mysqldump: command not found..
 
 that is because Mac OSX is missing a package-managment and so you need
 a little knowledge about your OS to fix the PATH or you have to use
 full-qualified calls or configure/install your software to locations.

How did you get your copy of MySQL?

If you're using MacOS X Server, it should be in /usr/bin, which should be in 
your default $PATH, or else you couldn't do ANYTHING, including ls.

And for the record, there are at least two excellent package managers available 
for Mac OS, and either MacPorts or Fink should append the proper path to their 
binaries to the $PATH variable so they can be found. (Although you need to log 
out and log back in to have your shell's .rc file executed.)

Or else you built from source, in which case, you should know how to fix your 
$PATH.

What does locate mysqldump tell you? How about echo $PATH?


A gentleman of our days is one who has money enough to do what every fool would 
do if he could afford it: that is, consume without producing. -- George Bernard 
Shaw
 Jan Steinman, EcoReality Co-op 





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2011-12-31 Thread Reindl Harald


Am 31.12.2011 23:53, schrieb Jan Steinman:
 And for the record, there are at least two excellent package managers 
 available for Mac OS, and either MacPorts or Fink 

if you call this package management from the view of an operating
system you have never seen a real one - these are ADDITIONAL
programs/managers TRYING to do things OS X does not support





Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2011-12-31 Thread Govinda
 So then I try (in Mac OS X Terminal, while logged in as me (not root)):
 mysqldump -uroot -p myDBname myTableName > ~/myTestDumpedTable.sql
 ...and again it produces:
 sh: mysqldump: command not found..
 
 that is because Mac OSX is missing a package-managment and so you need
 a little knowledge about your OS to fix the PATH or you have to use
 full-qualified calls or configure/install your software to locations.
 
 How did you get your copy of MySQL?

I don't remember for sure.. but think I just went to the MySQL site and 
downloaded whatever looked like the right version for my environment.  I used 
to use the one included with Mac OS 10.5, but when I upgraded to 10.6, then it 
no longer worked.. so I had to re-install MySQL.

 If you're using MacOS X Server, it should be in /usr/bin, which should be in 
 your default $PATH, or else you couldn't do ANYTHING, including ls.
 
 And for the record, there are at least two excellent package managers 
 available for Mac OS, and either MacPorts or Fink should append the proper 
 path to their binaries to the $PATH variable so they can be found. (Although 
 you need to log out and log back in to have your shell's .rc file executed.)
 
 Or else you built from source, in which case, you should know how to fix your 
 $PATH.

You may have guessed I am pretty much in over my head with running servers..  
so I am just glad I have so far managed to do everything I need to develop, if 
not look smart on lists like this  ;-)

I have notes somewhere in my stuff about how to get $PATH to include where 
mysql actually lives, but once I realized what the issue was (in my OP this 
thread) then I was fine with just using a full path for now.  The convenience 
of a 'fixed' $PATH will be nice, sooner or later (when I get to it), but for 
now it is just as well that I let it beat into my head how the CL is actually 
working (working out the full paths)


 What does locate mysqldump tell you?

Govind% locate mysqldump

WARNING: The locate database (/var/db/locate.database) does not exist.
To create the database, run the following command:

  sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist

Please be aware that the database can take some time to generate; once
the database has been created, this message will no longer appear.

Govind% sudo launchctl load -w 
/System/Library/LaunchDaemons/com.apple.locate.plist
Password:
Govind% locate mysqldump

WARNING: The locate database (/var/db/locate.database) does not exist.
To create the database, run the following command:

  sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist

Please be aware that the database can take some time to generate; once
the database has been created, this message will no longer appear.

Huh?  
Password was right.. but   'sudo launchctl load -w 
/System/Library/LaunchDaemons/com.apple.locate.plist'   seemed to have no 
effect.
(Again, way over my head for now).

 How about echo $PATH?

Govind% echo $PATH
/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin


Thanks for poking :-)
-Govinda
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2011-12-29 Thread Govinda
 [snip]

 that is because Mac OSX is missing a package-managment and so you need
 a little knowledge about your OS to fix the PATH or you have to use
 full-qualified calls or configure/install your software to locations
 which are already in the path
 
 which mysqldump as normal user wil tell you where it is really
 [harry@srv-rhsoft:~]$ which mysqldump
 /usr/bin/mysqldump

Thank you Richard, Andy, and Reindl !

Of course you all nailed it.  
Reindl, I especially appreciate you addressing each point, as it pointed me in 
the right direction to fill in the understanding-holes on all those topics!  
Some things I (partially) knew, but did not remember today because I don't have 
to deal in these areas much.

For right now, I just used full paths both for the command and for the output.

Just a side note, that:

Govind% which mysqldump
mysqldump: Command not found.
Govind% which /usr/local/mysql/bin/mysqldump
/usr/local/mysql/bin/mysqldump

kind of defeats the purpose of having to know the path in advance in order to 
use the command to detect the path  ;-)

-Govinda


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2011-12-29 Thread Reindl Harald


Am 29.12.2011 19:21, schrieb Govinda:
 Just a side note, that:
 
 Govind% which mysqldump
 mysqldump: Command not found.
 Govind% which /usr/local/mysql/bin/mysqldump
 /usr/local/mysql/bin/mysqldump
 
 kind of defeats the purpose of having to know the path in advance in order 
 to use the command to detect the path  ;-)

/usr/local/mysql/bin/ is nowhere in the path

'which' can only help you if your standard user has the
directory in his PATH and another user does not

in my opinion this is a configure/compile/install problem:
/usr/local/ is intended to have the standard unix hierarchy
like /usr/local/bin, /usr/local/share, /usr/local/lib, and
normally /usr/local/bin IS in the PATH

so the problem here is that mysql keeps the unix hierarchy inside
its own directory instead of installing directly into /usr/local/

on the other hand, doing this without a package management,
it would be better over the long run to keep it all chained in
one directory as it is, because you can uninstall it by
simply removing the folder

for the handful of things on my linux machines where such non-default
locations exist i usually set symlinks under /usr/local/bin/
to the binaries, so they are separated and, from the user's point
of view, in the PATH, and all works fine

additionally a ls -l /usr/local/bin/ shows where all the stuff
is physically installed, instead of having all the details in mind
or noting them somewhere you will also forget :-)
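
A sketch of that symlink approach, using the paths from this thread
(/usr/local/bin is already in the poster's PATH):

sudo ln -s /usr/local/mysql/bin/mysql     /usr/local/bin/mysql
sudo ln -s /usr/local/mysql/bin/mysqldump /usr/local/bin/mysqldump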






why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2011-12-29 Thread Govinda
Hi Everyone

This should be quick and simple, but after researching on Google quite a bit I 
am still stumped. I am mostly newbie with: server admin, CLI, MySQL.

I am developing my PHP site locally, and now need to move some new MySQL tables 
from my local dev setup to the remote testing site. First step for me is just 
to dump the tables, one at a time.

I successfully login to my local MySQL like so:
Govind% /usr/local/mysql/bin/mysql -uroot

but while in this dir (and NOT logged into MySQL):
/usr/local/mysql/bin

...when I try this:
mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql

..then I keep getting this:
myTestDumpedTable.sql: Permission denied.

Same result if I do any variation on that (try to dump the whole db, drop the 
'-p', etc.)

On StackOverflow I asked this question [1], and replies there led me to trying 
being logged in as root user, and then (the same):
mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql

produces:
sh: mysqldump: command not found

...which is odd because it does produce a zero-KB file named 
myTestDumpedTable.sql in that dir. 

So then I try (in Mac OS X Terminal, while logged in as me (not root)):
mysqldump -uroot -p myDBname myTableName > ~/myTestDumpedTable.sql

...and again it produces:
sh: mysqldump: command not found... 

and again a zero-KB file named myTestDumpedTable.sql, in ~/

I am embarrassed as I am sure this is going to be incredibly simple, or just 
reveal a gaping (basic) hole in my knowledge. .. but I would appreciate any 
help ;-)


[1] 
http://stackoverflow.com/questions/8663454/why-does-basic-mysqldump-on-db-table-fail-with-permission-denied

Thanks
-Govinda
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2011-12-29 Thread Andy Wallace

I would suggest trying:

mysqldump -uroot -p myDBname myTableName > /tmp/myTestDumpedTable.sql

Maybe you don't have permission (or space) to write into /usr/local/mysql/bin. 
That would
be an unusual place for such files.


On 12/29/11 9:15 AM, Govinda wrote:

Hi Everyone

This should be quick and simple, but after researching on Google quite a bit I 
am still stumped. I am mostly newbie with: server admin, CLI, MySQL.

I am developing my PHP site locally, and now need to move some new MySQL tables 
from my local dev setup to the remote testing site. First step for me is just 
to dump the tables, one at a time.

I successfully login to my local MySQL like so:
Govind% /usr/local/mysql/bin/mysql -uroot

but while in this dir (and NOT logged into MySQL):
/usr/local/mysql/bin

...when I try this:
mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql

..then I keep getting this:
myTestDumpedTable.sql: Permission denied.

Same result if I do any variation on that (try to dump the whole db, drop the 
'-p', etc.)

On StackOverflow I asked this question [1], and replies there led me to trying 
being logged in as root user, and then (the same):
mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql

produces:
sh: mysqldump: command not found

...which is odd because it does produce a zero-KB file named 
myTestDumpedTable.sql in that dir.

So then I try (in Mac OS X Terminal, while logged in as me (not root)):
mysqldump -uroot -p myDBname myTableName > ~/myTestDumpedTable.sql

...and again it produces:
sh: mysqldump: command not found...

and again a zero-KB file named myTestDumpedTable.sql, in ~/

I am embarrassed as I am sure this is going to be incredibly simple, or just 
reveal a gaping (basic) hole in my knowledge. .. but I would appreciate any 
help ;-)


[1] 
http://stackoverflow.com/questions/8663454/why-does-basic-mysqldump-on-db-table-fail-with-permission-denied

Thanks
-Govinda


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: why does basic MySQLdump on db table fail with Permission denied, or else it creates a zero KB file

2011-12-29 Thread Reindl Harald


Am 29.12.2011 18:15, schrieb Govinda:
 ...when I try this:
 mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql
 
 ..then I keep getting this:
 myTestDumpedTable.sql: Permission denied.

your unix user has no write permission for myTestDumpedTable.sql;
this has nothing to do with mysql

what about using a target folder your user owns,
and generally using a fully-qualified path for the dump file
instead of spitting it randomly into whatever folder you are in

 Same result if I do any variation on that (try to dump the whole db, drop the 
 '-p', etc.)

because no parameter can change your folder permissions

 On StackOverflow I asked this question [1], and replies there led me to 
 trying being logged in as root user, and then (the same):
 mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql
 
 produces:
 sh: mysqldump: command not found

mysqldump is not in the path of your root user;
change the PATH variable or call mysqldump fully-qualified

 ...which is odd because it does produce a zero-KB file named 
 myTestDumpedTable.sql in that dir. 

it is not odd, it is normal that '> myTestDumpedTable.sql' creates the
file, since what you are doing is output redirection

 So then I try (in Mac OS X Terminal, while logged in as me (not root)):
 mysqldump -uroot -p myDBname myTableName > ~/myTestDumpedTable.sql
 ...and again it produces:
 sh: mysqldump: command not found..

that is because Mac OS X is missing a package management, so you need
a little knowledge about your OS to fix the PATH, or you have to use
fully-qualified calls, or configure/install your software to locations
which are already in the path

'which mysqldump' as a normal user will tell you where it really is:
[harry@srv-rhsoft:~]$ which mysqldump
/usr/bin/mysqldump






Re: FULL mysqldump

2011-12-24 Thread Igor Shevtsov
Hi Reindl,
what do you delete by 
 rm -f /Volumes/dune/mysql_data/bin*
and why?

Many thanks.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: FULL mysqldump

2011-12-24 Thread Reindl Harald


Am 24.12.2011 23:13, schrieb Igor Shevtsov:
 Hi Reindl,
 what do you delete by 
 rm -f /Volumes/dune/mysql_data/bin*
 and why?

this should be /mysql_data/bin* to match the rest of the sample

why?
because this is my script to make a new backup of a mysqld master to
re-init the slave, and in this context the binary logs have to be removed;
they are normally also not needed for backups and get very large over
time







Re: FULL mysqldump

2011-12-23 Thread Reindl Harald


Am 23.12.2011 21:14, schrieb Jim McNeely:
 Hello all, happy holidays!
 
 What is the best way to run a mysqldump to get the tables, the data, the 
 triggers, the views, the procedures, the privileges and users, everything? It 
 seems confusing in the online documentation, or is that just me?

echo "Prepare while the server is still running"
rsync --times --perms --owner --group --recursive --delete-after /mysql_data/ 
/mysql_backup/
echo Offline_sync
/sbin/service mysqld stop
cd /Volumes/dune/mysql_data/
rm -f /Volumes/dune/mysql_data/bin*
rsync --progress --times --perms --owner --group --recursive --delete-after 
/mysql_data/ //mysql_backup/
/sbin/service mysqld start

so you have a REAL consistent backup with minimal downtime that you can restore
on any machine and pull dumps of whatever you really need, instead of
braindead huge dumps with long locking time while they are made, or,
without locking, an inconsistent state

the first rsync runs while the server is online and the second one,
after mysqld is stopped, takes only a few moments because only the data
changed in the meantime has to be synced again

this way you can backup many GB of mysql data with minimal downtime
and 100% consistency





Re: FULL mysqldump

2011-12-23 Thread Claudio Nanni
Hi Jim happy holidays to you!

actually you just need to add the --routines flag:

mysqldump --all-databases --routines > fulldump.sql

with this you get all databases including the system one with the privileges
(mysql); triggers are on by default, and you enable routines with the flag
--routines.
Keep in mind that this method needs any application to be stopped from
writing, either by shutting it down, blocking it at the network level, or locking
the database with something like FLUSH TABLES WITH READ LOCK;
depending on your application, your SLA, etc.

Keep also in mind that for databases larger than a few GB it is not
recommended to use mysqldump (a text dump) but rather a binary method, among which
Percona XtraBackup is in my opinion the golden tool;
derived from InnoBackup, it allows hot backups.
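
For mostly-InnoDB data, a commonly used single-command sketch that avoids the
global read lock (drop --master-data if binary logging is not enabled on the
server):

mysqldump --all-databases --routines --events --triggers \
          --single-transaction --master-data=2 > fulldump.sql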

Cheers

Claudio

2011/12/23 Jim McNeely jmcne...@nwrads.com

 Hello all, happy holidays!

 What is the best way to run a mysqldump to get the tables, the data, the
 triggers, the views, the procedures, the privileges and users, everything?
 It seems confusing in the online documentation, or is that just me?

 Thanks,

 Jim McNeely
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
Claudio


Re: FULL mysqldump

2011-12-23 Thread Wm Mussatto
On Fri, December 23, 2011 12:27, Reindl Harald wrote:


 Am 23.12.2011 21:14, schrieb Jim McNeely:
 Hello all, happy holidays!

 What is the best way to run a mysqldump to get the tables, the data, the
 triggers, the views, the procedures, the privileges and users,
 everything? It seems confusing in the online documentation, or is that
 just me?

 echo Prepare im laufenden Betrieb
 rsync --times --perms --owner --group --recursive --delete-after
 /mysql_data/ /mysql_backup/
 echo Offline_sync
 /sbin/service mysqld stop
 cd /Volumes/dune/mysql_data/
 rm -f /Volumes/dune/mysql_data/bin*
 rsync --progress --times --perms --owner --group --recursive
 --delete-after /mysql_data/ //mysql_backup/
 /sbin/service mysqld start

 so you have a REAL consistent backup with minimal downtime you can restore
 on any machine and pull dumps of whatever you really need instead of
 breaindead hughe dumps with long locking time while they are done or
 withut locking inconsistent state

 the first rsync runs while the server is online and the second one
 after mysqld is stopped takes a few moemnts because only changed data
 in the meantime have to be synced again

 this way you can backup many GB of mysql-data with minimal downtime
 and 100% consistence

This is true if the problem is many relatively small tables.  Not sure how
well it would work if the problem was one or more very large tables.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: FULL mysqldump

2011-12-23 Thread Reindl Harald


Am 23.12.2011 22:42, schrieb Wm Mussatto:
 so you have a REAL consistent backup with minimal downtime you can restore
 on any machine and pull dumps of whatever you really need instead of
 breaindead hughe dumps with long locking time while they are done or
 withut locking inconsistent state

 the first rsync runs while the server is online and the second one
 after mysqld is stopped takes a few moemnts because only changed data
 in the meantime have to be synced again

 this way you can backup many GB of mysql-data with minimal downtime
 and 100% consistence

 This is true if the problem is many relatively small tables.  Not sure how
 well it would work if the problem was one or more very large tables.

does not matter if you look at how rsync works

i am using rsync over the WAN each day for backups of some TB of real data
with only 3-6 GB of traffic each day, to give a specific example of how well
this works

a part of these backups is a mysql server for dbmail with
one table file (innodb) > 10 GB; the whole backup over a
22Mbit wire limited to 800KB/sec usually takes around
30 minutes





Re: Maximum line length or statement length for mysqldump

2011-10-22 Thread Alex Schaft



On 2011/10/21 10:26 AM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaftal...@quicksoftware.co.za

Got my app reading in a dump created with extended-inserts off, and
lumping all of the insert statements together. Works like a charm

Just for laughs, would you mind posting the on-disk size of your database, and 
the restore time with both extended and single inserts?


ibdata1 is currently sitting at 6 gigs. Without extended inserts the restore 
takes about a minute and a half, and with them a couple of seconds. I'm well 
aware of the speed differences. That's why I'm now reading in the non-extended 
dump and joining the values together into big SQL statements. This now takes 
about 10 seconds, but I'm still optimizing that.




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Maximum line length or statement length for mysqldump

2011-10-21 Thread Alex Schaft

On 2011/10/20 03:43 PM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaftal...@quicksoftware.co.za

I realize that, I'm just trying to stop the phone calls saying I
started a restore, and my pc just froze

I might just read all the single insert lines, and get a whole lot of
values clauses together before passing it on to get around the
performance issue while having some idea of progress.

Wouldn't it be better to educate your users, then? :-) Much less trouble for 
you.

Either that, or just do the windows thing: print a progress bar that goes to 
95% in ten seconds, then run the entire restore and then progress the remaining 
5% :-p

You could probably write a sed script that intersperses the INSERT INTO lines 
with some form of progress printing. I remain convinced that users simply need 
to learn patience, though.


Got my app reading in a dump created with extended-inserts off, and 
lumping all of the insert statements together. Works like a charm



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Maximum line length or statement length for mysqldump

2011-10-21 Thread Johan De Meersman
- Original Message -
 From: Alex Schaft al...@quicksoftware.co.za
 
 Got my app reading in a dump created with extended-inserts off, and
 lumping all of the insert statements together. Works like a charm

Just for laughs, would you mind posting the on-disk size of your database, and 
the restore time with both extended and single inserts?


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

Hi,

I'm monitoring a mysqldump via stdout, catching the create table 
commands prior to flushing them to my own text file. Then on the restore 
side, I'm trying to feed these to mysql via the c api so I can monitor 
progress (no of lines in the dump file vs no of lines sent to mysql), 
but the lines are as much as 16k long in the text file times about 110 
of those for one huge insert statement.


What can I pass to mysqldump to get more sane statement lengths?

Alex



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

On 2011/10/20 10:53 AM, Alex Schaft wrote:


What can I pass to mysqldump to get more sane statement lengths?


+1 for extended-inserts...


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Johan De Meersman
- Original Message -
 From: Alex Schaft al...@quicksoftware.co.za
 
 I'm monitoring a mysqldump via stdout, catching the create table
 commands prior to flushing them to my own text file. Then on the
 restore side, I'm trying to feed these to mysql via the c api so I can
 monitor progress (no of lines in the dump file vs no of lines sent to mysql),
 but the lines are as much as 16k long in the text file times about
 110 of those for one huge insert statement.
 
 What can I pass to mysqldump to get more sane statement lengths?

That's a pretty sane statement length, actually. It's a lot more efficient to 
lock the table once, insert a block of records, update the indices once and 
unlock the table; as opposed to doing that for every separate record.

If you really want to go to single-record inserts, you can pass 
--skip-extended-insert. I'm not sure you can control the maximum length of a 
statement beyond one or lots.
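
For the archives: mysqldump builds each multi-row INSERT up to roughly
net_buffer_length bytes, so one middle ground is to keep extended inserts but
shrink that buffer (a sketch; check the exact option name in your version's
--help):

# keep multi-row INSERTs but cap each statement at roughly 64 KB
mysqldump --extended-insert --net_buffer_length=65536 mydb > mydb.sql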




-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Alex Schaft

On 2011/10/20 11:54 AM, Johan De Meersman wrote:

- Original Message -

From: Alex Schaftal...@quicksoftware.co.za

I'm monitoring a mysqldump via stdout, catching the create table
commands prior to flushing them to my own text file. Then on the
restore side, I'm trying to feed these to mysql via the c api so I can
monitor progress (no of lines in the dump file vs no of lines sent to mysql),
but the lines are as much as 16k long in the text file times about
110 of those for one huge insert statement.

What can I pass to mysqldump to get more sane statement lengths?

That's a pretty sane statement length, actually. It's a lot more efficient to 
lock the table once, insert a block of records, update the indices once and 
unlock the table; as opposed to doing that for every separate record.
I realize that; I'm just trying to stop the phone calls saying "I 
started a restore, and my PC just froze".


I might just read all the single insert lines, and get a whole lot of 
values clauses together before passing it on to get around the 
performance issue while having some idea of progress.
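
A shell-level sketch of that batching idea, in case it helps (assumptions: the
dump was made with --skip-extended-insert so there is one INSERT per line, the
string "VALUES " never occurs inside the data, and the batch size of 100 is
arbitrary):

  awk -F'VALUES ' '
    /^INSERT INTO/ {
        vals = $2; sub(/;[[:space:]]*$/, "", vals)      # strip trailing ";"
        if (n > 0 && $1 != head) {                      # table changed: flush
            print head "VALUES " buf ";"; n = 0
        }
        if (n == 0) { head = $1; buf = vals } else { buf = buf "," vals }
        if (++n == 100) { print head "VALUES " buf ";"; n = 0 }
        next
    }
    {   if (n > 0) { print head "VALUES " buf ";"; n = 0 }  # flush before other lines
        print
    }
    END { if (n > 0) print head "VALUES " buf ";" }
  ' single-row-dump.sql > batched-dump.sql

Each emitted statement then carries up to 100 of the original rows, so the
restore regains most of the extended-insert speed while progress can still be
measured against the single-row source file.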


Alex



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Johan De Meersman
- Original Message -
 From: Alex Schaft al...@quicksoftware.co.za
 
 I realize that, I'm just trying to stop the phone calls saying I
 started a restore, and my pc just froze
 
 I might just read all the single insert lines, and get a whole lot of
 values clauses together before passing it on to get around the
 performance issue while having some idea of progress.

Wouldn't it be better to educate your users, then? :-) Much less trouble for 
you.

Either that, or just do the windows thing: print a progress bar that goes to 
95% in ten seconds, then run the entire restore and then progress the remaining 
5% :-p

You could probably write a sed script that intersperses the INSERT INTO lines 
with some form of progress printing. I remain convinced that users simply need 
to learn patience, though.
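
If the restore is fed from a file rather than through the C API, another
common trick is pv, which draws a byte-based progress bar on the pipe
(assuming pv is installed; user and database names are placeholders):

  pv dump.sql | mysql -u root -p dbname

It only measures bytes handed to the client, not rows applied by the server,
but it is usually enough to stop the my-pc-just-froze phone calls.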


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Maximum line length or statement length for mysqldump

2011-10-20 Thread Johnny Withers
I remain convinced that users simply need to learn patience, though.

HAHAHAHAHAHAHAHAHAHAHAHAHAHAHA!!!

Good one!


Sent from my iPad

On Oct 20, 2011, at 8:44 AM, Johan De Meersman vegiv...@tuxera.be wrote:

 I remain convinced that users simply need to learn patience, though.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-25 Thread Shafi AHMED
Thanks dear everyone for sharing your views!

Let me try the workarounds and keep you posted.

 

Best Rgs,

Shafi AHMED

 

 

  _  

From: Suresh Kuna [mailto:sureshkumar...@gmail.com] 
Sent: Saturday, September 24, 2011 8:56 PM
To: Prabhat Kumar
Cc: Dan Nelson; Shafi AHMED; mysql@lists.mysql.com
Subject: Re: mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

 

Hello Shafi, 

Adding to Prabhat alternatives, you can use --force to the mysqldump command
to ignore the errors and continue taking backup.

Regarding the error, we need to check whether the table is present or not
and the engine type specifically.

Thanks 
Suresh Kuna

On Sat, Sep 24, 2011 at 3:31 AM, Prabhat Kumar aim.prab...@gmail.com
wrote:

correct.  mysqldump by default has --lock-tables enabled, which means it
tries to lock all tables to be dumped before starting the dump. And doing
LOCK TABLES t1, t2, ... for really big number of tables will inevitably
exhaust all available file descriptors, as LOCK needs all tables to be
opened.

Workarounds: --skip-lock-tables will disable such a locking completely.
Alternatively, --lock-all-tables will make mysqldump to use FLUSH TABLES
WITH READ LOCK which locks all tables in all databases (without opening
them). In this case mysqldump will automatically disable --lock-tables
because it makes no sense when --lock-all-tables is used. or  try with add
--single_transaction to your mysqldump command


On Fri, Sep 23, 2011 at 9:49 AM, Dan Nelson dnel...@allantgroup.com wrote:

 In the last episode (Sep 23), Shafi AHMED said:
  I have a mysql database of 200G size and the backup fails due to the
 foll.
  Issue.
 
  mysqldump: Got error: 1017: Can't find file:
  './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
 
  Can someone assist pls.?

 $ perror 24
 OS error code  24:  Too many open files

 You need to bump up the max files limit in your OS.  It may be defaulting
 to
 a small number like 1024.  If you can't change that limit, edit your
my.cnf
 and lower the table_open_cache number.  You'll lose performance though,
 since mysql will have to stop accessing some tables to open others.

 http://dev.mysql.com/doc/refman/5.5/en/not-enough-file-handles.html

 --
Dan Nelson
dnel...@allantgroup.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:

 http://lists.mysql.com/mysql?unsub=aim.prab...@gmail.com




--
Best Regards,

Prabhat Kumar
MySQL DBA

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat




-- 
Thanks
Suresh Kuna
MySQL DBA






Re: mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-24 Thread Suresh Kuna
Hello Shafi,

Adding to Prabhat alternatives, you can use --force to the mysqldump command
to ignore the errors and continue taking backup.

Regarding the error, we need to check whether the table is present or not
and the engine type specifically.

Thanks
Suresh Kuna

On Sat, Sep 24, 2011 at 3:31 AM, Prabhat Kumar aim.prab...@gmail.comwrote:

 correct.  mysqldump by default has --lock-tables enabled, which means it
 tries to lock all tables to be dumped before starting the dump. And doing
 LOCK TABLES t1, t2, ... for really big number of tables will inevitably
 exhaust all available file descriptors, as LOCK needs all tables to be
 opened.

 Workarounds: --skip-lock-tables will disable such a locking completely.
 Alternatively, --lock-all-tables will make mysqldump to use FLUSH TABLES
 WITH READ LOCK which locks all tables in all databases (without opening
 them). In this case mysqldump will automatically disable --lock-tables
 because it makes no sense when --lock-all-tables is used. or  try with add
 --single_transaction to your mysqldump command

 On Fri, Sep 23, 2011 at 9:49 AM, Dan Nelson dnel...@allantgroup.com
 wrote:

  In the last episode (Sep 23), Shafi AHMED said:
   I have a mysql database of 200G size and the backup fails due to the
  foll.
   Issue.
  
   mysqldump: Got error: 1017: Can't find file:
   './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
  
   Can someone assist pls.?
 
  $ perror 24
  OS error code  24:  Too many open files
 
  You need to bump up the max files limit in your OS.  It may be defaulting
  to
  a small number like 1024.  If you can't change that limit, edit your
 my.cnf
  and lower the table_open_cache number.  You'll lose performance though,
  since mysql will have to stop accessing some tables to open others.
 
  http://dev.mysql.com/doc/refman/5.5/en/not-enough-file-handles.html
 
  --
 Dan Nelson
 dnel...@allantgroup.com
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/mysql?unsub=aim.prab...@gmail.com
 
 


 --
 Best Regards,

 Prabhat Kumar
 MySQL DBA

 My Blog: http://adminlinux.blogspot.com
 My LinkedIn: http://www.linkedin.com/in/profileprabhat




-- 
Thanks
Suresh Kuna
MySQL DBA


mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-23 Thread Shafi AHMED
 

Folks

 

I have a mysql database of 200G size and the backup fails due to the foll.
Issue.

 

mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

 

Can someone assist pls.?

 

Best Rgs,

Shafi AHMED

 

 

 



Re: mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-23 Thread Johan De Meersman
Someone seems to have deleted that file, which contains the description of the 
corresponding table. Recreate the exact same table (EXACT, including keys, 
indices, datatypes, encoding, the lot) and copy that tables's .frm file to 
replace the lost one. Then pray to the elder gods and restart your mysqld to 
see if it works. 

- Original Message -

 From: Shafi AHMED shafi.ah...@sifycorp.com
 To: mysql@lists.mysql.com
 Sent: Friday, 23 September, 2011 1:42:26 PM
 Subject: mysqldump: Got error: 1017: Can't find file:
 './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK
 TABLES

 Folks

 I have a mysql database of 200G size and the backup fails due to the
 foll. Issue.

 mysqldump: Got error: 1017: Can't find file:
 './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK
 TABLES

 Can someone assist pls.?

 Best Rgs,
 Shafi AHMED

-- 
Bier met grenadyn 
Is als mosterd by den wyn 
Sy die't drinkt, is eene kwezel 
Hy die't drinkt, is ras een ezel 


Re: mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-23 Thread Dan Nelson
In the last episode (Sep 23), Shafi AHMED said:
 I have a mysql database of 200G size and the backup fails due to the foll.
 Issue.
 
 mysqldump: Got error: 1017: Can't find file:
 './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
 
 Can someone assist pls.?

$ perror 24
OS error code  24:  Too many open files

You need to bump up the max files limit in your OS.  It may be defaulting to
a small number like 1024.  If you can't change that limit, edit your my.cnf
and lower the table_open_cache number.  You'll lose performance though,
since mysql will have to stop accessing some tables to open others.

http://dev.mysql.com/doc/refman/5.5/en/not-enough-file-handles.html
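
A quick sketch of checking and raising the limits (the numbers are only
illustrative):

  # what mysqld currently believes it may open
  mysql -e "SHOW VARIABLES LIKE 'open_files_limit'"

  # the OS limit for the shell/user that starts mysqld
  ulimit -n

  # then raise it, e.g. "mysql soft nofile 8192" and "mysql hard nofile 8192"
  # in /etc/security/limits.conf, and/or set open_files_limit = 8192 (and a
  # smaller table_open_cache) under [mysqld] in my.cnf, then restart mysqld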

-- 
Dan Nelson
dnel...@allantgroup.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-23 Thread Prabhat Kumar
Correct. mysqldump by default has --lock-tables enabled, which means it
tries to lock all tables to be dumped before starting the dump, and doing
LOCK TABLES t1, t2, ... for a really big number of tables will inevitably
exhaust all available file descriptors, as LOCK needs all tables to be
opened.

Workarounds: --skip-lock-tables disables such locking completely.
Alternatively, --lock-all-tables makes mysqldump use FLUSH TABLES
WITH READ LOCK, which locks all tables in all databases (without opening
them). In this case mysqldump automatically disables --lock-tables
because it makes no sense when --lock-all-tables is used. Or try adding
--single-transaction to your mysqldump command.
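
Spelled out, with credentials and output redirection omitted for brevity:

  mysqldump --all-databases --skip-lock-tables      # no per-table locks at all
  mysqldump --all-databases --lock-all-tables       # one global FLUSH TABLES WITH READ LOCK
  mysqldump --all-databases --single-transaction    # consistent snapshot; suited to InnoDB tables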

On Fri, Sep 23, 2011 at 9:49 AM, Dan Nelson dnel...@allantgroup.com wrote:

 In the last episode (Sep 23), Shafi AHMED said:
  I have a mysql database of 200G size and the backup fails due to the
 foll.
  Issue.
 
  mysqldump: Got error: 1017: Can't find file:
  './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
 
  Can someone assist pls.?

 $ perror 24
 OS error code  24:  Too many open files

 You need to bump up the max files limit in your OS.  It may be defaulting
 to
 a small number like 1024.  If you can't change that limit, edit your my.cnf
 and lower the table_open_cache number.  You'll lose performance though,
 since mysql will have to stop accessing some tables to open others.

 http://dev.mysql.com/doc/refman/5.5/en/not-enough-file-handles.html

 --
Dan Nelson
dnel...@allantgroup.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=aim.prab...@gmail.com




-- 
Best Regards,

Prabhat Kumar
MySQL DBA

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat


Re: Questions about Mysqldump

2011-09-15 Thread Chris Tate-Davies

Adarsh,

1)

When restoring a mysqldump you have the option of which database to restore.

mysql database1 < backup.sql

2)

You might be able to use the --ignore-table command. I'm not sure if 
this would work


mysqldump --all-databases -q --single-transaction 
--ignore-table=databasetoignore.* | gzip > 
/media/disk-1/Server11_MysqlBackup_15September2011/mysql_15sep2011backup.sql.gz


3)

The docs are here for mysqldump, might be worth a read:

http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html


HTH, Chris



On 15/09/11 06:29, Adarsh Sharma wrote:

Dear all,

Today i backup my all databases (25) by using the below command :-

mysqldump --all-databases -q  --single-transaction | gzip  
/media/disk-1/Server11_MysqlBackup_15September2011/mysql_15sep2011backup.sql.gz


Now I have some doubts or problems that I need to handle in future :

1. Is there any option in restore command ( I use mysql  backup.sql 
)  to store only specific 1 or 2 databases out of this big backup file.
2. While taking mysqldump of all databases , is there any way to leave 
specific databases , I know there is --databases option , but we have 
to name other 23 databases then.
3. What are the settings that are need to changed in my.cnf to make 
backup  restore faster.



Thanks




--

*Chris Tate-Davies*

*Software Development*
Inflight Productions Ltd
Telephone: 01295 269 680
15 Stukeley Street | London | WC2B 5LT
*Email:*chris.tatedav...@inflightproductions.com 
mailto:chris.tatedav...@inflightproductions.com

*Web:*www.inflightproductions.com http://www.inflightproductions.com/





-


Registered Office: 15 Stukeley Street, London WC2B 5LT, England.
Registered in England number 1421223

This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the email by you is prohibited. Please note that the information 
provided in this e-mail is in any case not legally binding; all committing 
statements require legally binding signatures.


http://www.inflightproductions.com



Re: Questions about Mysqldump

2011-09-15 Thread Carsten Pedersen

On 15-09-2011 10:31, Chris Tate-Davies wrote:

Adarsh,

1)

When restoring a mysqldump you have the option of which database to
restore.

mysql database1  backup.sql


Admittedly, it's been a few years since I last used mysqldump, but I 
suspect that it will contain USE commands - as such, it will restore to 
whatever database data was dumped from.


You'll want to have --one-database on the cmd line too.



2)

You might be able to use the --ignore-table command. I'm not sure if
this would work

mysqldump --all-databases -q --single-transaction
--ignore-table=databasetoignore.* | gzip 
/media/disk-1/Server11_MysqlBackup_15September2011/mysql_15sep2011backup.sql.gz


or create a short script that asks mysql for all databases, greps away 
those you don't want to dump, and runs mysqldump on the rest.
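
Something along these lines should do it (a rough sketch; login options are
omitted and the exclusion list is just an example):

  mysql -N -e 'SHOW DATABASES' \
    | grep -Ev '^(information_schema|performance_schema|mysql)$' \
    | xargs mysqldump -q --single-transaction --databases \
    | gzip > backup_without_system_dbs.sql.gz

xargs appends the surviving database names after --databases, so the whole
thing still ends up as a single mysqldump invocation.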


/ Carsten



3)

The docs are here for mysqldump, might be worth a read:

http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html


HTH, Chris



On 15/09/11 06:29, Adarsh Sharma wrote:

Dear all,

Today i backup my all databases (25) by using the below command :-

mysqldump --all-databases -q --single-transaction | gzip 
/media/disk-1/Server11_MysqlBackup_15September2011/mysql_15sep2011backup.sql.gz


Now I have some doubts or problems that I need to handle in future :

1. Is there any option in restore command ( I use mysql  backup.sql )
to store only specific 1 or 2 databases out of this big backup file.
2. While taking mysqldump of all databases , is there any way to leave
specific databases , I know there is --databases option , but we have
to name other 23 databases then.
3. What are the settings that are need to changed in my.cnf to make
backup  restore faster.


Thanks






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Questions about Mysqldump

2011-09-15 Thread Ananda Kumar
Or you can use a for loop: keep a list of only the databases to be exported,
and pass each one via --databases to run mysqldump per database.

On Thu, Sep 15, 2011 at 6:27 PM, Carsten Pedersen cars...@bitbybit.dkwrote:

 On 15-09-2011 10:31, Chris Tate-Davies wrote:

 Adarsh,

 1)

 When restoring a mysqldump you have the option of which database to
 restore.

 mysql database1  backup.sql


 Admittedly, it's been a few years since I last used mysqldump, but I
 suspect that it will contain USE commands - as such, it will restore to
 whatever database data was dumped from.

 You'll want to have --one-database on the cmd line too.



 2)

 You might be able to use the --ignore-table command. I'm not sure if
 this would work

 mysqldump --all-databases -q --single-transaction
 --ignore-table=**databasetoignore.* | gzip 
 /media/disk-1/Server11_**MysqlBackup_15September2011/**
 mysql_15sep2011backup.sql.gz


 or create a short script that asks mysql for all databases, greps away
 those you don't want to dump, and runs mysqldump on the rest.

 / Carsten



 3)

 The docs are here for mysqldump, might be worth a read:

 http://dev.mysql.com/doc/**refman/5.1/en/mysqldump.htmlhttp://dev.mysql.com/doc/refman/5.1/en/mysqldump.html


 HTH, Chris



 On 15/09/11 06:29, Adarsh Sharma wrote:

 Dear all,

 Today i backup my all databases (25) by using the below command :-

 mysqldump --all-databases -q --single-transaction | gzip 
 /media/disk-1/Server11_**MysqlBackup_15September2011/**
 mysql_15sep2011backup.sql.gz


 Now I have some doubts or problems that I need to handle in future :

 1. Is there any option in restore command ( I use mysql  backup.sql )
 to store only specific 1 or 2 databases out of this big backup file.
 2. While taking mysqldump of all databases , is there any way to leave
 specific databases , I know there is --databases option , but we have
 to name other 23 databases then.
 3. What are the settings that are need to changed in my.cnf to make
 backup  restore faster.


 Thanks




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?**unsub=anan...@gmail.comhttp://lists.mysql.com/mysql?unsub=anan...@gmail.com




Questions about Mysqldump

2011-09-14 Thread Adarsh Sharma

Dear all,

Today I backed up all my databases (25) using the command below:

mysqldump --all-databases -q --single-transaction | gzip > 
/media/disk-1/Server11_MysqlBackup_15September2011/mysql_15sep2011backup.sql.gz


Now I have some doubts or problems that I need to handle in future :

1. Is there any option in the restore command (I use mysql < backup.sql)  
to restore only 1 or 2 specific databases out of this big backup file.
2. While taking a mysqldump of all databases, is there any way to leave out 
specific databases? I know there is the --databases option, but then we would 
have to name the other 23 databases.
3. What settings need to be changed in my.cnf to make 
backup & restore faster.



Thanks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



System Reboot while mysqldump

2011-07-27 Thread Adarsh Sharma

Dear all,

Today I got stuck on a strange problem.
I don't know why my Linux server rebooted automatically.

*Server Info :
*Linux Server-5 2.6.16.46-0.12-smp #1 SMP Thu May 17 14:00:09 UTC 2007 
x86_64 x86_64 x86_64 GNU/Linux

Welcome to SUSE Linux Enterprise Server 10 SP1 (x86_64) - Kernel \r (\l).

Mysql version is  mysql  Ver 14.12 Distrib 5.0.45, for unknown-linux-gnu 
(x86_64) using readline 5.0


I researched a lot and tried to find the reasons:

Last time Reboot :-
reboot   system boot  2.6.16.46-0.12-s Thu Jul 28 10:33  
(00:06)   


I read my /var/log/messages but not able to find any pointers for that.
I attached the file too.


Thanks



Jul 28 09:53:25 Server-5 nmbd[2609]:   *
Jul 28 09:58:44 Server-5 sshd[6938]: Accepted keyboard-interactive/pam for 
varsha from port 33193 ssh2
Jul 28 09:59:07 Server-5 sshd[6989]: Accepted keyboard-interactive/pam for 
isha.garg from port 50208 ssh2
Jul 28 10:01:08 Server-5 sshd[7086]: Accepted keyboard-interactive/pam for 
pankaj.kumari from 11.11.11.11 port 40576 ssh2
Jul 28 10:01:17 Server-5 sshd[7113]: Accepted keyboard-interactive/pam for 
pankaj.kumari from 11.11.11.11 port 40587 ssh2
Jul 28 10:01:44 Server-5 sshd[7142]: Accepted keyboard-interactive/pam for 
pankaj.kumari from 11.11.11.11 port 40602 ssh2
Jul 28 10:02:07 Server-5 sshd[7170]: Accepted keyboard-interactive/pam for 
pankaj.kumari from 11.11.11.11 port 40639 ssh2
Jul 28 10:02:50 Server-5 sshd[7245]: Accepted keyboard-interactive/pam for root 
from 11.11.11.11 port 53912 ssh2
Jul 28 10:25:36 Server-5 sshd[7439]: Accepted keyboard-interactive/pam for 
pankaj.kumari from 11.11.11.11 port 45834 ssh2
Jul 28 10:27:35 Server-5 kernel: ata4: CPB flags CMD err, flags=0x11
Jul 28 10:28:35 Server-5 kernel: ata4: EH in ADMA mode, notifier 0x0 
notifier_error 0x0 gen_ctl 0x1501000 status 0x400
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 0: ctl_flags 0x1f, resp_flags 0x0
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 1: ctl_flags 0x1f, resp_flags 0x11
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 2: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 3: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 4: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 5: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 6: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 7: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 8: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 9: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 10: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 11: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 12: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 13: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 14: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 15: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 16: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 17: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 18: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 19: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 20: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 21: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 22: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 23: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 24: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 25: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 26: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 27: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 28: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 29: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: CPB 30: ctl_flags 0x1f, resp_flags 0x1
Jul 28 10:28:35 Server-5 kernel: ata4: Resetting port
Jul 28 10:28:35 Server-5 kernel: ata4.00: exception Emask 0x0 SAct 0x3 SErr 0x0 
action 0x2 frozen
Jul 28 10:28:35 Server-5 kernel: ata4.00: cmd 
60/00:00:c7:22:5e/04:00:2e:00:00/40 tag 0 cdb 0x0 data 524288 in
Jul 28 10:28:35 Server-5 kernel:  res 
40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jul 28 10:28:35 Server-5 kernel: ata4.00: cmd 
60/00:08:c7:1e:5e/04:00:2e:00:00/40 tag 1 cdb 0x0 data 524288 in
Jul 28 10:28:35 Server-5 kernel:  res 
41/00:08:c7:1e:5e/00:00:00:00:00/40 Emask 0x1 (device error)
Jul 28 10:28:35 Server-5 kernel: ata4: soft resetting port
Jul 28 10:28:35 Server-5 kernel: ata4: SATA link up 3.0 Gbps (SStatus 123 
SControl 300)
Jul 28 10:28:35 

Re: System Reboot while mysqldump

2011-07-27 Thread Adarsh Sharma

Thanks Jon,

I couldn't locate my error.log.

But i found one clue :
My server reboots at :

reboot   system boot  2.6.16.46-0.12-s Thu Jul 28 10:32  (00:24)   
reboot   system boot  2.6.16.46-0.12-s Wed Jul 27 18:47  (16:10)   
reboot   system boot  2.6.16.46-0.12-s Wed Jul 27 17:40  (17:17)   
reboot   system boot  2.6.16.46-0.12-s Wed Jul 27 16:58  (17:59)   
reboot   system boot  2.6.16.46-0.12-s Wed Jul 27 16:41  (00:14)   
reboot   system boot  2.6.16.46-0.12-s Tue Jul 26 10:50 (1+06:05) 


And below shows that

Server-5:/var/log # grep CPU /var/log/messages | more

Jul 25 10:54:31 Server-5 rcpowersaved: enter 'powernow_k8' into 
CPUFREQD_MODULE in /etc/powersave/cpufreq.
Jul 26 10:52:04 Server-5 rcpowersaved: enter 'powernow_k8' into 
CPUFREQD_MODULE in /etc/powersave/cpufreq.
Jul 27 16:42:43 Server-5 rcpowersaved: enter 'powernow_k8' into 
CPUFREQD_MODULE in /etc/powersave/cpufreq.
Jul 27 16:58:57 Server-5 rcpowersaved: enter 'powernow_k8' into 
CPUFREQD_MODULE in /etc/powersave/cpufreq.
Jul 27 17:41:09 Server-5 rcpowersaved: enter 'powernow_k8' into 
CPUFREQD_MODULE in /etc/powersave/cpufreq.
Jul 27 18:47:55 Server-5 rcpowersaved: enter 'powernow_k8' into 
CPUFREQD_MODULE in /etc/powersave/cpufreq.
Jul 28 10:33:49 Server-5 rcpowersaved: enter 'powernow_k8' into 
CPUFREQD_MODULE in /etc/powersave/cpufreq.


I tried to find an explanation for it.
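
For what it's worth, a few generic places to look (standard commands; the
syslog path matches the SUSE layout above):

  last -x reboot shutdown | head       # when and how the box went down
  grep -iE 'panic|oops|mce' /var/log/messages | tail
  dmesg | grep -i ata                  # the ata4 timeouts above may point at a disk/controller problem

A reboot with nothing in the logs during a heavy mysqldump frequently turns
out to be hardware (disk, power, overheating) or a watchdog rather than MySQL
itself.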

jon.siebe...@gmail.com wrote:

anything in error log by chance?

On Jul 28, 2011 1:07am, Adarsh Sharma adarsh.sha...@orkash.com wrote:





 Dear all,



 Today I got stucked  around a strange problem.

 Don't know why me linux reboot automatically.



 Server Info :

 Linux Server-5 2.6.16.46-0.12-smp #1 SMP Thu May 17 14:00:09 UTC
 2007 x86_64 x86_64 x86_64 GNU/Linux

 Welcome to SUSE Linux Enterprise Server 10 SP1 (x86_64) - Kernel \r
 (\l).



 Mysql version is  mysql  Ver 14.12 Distrib 5.0.45, for
 unknown-linux-gnu (x86_64) using readline 5.0



 I researched a lot  try to find the reasons :



 Last time Reboot :-

  reboot   system boot  2.6.16.46-0.12-s Thu Jul 28 10:33 
 (00:06)   




 I read my /var/log/messages but not able to find any pointers for that.

 I attached the file too.





 Thanks








 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



mysqldump --ignore-table

2011-06-08 Thread zia mohaddes
Dear all,

I am currently trying to figure out how I could ignore multiple tables in
MySQL using a simple regex. For example, I have multiple tables named
mytable1, mytable2, ..., mytable100, and I would like these tables to be
ignored when doing a mysqldump, by doing something like this:
mysqldump --ignore-table=mydb.table*

 I am wondering if there is any way to do something like this in MySQL!


Thank you kindly for the help,
regards,
daniel


Re: mysqldump --ignore-table

2011-06-08 Thread Claudio Nanni
Hi Daniel,

you can use a workaround from the shell,

cd /path/to/your/database  (e.g.: cd /var/lib/mysql/mydb)

ls *table*.frm \
  | awk -F. '{print "--ignore-table=mydb." $1}' \
  | xargs mysqldump -uroot -ptoor --your-flags mydb

It's not that beautiful but it should work.
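
An alternative sketch that does not need filesystem access to the datadir is
to let information_schema build the option list (login options omitted;
'mydb'/'mytable%' are the names from the question; it assumes table names
contain no whitespace, since the command substitution is deliberately left
unquoted so that each --ignore-table becomes its own argument):

  SQL="SELECT CONCAT('--ignore-table=', table_schema, '.', table_name)
       FROM information_schema.tables
       WHERE table_schema = 'mydb' AND table_name LIKE 'mytable%'"
  mysqldump $(mysql -N -e "$SQL") mydb > mydb_without_mytables.sql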

Claudio

2011/6/8 zia mohaddes zia.si...@gmail.com

 Dear all,

 I am currently trying to figure-out how I could ignore multiple tables in
 mysql using a simple a regex. For example I have multiple tables which have
 the following structure: mytable1, mytable2, ..,mytable100. And I
 would like these tables to be ignore when doing mysqldump by doing
 something
 like this:
 mysqldump --ignore-table = mydb.table*

  I am wondering if there is any way do something like this in mysql!


 Thank you kindly for the help,
 regards,
 daniel




-- 
Claudio


Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-06 Thread Johan De Meersman

I haven't bothered to look for the bug, but it seems to me to be quite 
reasonable default behaviour to lock the whole lot when you're dumping 
transactional tables - it ensures you dump all tables from the same consistent 
view.

I would rather take this up with the ZRM people - it should just work. 3.3 
came out last week, you may want to have a look at whether it's already been 
addressed there.

Harald's solution is, as usual, technically superior; but he keeps having 
trouble understanding people not wanting to change their entire setup because 
their solution isn't optimal :-)


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-06 Thread agd85


On Mon, 06 Jun 2011 12:44 +0200, Johan De Meersman
vegiv...@tuxera.be wrote:
 
 I haven't bothered to look for the bug, but it seems to me to be quite
 reasonable default behaviour to lock the whole lot when you're dumping
 transactional tables - it ensures you dump all tables from the same
 consistent view.

thanks for the comment.

 I would rather take this up with the ZRM people - it should just work.

this,

  http://bugs.mysql.com/bug.php?id=61414

suggests the same.  so, i've already started that discussion as well,

  http://forums.zmanda.com/showthread.php?t=3703

Excluding 'performance_schema' appears to eliminate the error, and it
seems it does NOT cause a reliability-of-the-backup problem.

 3.3 came out last week, you may want to have a look at wether it's
 already been adressed there.

I believe that's an Amanda 3.3 release you're referring to. ZRM is still
at 2.2,

 http://www.zmanda.com/download-zrm.php
 ZRM for MySQL, Version 2.2 is the Latest Stable Release

and, i've MySQL-zrm-2.2.0-1.noarch installed.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-06 Thread Johan De Meersman

- Original Message -
 From: ag...@airpost.net
 
 Excluding 'performance_schema' appears to eliminate the error. And it
 seems does NOT cause a reliability-of-the-backup problem.

Hah, no, backing that up is utterly pointless. Never noticed it doing that.

It's basically a virtual schema that contains realtime information about the 
database, intended to replace a lot of show tables parsing and similar mayhem 
with simple select statements.


 I believe that's an Amanda 3.3 release you're referring to. ZRM is
 still at 2.2,

No, I do mean 3.3. Apparently the free downloadable version is quite a bit 
behind the commercial one. Maybe that's why I never noticed it backing up the 
performance schema, too :-)

They're not really that expensive, either, it might well be worth it to grab 
licenses and support.



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-06 Thread agd85


On Mon, 06 Jun 2011 18:54 +0200, Johan De Meersman
vegiv...@tuxera.be wrote:
  Excluding 'performance_schema' appears to eliminate the error. And it
  seems does NOT cause a reliability-of-the-backup problem.
 
 Hah, no, backing that up is utterly pointless.

that's a useful/final confirmation. thx.

 No, I do mean 3.3. Apparently the free downloadable version is quite a
 bit behind the commercial one. Maybe that's why I never noticed it
 backing up the performance schema, too :-)

i didn't catch that ZRM's commercial version was at 3.3!  thx.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread agd85
i use ZRM to do backups of my databases.

with v5.1.x, this works fine,

  mysql-zrm-scheduler --now --backup-set manual --backup-level 0

to execute a manual backup.

i recently upgraded from v5.1.x - v5.5.12,

 mysqladmin -V
  mysqladmin  Ver 8.42 Distrib 5.5.12, for Linux on i686

now, at exec of that backup cmd, i see an ERROR @ console,

...
manual:backup:INFO: PHASE START: Creating raw backup
manual:backup:INFO: Command used for raw backup is
/usr/share/mysql-zrm/plugins/socket-copy.pl
--mysqlhotcopy=/usr/bin --host=localhost --port=3306
--socket=/var/cache/mysql/mysql.sock --quiet  mysql 
/var/mysql-bkup/manual/20110605131003 
/var/cache/tmp/bZvaQFwQY2 21
manual:backup:INFO: raw-databases=mysql
manual:backup:INFO: PHASE END: Creating raw backup
manual:backup:INFO: PHASE START: Creating logical backup
manual:backup:WARNING: The database(s) drupal6
performance_schema will be backed up in logical mode since they
contain tables that use a transactional engine.
manual:backup:INFO: Command used for logical backup is
/usr/bin/mysqldump --opt --extended-insert --create-options
--default-character-set=utf8 --routines --host=localhost
--port=3306 --socket=/var/cache/mysql/mysql.sock --databases
drupal6 performance_schema > 
/var/mysql-bkup/manual/20110605131003/backup.sql
mysqldump: Got error: 1142: SELECT,LOCK TABL command denied to
user 'root'@'localhost' for table 'cond_instances' when using
LOCK TABLES
-- manual:backup:ERROR: mysqldump did not succeed.
 Command used is /usr/bin/mysqldump --opt --extended-insert
 --create-options --default-character-set=utf8 --routines
 --host=localhost --port=3306
 --socket=/var/cache/mysql/mysql.sock --databases drupal6
 performance_schema >
 /var/mysql-bkup/manual/20110605131003/backup.sql
manual:backup:INFO: PHASE START: Cleanup
manual:backup:INFO: backup-status=Backup failed
...


reading up on the error at,

 http://bugs.mysql.com/bug.php?id=33762
 http://bugs.mysql.com/bug.php?id=49633

it looks to do with mysqldump itself.  i modified in /etc/my.cnf

...
[mysqldump]
quick
quote-names
max_allowed_packet = 8M
+   skip-lock-tables
...

but that doesn't seem to make any difference.

something's changed between 5.1.x and 5.5.x.

what do i need to modify to get past this error?
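
(Judging by the rest of this thread, the workaround that appears to work is
simply not dumping performance_schema at all. A sketch of the equivalent
manual command, reusing the names and paths from the ZRM log above:

  /usr/bin/mysqldump --opt --extended-insert --create-options \
      --default-character-set=utf8 --routines --host=localhost --port=3306 \
      --socket=/var/cache/mysql/mysql.sock \
      --databases drupal6 > backup.sql

i.e. drop performance_schema from the --databases list; as noted elsewhere in
the thread, there is nothing in it worth backing up.)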

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread Reindl Harald
have you checked your permissions table to see whether all privileges are active for root,
and have you started mysql_upgrade after all updates?

Am 05.06.2011 22:20, schrieb ag...@airpost.net:
 i use ZRM to do backups of my databases.
 
 with v5.1.x, this works fine,
 
   mysql-zrm-scheduler --now --backup-set manual --backup-level 0
 
 to execute a manual backup.
 
 i recently upgraded from v5.1.x - v5.5.12,
 
  mysqladmin -V
   mysqladmin  Ver 8.42 Distrib 5.5.12, for Linux on i686
 
 now, at exec of that backup cmd, i see an ERROR @ console,
 
   ...
   manual:backup:INFO: PHASE START: Creating raw backup
   manual:backup:INFO: Command used for raw backup is
   /usr/share/mysql-zrm/plugins/socket-copy.pl
   --mysqlhotcopy=/usr/bin --host=localhost --port=3306
   --socket=/var/cache/mysql/mysql.sock --quiet  mysql 
   /var/mysql-bkup/manual/20110605131003 
   /var/cache/tmp/bZvaQFwQY2 21
   manual:backup:INFO: raw-databases=mysql
   manual:backup:INFO: PHASE END: Creating raw backup
   manual:backup:INFO: PHASE START: Creating logical backup
   manual:backup:WARNING: The database(s) drupal6
   performance_schema will be backed up in logical mode since they
   contain tables that use a transactional engine.
   manual:backup:INFO: Command used for logical backup is
   /usr/bin/mysqldump --opt --extended-insert --create-options
   --default-character-set=utf8 --routines --host=localhost
   --port=3306 --socket=/var/cache/mysql/mysql.sock --databases
   drupal6 performance_schema  
   /var/mysql-bkup/manual/20110605131003/backup.sql
   mysqldump: Got error: 1142: SELECT,LOCK TABL command denied to
   user 'root'@'localhost' for table 'cond_instances' when using
   LOCK TABLES
 -- manual:backup:ERROR: mysqldump did not succeed.
Command used is /usr/bin/mysqldump --opt --extended-insert
--create-options --default-character-set=utf8 --routines
--host=localhost --port=3306
--socket=/var/cache/mysql/mysql.sock --databases drupal6
performance_schema  
/var/mysql-bkup/manual/20110605131003/backup.sqlmanual:backup:INFO:
PHASE START: Cleanup
   manual:backup:INFO: backup-status=Backup failed
   ...
 
 
 reading up on the error at,
 
  http://bugs.mysql.com/bug.php?id=33762
  http://bugs.mysql.com/bug.php?id=49633
 
 it looks to do with mysqldump itself.  i modified in /etc/my.cnf
 
   ...
   [mysqldump]
   quick
   quote-names
   max_allowed_packet = 8M
 +   skip-lock-tables
   ...
 
 but that doesn't seem to make any difference.
 
 something's changed between 5.1.x  5.5.x.
 
 what do i need to modify to get past this error?
 

-- 

Mit besten Grüßen, Reindl Harald
the lounge interactive design GmbH
A-1060 Vienna, Hofmühlgasse 17
CTO / software-development / cms-solutions
p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
icq: 154546673, http://www.thelounge.net/

http://www.thelounge.net/signature.asc.what.htm



signature.asc
Description: OpenPGP digital signature


Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread agd85

hi,

On Sun, 05 Jun 2011 22:24 +0200, Reindl Harald
h.rei...@thelounge.net wrote:
 have you checked you permissions-table if all privileges are active for root

i've got,

 mysql> show grants for 'root'@'localhost';
 | GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*3...4' WITH GRANT OPTION |
 | GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION |

and,

 mysql> show grants for 'drupal_admin'@'localhost';
 | GRANT USAGE ON *.* TO 'drupal_admin'@'localhost' IDENTIFIED BY PASSWORD '*D...D' |
 | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON `drupal6`.* TO 'drupal_admin'@'localhost' |
 17 rows in set (0.00 sec)

are these sufficient?  these permissions worked fine as far as i could
tell for the v5.1.x install i had.

 and have you started ymsql_upgrade after all updates?

yes, i'd already executed 'mysql_upgrade', following the instructions
here:
 http://dev.mysql.com/doc/refman/5.5/en/upgrading-from-previous-series.html

checking,

 mysql_upgrade
  Looking for 'mysql' as: mysql
  Looking for 'mysqlcheck' as: mysqlcheck
  This installation of MySQL is already upgraded to 5.5.12, use --force
  if you still need to run mysql_upgrade


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread Reindl Harald
nobody is interested in the grant statements;
maybe use phpmyadmin for a clearer display

mysql> select * from mysql.user where user='root' limit 1;
  Host: localhost   User: root   Password: *
  Select_priv ... Create_tablespace_priv: Y   (every *_priv column is Y)
  ssl_type / ssl_cipher / x509_issuer / x509_subject: (empty)
  max_questions / max_updates / max_connections / max_user_connections: 0
  plugin / authentication_string: (empty)
1 row in set (0.00 sec)



Am 05.06.2011 23:05, schrieb ag...@airpost.net:
 
 hi,
 
 On Sun, 05 Jun 2011 22:24 +0200, Reindl Harald
 h.rei...@thelounge.net wrote:
 have you checked you permissions-table if all privileges are active for root
 
 i've got,
 
  mysql> show grants for 'root'@'localhost';
  | GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*3...4' WITH GRANT OPTION |
  | GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION |



signature.asc
Description: OpenPGP digital signature


Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread agd85
fwiw, others are seeing this. e.g., in addition to the two bugs i'd
already referenced,

 http://www.directadmin.com/forum/showthread.php?p=202053

and one

 
http://qa.lampcms.com/q122897/Can-t-backup-mysql-table-with-mysqldump-SELECT-LOCK-TABL-command

claims a solution

 Add --skip-add-locks to your mysqldump command

which, having added as i mentioned above, to the [mysqldump] section of
/etc/my.cnf, does NOT make a difference for me.


On Sun, 05 Jun 2011 23:19 +0200, Reindl Harald
h.rei...@thelounge.net wrote:
 the grant statements does nobody interest
 mysql select * from mysql.user where user='root' limit 1;


and, my result for your cmd,


mysql> select * from mysql.user where user='root' limit 1;
  Host: localhost   User: root   Password: *3..4
  Select_priv ... Create_tablespace_priv: Y   (every *_priv column is Y)
  ssl_type / ssl_cipher / x509_issuer / x509_subject: (empty)
  max_questions / max_updates / max_connections / max_user_connections: 0
  plugin: (empty)   authentication_string: NULL
1 row in set (0.06 sec)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread Reindl Harald
hm - bad

i would use a replication slave and stop it for consistent backups,
because dumb locks are not really a good solution, regardless of
whether this normally works

Am 05.06.2011 23:26, schrieb ag...@airpost.net:
 fwiw, others are seeing this. e.g., in addition to the two bugs i'd
 already referenced,
 
  http://www.directadmin.com/forum/showthread.php?p=202053
 
 and one
 
  
 http://qa.lampcms.com/q122897/Can-t-backup-mysql-table-with-mysqldump-SELECT-LOCK-TABL-command
 
 claims a solution
 
  Add --skip-add-locks to your mysqldump command
 
 which, having added as i mentioned above, to the [mysqldump] section of
 /etc/my.cnf, does NOT make a difference for me.
 
 
 On Sun, 05 Jun 2011 23:19 +0200, Reindl Harald
 h.rei...@thelounge.net wrote:
 the grant statements does nobody interest
 mysql select * from mysql.user where user='root' limit 1;
 
 
 and, my result for your cmd,
 
 
 mysql> select * from mysql.user where user='root' limit 1;
   Host: localhost   User: root   Password: *3..4
   Select_priv ... Create_tablespace_priv: Y   (every *_priv column is Y)
   ssl_type / ssl_cipher / x509_issuer / x509_subject: (empty)
   max_questions / max_updates / max_connections / max_user_connections: 0
   plugin: (empty)   authentication_string: NULL
 1 row in set (0.06 sec)
 
 

-- 

Mit besten Grüßen, Reindl Harald
the lounge interactive design GmbH
A-1060 Vienna, Hofmühlgasse 17
CTO / software-development / cms-solutions
p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
icq: 154546673, http://www.thelounge.net/

http://www.thelounge.net/signature.asc.what.htm



signature.asc
Description: OpenPGP digital signature


Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread Reindl Harald
BTW
WHY is everybody answering to the list AND the author of the last post?
this results in getting every message twice :-(

Am 05.06.2011 23:26, schrieb ag...@airpost.net:
 fwiw, others are seeing this. e.g., in addition to the two bugs i'd
 already referenced,
 
  http://www.directadmin.com/forum/showthread.php?p=202053
 
 and one
 
  
 http://qa.lampcms.com/q122897/Can-t-backup-mysql-table-with-mysqldump-SELECT-LOCK-TABL-command
 
 claims a solution
 
  Add --skip-add-locks to your mysqldump command
 
 which, having added as i mentioned above, to the [mysqldump] section of
 /etc/my.cnf, does NOT make a difference for me.
 
 
 On Sun, 05 Jun 2011 23:19 +0200, Reindl Harald
 h.rei...@thelounge.net wrote:
 the grant statements does nobody interest
 mysql select * from mysql.user where user='root' limit 1;
 
 
 and, my result for your cmd,
 
 
 mysql> select * from mysql.user where user='root' limit 1;
   Host: localhost   User: root   Password: *3..4
   Select_priv ... Create_tablespace_priv: Y   (every *_priv column is Y)
   ssl_type / ssl_cipher / x509_issuer / x509_subject: (empty)
   max_questions / max_updates / max_connections / max_user_connections: 0
   plugin: (empty)   authentication_string: NULL
 1 row in set (0.06 sec)
 
 

-- 

Mit besten Grüßen, Reindl Harald
the lounge interactive design GmbH
A-1060 Vienna, Hofmühlgasse 17
CTO / software-development / cms-solutions
p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
icq: 154546673, http://www.thelounge.net/

http://www.thelounge.net/signature.asc.what.htm



signature.asc
Description: OpenPGP digital signature


Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread agd85
On Sun, 05 Jun 2011 23:30 +0200, Reindl Harald
h.rei...@thelounge.net wrote:
 BTW
 WHY is everybody ansering to the list AND the author of the last post?
 this reults in get every message twice :-(

Reply - sends to ONLY the From == h.rei...@thelounge.net
Reply to all sends to BOTH the From == h.rei...@thelounge.net AND the
list.

I suppose if the list manager software, or your client were configured
differently ...


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread agd85

On Sun, 05 Jun 2011 23:29 +0200, Reindl Harald
h.rei...@thelounge.net wrote:
 i would use a replication slave and stop him for consistent backups
 because dumb locks are not really a good solution independent
 if this works normally

unfortunately, i have no idea what that means.

something's apparently broken with mysqldump -- enough so that lots of
people are seeing and reporting this same error after the 5.1 - 5.5
upgrade.

why would setting up a replication slave be necessary or a good solution
to the problem?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread Reindl Harald


Am 05.06.2011 23:49, schrieb ag...@airpost.net:
 
 On Sun, 05 Jun 2011 23:29 +0200, Reindl Harald
 h.rei...@thelounge.net wrote:
 i would use a replication slave and stop him for consistent backups
 because dumb locks are not really a good solution independent
 if this works normally
 
 unfortunately, i have no idea what that means.
 
 something's apparently broken with mysqldump -- enough so that lots of
 people are seeing and reporting this same error after the 5.1 - 5.5
 upgrade.
 
 why would setting up a replication slave be necessary or a good solution
 to the problem?

because there is no lock on any production table?
have fun using mysqldump with really huge databases :-)

a replication slave is kept in sync; you can stop the slave, copy
the whole datadir, and after starting the slave again it will
apply all changes from the binary log
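
A minimal sketch of that approach, assuming a second mysqld (the slave)
already runs on the same host with its own socket and config file -- all
paths here are placeholders. Shutting the slave down completely keeps the
copied datadir consistent for InnoDB as well:

  mysqladmin -S /var/lib/mysql-slave/mysql.sock shutdown
  rsync -a /var/lib/mysql-slave/data/ /backups/mysql-$(date +%F)/
  mysqld_safe --defaults-file=/etc/my-slave.cnf &

Once the slave is back up it catches up from the master's binary log, and the
master never takes a lock.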



signature.asc
Description: OpenPGP digital signature


Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread agd85
i still have no idea why this is necessary.

there seems to be a bug, problem, misconfiguration, etc.

wouldn't it make some sense to try to FIX it, rather than setting up a
completely different server?

perhaps someone with an idea of the problem and its solution will be
able to chime in.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: upgraded from 5.1-5.5. now getting a mysqldump ERROR 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES. a bug or my config?

2011-06-05 Thread Reindl Harald


Am 05.06.2011 23:55, schrieb ag...@airpost.net:
 i still have no idea why this is necessary.

take it or not

it is a professional solution which works here every day for
databases of 20 GB, using rsync,
without interrupting or locking mysqld for a second

and it is much faster

 there seems to be a but, problem, misconfiguration, etc.
 wouldn't it make some sense to try to FIX it, rather than setting up a
 completely different server?

it takes 5 minutes to start a replication slave on the same machine
with its own socket/port

 perhaps someone with an idea of the problem and its solution will be
 able to chime in.

wait until it is fixed or think about a better solution
which will work in the future



signature.asc
Description: OpenPGP digital signature


Re: getting procedure code via mysqldump

2011-03-30 Thread Claudio Nanni
In case you use a linux or unix system, to strip off the comments in linux
bash is very easy, you can use this simple bash command:

grep -v '^/\*' yourdumpfile.sql > yourdumpfilewithoutcomments.sql

this will create a new dump without comments.

Cheers

Claudio


2011/3/30 Shawn Green (MySQL) shawn.l.gr...@oracle.com

 On 3/29/2011 19:09, John G. Heim wrote:

 I would like to use mysqldump to get a copy of the code for a stored
 procedure in a format that is similar to the code I used to create it.
 The problem is that I'm blind and I have to listen to the code to debug
 it. I think I have a file containing the code that I used to create the
 stored procedure but I want to make absolutely sure.

 This is what I've tried:

 mysqldump -p --routines --no-create-info --no-data --no-create-db
 --skip-opt --skip-comments --compatible=ansi --result=routines.sql
 database

 My problem is that it generates a file with a lot of lines I don't
 understand. For example:

 /*!50003 CREATE*/ /*!50020 DEFINER=`root`@`localhost`*/ /*!50003
 PROCEDURE `TIMETABLE_SYNC`()

 That appears to be the line to create the stored procedure
 'timetable_sync'. But what's with all the other stuff on that line? Can
 i get rid of it?




 As Claudio mentioned, those are version-sensitive comments. In order for a
 MySQL server to not ignore the comment, it must be a version equal to or
 greater than the value tagged in the comment.

 For example, stored procedures did not exist before version 5.0.3. So all
 of the stored procedure will be enclosed with comments that look like

 /*!50003   */

 We enhanced the security of the stored procedures themselves by adding the
 DEFINER= option to the definition. We did this in version 5.0.20. That is
 why that part of the stored procedure was dumped using the comment tags

 /*!50020   */

 Unfortunately, I have no way at this time to separate the version-specific
 comments from the rest of the dump. Perhaps someone better than I at using
 grep, sed, or awk could produce a script to strip those comments and share
 with the list?

 Yours,
 --
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=claudio.na...@gmail.com




-- 
Claudio


Re: getting procedure code via mysqldump

2011-03-30 Thread Johan De Meersman
Might it not be easier to use something like show create procedure instead? 
Given that the purpose is debugging, I would assume you want the exact text 
used to create the procedure, not the one with version-specifics removed.

You can still pump that into a file by using mysql -e 'show create procedure
procname\G' dbname > outputfile.sql. There will still be a bit of superfluous
information as this is an information request, but that should be easily
removed with some sed hacking.
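
A possible starting point for that sed hacking, as a sketch only: it assumes
the field labels the 5.x client prints in \G output, and 'mydb' and the
procedure name are placeholders:

  mysql -u root -p -e "SHOW CREATE PROCEDURE TIMETABLE_SYNC\G" mydb \
    | sed -n '/Create Procedure:/,/character_set_client:/p' \
    | sed -e 's/^ *Create Procedure: //' -e '/character_set_client:/d' \
    > timetable_sync.sql

The first sed keeps only the range from the procedure text down to the
trailing character-set field; the second strips the label and that last line.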


- Original Message -
 From: Claudio Nanni claudio.na...@gmail.com
 To: Shawn Green (MySQL) shawn.l.gr...@oracle.com
 Cc: John G. Heim jh...@math.wisc.edu, mysql@lists.mysql.com
 Sent: Wednesday, 30 March, 2011 9:01:06 AM
 Subject: Re: getting procedure code via mysqldump
 
 In case you use a linux or unix system, to strip off the comments in
 linux
 bash is very easy, you can use this simple bash command:
 
 grep -v '^/\*' yourdumpfile.sql > yourdumpfilewithoutcomments.sql
 
 this will create a new dump without comments.
 
 Cheers
 
 Claudio
 
 
 2011/3/30 Shawn Green (MySQL) shawn.l.gr...@oracle.com
 

-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: getting procedure code via mysqldump

2011-03-30 Thread John G. Heim

From: Claudio Nanni claudio.na...@gmail.com
To: Shawn Green (MySQL) shawn.l.gr...@oracle.com
Cc: John G. Heim jh...@math.wisc.edu; my...@lists.mysql.com
Sent: Wednesday, March 30, 2011 2:01 AM
Subject: Re: getting procedure code via mysqldump



In case you use a linux or unix system, to strip off the comments in linux
bash is very easy, you can use this simple bash command:

grep -v '^/\*' yourdumpfile.sql > yourdumpfilewithoutcomments.sql


That didn't work because there are comments embedded in the line that 
creates the procedure. For example:


/*!50003 CREATE*/ /*!50020 DEFINER=`root`@`localhost`*/ /*!50003 PROCEDURE 
`TIMETABLE_SYNC`()


That's all one line. It will probably wrap in my message. But the line
begins with a /* so it's removed by your grep command.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: getting procedure code via mysqldump

2011-03-30 Thread Joerg Bruehe
Hi all!


John G. Heim wrote:
 From: Claudio Nanni claudio.na...@gmail.com
 [[...]]
 
 
 In case you use a linux or unix system, to strip off the comments in
 linux
 bash is very easy, you can use this simple bash command:

 grep -v '^/\*' yourdumpfile.sql > yourdumpfilewithoutcomments.sql
 
 That didn't work because there are comments embedded in the line that
 creates the procedure. For example:
 
 /*!50003 CREATE*/ /*!50020 DEFINER=`root`@`localhost`*/ /*!50003
 PROCEDURE `TIMETABLE_SYNC`()
 
 That's all one line. It will probably wrap in my message. But the line
 begins with a /* so it's removed by your grep command.

No member of the grep family is the right tool for this, as they
always take (or ignore) whole lines.


This is a job for sed - try this (tested just on that single line):

  sed '1,$s=/\*![3-6][0-9]* \([^*]*\)\*/=\1 =g' dumpfile.sql > noversion.sql
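
A slightly broader variant along the same lines, equally a sketch (the second
and third expressions are assumptions about how mysqldump lays out a routine,
with the opening /*!50003 on the CREATE line and the closing */ right before
the ;; delimiter):

  # 1) unwrap version comments that open and close on the same line
  # 2) drop any /*!NNNNN  opener that is left (e.g. before PROCEDURE ...)
  # 3) drop the stray */ that then remains at the end of the routine body
  sed -e 's=/\*![0-9]\{5\} \([^*]*\)\*/=\1=g' \
      -e 's=/\*![0-9]\{5\} ==g' \
      -e 's= *\*/;;=;;=g' \
      dumpfile.sql > noversion.sql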


HTH,
Joerg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
ORACLE Deutschland B.V.  Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



getting procedure code via mysqldump

2011-03-29 Thread John G. Heim
I would like to use mysqldump to get a copy of the code for a stored 
procedure in a format that is similar to the code I used to create it. The 
problem is that I'm blind and I have to listen to the code to debug it. I 
think I have a file containing the code that I used to create the stored 
procedure but I want to make absolutely sure.


This is what I've tried:

mysqldump -p --routines --no-create-info --no-data --no-create-db --skip-opt
--skip-comments --compatible=ansi --result=routines.sql database


My problem is that it generates a file with a lot of lines I don't understand.
For example:


/*!50003 CREATE*/ /*!50020 DEFINER=`root`@`localhost`*/ /*!50003 PROCEDURE 
`TIMETABLE_SYNC`()


That appears to be the line to create the stored procedure 'timetable_sync'. 
But what's with all the other stuff on that line? Can i get rid of it?




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org


