Re: mysqldump with single-transaction option.

2014-10-08 Thread Andrew Moore
We tend to use binary backups (Xtrabackup) for full, consistent dataset
restores (think slave provisioning and disaster recovery) and logical
backups for single-table restores, in the event that a rollback is needed
because someone drops a table or carries out an insane update. We also use
mydumper instead of mysqldump, for its compression and encryption features.
mysqldump stops being useful on large datasets due to its
single-threadedness.
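[Editor's sketch] A parallel, compressed dump with mydumper might look like this; option names are from memory and vary across mydumper versions, so check `mydumper --help` before relying on them:

```
# Hypothetical invocation -- flags vary by mydumper version.
# Produces one (compressed) file per table, dumped by 4 worker threads,
# which is what makes single-table restores with myloader practical.
mydumper --database=shop --outputdir=/backups/shop \
         --threads=4 --compress
```

The per-table output files are what make the single-table restore scenario above workable, in contrast to one monolithic mysqldump file.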



On Tue, Oct 7, 2014 at 8:35 AM, yoku ts. yoku0...@gmail.com wrote:

 Maybe not, as you suspected.

  It means that after lock is released, dump is made while the read and
 write
  activity is going on.   This dump then, would be inconsistent.

 It's not only the binary logs: each table in your dump reflects the time at
 which mysqldump began to dump *that* table.
 That means, for example, table1 in your dump is as of 2014-10-07 00:00:00,
 the next table2 as of 2014-10-07 00:00:01, the next table3 as of ..

 I see no motivation for trying to restore its consistency.


 Regards,


 2014-10-07 15:44 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

  So, in the case of a *NOT InnoDB* storage engine, say MyISAM, will this
  dump be of any use?
 
  Best Regards,
  Geetanjali Mehra
  Senior Oracle and MySQL DBA Corporate Consultant and Database Security
  Specialist
 
 
  On Tue, Oct 7, 2014 at 11:55 AM, yoku ts. yoku0...@gmail.com wrote:
 
   Hello,
  
   If you use any *NOT InnoDB* storage engine, you're right:
   mysqldump with --single-transaction doesn't give any consistency, as you
   say.

   If all your databases and tables use InnoDB, your dumping process is
   protected by the REPEATABLE-READ transaction isolation level.
  
  
  
 
 http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction
  
   Regards,
  
  
   2014-10-07 12:52 GMT+09:00 geetanjali mehra 
 mailtogeetanj...@gmail.com
  :
  
   It seems to me that once the read lock is acquired, only the binary log
   coordinates are read. Soon after the binary log coordinates are read, the
   lock is released. Is there anything else that happens here?
  
   It means that after the lock is released, the dump is made while read and
   write activity is going on. This dump, then, would be inconsistent. So, to
   make this dump a consistent one when restoring it, the binary log would be
   applied starting from the binary log coordinates that were read earlier.
  
   This is what I understand. Please correct me if my understanding is
  wrong.
  
   Best Regards,
   Geetanjali Mehra
   Senior Oracle and MySQL DBA Corporate Consultant and Database Security
   Specialist
  
  
   On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green 
 shawn.l.gr...@oracle.com
  
   wrote:
  
Hello Geetanjali,
   
On 9/23/2014 7:14 AM, geetanjali mehra wrote:
   
Can anybody please mention the internals that works when we use
   mysqldump
as follows:
   
   
 mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql
   
MySQL manual says:
   
 This backup operation acquires a global read lock on all tables at the
 beginning of the dump (using FLUSH TABLES WITH READ LOCK,
 http://dev.mysql.com/doc/refman/5.6/en/flush.html). As soon as this lock
 has been acquired, the binary log coordinates are read and the lock is
 released. If long updating statements are running when the FLUSH statement
 is issued, the backup operation may stall until those statements finish.
 After that, the dump becomes lock-free and does not disturb reads and
 writes on the tables.
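[Editor's sketch] That paragraph corresponds roughly to the following statement sequence, as issued by mysqldump --single-transaction --master-data. This is a simplification; the exact statements depend on the mysqldump version and can be observed in the server's general query log:

```sql
FLUSH TABLES WITH READ LOCK;                 -- global read lock on all tables
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;  -- InnoDB snapshot opened under the lock
SHOW MASTER STATUS;                          -- binary log coordinates recorded
UNLOCK TABLES;                               -- lock released; the snapshot lives on
-- ... one SELECT per table, all inside the still-open transaction ...
```

Because the snapshot is opened while the lock is held, every subsequent SELECT sees the database as of the recorded binlog coordinates; this is why the guarantee holds for InnoDB only.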
   
Can anyone explain it more? Please.
   
   
Which part would you like to address first?
   
I have a feeling it's more about how FLUSH TABLES WITH READ LOCK
 works
   but
I want to be certain before answering.
   
Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work
 Together.
Office: Blountville, TN
   
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
   
   
  
  
  
 



Re: Need a short directive

2014-10-08 Thread Trianon 33

A big thanks to Johan, who pointed me in the right direction.

I did indeed have to fix a where clause, but in the end all went very well.

BR, hans.



Johan De Meersman schreef op 07-10-14 om 16:10:

- Original Message -

From: Trianon 33 triano...@gmail.com
Subject: Need a short directive

Values from yearanddate look like this: 2013-12-11 00:00:00. I want to

That's only a display format; internally it's an integer (well, presumably a 
struct time_t) counting the seconds since epoch. Not especially relevant except 
to say that, since it's only an output format, it can easily be changed.


copy the 2013 and put that into the yearfield, for each record.

Can that be done by just using SQL statements?

I believe the year() function is pretty much what you're looking for; more 
complex things can be handled through date_format().

Something along the lines of

  UPDATE table
SET year = year(yearanddate);

should do nicely; I'm sure you can fix up a where clause as appropriate.
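[Editor's sketch] For instance, with a guard clause of the kind suggested (the condition is hypothetical; pick whatever identifies the rows still to be filled in):

```sql
UPDATE table
   SET year = YEAR(yearanddate)
 WHERE year IS NULL OR year = 0;
```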








Query with variable number of columns?

2014-10-08 Thread Jan Steinman
I often use CASE WHEN ... to pivot tables. For example, splitting sales data by 
year:

  SELECT
    s_product.name AS `Product`,
    SUM(CASE WHEN YEAR(sales.Date)='2007' THEN sales.Total ELSE NULL END) AS `2007`,
    SUM(CASE WHEN YEAR(sales.Date)='2008' THEN sales.Total ELSE NULL END) AS `2008`,
    SUM(CASE WHEN YEAR(sales.Date)='2009' THEN sales.Total ELSE NULL END) AS `2009`,
    ...
  WHERE dynamic predicate that only has results in one year

However, this pattern will often result in numerous empty columns -- empties 
that would not be there had the table not been pivoted.

What techniques do *you* use for avoiding this anti-pattern? Am I limited to 
using a separate programming language (PHP, in this case) with a separate 
COUNT(*) query for each possible column, then CASEing the generation of the 
column SQL? Seems awfully ugly!

Thanks in advance for any insight offered!

(And the following came up at random... perhaps I'll just live with a bunch of 
empty columns...)

 In attempting to fix any system, we may damage another that is working 
perfectly well. -- David Holmgren
 Jan Steinman, EcoReality Co-op 





Re: Query with variable number of columns?

2014-10-08 Thread Peter Brawley

On 2014-10-08 1:38 PM, Jan Steinman wrote:

I often use CASE WHEN ... to pivot tables. For example, splitting sales data by 
year:

    SELECT
      s_product.name AS `Product`,
      SUM(CASE WHEN YEAR(sales.Date)='2007' THEN sales.Total ELSE NULL END) AS `2007`,
      SUM(CASE WHEN YEAR(sales.Date)='2008' THEN sales.Total ELSE NULL END) AS `2008`,
      SUM(CASE WHEN YEAR(sales.Date)='2009' THEN sales.Total ELSE NULL END) AS `2009`,
      ...
    WHERE dynamic predicate that only has results in one year

However, this pattern will often result in numerous empty columns -- empties 
that would not be there had the table not been pivoted.

What techniques do *you* use for avoiding this anti-pattern?


Non-procedural SQL is an incomplete computer language; it can't produce a 
column list that depends on the data. MySQL stored procedures are less 
incomplete, and can do it, but they're awkward. I use the app language (eg 
PHP) to implement such logic.


PB

-


Am I limited to using a separate programming language (PHP, in this case) with 
a separate COUNT(*) query for each possible column, then CASEing the generation 
of the column SQL? Seems awfully ugly!

Thanks in advance for any insight offered!

(And the following came up at random... perhaps I'll just live with a bunch of 
empty columns...)

 In attempting to fix any system, we may damage another that is working 
perfectly well. -- David Holmgren
 Jan Steinman, EcoReality Co-op 








Re: Query with variable number of columns?

2014-10-08 Thread hsv
 2014/10/08 11:38 -0700, Jan Steinman 
However, this pattern will often result in numerous empty columns -- empties 
that would not be there had the table not been pivoted. 


 2014/10/08 16:42 -0500, Peter Brawley 
MySQL stored procedures are less incomplete, and can do it, but they're 
awkward. 

From a webpage link posted on this very list, I learnt of a means of (yes, 
clumsily) using an SQL procedure to build PREPAREd statements that pivot. It 
entails reckoning twice: once to find the good fields, once to pivot and show them.

One selects from a virtual table:
(SELECT DISTINCT YEAR(sales.Date) AS Y FROM ...) AS g

For each good Y one wants this generated (I use ANSI mode, with more PL1 than 
C; the backticks quote the numeric column aliases):
'SUM(IF(YEAR(sales.Date)=' || Y || ', sales.Total, NULL)) AS `' || Y || '`'

The outcome is something like this:
set @yearSal = (SELECT 'SELECT s_product.name AS Product, ' || 
GROUP_CONCAT('SUM(IF(YEAR(sales.Date)=' || Y || ', sales.Total, NULL)) AS `' || 
Y || '`') || '
FROM ...'
FROM (SELECT DISTINCT YEAR(sales.Date) AS Y FROM ...) AS g);
PREPARE YearSal FROM @yearSal;
EXECUTE YearSal;

Unhappily, PREPARE takes only user-defined variables, and its prepared 
statement, too, is exposed to the procedure's caller. If the prepared statement 
is SELECT ... INTO ..., only user-defined variables are allowed after INTO. 
One who knows the names can learn something about the procedure's working.

