Re: Can't get my query to return wanted data

2018-01-19 Thread Chris Roy-Smith
On Fri, 19 Jan 2018 11:25:42 -0500, shawn l.green wrote:

> Hello Chris,
> 
> On 1/19/2018 12:50 AM, Chris Roy-Smith wrote:
>> Hi I am running mysql 5.7.20 in ubuntu linux 17.10
>>
>> I have 2 tables, member and status with contents like
>>
>> member:
>> ident,   given,  surname
>> 1   fred    jones
>> 2   john    howard
>> 3   henry   wales
>> 4   jenny   brown
>>
>> status:
>> ident   year
>> 1   2017
>> 2   2017
>> 3   2017
>> 4   2017
>> 1   2018
>> 3   2018
>>
>> I want my query to return the name and ident from the member table for
>> all members that have not got an entry in status with year=2018
>>
>> I have been working on the following query to achieve this, but it only
>> returns data when there are no `year` entries for a selected year.
>>
>> select details.ident, given, surname from details left join status on
>> details.ident = status.ident where NOT EXISTS (select year from status
>> where (status.year = 2018) and (details.ident = status.ident) )
>>
>>
>> Thank you for looking at this.
>> regards, Chris Roy-Smith
>>
>>
>>
> try this...
> 
> SELECT
>d.ident, d.given, d.surname
> FROM details d LEFT JOIN (
>SELECT DISTINCT ident FROM status WHERE year=2018
> ) s
>ON s.ident = d.ident
> WHERE
>s.ident is NULL;
> 
> How it works
> #
> Start by building a list of unique `ident` values that match the
> condition you do NOT want to find. (you will see why in a moment)
> 
> LEFT JOIN that list to your list of members (with your list on the right
> side of the LEFT JOIN).  Where that join's ON condition is satisfied, a
> value for the column s.ident will exist. Where it isn't satisfied, there
> will be a NULL value in s.ident.
> 
> Finally, filter the combination of the s and d tables (I'm referring to
> their aliases) to find all the rows where s.ident was not given a value
> because it did not satisfy the ON condition of your outer join.
> #
> 
> Yours,
> --
> Shawn Green MySQL Senior Principal Technical Support Engineer Oracle
> USA, Inc. - Integrated Cloud Applications & Platform Services Office:
> Blountville, TN
> 
> Become certified in MySQL! Visit https://www.mysql.com/certification/
> for details.

Hi Shawn,
That works a treat! Thank you for a solution with an explanation, which I 
appreciate, because it helps me learn.
Regards, Chris Roy-Smith





Re: Can't get my query to return wanted data

2018-01-19 Thread shawn l.green

Hello Chris,

On 1/19/2018 12:50 AM, Chris Roy-Smith wrote:

Hi
I am running mysql 5.7.20 in ubuntu linux 17.10

I have 2 tables, member and status with contents like

member:
ident,  given,  surname
1   fred    jones
2   john    howard
3   henry   wales
4   jenny   brown

status:
ident   year
1   2017
2   2017
3   2017
4   2017
1   2018
3   2018

I want my query to return the name and ident from the member table for all
members that have not got an entry in status with year=2018

I have been working on the following query to achieve this, but it only
returns data when there are no `year` entries for a selected year.

select details.ident, given, surname from details left join status on
details.ident = status.ident where NOT EXISTS (select year from status
where (status.year = 2018) and (details.ident = status.ident) )


Thank you for looking at this.
regards, Chris Roy-Smith




try this...

SELECT
  d.ident, d.given, d.surname
FROM details d
LEFT JOIN (
  SELECT DISTINCT ident
  FROM status
  WHERE year=2018
) s
  ON s.ident = d.ident
WHERE
  s.ident is NULL;

How it works
#
Start by building a list of unique `ident` values that match the 
condition you do NOT want to find. (you will see why in a moment)


LEFT JOIN that list to your list of members (with your list on the right 
side of the LEFT JOIN).  Where that join's ON condition is satisfied, a 
value for the column s.ident will exist. Where it isn't satisfied, there 
will be a NULL value in s.ident.


Finally, filter the combination of the s and d tables (I'm referring to 
their aliases) to find all the rows where s.ident was not given a value 
because it did not satisfy the ON condition of your outer join.

#
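For reference, with the sample data above the survivors should be idents 2
(john howard) and 4 (jenny brown). An equivalent correlated anti-join, as a
sketch (untested), expresses the same idea:

SELECT d.ident, d.given, d.surname
FROM details d
WHERE NOT EXISTS (
   SELECT 1 FROM status s
   WHERE s.ident = d.ident AND s.year = 2018
);

Either form avoids the stray LEFT JOIN in the original attempt, which
multiplied each member by its status rows before the filter was applied.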

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Can't get my query to return wanted data

2018-01-18 Thread Chris Roy-Smith
Hi
I am running mysql 5.7.20 in ubuntu linux 17.10

I have 2 tables, member and status with contents like

member:
ident,  given,  surname
1   fred    jones
2   john    howard
3   henry   wales
4   jenny   brown

status:
ident   year
1   2017
2   2017
3   2017
4   2017
1   2018
3   2018

I want my query to return the name and ident from the member table for all 
members that have not got an entry in status with year=2018

I have been working on the following query to achieve this, but it only 
returns data when there are no `year` entries for a selected year.

select details.ident, given, surname from details left join status on 
details.ident = status.ident where NOT EXISTS (select year from status 
where (status.year = 2018) and (details.ident = status.ident) )


Thank you for looking at this.
regards, Chris Roy-Smith





Re: Omit another where clause from original query

2017-09-20 Thread Halász Sándor

2017/09/19 17:19 ... Don Wieland:

Of these found rows, I want to omit those rows where there are rows found after 
the END TimeStamp based on below where clause:

  WHERE 1 AND apt.appt_status_id IN (16) AND apt.user_id IN (3) AND apt.time_start 
 > '1504238399'

We are trying to find Former Clients who have not been seen after the date 
range - Lapsed Client Report


What are you getting that you want not to see? Certainly if you simply 
left off the upper half of the 'BETWEEN' you would get all those that 
you now get and all that have been left out by being dated over 
'1504238399'.
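
One way to express that exclusion directly, as a sketch (untested; a2 is
just an illustrative alias), is to append an anti-join test to the working
query's WHERE clause:

  AND NOT EXISTS (
SELECT 1 FROM tl_appt a2
WHERE a2.client_id = apt.client_id
  AND a2.appt_status_id IN (16)
  AND a2.user_id IN (3)
  AND a2.time_start > '1504238399'
  )

so that a client is kept only when no qualifying appointment exists after
the end timestamp.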





Omit another where clause from original query

2017-09-19 Thread Don Wieland
I have a working query:

/* start */

SELECT 
u.user_id, 
u.first_name AS u_first_name, 
u.last_name AS u_last_name, 
c.client_id AS c_client_id, 
c.first_name AS c_first_name, 
c.middle_name AS c_middle_name, 
c.last_name AS c_last_name, 
c.address AS c_address, 
c.city AS c_city, 
c.state AS c_state, 
c.zip AS c_zip, 
c.dob AS dob_ymd, 
c.phone_home AS c_phone_home, 
c.phone_cell AS c_phone_cell, 
c.phone_work AS c_phone_work, 
c.email AS c_email, 
c.other_contacts AS c_other_contacts, 
count(*) AS apt_qty

FROM tl_appt apt 

JOIN tl_clients c ON c.client_id = apt.client_id 
JOIN tl_rooms r ON r.room_id = apt.room_id 
JOIN tl_users u ON u.user_id = apt.user_id 

WHERE 1 AND apt.appt_status_id IN (16) AND apt.user_id IN (3) AND 
apt.time_start BETWEEN '150156' AND '1504238399' 
GROUP BY c.client_id 

HAVING count(*) > 0 

ORDER BY u.first_name, u.last_name, c.last_name, c.first_name;

/* end */

I want to add another condition:

Of these found rows, I want to omit those rows where there are rows found after 
the END TimeStamp based on below where clause:

 WHERE 1 AND apt.appt_status_id IN (16) AND apt.user_id IN (3) AND 
apt.time_start  > '1504238399'

We are trying to find Former Clients who have not been seen after the date 
range - Lapsed Client Report


Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






Re: kill query and prepared statements

2017-01-11 Thread Sebastien FLAESCH

Sorry for the spam, and thanks for your suggestion, Johan!

http://bugs.mysql.com/bug.php?id=84470

Seb

On 01/11/2017 11:21 AM, Johan De Meersman wrote:

Seb,

You should log a bug at http://bugs.mysql.com - this is not a developer list.

/Johan

- Original Message -

From: "Sebastien FLAESCH" <s...@4js.com>
To: "MySql" <mysql@lists.mysql.com>
Sent: Tuesday, 10 January, 2017 14:55:42
Subject: kill query and prepared statements



Hi all,

I have reported this problem before, but I raise it again, since I still get
this problem with 5.7.17

See attached code:

I want to interrupt a long running statement with CTRL-C by starting a new
connection to make a KILL QUERY.

I am using the same technique as the mysql client code.

The difference here is that my code is using PREPARED STATEMENTS with
mysql_stmt_prepare() etc.

Problem: After interrupting the first query with CTRL-C, the call to
mysql_stmt_close() hangs...

Maybe I am missing some new connection or statement option...?!?

IMPORTANT: This problem appeared in a 5.7.x, same code is working fine with
5.6(.16) 

Please can someone from MySQL C API team try to reproduce and confirm?

Thanks!
Seb










Re: kill query and prepared statements

2017-01-11 Thread Johan De Meersman
Seb,

You should log a bug at http://bugs.mysql.com - this is not a developer list.

/Johan

- Original Message -
> From: "Sebastien FLAESCH" <s...@4js.com>
> To: "MySql" <mysql@lists.mysql.com>
> Sent: Tuesday, 10 January, 2017 14:55:42
> Subject: kill query and prepared statements

> Hi all,
> 
> I have reported this problem before, but I raise it again, since I still get
> this problem with 5.7.17
> 
> See attached code:
> 
> I want to interrupt a long running statement with CTRL-C by starting a new
> connection to make a KILL QUERY.
> 
> I am using the same technique as the mysql client code.
> 
> The difference here is that my code is using PREPARED STATEMENTS with
> mysql_stmt_prepare() etc.
> 
> Problem: After interrupting the first query with CTRL-C, the call to
> mysql_stmt_close() hangs...
> 
> Maybe I am missing some new connection or statement option...?!?
> 
> IMPORTANT: This problem appeared in a 5.7.x, same code is working fine with
> 5.6(.16) 
> 
> Please can someone from MySQL C API team try to reproduce and confirm?
> 
> Thanks!
> Seb
> 
> 

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: kill query and prepared statements

2017-01-10 Thread Sebastien FLAESCH

Just to be clear:

This is a hint to help find the real problem; it is NOT A WORKAROUND for us!

Seb

On 01/10/2017 03:53 PM, Sebastien FLAESCH wrote:

Investigating different API calls, to see if it makes a difference...

Seems that the problem is related to the read-only cursor type option!

If you comment out:

unsigned long ct = (unsigned long) CURSOR_TYPE_READ_ONLY;
s = mysql_stmt_attr_set(stmt, STMT_ATTR_CURSOR_TYPE, (const void *) &ct);

Then it works ...

Seb


On 01/10/2017 02:55 PM, Sebastien FLAESCH wrote:

Hi all,

I have reported this problem before, but I raise it again, since I still get 
this problem with 5.7.17

See attached code:

I want to interrupt a long running statement with CTRL-C by starting a new 
connection to make a KILL QUERY.

I am using the same technique as the mysql client code.

The difference here is that my code is using PREPARED STATEMENTS with 
mysql_stmt_prepare() etc.

Problem: After interrupting the first query with CTRL-C, the call to 
mysql_stmt_close() hangs...

Maybe I am missing some new connection or statement option...?!?

IMPORTANT: This problem appeared in a 5.7.x, same code is working fine with 
5.6(.16) 

Please can someone from MySQL C API team try to reproduce and confirm?

Thanks!
Seb












Re: kill query and prepared statements

2017-01-10 Thread Sebastien FLAESCH

Investigating different API calls, to see if it makes a difference...

Seems that the problem is related to the read-only cursor type option!

If you comment out:

unsigned long ct = (unsigned long) CURSOR_TYPE_READ_ONLY;
s = mysql_stmt_attr_set(stmt, STMT_ATTR_CURSOR_TYPE, (const void *) &ct);

Then it works ...

Seb


On 01/10/2017 02:55 PM, Sebastien FLAESCH wrote:

Hi all,

I have reported this problem before, but I raise it again, since I still get 
this problem with 5.7.17

See attached code:

I want to interrupt a long running statement with CTRL-C by starting a new 
connection to make a KILL QUERY.

I am using the same technique as the mysql client code.

The difference here is that my code is using PREPARED STATEMENTS with 
mysql_stmt_prepare() etc.

Problem: After interrupting the first query with CTRL-C, the call to 
mysql_stmt_close() hangs...

Maybe I am missing some new connection or statement option...?!?

IMPORTANT: This problem appeared in a 5.7.x, same code is working fine with 
5.6(.16) 

Please can someone from MySQL C API team try to reproduce and confirm?

Thanks!
Seb









kill query and prepared statements

2017-01-10 Thread Sebastien FLAESCH

Hi all,

I have reported this problem before, but I raise it again, since I still get 
this problem with 5.7.17

See attached code:

I want to interrupt a long running statement with CTRL-C by starting a new 
connection to make a KILL QUERY.

I am using the same technique as the mysql client code.

The difference here is that my code is using PREPARED STATEMENTS with 
mysql_stmt_prepare() etc.

Problem: After interrupting the first query with CTRL-C, the call to 
mysql_stmt_close() hangs...

Maybe I am missing some new connection or statement option...?!?

IMPORTANT: This problem appeared in a 5.7.x, same code is working fine with 
5.6(.16) 

Please can someone from MySQL C API team try to reproduce and confirm?

Thanks!
Seb
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>

#include <my_global.h>
#include <mysql.h>


static char * c_host = "orion";
static char * c_user = "mysuser";
static char * c_auth = "fourjs";
static intc_port = 3308;
static char * c_sock = NULL;
static char * c_dbnm = "test1";

static int executing_query;
static unsigned long thread_id;

static void kill_query(void)
{
char cmd[50];
MYSQL *h;
h = mysql_init(NULL);
if (!mysql_real_connect(h, c_host, c_user, c_auth,
c_dbnm, c_port, c_sock,
CLIENT_FOUND_ROWS)) {
fprintf(stderr, "kill_query: Could not connect (err=%d)\n", mysql_errno(h));
return;
}
sprintf(cmd, "KILL QUERY %lu", thread_id);
if (mysql_query(h, cmd) != 0) {
fprintf(stderr, "Could not execute %s.", cmd);
}
mysql_close(h);
}

static void handle_ctrlc_signal(int sig)
{
fprintf(stdout, "SIGINT caught! executing_query = %d\n", executing_query);
if (executing_query) {
executing_query = 0;
kill_query();
}
return;
}


int main(int argc, char ** argv)
{
MYSQL * conn;
MYSQL_STMT * stmt;
int i, s;
unsigned long ct = (unsigned long) CURSOR_TYPE_READ_ONLY;
const char * sqlstmt = "select benchmark(10, md5('when will it end?'))";

signal(SIGINT, handle_ctrlc_signal);

conn = mysql_init(NULL);

if (!mysql_real_connect(conn, c_host, c_user, c_auth,
c_dbnm, c_port, c_sock,
CLIENT_FOUND_ROWS)) {
fprintf(stderr, "Could not connect (err=%d)\n", mysql_errno(conn));
return -1;
}

thread_id = mysql_thread_id(conn);
fprintf(stdout, "MySQL thread ID: %ld\n", thread_id);

for (i=0; i<3; i++) {

fprintf(stdout, "\nRound %d:\n", i+1);

fprintf(stdout, "Allocating statement handle...");
stmt = mysql_stmt_init(conn);
if (stmt==NULL) {
fprintf(stderr, "Could not create statement handle (err=%d)\n", mysql_errno(conn));
return -1;
}
fprintf(stdout, "   handle = %p\n", (void*) stmt);

fprintf(stdout, "Preparing statement %p ...\n", (void*) stmt);
s = mysql_stmt_prepare(stmt, sqlstmt, (unsigned long) strlen(sqlstmt));
if (s!=0) {
fprintf(stderr, "Could not prepare statement (err=%d)\n", mysql_errno(conn));
return -1;
}

fprintf(stdout, "Setting cursor type to read only for %p ...\n", (void*) stmt);
s = mysql_stmt_attr_set(stmt, STMT_ATTR_CURSOR_TYPE, (const void *) &ct);
if (s!=0) {
fprintf(stderr, "Could not set cursor type (err=%d)\n", mysql_errno(conn));
return -1;
}

fprintf(stdout, "Executing statement %p ...\n", (void*) stmt);
executing_query = 1;
s = mysql_stmt_execute(stmt);
if (s!=0) {
if (mysql_errno(conn) == 1317) {
fprintf(stdout, "Statement interrupted by user...\n");
} else {
fprintf(stderr, "Could not execute the query (err=%d)\n", mysql_errno(conn));
return -1;
}
}

fprintf(stdout, "Closing/freeing statement handle %p ...\n", (void*) stmt);
s = mysql_stmt_close(stmt);
if (s!=0) {
fprintf(stderr, "Could not close statement handle (err=%d)\n", mysql_errno(conn));
return -1;
}

fprintf(stdout, "Round done.\n");
}

return 0;
}


Re: Query interruption with MySQL 5.7 using KILL QUERY in signal handler no longer works

2016-07-12 Thread Sebastien FLAESCH

I did the following test:

My program still uses MySQL 5.7 libmysqlclient.so, but I connect now to a 
5.6.16 server.

And the SQL interrupt works fine... so I suspect there is a MySQL server issue 
in 5.7.

Seb

On 07/12/2016 01:01 PM, Sebastien FLAESCH wrote:

About:

 > B) For some reason, the program does not want to exit() - (must investigate)

In fact we use prepared statements with a sequence of mysql_stmt_init,
mysql_stmt_prepare, mysql_stmt_execute, mysql_stmt_fetch, ... and
mysql_stmt_close.

After the statement was interrupted, we try to free the MySQL statement handle 
with:

mysql_stmt_close();

But this API call hangs... (below the strace output)

This was working fine in older versions...

Seb

strace log:

...

sendto(3, "s\0\0\0\26select COUNT(*) from   t2 a"..., 119, 0, NULL, 0) = 119
recvfrom(3, "\f\0\0\1\0\1\0\0\0\1\0\0\0\0\0\0\36\0\0\2\3def\0\0\0\10COUN"..., 
16384, 0, NULL, NULL) = 50
sendto(3, "\n\0\0\0\27\1\0\0\0\1\1\0\0\0", 14, 0, NULL, 0) = 14
recvfrom(3, 0x62905220, 16384, 0, 0, 0) = ? ERESTARTSYS (To be restarted if 
SA_RESTART is set)

...

When mysql_stmt_close() is called, hangs in recvfrom():

recvfrom(3,





On 07/03/2016 06:55 PM, Sebastien FLAESCH wrote:

Hi all,

I use the following technique to cancel a long running query:

In the SIGINT signal handler, I restart a connection and I perform a

KILL QUERY mysql-process-id-of-running-query

This was working fine with MySQL 5.6.

But with 5.7 (5.7.11), we get now a different result:

A) The query is still interrupted, but we no longer get an SQL error -1317.

B) For some reason, the program does not want to exit() - (must investigate)

Any clues?


With mysql it's working fine:

mysql> select sleep(10);
^C^C -- query aborted
+---+
| sleep(10) |
+---+
+---+
1 row in set (2.79 sec)

mysql> \q
Bye


Thanks!
Seb










Re: Query interruption with MySQL 5.7 using KILL QUERY in signal handler no longer works

2016-07-12 Thread Sebastien FLAESCH

About:

> B) For some reason, the program does not want to exit() - (must investigate)

In fact we use prepared statements with a sequence of mysql_stmt_init,
mysql_stmt_prepare, mysql_stmt_execute, mysql_stmt_fetch, ... and
mysql_stmt_close.

After the statement was interrupted, we try to free the MySQL statement handle 
with:

mysql_stmt_close();

But this API call hangs... (below the strace output)

This was working fine in older versions...

Seb

strace log:

...

sendto(3, "s\0\0\0\26select COUNT(*) from   t2 a"..., 119, 0, NULL, 0) = 119
recvfrom(3, "\f\0\0\1\0\1\0\0\0\1\0\0\0\0\0\0\36\0\0\2\3def\0\0\0\10COUN"..., 
16384, 0, NULL, NULL) = 50
sendto(3, "\n\0\0\0\27\1\0\0\0\1\1\0\0\0", 14, 0, NULL, 0) = 14
recvfrom(3, 0x62905220, 16384, 0, 0, 0) = ? ERESTARTSYS (To be restarted if 
SA_RESTART is set)

...

When mysql_stmt_close() is called, hangs in recvfrom():

recvfrom(3,





On 07/03/2016 06:55 PM, Sebastien FLAESCH wrote:

Hi all,

I use the following technique to cancel a long running query:

In the SIGINT signal handler, I restart a connection and I perform a

KILL QUERY mysql-process-id-of-running-query

This was working fine with MySQL 5.6.

But with 5.7 (5.7.11), we get now a different result:

A) The query is still interrupted, but we no longer get an SQL error -1317.

B) For some reason, the program does not want to exit() - (must investigate)

Any clues?


With mysql it's working fine:

mysql> select sleep(10);
^C^C -- query aborted
+---+
| sleep(10) |
+---+
+---+
1 row in set (2.79 sec)

mysql> \q
Bye


Thanks!
Seb







Query interruption with MySQL 5.7 using KILL QUERY in signal handler no longer works

2016-07-03 Thread Sebastien FLAESCH

Hi all,

I use the following technique to cancel a long running query:

In the SIGINT signal handler, I restart a connection and I perform a

   KILL QUERY mysql-process-id-of-running-query

This was working fine with MySQL 5.6.

But with 5.7 (5.7.11), we get now a different result:

A) The query is still interrupted, but we no longer get an SQL error -1317.

B) For some reason, the program does not want to exit() - (must investigate)

Any clues?


With mysql it's working fine:

mysql> select sleep(10);
^C^C -- query aborted
+---+
| sleep(10) |
+---+
+---+
1 row in set (2.79 sec)

mysql> \q
Bye
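
For reference, the manual equivalent from a second connection looks like
this (a sketch; 12345 stands in for whatever mysql_thread_id() reported):

mysql> SHOW PROCESSLIST;   -- locate the Id of the long-running statement
mysql> KILL QUERY 12345;   -- aborts the statement, keeps the connection open

KILL QUERY, as opposed to KILL CONNECTION, terminates only the running
statement, which is what the signal handler issues.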


Thanks!
Seb




Memory leak caused by query table meta data?

2016-05-09 Thread Zhaobang Liu
Hi there,



We know that normally MySQL is good at controlling memory usage, but the
problem we are seeing is a bit suspicious. I want to ask whether somebody
can help debug the issue. Feel free to let me know
if there are more details needed.


The databases we have are all in InnoDB. There are around 4400 tables in a
database. Lots of tables are partitioned by yearweek and have more than
50 partitions.


How to reproduce the issue:

1) We have a script to monitor table schema and create partitions. While
running it, we found running 'SHOW CREATE TABLE xxx' on each table will
make Mysql take more and more memory. After scanning all of the tables,
mysql has started using more than 1GB swap.

2) We had a migration recently to add a column to half of the tables we
have. The query is like 'ALTER ONLINE TABLE table_name ADD COLUMN IF NOT
EXISTS (`col` smallint(3) DEFAULT NULL)' and it ran in one thread to
migrate the tables one by one. The memory usage keeps increasing and
starts to swap as well.
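
A quick check worth running while either workload loops, as a sketch, to see
whether the table-definition cache is implicated:

SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
SHOW GLOBAL STATUS LIKE 'Open_table_definitions';

If Open_table_definitions keeps growing in step with the memory use, the
dictionary cache is a likely suspect.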



Env:

Mariadb 10.0.20 running on 64 bit CentOS6.7. 7GB RAM, 8GB
swap. vm.swappiness = 30.


innodb-buffer-pool-size = 2G

innodb-buffer-pool-instances= 2

innodb-additional-mem-pool-size = 20M

innodb-log-buffer-size  = 4M

innodb-thread-concurrency   = 4

innodb-file-format  = Barracuda

innodb-file-per-table   = 1


query-cache-type= 1

query-cache-size= 16M

thread-cache-size   = 64

table-open-cache= 1024

table-definition-cache  = 2048



Thanks,

Zhaobang


Re: mysql query for current date accounting returns NULL

2016-03-26 Thread shawn l.green



On 3/26/2016 4:36 PM, shawn l.green wrote:



On 3/25/2016 6:39 AM, JAHANZAIB SYED wrote:

I have Freeradius 2.x with MySQL 5.5 in Ubuntu.

I want to query user quota for current date. I am using following code

SELECT (SUM(acctinputoctets)+SUM(acctoutputoctets)) AS Total FROM
radacct where (acctstarttime between  DATE_FORMAT(NOW(),'%Y-%m-%d')
AND NOW() AND acctstoptime  between  DATE_FORMAT(NOW() ,'%Y-%m-%d')
AND NOW()) AND radacct.username='%{User-Name}'

It works fine if there is an acctstoptime value in the table, but if the
user has not disconnected yet (and has no previous session for today) it
returns NULL.

So how can I get the value even if the user's acctstoptime is null?




Try this...(using an earlier suggestion to the thread)

SELECT
  SUM(acctinputoctets + acctoutputoctets) AS Total
FROM radacct
WHERE
   radacct.username='%{User-Name}'
   AND acctstarttime BETWEEN CURDATE() AND NOW()
   AND (
 acctstoptime  <= NOW()
 OR acctstoptime IS NULL
   )

But in reality, can you have an acctstarttime that is >= NOW()? If not,
then you can also simplify that term to just



oops! one too many AND's

   AND AND acctstarttime >= CURDATE()


I meant to write
AND acctstarttime >= CURDATE()



and lose the BETWEEN comparison.
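
Assembled with that correction, the whole query would read (a sketch,
untested; the COALESCE is an extra touch, since SUM over zero rows yields
NULL rather than 0):

SELECT
  COALESCE(SUM(acctinputoctets + acctoutputoctets), 0) AS Total
FROM radacct
WHERE
   radacct.username='%{User-Name}'
   AND acctstarttime >= CURDATE()
   AND (
 acctstoptime <= NOW()
 OR acctstoptime IS NULL
   );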



--
Shawn




Re: mysql query for current date accounting returns NULL

2016-03-26 Thread shawn l.green



On 3/25/2016 6:39 AM, JAHANZAIB SYED wrote:

I have Freeradius 2.x with MySQL 5.5 in Ubuntu.

I want to query user quota for current date. I am using following code

SELECT (SUM(acctinputoctets)+SUM(acctoutputoctets)) AS Total FROM radacct where 
(acctstarttime between  DATE_FORMAT(NOW(),'%Y-%m-%d') AND NOW() AND 
acctstoptime  between  DATE_FORMAT(NOW() ,'%Y-%m-%d') AND NOW()) AND 
radacct.username='%{User-Name}'

It works fine if there is an acctstoptime value in the table, but if the user has not
disconnected yet (and has no previous session for today) it returns NULL.

So how can I get the value even if the user's acctstoptime is null?




Try this...(using an earlier suggestion to the thread)

SELECT
 SUM(acctinputoctets + acctoutputoctets) AS Total
FROM radacct
WHERE
  radacct.username='%{User-Name}'
  AND acctstarttime BETWEEN CURDATE() AND NOW()
  AND (
acctstoptime  <= NOW()
OR acctstoptime IS NULL
  )

But in reality, can you have an acctstarttime that is >= NOW()? If not, 
then you can also simplify that term to just


  AND AND acctstarttime >= CURDATE()

and lose the BETWEEN comparison.

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: mysql query for current date accounting returns NULL

2016-03-25 Thread Halász Sándor

2016/03/25 06:39 ... JAHANZAIB SYED:

I want to query user quota for current date. I am using following code

SELECT SUM(acctinputoctets)+SUM(acctoutputoctets) AS Total FROM radacct where 
(acctstarttime between  DATE_FORMAT(NOW(),'%Y-%m-%d') AND NOW() AND 
acctstoptime between DATE_FORMAT(NOW() ,'%Y-%m-%d') AND NOW()) AND 
username='%{User-Name}'

It works fine if there is an acctstoptime value in the table, but if the user has not
disconnected yet (and has no previous session for today) it returns NULL.


That expression has problems. Not only does it work only when both
acctstarttime and acctstoptime are good, but also only if they are on the
same day, today.


> So how can I get the value even if user acctstoptime is null?
Really, it is best to omit the test on "acctstoptime".

I don't like the form of the test, either. If "acctstarttime" is of 
DATETIME (or TIMESTAMP) type I like this better:

acctstarttime BETWEEN CURDATE() AND NOW()
otherwise
CAST(acctstarttime AS DATETIME) BETWEEN CURDATE() AND NOW()

You are also not GROUPing BY anything, which, strictly speaking, with
SUM is bad SQL, but, of course, it works because only one value of
"username" is sought.





mysql query for current date accounting returns NULL

2016-03-25 Thread JAHANZAIB SYED
I have Freeradius 2.x with MySQL 5.5 in Ubuntu. 

I want to query user quota for current date. I am using following code

SELECT (SUM(acctinputoctets)+SUM(acctoutputoctets)) AS Total FROM radacct where 
(acctstarttime between  DATE_FORMAT(NOW(),'%Y-%m-%d') AND NOW() AND 
acctstoptime  between  DATE_FORMAT(NOW() ,'%Y-%m-%d') AND NOW()) AND 
radacct.username='%{User-Name}'

It works fine if there is an acctstoptime value in the table, but if the user has not
disconnected yet (and has no previous session for today) it returns NULL.

So how can I get the value even if the user's acctstoptime is null?

  

Query Statistics...

2016-02-04 Thread Chris Knipe
Hi All,

Perhaps a bit of a trivial question, but in terms of query statistics (i.e.
how many SELECT / INSERT / DELETE / etc. queries have been run against the
server)...

When you take an INSERT ... ON DUPLICATE KEY UPDATE ...

Under the two conditions (i.e. either INSERT, or UPDATE if the record
already exists), how is this query logged in the statistics?

When the ON DUPLICATE KEY UPDATE runs (i.e. it's updating a record), is it
still logged as an INSERT query, or is it logged as an UPDATE query?
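
(One way to check empirically might be to run the statement against an
existing row and compare the command counters before and after, e.g.:

SHOW GLOBAL STATUS LIKE 'Com_insert';
SHOW GLOBAL STATUS LIKE 'Com_update';

since the Com_xxx counters track which statement type was executed rather
than what it ended up doing.)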

Thnx.



-- 

Regards,
Chris Knipe


Re: help with query to count rows while excluding certain rows

2016-01-02 Thread Larry Martell
On Fri, Jan 1, 2016 at 9:31 PM, Peter Brawley
<peter.braw...@earthlink.net> wrote:
> On 1/1/2016 19:24, Larry Martell wrote:
>>
>> On Fri, Jan 1, 2016 at 2:12 PM, Peter Brawley
>> <peter.braw...@earthlink.net> wrote:
>>>
>>> On 12/31/2015 0:51, Larry Martell wrote:
>>>>
>>>> I need to count the number of rows in a table that are grouped by a
>>>> list of columns, but I also need to exclude rows that have more than
>>>> some count when grouped by a different set of columns. Conceptually,
>>>> this is not hard, but I am having trouble doing this efficiently.
>>>>
>>>> My first counting query would be this:
>>>>
>>>> SELECT count(*)
>>>> FROM cst_rollup
>>>> GROUP BY target_name_id, ep, roiname, recipe_process,
>>>> recipe_product, recipe_layer, f_tag_bottom,
>>>> measname, recipe_id
>>>>
>>>> But from this count I need to subtract the count of rows that have
>>>> more than 50 rows with a different grouping:
>>>>
>>>> SELECT count(*)
>>>> FROM cst_rollup
>>>> GROUP BY target_name_id, ep, wafer_id
>>>> HAVING count(*) >= 50
>>>>
>>>> As you can see, the second query has wafer_id, but the first query does
>>>> not.
>>>>
>>>> Currently I am doing this in python, and it's slow. In my current
>>>> implementation I have one query, and it selects the columns (i.e.
>>>> doesn't just count), and I have added wafer_id:
>>>>
>>>> SELECT target_name_id, ep, roiname, recipe_process,
>>>> recipe_product, recipe_layer, f_tag_bottom,
>>>> measname, recipe_id, wafer_id
>>>> FROM cst_rollup
>>>>
>>>> Then I go through the result set (which can be over 200k rows) and I
>>>> count the number of rows with matching (target_name_id, ep, wafer_id).
>>>> Then I go through the rows again and regroup them without wafer_id,
>>>> but skipping the rows that have more than 50 rows for that row's
>>>> (target_name_id, ep, wafer_id).
>>>>
>>>> Is this clear to everyone what I am trying to do?
>>>
>>>
>>> If I've understood this correctly, the resultset you wish to aggregate on
>>> is
>>> ...
>>>
>>> select target_name_id, ep, wafer_id
>>> from cst_rollup a
>>> left join (   -- exclude rows for which wafer_id count >= 50
>>>select target_name_id, ep, wafer_id, count(*) n
>>>from cst_rollup
>>>group by target_name_id, ep, wafer_id
>>>having n >= 50
>>> ) b using ( target_name_id, ep, wafer_id )
>>> where b.target_name_id is null ;
>>>
>>> If that's so, you could assemble that resultset in a temp table then run
>>> the
>>> desired aggregate query on it, or you could aggregate on it directly as a
>>> subquery.
>>
>> That query gives:
>>
>> ERROR 1137 (HY000): Can't reopen table: 'a'
>
>
> So, it's a temporary table, and you'll need to make that not so.

Yes, cst_rollup is a temp table. The underlying table is millions of
rows (with 300 columns) so for efficiency a subset of the rows and
columns are selected into the temp table based on some user input.
It's just the rows in the temp table that are of interest for the
current report. (MySQL cannot refer to a TEMPORARY table more than once
in the same query, which is what the "Can't reopen table" error is about.)

I was able to get this working with a second temp table:

CREATE TEMPORARY TABLE rollup_exclude
SELECT target_name_id, ep, wafer_id, count(*) n
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING n >= 50

And then:

SELECT count(*)
FROM cst_rollup
LEFT JOIN(
SELECT target_name_id, ep, wafer_id
FROM rollup_exclude) b
USING (target_name_id, ep, wafer_id)
WHERE b.target_name_id IS NULL
GROUP by target_name_id, ep, roiname, recipe_process, recipe_product,
recipe_layer, f_tag_bottom, measname, recipe_id

And the rowcount from that query gave me what I needed.

Thanks very much for the help Peter, you gave me a push toward the right path.




Re: help with query to count rows while excluding certain rows

2016-01-01 Thread Larry Martell
On Fri, Jan 1, 2016 at 2:12 PM, Peter Brawley
<peter.braw...@earthlink.net> wrote:
> On 12/31/2015 0:51, Larry Martell wrote:
>>
>> I need to count the number of rows in a table that are grouped by a
>> list of columns, but I also need to exclude rows that have more than
>> some count when grouped by a different set of columns. Conceptually,
>> this is not hard, but I am having trouble doing this efficiently.
>>
>> My first counting query would be this:
>>
>> SELECT count(*)
>> FROM cst_rollup
>> GROUP BY target_name_id, ep, roiname, recipe_process,
>> recipe_product, recipe_layer, f_tag_bottom,
>> measname, recipe_id
>>
>> But from this count I need to subtract the count of rows that have
>> more than 50 rows with a different grouping:
>>
>> SELECT count(*)
>> FROM cst_rollup
>> GROUP BY target_name_id, ep, wafer_id
>> HAVING count(*) >= 50
>>
>> As you can see, the second query has wafer_id, but the first query does
>> not.
>>
>> Currently I am doing this in python, and it's slow. In my current
>> implementation I have one query, and it selects the columns (i.e.
>> doesn't just count), and I have added wafer_id:
>>
>> SELECT target_name_id, ep, roiname, recipe_process,
>> recipe_product, recipe_layer, f_tag_bottom,
>> measname, recipe_id, wafer_id
>> FROM cst_rollup
>>
>> Then I go through the result set (which can be over 200k rows) and I
>> count the number of rows with matching (target_name_id, ep, wafer_id).
>> Then I go through the rows again and regroup them without wafer_id,
>> but skipping the rows that have more than 50 rows for that row's
>> (target_name_id, ep, wafer_id).
>>
>> Is this clear to everyone what I am trying to do?
>
>
> If I've understood this correctly, the resultset you wish to aggregate on is
> ...
>
> select target_name_id, ep, wafer_id
> from cst_rollup a
> left join (   -- exclude rows for which wafer_id count >= 50
>   select target_name_id, ep, wafer_id, count(*) n
>   from cst_rollup
>   group by target_name_id, ep, wafer_id
>   having n >= 50
> ) b using ( target_name_id, ep, wafer_id )
> where b.target_name_id is null ;
>
> If that's so, you could assemble that resultset in a temp table then run the
> desired aggregate query on it, or you could aggregate on it directly as a
> subquery.

That query gives:

ERROR 1137 (HY000): Can't reopen table: 'a'




Re: help with query to count rows while excluding certain rows

2016-01-01 Thread Peter Brawley

On 12/31/2015 0:51, Larry Martell wrote:

I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.

My first counting query would be this:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id

But from this count I need to subtract the count of rows that have
more than 50 rows with a different grouping:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING count(*) >= 50

As you can see, the second query has wafer_id, but the first query does not.

Currently I am doing this in python, and it's slow. In my current
implementation I have one query, and it selects the columns (i.e.
doesn't just count), and I have added wafer_id:

SELECT target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id, wafer_id
FROM cst_rollup

Then I go through the result set (which can be over 200k rows) and I
count the number of rows with matching (target_name_id, ep, wafer_id).
Then I go through the rows again and regroup them without wafer_id,
but skipping the rows that have more than 50 rows for that row's
(target_name_id, ep, wafer_id).

Is this clear to everyone what I am trying to do?


If I've understood this correctly, the resultset you wish to aggregate 
on is ...


select target_name_id, ep, wafer_id
from cst_rollup a
left join (   -- exclude rows for which wafer_id count >= 50
  select target_name_id, ep, wafer_id, count(*) n
  from cst_rollup
  group by target_name_id, ep, wafer_id
  having n >= 50
) b using ( target_name_id, ep, wafer_id )
where b.target_name_id is null ;

If that's so, you could assemble that resultset in a temp table then run 
the desired aggregate query on it, or you could aggregate on it directly 
as a subquery.


PB

-


I'd like to do this all in sql with count because then I do not have
to actually return and parse the data in python.

Can anyone think of a way to do this in sql in a way that will be more
efficient than my current implementation?


Thanks!
-Larry







help with query to count rows while excluding certain rows

2015-12-30 Thread Larry Martell
I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.

My first counting query would be this:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id

But from this count I need to subtract the count of rows that have
more than 50 rows with a different grouping:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING count(*) >= 50

As you can see, the second query has wafer_id, but the first query does not.

Currently I am doing this in python, and it's slow. In my current
implementation I have one query, and it selects the columns (i.e.
doesn't just count), and I have added wafer_id:

SELECT target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id, wafer_id
FROM cst_rollup

Then I go through the result set (which can be over 200k rows) and I
count the number of rows with matching (target_name_id, ep, wafer_id).
Then I go through the rows again and regroup them without wafer_id,
but skipping the rows that have more than 50 rows for that row's
(target_name_id, ep, wafer_id).

Is this clear to everyone what I am trying to do?

I'd like to do this all in sql with count because then I do not have
to actually return and parse the data in python.

Can anyone think of a way to do this in sql in a way that will be more
efficient than my current implementation?


Thanks!
-Larry




Re: Query Summary Help...

2015-10-24 Thread Mogens Melander

You need to GROUP BY those fields NOT in the aggregate function. Like:

SELECT f.id,f.name,sum(p.price)
FROM fruit f
left join purchase p on f.id = p.fruit
where p.price is not null
group by f.id,f.name;

1, 'Apples', 2
2, 'Grapes', 6.5
4, 'Kiwis', 4
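
Applied to the original tables, the same rule gives (a sketch, untested):

SELECT p.pk_ProductID, p.Description, SUM(i.Quantity)
FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID
WHERE p.pk_ProductID IN (1,2,3)
  AND i.fk_InvoiceID IN (1,2,3)
GROUP BY p.pk_ProductID, p.Description;

which should return the 1,Banana,3 / 2,Orange,5 / 3,Melon,6 summary the
original post expects.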

On 2015-10-23 04:15, Don Wieland wrote:

Hi gang,

I have a query:

SELECT
p.pk_ProductID,
p.Description,
i.Quantity

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
i.fk_InvoiceID IN (1,2,3)

WHERE p.pk_ProductID IN (1,2,3);

It produces a list like the following:

1,Banana,3
2,Orange,1
2,Orange,4
3,Melon,3
3,Melon,3

I want to SUM the i.Quantity per ProductID, but I am unable to get the
scope/syntax correct. I was expecting the following would work:

SELECT
p.pk_ProductID,
p.Description,
SUM(i.Quantity)

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
i.fk_InvoiceID IN (1,2,3)

WHERE p.pk_ProductID IN (1,2,3)
GROUP BY i.fk_ProductID;

but it is not working.


Little help please. Thanks!


Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band


--
Mogens
+66 8701 33224





Re: Query Help...

2015-10-22 Thread shawn l.green



On 10/22/2015 11:48 AM, Don Wieland wrote:



On Oct 20, 2015, at 1:24 PM, shawn l.green  wrote:

Which release of MySQL are you using?


Version 5.5.45-cll


How many rows do you get if you remove the GROUP_CONCAT operator? We don't need 
to see the results. (sometimes it is a good idea to look at the raw, 
unprocessed results)

Is it possible that you are attempting to concat more values than allowed by 
--group-concat-max-len ?


When I did this I realized I was missing a GROUP BY clause

Here is the debugged working version. Thanks guys.

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id
WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2015-01-01 00:00:00" AND "2015-12-31 
23:59:59"
GROUP BY ht.`transaction_id`
ORDER BY ht.tr_date DESC, ht.rec_code ASC;

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band




Thank you for sharing your solution.

Best wishes,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Query Summary Help...

2015-10-22 Thread Michael Dykman
I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
of i.fk...? It is the actual value you are selecting as well as being on
the primary table in the query.

On Thu, Oct 22, 2015, 5:18 PM Don Wieland <d...@pointmade.net> wrote:

> Hi gang,
>
> I have a query:
>
> SELECT
> p.pk_ProductID,
> p.Description,
> i.Quantity
>
> FROM invoice_invoicelines_Product p
> JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
> i.fk_InvoiceID IN (1,2,3)
>
> WHERE p.pk_ProductID IN (1,2,3);
>
> It produces a list like the following:
>
> 1,Banana,3
> 2,Orange,1
> 2,Orange,4
> 3,Melon,3
> 3,Melon,3
>
> I want to SUM the i.Quantity per ProductID, but I am unable to get the
> scope/syntax correct. I was expecting the following would work:
>
> SELECT
> p.pk_ProductID,
> p.Description,
> SUM(i.Quantity)
>
> FROM invoice_invoicelines_Product p
> JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
> i.fk_InvoiceID IN (1,2,3)
>
> WHERE p.pk_ProductID IN (1,2,3)
> GROUP BY i.fk_ProductID;
>
> but it is not working.
>
>
> Little help please. Thanks!
>
>
> Don Wieland
> d...@pointmade.net
> http://www.pointmade.net
> https://www.facebook.com/pointmade.band
>
>
>
>
>


Re: Query Summary Help...

2015-10-22 Thread Don Wieland

> On Oct 22, 2015, at 2:41 PM, Michael Dykman <mdyk...@gmail.com> wrote:
> 
> I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
> of i.fk...? It is the actual value you are selecting as well as being on
> the primary table in the query.

Yeah I tried that - actually the SUM I need is on the JOIN relationship - 
results should be:

1,Banana,3
2,Orange,5
3,Melon,6

Thanks!

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






Query Summary Help...

2015-10-22 Thread Don Wieland
Hi gang,

I have a query:

SELECT 
p.pk_ProductID, 
p.Description, 
i.Quantity  

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND 
i.fk_InvoiceID IN (1,2,3)  

WHERE p.pk_ProductID IN (1,2,3);

It produces a list like the following:

1,Banana,3
2,Orange,1
2,Orange,4
3,Melon,3
3,Melon,3

I want to SUM the i.Quantity per ProductID, but I am unable to get the 
scope/syntax correct. I was expecting the following would work:

SELECT 
p.pk_ProductID, 
p.Description, 
SUM(i.Quantity)  

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND 
i.fk_InvoiceID IN (1,2,3)  

WHERE p.pk_ProductID IN (1,2,3)
GROUP BY i.fk_ProductID;

but it is not working.


Little help please. Thanks!


Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






Re: Query Summary Help...

2015-10-22 Thread Michael Dykman
One more guess:

Try explicitly aliasing the fields of interest and using those aliases
exclusively throughout the rest of the expression.

SELECT
p.pk_ProductID as pid,
p.Description as dsc,
SUM(i.Quantity) as totl

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON pid = i.fk_ProductID

WHERE pid IN (1,2,3)
AND i.fk_InvoiceID IN (1,2,3)
GROUP BY pid;

Note that I moved the invoiceID clause out of the join condition into the
where filter. The ON clause should only contain expressions of relational
interest.

On Thu, Oct 22, 2015, 6:00 PM Don Wieland <d...@pointmade.net> wrote:

>
> > On Oct 22, 2015, at 2:41 PM, Michael Dykman <mdyk...@gmail.com> wrote:
> >
> > I'm not at a terminal but have you tried grouping by p.pk_ProductID
> instead
> > of i.fk...? It is the actual value you are selecting as well as being on
> > the primary table in the query.
>
> Yeah I tried that - actually the SUM I need is on the JOIN relationship -
> results should be:
>
> 1,Banana,3
> 2,Orange,5
> 3,Melon,6
>
> Thanks!
>
> Don Wieland
> d...@pointmade.net
> http://www.pointmade.net
> https://www.facebook.com/pointmade.band
>
>
>
>
>


Re: Query Help...

2015-10-22 Thread Don Wieland

> On Oct 20, 2015, at 1:24 PM, shawn l.green  wrote:
> 
> Which release of MySQL are you using?

Version 5.5.45-cll

> How many rows do you get if you remove the GROUP_CONCAT operator? We don't 
> need to see the results. (sometimes it is a good idea to look at the raw, 
> unprocessed results)
> 
> Is it possible that you are attempting to concat more values than allowed by 
> --group-concat-max-len ?

When I did this I realized I was missing a GROUP BY clause

Here is the debugged working version. Thanks guys.

SELECT 
ht.*, 
CONCAT(o.first_name, " ", o.last_name) AS orphan, 
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc 
FROM hiv_transactions ht 
LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id 
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id 
WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2015-01-01 00:00:00" AND 
"2015-12-31 23:59:59" 
GROUP BY ht.`transaction_id`
ORDER BY ht.tr_date DESC, ht.rec_code ASC;

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






Re: Query optimizer-miss with unqualified expressions, bug or feature?

2015-10-20 Thread Johan De Meersman
- Original Message -
> From: "Shawn Green" <shawn.l.gr...@oracle.com>
> Subject: Re: Query optimizer-miss with unqualified expressions, bug or 
> feature?
> 
> On a more serious note, indexes with limited cardinality are less useful
> than those with excellent cardinality. Cardinality is an approximation
> (or calculation. It depends on your storage engine) of how many unique
> values there are in the index.

On a related note, are there any plans (and could you offer a rough timeframe?) 
to include bitmap indices in MySQL?


Thanks,
Johan

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: Query Help...

2015-10-20 Thread Peter Brawley

On 2015-10-20 12:54 PM, Don Wieland wrote:

Hi all,

Trying to get a query working:

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc

FROM hiv_transactions ht

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01 00:00:00" AND
"2014-12-31 23:59:59"
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only getting one row of the “hiv_transactions” table when there are
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child
rec_code values with no duplicates.

Appreciate any help. Hopefully a small mod ;-)


Group_Concat() is an aggregating function, so you need to Group By the
column(s) on which you wish to aggregate, and for valid results you need
to limit the selected columns to those on which you're aggregating plus
those columns that have unique values for your aggregating columns.


PB





Don Wieland
D W   D a t a






Query Help...

2015-10-20 Thread Don Wieland
Hi all,

Trying to get a query working:

SELECT 
ht.*, 
CONCAT(o.first_name, " ", o.last_name) AS orphan, 
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc 

FROM hiv_transactions ht 

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id 
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id 

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01
00:00:00" AND "2014-12-31 23:59:59"
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only getting one row of the “hiv_transactions” table when there are
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child
rec_code values with no duplicates.

Appreciate any help. Hopefully a small mod ;-)


Don Wieland
D W   D a t a  

Re: Query Help...

2015-10-20 Thread shawn l.green



On 10/20/2015 1:54 PM, Don Wieland wrote:

Hi all,

Trying to get a query working:

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc

FROM hiv_transactions ht

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01 00:00:00" AND "2014-12-31 
23:59:59"
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only getting one row of the “hiv_transactions” table when there are
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child
rec_code values with no duplicates.

Appreciate any help. Hopefully a small mod ;-)


Don Wieland



Which release of MySQL are you using?

How many rows do you get if you remove the GROUP_CONCAT operator? We 
don't need to see the results. (sometimes it is a good idea to look at 
the raw, unprocessed results)


Is it possible that you are attempting to concat more values than 
allowed by --group-concat-max-len ?


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Query optimizer-miss with unqualified expressions, bug or feature?

2015-10-19 Thread Ben Clewett


I have noticed that an unqualified boolean expression cannot be 
optimized by MySQL to use an index in 5.6.24.


For example:

CREATE TABLE t (
  i INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  a BOOLEAN NOT NULL,
  KEY a (a)
) ENGINE=InnoDB;

This will hit key 'a':

SELECT * FROM t WHERE a = TRUE;

This will return the same results, but not use key 'a':

SELECT * FROM t WHERE a;

Is this a bug, or deliberate behaviour, or a missing feature, or perhaps 
something else?


Thanks,

Ben Clewett.







Re: Query optimizer-miss with unqualified expressions, bug or feature?

2015-10-19 Thread Ben Clewett

Hi Roy,

Thanks for the clear explanation.

I guess (hypothetically) the optimizer could see if it has a key, and
then use two range scans: one on 'a > 0' and one on 'a < 0', taking a union
of the result?  Which might make a significant difference to something?
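
Spelled out, that rewrite would be something like (a sketch):

SELECT * FROM t WHERE a > 0
UNION ALL
SELECT * FROM t WHERE a < 0;

where both branches are range predicates that can use key 'a', unlike the
original 'a <> 0'.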


Ben.


On 2015-10-19 14:19, Roy Lyseng wrote:

Hi Ben,

On 19.10.15 15.10, Ben Clewett wrote:


I have noticed that an unqualified boolean expression cannot be 
optimized by

MySQL to use an index in 5.6.24.

For example:

CREATE TABLE t (
   i INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   a BOOLEAN NOT NULL,
   KEY a (a)
) ENGINE=InnoDB;

This will hit key 'a':

SELECT * FROM t WHERE a = TRUE;

This will return the same results, but not use key 'a':

SELECT * FROM t WHERE a;

Is this a bug, or deliberate behaviour, or a missing feature, or perhaps
something else?


MySQL does not have a true boolean type, so this is actually 
interpreted as


  SELECT * FROM t WHERE a <> 0;

The optimizer is not able to see that "a <> 0" means "a = 1", and 
hence no index will be used.


Thanks,
Roy



Thanks,

Ben Clewett.











Re: Query optimizer-miss with unqualified expressions, bug or feature?

2015-10-19 Thread Roy Lyseng

Hi Ben,

On 19.10.15 15.10, Ben Clewett wrote:


I have noticed that an unqualified boolean expression cannot be optimized by
MySQL to use an index in 5.6.24.

For example:

CREATE TABLE t (
   i INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   a BOOLEAN NOT NULL,
   KEY a (a)
) ENGINE=InnoDB;

This will hit key 'a':

SELECT * FROM t WHERE a = TRUE;

This will return the same results, but not use key 'a':

SELECT * FROM t WHERE a;

Is this a bug, or deliberate behaviour, or a missing feature, or perhaps
something else?


MySQL does not have a true boolean type, so this is actually interpreted as

  SELECT * FROM t WHERE a <> 0;

The optimizer is not able to see that "a <> 0" means "a = 1", and hence no index 
will be used.
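
This is easy to see with EXPLAIN on the table above (a sketch; the key
column is the part to watch):

EXPLAIN SELECT * FROM t WHERE a = TRUE;  -- expected: key = a
EXPLAIN SELECT * FROM t WHERE a;         -- expected: key = NULL, full scan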


Thanks,
Roy



Thanks,

Ben Clewett.










Re: Query optimizer-miss with unqualified expressions, bug or feature?

2015-10-19 Thread Roy Lyseng

Hi Shawn,

On 19.10.15 22.33, shawn l.green wrote:



On 10/19/2015 3:48 PM, Roy Lyseng wrote:

Hi Ben,

On 19.10.15 16.07, Ben Clewett wrote:

Hi Roy,

Thanks for the clear explanation.

I guess (hypothetically) the optimizer could see if it has a key, and
then use
two range scans: one on 'a > 0' and one on 'a < 0', taking a union of the
result?
Which might make a significant difference to something?


That is correct. However, if the substitution type for BOOLEAN was
UNSIGNED TINYINT instead of TINYINT, the range 'a < 0' would not even be
needed, and the quite cumbersome UNION would be avoided. But the best
solution would of course be a two-valued boolean type, where 'a <> 0'
would easily be transformed to a = 1. It would also mean that statistics
for the columns would be better: with TINYINT each value has an
estimated probability of 1/256, whereas a boolean value would have
probability 1/2.




256 possible values of BOOLEAN? I've heard of fuzzy logic but this is awesome!
Some new literal value names to consider: maybe, sort_of, nearly_always,
certainly, practically_never, likely, ...
*


Well, it is the practical consequence of using TINYINT as the substitution type 
for BOOLEAN...





On a more serious note, indexes with limited cardinality are less useful than
those with excellent cardinality. Cardinality is an approximation (or a
calculation, depending on your storage engine) of how many unique values there
are in the index.

If the Optimizer estimates (based on a calculation derived from the Cardinality)
that more than about 30% of a table would need to be retrieved in random order
based on an index, then that index is disallowed. Why? Because the physical
disk overhead of doing random access averages just slightly more than 3x the
overhead of scanning a much larger block of data.

http://dev.mysql.com/doc/refman/5.6/en/how-to-avoid-table-scan.html

An index on just a Boolean value would have at best a cardinality of 2. So, any
indexes on Boolean values should include other columns to help the index become
more selective.

http://dev.mysql.com/doc/refman/5.6/en/multiple-column-indexes.html


You are right about the index use, so it would be interesting only with a 
significant skew, say 10% TRUE values. However, the optimizer is not only about 
indexing, but also about calculating the filtering effect of a predicate. Using 
a true BOOLEAN rather than a TINYINT would give a better estimate of the 
filtering effect, and thus of the estimated number of rows as the outcome of a 
query.





*Actually, fuzzy logic has lots of practical application in real world
situations. They are just not using the MySQL BOOLEAN data type to store the
value for comparison.



Thanks,
Roy

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query optimizer-miss with unqualified expressions, bug or feature?

2015-10-19 Thread shawn l.green



On 10/19/2015 3:48 PM, Roy Lyseng wrote:

Hi Ben,

On 19.10.15 16.07, Ben Clewett wrote:

Hi Roy,

Thanks for the clear explanation.

I guess (hypothetically) the optimizer could see if it has a key, and
then use
two range scans: one on 'a > 0' and one on 'a < 0', taking a union of the
results?
Which might make a significant difference to some queries?


That is correct. However, if the substitution type for BOOLEAN was
UNSIGNED TINYINT instead of TINYINT, the range 'a < 0' would not even be
needed, and the quite cumbersome UNION would be avoided. But the best
solution would of course be a two-valued boolean type, where 'a <> 0'
would easily be transformed to a = 1. It would also mean that statistics
for the columns would be better: with TINYINT each value has an
estimated probability of 1/256, whereas a boolean value would have
probability 1/2.




256 possible values of BOOLEAN? I've heard of fuzzy logic but this is 
awesome! Some new literal value names to consider: maybe, sort_of, 
nearly_always, certainly, practically_never, likely, ...

*


On a more serious note, indexes with limited cardinality are less useful 
than those with excellent cardinality. Cardinality is an approximation 
(or a calculation, depending on your storage engine) of how many unique 
values there are in the index.


If the Optimizer estimates (based on a calculation derived from the 
Cardinality) that more than about 30% of a table would need to be 
retrieved in random order based on an index, then that index is 
disallowed. Why? Because the physical disk overhead of doing random 
access averages just slightly more than 3x the overhead of scanning a 
much larger block of data.


http://dev.mysql.com/doc/refman/5.6/en/how-to-avoid-table-scan.html

An index on just a Boolean value would have at best a cardinality of 2. 
So, any indexes on Boolean values should include other columns to help 
the index become more selective.


http://dev.mysql.com/doc/refman/5.6/en/multiple-column-indexes.html
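
As a sketch of that idea (the table and column names here are invented 
for illustration):

CREATE TABLE events (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  active BOOLEAN NOT NULL,
  created DATETIME NOT NULL,
  KEY active_created (active, created)  -- equality column first, range column second
) ENGINE=InnoDB;

-- The boolean alone splits the table roughly in half; adding created
-- makes the index selective enough to be worth a range scan:
SELECT id FROM events WHERE active = 1 AND created >= '2015-10-01';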


*Actually, fuzzy logic has lots of practical application in real world 
situations. They are just not using the MySQL BOOLEAN data type to store 
the value for comparison.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query optimizer-miss with unqualified expressions, bug or feature?

2015-10-19 Thread Roy Lyseng

Hi Ben,

On 19.10.15 16.07, Ben Clewett wrote:

Hi Roy,

Thanks for the clear explanation.

I guess (hypothetically) the optimizer could see if it has a key, and then use
two range scans: one on 'a > 0' and one on 'a < 0', taking a union of the results?
Which might make a significant difference to some queries?


That is correct. However, if the substitution type for BOOLEAN was UNSIGNED 
TINYINT instead of TINYINT, the range 'a < 0' would not even be needed, and the 
quite cumbersome UNION would be avoided. But the best solution would of course 
be a two-valued boolean type, where 'a <> 0' would easily be transformed to a = 
1. It would also mean that statistics for the columns would be better: with 
TINYINT each value has an estimated probability of 1/256, whereas a boolean value 
would have probability 1/2.
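
(With an UNSIGNED substitution type, Ben's hypothetical rewrite would 
collapse to a single range, something like

  SELECT * FROM t WHERE a > 0;

since negative values could not exist.)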


Thanks,
Roy



Ben.


On 2015-10-19 14:19, Roy Lyseng wrote:

Hi Ben,

On 19.10.15 15.10, Ben Clewett wrote:


I have noticed that an unqualified boolean expression cannot be optimized by
MySQL to use an index in 5.6.24.

For example:

CREATE TABLE t (
   i INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   a BOOLEAN NOT NULL,
   KEY a (a)
) ENGINE=InnoDB;

This will hit key 'a':

SELECT * FROM t WHERE a = TRUE;

This will return the same results, but not use key 'a':

SELECT * FROM t WHERE a;

Is this a bug, or deliberate behaviour, or a missing feature, or perhaps
something else?


MySQL does not have a true boolean type, so this is actually interpreted as

  SELECT * FROM t WHERE a <> 0;

The optimizer is not able to see that "a <> 0" means "a = 1", and hence no
index will be used.

Thanks,
Roy



Thanks,

Ben Clewett.









--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Relational query question

2015-10-01 Thread Divesh Kamra
It is better to LEFT JOIN rather than NOT IN. For example:
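
Using the tables below, and assuming Joey is customer 1 (just a sketch):

SELECT f.*
FROM fruit f
LEFT JOIN purchases p
  ON p.FRUIT_ID = f.ID
 AND p.CUST_ID = 1
WHERE p.ID IS NULL;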




On Wed, Sep 30, 2015 at 6:00 PM, Mogens Melander <mog...@fumlersoft.dk>
wrote:

> Maybe not the most optimal, but (probably) the most simple:
>
> SELECT * FROM fruit
> where ID not in (select FRUIT_ID from purchases
> where CUST_ID=1);
>
> 1, 'Apples'
> 3, 'Oranges'
>
>
> On 2015-09-30 00:01, Richard Reina wrote:
>
>> If I have three simple tables:
>>
>> mysql> select * from customer;
>> +----+--------+
>> | ID | NAME   |
>> +----+--------+
>> |  1 | Joey   |
>> |  2 | Mike   |
>> |  3 | Kellie |
>> +----+--------+
>> 3 rows in set (0.00 sec)
>>
>> mysql> select * from fruit;
>> +----+---------+
>> | ID | NAME    |
>> +----+---------+
>> |  1 | Apples  |
>> |  2 | Grapes  |
>> |  3 | Oranges |
>> |  4 | Kiwis   |
>> +----+---------+
>> 4 rows in set (0.00 sec)
>>
>> mysql> select * from purchases;
>> +----+---------+----------+
>> | ID | CUST_ID | FRUIT_ID |
>> +----+---------+----------+
>> |  2 |       3 |        2 |
>> |  3 |       1 |        4 |
>> |  4 |       1 |        2 |
>> |  5 |       2 |        1 |
>> +----+---------+----------+
>>
>> I am having trouble understanding a relational query. How can I select
>> those fruits that Joey has not purchased?
>>
>
> --
> Mogens
> +66 8701 33224
>
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


Re: Relational query question

2015-09-30 Thread Mogens Melander

Maybe not the most optimal, but (probably) the most simple:

SELECT * FROM fruit
where ID not in (select FRUIT_ID from purchases
where CUST_ID=1);

1, 'Apples'
3, 'Oranges'

On 2015-09-30 00:01, Richard Reina wrote:

If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?


--
Mogens
+66 8701 33224




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Relational query question

2015-09-29 Thread Richard Reina
If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?


Re: Relational query question

2015-09-29 Thread shawn l.green



On 9/29/2015 1:27 PM, Ron Piggott wrote:



On 29/09/15 13:01, Richard Reina wrote:

If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?



I think you are going to want to use a "LEFT JOIN" using "purchases" as
the common table to join with a WHERE purchases.FRUIT_ID IS NULL





SELECT f.name
FROM fruit f
LEFT JOIN (
  purchases p
  INNER JOIN customer c
    on p.cust_id = c.id
    and c.name='Joey'
) on f.id = p.fruit_id
WHERE c.id IS NULL;

You have to make that "and...Joey" term part of the LEFT JOIN (note the 
parentheses: purchases and customer are joined to each other first, and 
that whole unit is LEFT JOINed to fruit) so that it is selective for 
just "what Joey bought". It is the WHERE c.id IS NULL part that filters 
out and returns only the stuff that Joey did not buy.


If you put the c.name='Joey' term in the WHERE clause, then you force a 
value to exist at that point of the query, turning your LEFT JOIN into 
an INNER JOIN (which would only show you what Joey did buy).


If you put WHERE c.name !='Joey' into the WHERE clause, then you would 
get the list of fruits that anyone else but Joey had purchased.
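
For comparison, the same anti-join can also be written with NOT EXISTS 
(equivalent results, given these tables):

SELECT f.name
FROM fruit f
WHERE NOT EXISTS (
  SELECT 1
  FROM purchases p
  INNER JOIN customer c
    ON c.id = p.cust_id
  WHERE p.fruit_id = f.id
    AND c.name = 'Joey'
);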


To see how this works and to understand the process a little better, 
expose all 3 layers of the problem as a big matrix (you'll get all 48 
row combinations).


SELECT f.id as f_id, f.name, p.id as p_id, p.cust_id, p.fruit_id, c.id 
as c_id,  c.name

FROM fruit f
LEFT JOIN purchases p
  on f.id = p.fruit_id
LEFT JOIN customer c
  on p.cust_id = c.id


From here, look at when the columns are NULL and when they aren't. Then 
experiment with different conditions. You are almost there. This should 
push you right to the top of the learning curve.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Relational query question

2015-09-29 Thread Ron Piggott



On 29/09/15 13:01, Richard Reina wrote:

If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?



I think you are going to want to use a "LEFT JOIN" using "purchases" as 
the common table to join with a WHERE purchases.FRUIT_ID IS NULL




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: server-side logging of query errors?

2015-06-25 Thread Andrew Moore
Furthermore, use Logstash to collect the audit events and you can filter
out anything that wasn't an error and move it to a query error log.

On Wed, Jun 24, 2015 at 5:32 PM, Singer Wang w...@singerwang.com wrote:

 Yep, as shown below:

 root@audit-db.ec2:(none)> select fark from fark from fark fark fark from
 frak;
 ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
 that corresponds to your MySQL server version for the right syntax to use
 near 'from fark fark fark from frak' at line 1
 root@audit-db.ec2:(none)>

 Jun 24 16:29:52 audit-db percona-audit:

 {"audit_record":{"name":"Query","record":"217_2015-06-24T16:29:52","timestamp":"2015-06-24T16:29:52
 UTC","command_class":"error","connection_id":59,"status":1064,"sqltext":"select
 fark from fark from fark fark fark from frak","user":"root[root] @
 localhost []","host":"localhost","os_user":"","ip":""}}

 error 1064


 On Wed, Jun 24, 2015 at 11:40 AM, Tomasz Chmielewski man...@wpkg.org
 wrote:

  Normal general log also logs everything including bad queries (i.e.
 SLCECT
  BLAH BLAH;) - although it does not say if it was an error (i.e. syntax) or
 not.
 
  Does the audit plugin log the actual error?
 
 
  Tomasz
 
  On 2015-06-25 00:32, Singer Wang wrote:
 
  The performance hit of the Percona Audit is 15% for disk logging and
  for remote syslog we found it is lower. It logs everything including
  bad queries (eg. select fark from fark from fark fark fark from frak).
  You should be able to write a JSON parser that extracts what you want
  based on the log (eg. STATUS, COMMAND, NAME).
 
  On Wed, Jun 24, 2015 at 11:05 AM, Johan De Meersman
  vegiv...@tuxera.be wrote:
 
   -
 
   FROM: Singer X.J. Wang w...@singerwang.com
  SUBJECT: Re: server-side logging of query errors?
 
 
   You could log all queries using the audit plugin, 15% hit..
 
  Fair point, though: maybe one of the different audit plugins has
  the capability to specifically log faulty requests. Have a look
  through the slides from Percona Live London 2014, there was a talk
  about auditing.
 
  --
 
  Unhappiness is discouraged and will be corrected with kitten
  pictures.
 
 
 



Re: server-side logging of query errors?

2015-06-24 Thread Johan De Meersman
 From: Singer X.J. Wang w...@singerwang.com
 Subject: Re: server-side logging of query errors?

 You could log all queries using the audit plugin, 15% hit..

Fair point, though: maybe one of the different audit plugins has the capability 
to specifically log faulty requests. Have a look through the slides from 
Percona Live London 2014, there was a talk about auditing. 

-- 
Unhappiness is discouraged and will be corrected with kitten pictures. 


Re: server-side logging of query errors?

2015-06-24 Thread Singer Wang
The performance hit of the Percona Audit is 15% for disk logging and for
remote syslog we found it is lower. It logs everything including bad
queries (eg. select fark from fark from fark fark fark from frak). You
should be able to write a JSON parser that extracts what you want based on
the log (eg. STATUS, COMMAND, NAME).

On Wed, Jun 24, 2015 at 11:05 AM, Johan De Meersman vegiv...@tuxera.be
wrote:


 --

 *From: *Singer X.J. Wang w...@singerwang.com
 *Subject: *Re: server-side logging of query errors?

 You could log all queries using the audit plugin, 15% hit..

 Fair point, though: maybe one of the different audit plugins has the
 capability to specifically log faulty requests. Have a look through the
 slides from Percona Live London 2014, there was a talk about auditing.



 --
 Unhappiness is discouraged and will be corrected with kitten pictures.



Re: server-side logging of query errors?

2015-06-24 Thread Singer Wang
Yep, as shown below:

root@audit-db.ec2:(none)> select fark from fark from fark fark fark from
frak;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use
near 'from fark fark fark from frak' at line 1
root@audit-db.ec2:(none)>

Jun 24 16:29:52 audit-db percona-audit:
{"audit_record":{"name":"Query","record":"217_2015-06-24T16:29:52","timestamp":"2015-06-24T16:29:52
UTC","command_class":"error","connection_id":59,"status":1064,"sqltext":"select
fark from fark from fark fark fark from frak","user":"root[root] @
localhost []","host":"localhost","os_user":"","ip":""}}

error 1064


On Wed, Jun 24, 2015 at 11:40 AM, Tomasz Chmielewski man...@wpkg.org
wrote:

 Normal general log also logs everything including bad queries (i.e. SLCECT
 BLAH BLAH;) - although it does not say if it was an error (i.e. syntax) or not.

 Does the audit plugin log the actual error?


 Tomasz

 On 2015-06-25 00:32, Singer Wang wrote:

 The performance hit of the Percona Audit is 15% for disk logging and
 for remote syslog we found it is lower. It logs everything including
 bad queries (eg. select fark from fark from fark fark fark from frak).
 You should be able to write a JSON parser that extracts what you want
 based on the log (eg. STATUS, COMMAND, NAME).

 On Wed, Jun 24, 2015 at 11:05 AM, Johan De Meersman
 vegiv...@tuxera.be wrote:

  -

  FROM: Singer X.J. Wang w...@singerwang.com
 SUBJECT: Re: server-side logging of query errors?


  You could log all queries using the audit plugin, 15% hit..

 Fair point, though: maybe one of the different audit plugins has
 the capability to specifically log faulty requests. Have a look
 through the slides from Percona Live London 2014, there was a talk
 about auditing.

 --

 Unhappiness is discouraged and will be corrected with kitten
 pictures.





Re: server-side logging of query errors?

2015-06-24 Thread Tomasz Chmielewski
The normal general log also logs everything including bad queries (i.e. 
SLCECT BLAH BLAH;) - although it does not say if it was an error (i.e. 
syntax) or not.


Does the audit plugin log the actual error?


Tomasz

On 2015-06-25 00:32, Singer Wang wrote:

The performance hit of the Percona Audit is 15% for disk logging and
for remote syslog we found it is lower. It logs everything including
bad queries (eg. select fark from fark from fark fark fark from frak).
You should be able to write a JSON parser that extracts what you want
based on the log (eg. STATUS, COMMAND, NAME).

On Wed, Jun 24, 2015 at 11:05 AM, Johan De Meersman
vegiv...@tuxera.be wrote:


-


FROM: Singer X.J. Wang w...@singerwang.com
SUBJECT: Re: server-side logging of query errors?



You could log all queries using the audit plugin, 15% hit..

Fair point, though: maybe one of the different audit plugins has
the capability to specifically log faulty requests. Have a look
through the slides from Percona Live London 2014, there was a talk
about auditing.

--

Unhappiness is discouraged and will be corrected with kitten
pictures.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: server-side logging of query errors?

2015-06-23 Thread Singer Wang
You could log all queries using the audit plugin, 15% hit..

On Tue, Jun 23, 2015 at 4:54 AM, Johan De Meersman vegiv...@tuxera.be
wrote:

 - Original Message -
  From: Tomasz Chmielewski man...@wpkg.org

  It would be a mild security risk; a malicious
  (or just stupid, see Hanlon's razor) user could spam your server with
  malformed requests until the logging disk runs full, at which point
  the daemon would suspend operations until space is freed.
 
  I don't think it's a valid argument - the same is true right now for
  general query log. Any stupid/malicious user can produce loads of
  queries and fill the disk if one has general query log enabled.
 
  In short, anyone enabling any logging should consider what limitations
  it brings.

 Including quite a bit of overhead, which is why its use is discouraged in
 production :-)


 --
 Unhappiness is discouraged and will be corrected with kitten pictures.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




server-side logging of query errors?

2015-06-23 Thread Tomasz Chmielewski

Suppose I run a query which has a syntax error:

mysql> blah;
ERROR 1064 (42000): You have an error in your SQL syntax; check the 
manual that corresponds to your MySQL server version for the right 
syntax to use near 'blah' at line 1



How can I get mysql server to log this error?


According to the documentation:

http://dev.mysql.com/doc/refman/5.5/en/server-logs.html

- Error log - will only log mysqld errors - so, it won't log syntax 
errors in the query
- General query log - it will log all queries, but without indicating 
if it was an error or not



Is there a way to log query syntax errors on the server? Please assume 
that connection and the query can be coming from PHP, perl etc. code, so 
any /usr/bin/mysql stderr output redirecting is not helping here.



--
Tomasz Chmielewski
http://wpkg.org


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: server-side logging of query errors?

2015-06-23 Thread Johan De Meersman

None that I'm aware of. It would be a mild security risk; a malicious (or just 
stupid, see Hanlon's razor) user could spam your server with malformed requests 
until the logging disk runs full, at which point the daemon would suspend 
operations until space is freed.

Maybe one of the proxies out there has support for such logging, I'm not really 
familiar with any of them.


- Original Message -
 From: Tomasz Chmielewski man...@wpkg.org
 To: MySql mysql@lists.mysql.com
 Sent: Tuesday, 23 June, 2015 09:35:46
 Subject: server-side logging of query errors?

 Suppose I run a query which has a syntax error:
 
 mysql> blah;
 ERROR 1064 (42000): You have an error in your SQL syntax; check the
 manual that corresponds to your MySQL server version for the right
 syntax to use near 'blah' at line 1
 
 
 How can I get mysql server to log this error?
 
 
 According to the documentation:
 
 http://dev.mysql.com/doc/refman/5.5/en/server-logs.html
 
 - Error log - will only log mysqld errors - so, it won't log syntax
 errors in the query
 - General query log - it will log all queries, but without indicating
 if it was an error or not
 
 
 Is there a way to log query syntax errors on the server? Please assume
 that connection and the query can be coming from PHP, perl etc. code, so
 any /usr/bin/mysql stderr output redirecting is not helping here.
 
 
 --
 Tomasz Chmielewski
 http://wpkg.org
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: server-side logging of query errors?

2015-06-23 Thread Tomasz Chmielewski

On 2015-06-23 17:29, Johan De Meersman wrote:

None that I'm aware of.


It's a pity!
This could ease debugging in many cases, without the need to change the 
client (i.e. PHP/Perl/Python code which sends the queries, and we 
suspect that some of them are bogus).




It would be a mild security risk; a malicious
(or just stupid, see Hanlon's razor) user could spam your server with
malformed requests until the logging disk runs full, at which point
the daemon would suspend operations until space is freed.


I don't think it's a valid argument - the same is true right now for 
general query log. Any stupid/malicious user can produce loads of 
queries and fill the disk if one has general query log enabled.


In short, anyone enabling any logging should consider what limitations 
it brings.




Maybe one of the proxies out there has support for such logging, I'm
not really familiar with any of them.




--
Tomasz Chmielewski
http://wpkg.org


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: server-side logging of query errors?

2015-06-23 Thread Johan De Meersman
- Original Message -
 From: Tomasz Chmielewski man...@wpkg.org

 It would be a mild security risk; a malicious
 (or just stupid, see Hanlon's razor) user could spam your server with
 malformed requests until the logging disk runs full, at which point
 the daemon would suspend operations until space is freed.
 
 I don't think it's a valid argument - the same is true right now for
 general query log. Any stupid/malicious user can produce loads of
 queries and fill the disk if one has general query log enabled.
 
 In short, anyone enabling any logging should consider what limitations
 it brings.

Including quite a bit of overhead, which is why its use is discouraged in 
production :-)


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hi Larry,

On 2/1/2015 4:49 PM, Larry Martell wrote:

I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
MIN(data_cst.date_time) start,
MAX(data_cst.date_time) end,
MIN(data_target.name) as target,
MIN(data_lot.name) as lot,
MIN(data_wafer.name) as wafer,
MIN(measname) as measname,
MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
   data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
MIN(data_cst.date_time) start,
MAX(data_cst.date_time) end,
MIN(data_target.name) as target,
MIN(data_lot.name) as lot,
MIN(data_wafer.name) as wafer,
MIN(measname) as measname,
MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
   data_recipe.id IN (148) AND
   data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...


Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE 
term against the data_recipe table.


Compare the two EXPLAINS, in the faster query you see that data_recipe 
is listed second. This allows the additional term a chance to reduce the 
number of row combinations for the entire query.


To really get at the logic behind how the Optimizer chooses its 
execution plan, get an optimizer trace. Look at the cost estimates for 
each phase being considered.

http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html
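
The basic recipe looks roughly like this (5.6 syntax):

SET optimizer_trace="enabled=on";
-- (run the slow query here)
SELECT * FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE\G
SET optimizer_trace="enabled=off";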

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hi Larry,

On 2/4/2015 3:18 PM, Larry Martell wrote:

On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

Hi Larry,


On 2/1/2015 4:49 PM, Larry Martell wrote:


I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
data_recipe.id IN (148) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...



Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE term
against the data_recipe table.

Compare the two EXPLAINS, in the faster query you see that data_recipe is
listed second. This allows the additional term a chance to reduce the number
of row combinations for the entire query.

To really get at the logic behind how the Optimizer chooses its execution
plan, get an optimizer trace. Look at the cost estimates for each phase
being considered.
http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html


Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile I changed the query to select from data_cst using the where
clause into a temp table and then I join the temp table with the other
tables. That has improved the slow query from 4 hours to 10 seconds
(!)



Did you also add an index to the temporary table for the JOIN condition? 
It might make it even faster.
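
Something like this, as a sketch (tmp_cst standing in for whatever you 
named the temp table):

ALTER TABLE tmp_cst ADD INDEX (target_name_id);  -- or whichever column your JOINs use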


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread Larry Martell
On Wed, Feb 4, 2015 at 3:25 PM, shawn l.green shawn.l.gr...@oracle.com wrote:
 Hi Larry,


 On 2/4/2015 3:18 PM, Larry Martell wrote:

 On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com
 wrote:

 Hi Larry,


 On 2/1/2015 4:49 PM, Larry Martell wrote:


 I have 2 queries. One takes 4 hours to run and returns 21 rows, and
 the other, which has 1 additional where clause, takes 3 minutes and
 returns 20 rows. The main table being selected from is largish
 (37,247,884 rows with 282 columns). Caching is off for my testing, so
 it's not related to that. To short circuit anyone asking, these
 queries are generated by python code, which is why there's an IN
 clause with 1 value, as opposed to an =.

 Here are the queries and their explains. The significant difference is
 that the faster query has Using
 intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
 those 2 indexes are on the 2 columns in the where clause, so that's
 why the second one is faster. But I am wondering what I can do to make
 the first one faster.


 4 hour query:

 SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
 FROM data_cst
 INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
 INNER JOIN data_target ON data_target.id = data_cst.target_name_id
 INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
 INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
 INNER JOIN data_measparams ON data_measparams.id =
 data_cst.meas_params_name_id
 INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
 WHERE data_target.id IN (172) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
 23:59:59'
 GROUP BY wafer_id, data_cst.lot_id, target_name_id


 ... snipped ...



 Faster query:

 SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
 FROM data_cst
 INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
 INNER JOIN data_target ON data_target.id = data_cst.target_name_id
 INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
 INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
 INNER JOIN data_measparams ON data_measparams.id =
 data_cst.meas_params_name_id
 INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
 WHERE data_target.id IN (172) AND
 data_recipe.id IN (148) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
 23:59:59'
 GROUP BY wafer_id, data_cst.lot_id, target_name_id

 ... snip ...



 Thanks for taking the time to read this, and for any help or pointers
 you can give me.


 The biggest difference is the added selectivity generated by the WHERE
 term
 against the data_recipe table.

 Compare the two EXPLAINS, in the faster query you see that data_recipe is
 listed second. This allows the additional term a chance to reduce the
 number
 of row combinations for the entire query.

 To really get at the logic behind how the Optimizer chooses its execution
 plan, get an optimizer trace. Look at the cost estimates for each phase
 being considered.
 http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
 http://dev.mysql.com/doc/internals/en/optimizer-tracing.html


 Thanks very much Shawn for the reply and the links. I will check those
 out and I'm sure I will find them very useful.

 Meanwhile I changed the query to select from data_cst using the where
 clause into a temp table and then I join the temp table with the other
 tables. That has improved the slow query from 4 hours to 10 seconds
 (!)


 Did you also add an index to the temporary table for the JOIN condition? It
 might make it even faster

No, I didn't. I (and the users) were so shocked and happy with the
massive improvement I moved on to make similar changes to other
queries.

This is a django app, and it's a one-shot deal - i.e. there's just the
one query run and the response is sent back to the browser and that's
the end of the session and the temp table. So I'm thinking it's
probably not worth it.

As an aside this change has messed up all my unit tests - they send
multiple requests, but they're all in the same session. So only the
first succeeds and the next one fails because the temp table already
exists. I haven't figured out how to get it to run each request in its
own session. I guess I'm going to have to drop the temp table after I
join with it before I send the response back.
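
Or I suppose I could just guard it at the top of each request (tmp_cst 
being whatever name the temp table actually has):

DROP TEMPORARY TABLE IF EXISTS tmp_cst;

which would make re-runs in the same session harmless.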

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread Larry Martell
On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com wrote:
 Hi Larry,


 On 2/1/2015 4:49 PM, Larry Martell wrote:

 I have 2 queries. One takes 4 hours to run and returns 21 rows, and
 the other, which has 1 additional where clause, takes 3 minutes and
 returns 20 rows. The main table being selected from is largish
 (37,247,884 rows with 282 columns). Caching is off for my testing, so
 it's not related to that. To short circuit anyone asking, these
 queries are generated by python code, which is why there's an IN
 clause with 1 value, as opposed to an =.

 Here are the queries and their explains. The significant difference is
 that the faster query has Using
 intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
 those 2 indexes are on the 2 columns in the where clause, so that's
 why the second one is faster. But I am wondering what I can do to make
 the first one faster.


 4 hour query:

 SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
 FROM data_cst
 INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
 INNER JOIN data_target ON data_target.id = data_cst.target_name_id
 INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
 INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
 INNER JOIN data_measparams ON data_measparams.id =
 data_cst.meas_params_name_id
 INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
 WHERE data_target.id IN (172) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
 23:59:59'
 GROUP BY wafer_id, data_cst.lot_id, target_name_id


 ... snipped ...



 Faster query:

 SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
 FROM data_cst
 INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
 INNER JOIN data_target ON data_target.id = data_cst.target_name_id
 INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
 INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
 INNER JOIN data_measparams ON data_measparams.id =
 data_cst.meas_params_name_id
 INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
 WHERE data_target.id IN (172) AND
data_recipe.id IN (148) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
 23:59:59'
 GROUP BY wafer_id, data_cst.lot_id, target_name_id

 ... snip ...


 Thanks for taking the time to read this, and for any help or pointers
 you can give me.


 The biggest difference is the added selectivity generated by the WHERE term
 against the data_recipe table.

 Compare the two EXPLAINS, in the faster query you see that data_recipe is
 listed second. This allows the additional term a chance to reduce the number
 of row combinations for the entire query.

 To really get at the logic behind how the Optimizer chooses its execution
 plan, get an optimizer trace. Look at the cost estimates for each phase
 being considered.
 http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
 http://dev.mysql.com/doc/internals/en/optimizer-tracing.html

Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile I changed the query to select from data_cst using the where
clause into a temp table and then I join the temp table with the other
tables. That has improved the slow query from 4 hours to 10 seconds
(!)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hello Larry,

On 2/4/2015 3:37 PM, Larry Martell wrote:

On Wed, Feb 4, 2015 at 3:25 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

Hi Larry,


On 2/4/2015 3:18 PM, Larry Martell wrote:


On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com
wrote:


Hi Larry,


On 2/1/2015 4:49 PM, Larry Martell wrote:



I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
 data_recipe.id IN (148) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...




Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE
term
against the data_recipe table.

Compare the two EXPLAINS, in the faster query you see that data_recipe is
listed second. This allows the additional term a chance to reduce the
number
of row combinations for the entire query.

To really get at the logic behind how the Optimizer chooses its execution
plan, get an optimizer trace. Look at the cost estimates for each phase
being considered.
http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html



Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile I changed the query to select from data_cst using the where
clause into a temp table and then I join the temp table with the other
tables. That has improved the slow query from 4 hours to 10 seconds
(!)



Did you also add an index to the temporary table for the JOIN condition? It
might make it even faster


No, I didn't. I (and the users) were so shocked and happy with the
massive improvement I moved on to make similar changes to other
queries.

This is a django app, and it's a one-shot deal - i.e. there's just the
one query run and the response is sent back to the browser and that's
the end of the session and the temp table. So I'm thinking it's
probably not worth it.

As an aside this change has messed up all my unit tests - they send
multiple requests, but they're all in the same session. So only the
first succeeds and the next one fails because the temp table already
exists. I haven't figured out how to get it to run each request in its
own session. I guess I'm going to have to drop the temp table after I
join with it before I send the response back.



If...
* it's a MEMORY temp table
* it's always the same table design

Then, you can use DELETE to clear the content (it's faster than a DROP 
and re-CREATE).
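
A sketch of that pattern (column names taken from the queries above; the 
table name tmp_cst is made up, and ENGINE=MEMORY assumes the selected 
columns fit in memory and include no TEXT/BLOB types):

CREATE TEMPORARY TABLE tmp_cst ENGINE=MEMORY AS
SELECT id, tool_id, target_name_id, lot_id, wafer_id,
       meas_params_name_id, recipe_id, date_time, measname
FROM data_cst
WHERE date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59';

-- ... run the joins against tmp_cst instead of data_cst ...

DELETE FROM tmp_cst;  -- clear it for the next request in the same session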

Help improving query performance

2015-02-01 Thread Larry Martell
I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
   MIN(data_cst.date_time) start,
   MAX(data_cst.date_time) end,
   MIN(data_target.name) as target,
   MIN(data_lot.name) as lot,
   MIN(data_wafer.name) as wafer,
   MIN(measname) as measname,
   MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
  data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id

Explain:

+----+-------------+-----------------+--------+-------------------+---------+-----------------------------+------+---------------------------------+
| id | select_type | table           | type   | key               | key_len | ref                         | rows | Extra                           |
+----+-------------+-----------------+--------+-------------------+---------+-----------------------------+------+---------------------------------+
|  1 | SIMPLE      | data_target     | const  | PRIMARY           | 4       | const                       |    1 | Using temporary; Using filesort |
|  1 | SIMPLE      | data_measparams | index  | PRIMARY           | 4       | NULL                        |  767 | Using index                     |
|  1 | SIMPLE      | data_cst        | ref    | data_cst_634020d0 | 5       | motor_gf.data_measparams.id |   48 | Using where                     |
|  1 | SIMPLE      | data_tool       | eq_ref | PRIMARY           | 4       | motor_gf.data_cst.tool_id   |    1 | NULL                            |
|  1 | SIMPLE      | data_recipe     | eq_ref | PRIMARY           | 4       | motor_gf.data_cst.recipe_id |    1 | NULL                            |
|  1 | SIMPLE      | data_lot        | eq_ref | PRIMARY           | 4       | motor_gf.data_cst.lot_id    |    1 | NULL                            |
|  1 | SIMPLE      | data_wafer      | eq_ref | PRIMARY           | 4       | motor_gf.data_cst.wafer_id  |    1 | NULL                            |
+----+-------------+-----------------+--------+-------------------+---------+-----------------------------+------+---------------------------------+

(possible_keys trimmed for width: data_cst listed data_cst_4262c4f4,
data_cst_01213b5c, data_cst_bbccbce0, data_cst_b0229011, data_cst_65c0,
data_cst_fba12377 and data_cst_634020d0; the other tables listed only
PRIMARY.)

Faster query:

SELECT MIN(data_tool.name) as tool,
   MIN(data_cst.date_time) start,
   MAX(data_cst.date_time) end,
   MIN(data_target.name) as target,
   MIN(data_lot.name) as lot,
   MIN(data_wafer.name) as wafer,
   MIN(measname) as measname,
   MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
  data_recipe.id IN (148) AND
  data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id

Re: Help optimize query.

2014-12-01 Thread shawn l.green

Hello Mimko,

Sorry for the late reply. I had a bunch of work to take care of before 
vacation, then there was the vacation itself. :)


On 11/13/2014 2:34 PM, Mimiko wrote:

Hello. I have this table:

  show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
   id int(10) unsigned NOT NULL AUTO_INCREMENT,
   date_log timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
   cc_agent varchar(45) NOT NULL,
   cc_agent_tier_status_id tinyint(3) unsigned NOT NULL,
   cc_queue_id tinyint(3) unsigned NOT NULL,
   cc_agent_id int(10) unsigned NOT NULL,
   cc_agent_phone smallint(5) unsigned NOT NULL,
   cc_agent_domain varchar(45) NOT NULL DEFAULT 'pbx01.apa-canal.md',
   PRIMARY KEY (id),
   KEY IDX_cc_agents_tier_status_log_2 (cc_agent) USING HASH,
   KEY IDX_cc_agents_tier_status_log_3 (date_log),
   KEY FK_cc_agents_tier_status_log_2 (cc_agent_id),
   KEY FK_cc_agents_tier_status_log_3 (cc_queue_id),
   KEY FK_cc_agents_tier_status_log_1 (cc_agent_tier_status_id)
USING BTREE,
   KEY IDX_cc_agents_tier_status_log_7 (id,date_log),
   CONSTRAINT FK_cc_agents_tier_status_log_1 FOREIGN KEY
(cc_agent_tier_status_id) REFERENCES cc_agent_tier_status_chart
(id) ON UPDATE CASCADE,
   CONSTRAINT FK_cc_agents_tier_status_log_2 FOREIGN KEY
(cc_agent_id) REFERENCES apacanal.employee (id) ON UPDATE CASCADE,
   CONSTRAINT FK_cc_agents_tier_status_log_3 FOREIGN KEY
(cc_queue_id) REFERENCES cc_queues (id) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=23799 DEFAULT CHARSET=ascii

  show index from cc_agents_tier_status_log:
(table cc_agents_tier_status_log; all indexes BTREE, collation A;
Sub_part, Packed, Null and the comment columns are all empty)

Key_name                         Non_unique  Seq_in_index  Column_name              Cardinality
PRIMARY                          0           1             id                       23999
IDX_cc_agents_tier_status_log_2  1           1             cc_agent                 260
IDX_cc_agents_tier_status_log_3  1           1             date_log                 23999
FK_cc_agents_tier_status_log_2   1           1             cc_agent_id              2
FK_cc_agents_tier_status_log_3   1           1             cc_queue_id              14
FK_cc_agents_tier_status_log_1   1           1             cc_agent_tier_status_id  2
IDX_cc_agents_tier_status_log_7  1           1             id                       23999
IDX_cc_agents_tier_status_log_7  1           2             date_log                 23999

And the query is:
 set @enddate:=now();
 set @startdate:='2014-11-01';
 set @queue_id:=-1;
select s.theHour as theHour,avg(s.nrAgents) as nrAgents from
(select date(a.theDateHour) as theDate,extract(hour from a.theDateHour)
as theHour,count(c.cc_agent_tier_status_id) as nrAgents
from (

select dh.theDateHour as theDateHour, max(c.date_log) as maxdatelog,c.*
FROM
( select concat(d.thedate,' ',h.theHour,':0:0') as theDateHour
from
( select DATE(DATE_ADD(date(@startdate), INTERVAL @i:=@i+1 DAY) ) as
theDate from (select @i:=-1) as t1
inner join cc_member_queue_end_log b on 1=1 and
b.id<=datediff(@enddate,@startdate)+1 ) as d
left outer join
(SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION
ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10
UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 UNION ALL
SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL SELECT 17
UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 UNION ALL
SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h
on 1=1 ) AS dh
left outer join
cc_agents_tier_status_log as c
on c.date_log<=dh.theDateHour where (if(@queue_id<0,1,0) or
if(@queue_id=c.cc_queue_id,1,0))
group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone


) as a
left outer join cc_agents_tier_status_log as c
on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and
c.cc_agent_tier_status_id=2
group by a.theDateHour
order by date(a.theDateHour),extract(hour from a.theDateHour))
as s
group by s.theHour
order by s.theHour;


This query takes 20 seconds to populate.

Table cc_agents_tier_status_log contains log entries of agent_id
login/logout per queue per phone. status_id can have value 1 (logged
out) and 2 (login) at date_log datetime.

The resulting table must contain average number of agents logged in at
every hour per startdate to enddate.

Hope for some hints. Thank you.


The first problem is that you are generating a lot of extra rows before 
you actually need them. The only place where you should be faking

Re: Help optimize query.

2014-11-15 Thread Mimiko

On 15.11.2014 01:06, Peter Brawley wrote:

Let's see the results of Explain Extended this query,  result of Show
Create Table cc_member_queue_end_log.


cc_member_queue_end_log is not of interest, it is used just as a series 
of numbers. It may be any table with ids.


I've changed the query a bit, which seemed to reduce the select time, but 
not by a lot.


set @enddate:=now();
set @startdate:='2014-11-01';
set @queue_id:=-1;
explain extended select s.theHour as theHour,avg(s.nrAgents) as 
nrAgents from
-> (select date(FROM_UNIXTIME(a.theDateHour)) as 
theDate,extract(hour from FROM_UNIXTIME(a.theDateHour)) as 
theHour,count(c.cc_agent_tier_status_id) as nrAgents

-> from (
->
-> select dh.theDateHour as theDateHour, max(c.date_log) as 
maxdatelog,c.*

-> FROM
-> ( select UNIX_TIMESTAMP(concat(d.thedate,' ',h.theHour,':0:0')) 
as theDateHour

-> from
-> ( select DATE(DATE_ADD(date('2014-11-01'), INTERVAL @i:=@i+1 
DAY) ) as theDate from (select @i:=-1) as t1
-> inner join cc_agents_tier_status_log b on 1=1 and 
b.id<=datediff(now(),'2014-11-01')+1 ) as d

-> straight_join
-> (SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION 
ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 
UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL 
SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 
UNION ALL SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL 
SELECT 17 UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 
UNION ALL SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h

-> on 1=1 ) AS dh
-> straight_join
-> cc_agents_tier_status_log as c
-> on UNIX_TIMESTAMP(c.date_log)<=dh.theDateHour where 
(if(-1<0,1,0) or if(-1=c.cc_queue_id,1,0))

-> group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone
->
->
-> ) as a
-> straight_join cc_agents_tier_status_log as c
-> on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and 
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and 
c.cc_agent_tier_status_id=2

-> group by a.theDateHour
-> order by date(FROM_UNIXTIME(a.theDateHour)),extract(hour from 
FROM_UNIXTIME(a.theDateHour)))

-> as s
-> group by s.theHour
-> order by s.theHour\G
*** 1. row ***
   id: 1
  select_type: PRIMARY
table: <derived2>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 360
 filtered: 100.00
Extra: Using temporary; Using filesort
*** 2. row ***
   id: 2
  select_type: DERIVED
table: <derived3>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 43560
 filtered: 100.00
Extra: Using temporary; Using filesort
*** 3. row ***
   id: 2
  select_type: DERIVED
table: c
 type: ref
possible_keys: 
IDX_cc_agents_tier_status_log_3,FK_cc_agents_tier_status_log_2,FK_cc_agents_tier_status_log_3,FK_cc_agents_tier_status_log_1

  key: IDX_cc_agents_tier_status_log_3
  key_len: 4
  ref: a.maxdatelog
 rows: 1
 filtered: 100.00
Extra: Using where
*** 4. row ***
   id: 3
  select_type: DERIVED
table: <derived4>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 360
 filtered: 100.00
Extra: Using temporary; Using filesort
*** 5. row ***
   id: 3
  select_type: DERIVED
table: c
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 24207
 filtered: 100.00
Extra: Using where; Using join buffer
*** 6. row ***
   id: 4
  select_type: DERIVED
table: <derived5>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 15
 filtered: 100.00
Extra:
*** 7. row ***
   id: 4
  select_type: DERIVED
table: <derived7>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 24
 filtered: 100.00
Extra: Using join buffer
*** 8. row ***
   id: 7
  select_type: DERIVED
table: NULL
 type: NULL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: NULL
 filtered: NULL
Extra: No tables used
*** 9. row ***
   id: 8
  select_type: UNION
table: NULL

Re: Help optimize query.

2014-11-14 Thread Peter Brawley
Let's see the results of EXPLAIN EXTENDED for this query, and the result of 
SHOW CREATE TABLE cc_member_queue_end_log.


PB

-

On 2014-11-13 1:34 PM, Mimiko wrote:

Hello. I have this table:

> show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  date_log timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  cc_agent varchar(45) NOT NULL,
  cc_agent_tier_status_id tinyint(3) unsigned NOT NULL,
  cc_queue_id tinyint(3) unsigned NOT NULL,
  cc_agent_id int(10) unsigned NOT NULL,
  cc_agent_phone smallint(5) unsigned NOT NULL,
  cc_agent_domain varchar(45) NOT NULL DEFAULT 'pbx01.apa-canal.md',
  PRIMARY KEY (id),
  KEY IDX_cc_agents_tier_status_log_2 (cc_agent) USING HASH,
  KEY IDX_cc_agents_tier_status_log_3 (date_log),
  KEY FK_cc_agents_tier_status_log_2 (cc_agent_id),
  KEY FK_cc_agents_tier_status_log_3 (cc_queue_id),
  KEY FK_cc_agents_tier_status_log_1 (cc_agent_tier_status_id) 
USING BTREE,

  KEY IDX_cc_agents_tier_status_log_7 (id,date_log),
  CONSTRAINT FK_cc_agents_tier_status_log_1 FOREIGN KEY 
(cc_agent_tier_status_id) REFERENCES cc_agent_tier_status_chart 
(id) ON UPDATE CASCADE,
  CONSTRAINT FK_cc_agents_tier_status_log_2 FOREIGN KEY 
(cc_agent_id) REFERENCES apacanal.employee (id) ON UPDATE 
CASCADE,
  CONSTRAINT FK_cc_agents_tier_status_log_3 FOREIGN KEY 
(cc_queue_id) REFERENCES cc_queues (id) ON UPDATE CASCADE

) ENGINE=InnoDB AUTO_INCREMENT=23799 DEFAULT CHARSET=ascii

> show index from cc_agents_tier_status_log:
Table	Non_unique	Key_name	Seq_in_index	Column_name	Collation	Cardinality	Sub_part	Packed	Null	Index_type	Comment	Index_comment
cc_agents_tier_status_log	0	PRIMARY	1	id	A	23999			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	IDX_cc_agents_tier_status_log_2	1	cc_agent	A	260			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	IDX_cc_agents_tier_status_log_3	1	date_log	A	23999			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	FK_cc_agents_tier_status_log_2	1	cc_agent_id	A	2			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	FK_cc_agents_tier_status_log_3	1	cc_queue_id	A	14			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	FK_cc_agents_tier_status_log_1	1	cc_agent_tier_status_id	A	2			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	IDX_cc_agents_tier_status_log_7	1	id	A	23999			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	IDX_cc_agents_tier_status_log_7	2	date_log	A	23999			(null)	BTREE	(null)	(null)


And the query is:
set @enddate:=now();
set @startdate:='2014-11-01';
set @queue_id:=-1;
select s.theHour as theHour,avg(s.nrAgents) as nrAgents from
(select date(a.theDateHour) as theDate,extract(hour from 
a.theDateHour) as theHour,count(c.cc_agent_tier_status_id) as nrAgents

from (

select dh.theDateHour as theDateHour, max(c.date_log) as maxdatelog,c.*
FROM
( select concat(d.thedate,' ',h.theHour,':0:0') as theDateHour
from
( select DATE(DATE_ADD(date(@startdate), INTERVAL @i:=@i+1 DAY) ) as 
theDate from (select @i:=-1) as t1
inner join cc_member_queue_end_log b on 1=1 and 
b.id<=datediff(@enddate,@startdate)+1 ) as d

left outer join
(SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL 
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 
UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL 
SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 
UNION ALL SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL 
SELECT 17 UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 
UNION ALL SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h

on 1=1 ) AS dh
left outer join
cc_agents_tier_status_log as c
on c.date_log=dh.theDateHour where (if(@queue_id<0,1,0) or 
if(@queue_id=c.cc_queue_id,1,0))

group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone


) as a
left outer join cc_agents_tier_status_log as c
on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and 
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and 
c.cc_agent_tier_status_id=2

group by a.theDateHour
order by date(a.theDateHour),extract(hour from a.theDateHour))
as s
group by s.theHour
order by s.theHour;


This query takes 20 seconds to populate.

Table cc_agents_tier_status_log contains log entries of agent_id 
login/logout per queue per phone. status_id can have the value 1 (logged 
out) or 2 (logged in) at the date_log datetime.


The resulting table must contain the average number of agents logged in at 
each hour between startdate and enddate.


Hope for some hints. Thank you.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Help optimize query.

2014-11-13 Thread Mimiko

Hello. I have this table:

> show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  date_log timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  cc_agent varchar(45) NOT NULL,
  cc_agent_tier_status_id tinyint(3) unsigned NOT NULL,
  cc_queue_id tinyint(3) unsigned NOT NULL,
  cc_agent_id int(10) unsigned NOT NULL,
  cc_agent_phone smallint(5) unsigned NOT NULL,
  cc_agent_domain varchar(45) NOT NULL DEFAULT 'pbx01.apa-canal.md',
  PRIMARY KEY (id),
  KEY IDX_cc_agents_tier_status_log_2 (cc_agent) USING HASH,
  KEY IDX_cc_agents_tier_status_log_3 (date_log),
  KEY FK_cc_agents_tier_status_log_2 (cc_agent_id),
  KEY FK_cc_agents_tier_status_log_3 (cc_queue_id),
  KEY FK_cc_agents_tier_status_log_1 (cc_agent_tier_status_id) 
USING BTREE,

  KEY IDX_cc_agents_tier_status_log_7 (id,date_log),
  CONSTRAINT FK_cc_agents_tier_status_log_1 FOREIGN KEY 
(cc_agent_tier_status_id) REFERENCES cc_agent_tier_status_chart 
(id) ON UPDATE CASCADE,
  CONSTRAINT FK_cc_agents_tier_status_log_2 FOREIGN KEY 
(cc_agent_id) REFERENCES apacanal.employee (id) ON UPDATE CASCADE,
  CONSTRAINT FK_cc_agents_tier_status_log_3 FOREIGN KEY 
(cc_queue_id) REFERENCES cc_queues (id) ON UPDATE CASCADE

) ENGINE=InnoDB AUTO_INCREMENT=23799 DEFAULT CHARSET=ascii

> show index from cc_agents_tier_status_log:
Table	Non_unique	Key_name	Seq_in_index	Column_name	Collation	Cardinality	Sub_part	Packed	Null	Index_type	Comment	Index_comment
cc_agents_tier_status_log	0	PRIMARY	1	id	A	23999			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	IDX_cc_agents_tier_status_log_2	1	cc_agent	A	260			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	IDX_cc_agents_tier_status_log_3	1	date_log	A	23999			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	FK_cc_agents_tier_status_log_2	1	cc_agent_id	A	2			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	FK_cc_agents_tier_status_log_3	1	cc_queue_id	A	14			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	FK_cc_agents_tier_status_log_1	1	cc_agent_tier_status_id	A	2			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	IDX_cc_agents_tier_status_log_7	1	id	A	23999			(null)	BTREE	(null)	(null)
cc_agents_tier_status_log	1	IDX_cc_agents_tier_status_log_7	2	date_log	A	23999			(null)	BTREE	(null)	(null)


And the query is:
set @enddate:=now();
set @startdate:='2014-11-01';
set @queue_id:=-1;
select s.theHour as theHour,avg(s.nrAgents) as nrAgents from
(select date(a.theDateHour) as theDate,extract(hour from a.theDateHour) 
as theHour,count(c.cc_agent_tier_status_id) as nrAgents

from (

select dh.theDateHour as theDateHour, max(c.date_log) as maxdatelog,c.*
FROM
( select concat(d.thedate,' ',h.theHour,':0:0') as theDateHour
from
( select DATE(DATE_ADD(date(@startdate), INTERVAL @i:=@i+1 DAY) ) as 
theDate from (select @i:=-1) as t1
inner join cc_member_queue_end_log b on 1=1 and 
b.id<=datediff(@enddate,@startdate)+1 ) as d

left outer join
(SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL 
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION 
ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10 
UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 UNION ALL 
SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL SELECT 17 
UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 UNION ALL 
SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h

on 1=1 ) AS dh
left outer join
cc_agents_tier_status_log as c
on c.date_log=dh.theDateHour where (if(@queue_id<0,1,0) or 
if(@queue_id=c.cc_queue_id,1,0))

group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone


) as a
left outer join cc_agents_tier_status_log as c
on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and 
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and 
c.cc_agent_tier_status_id=2

group by a.theDateHour
order by date(a.theDateHour),extract(hour from a.theDateHour))
as s
group by s.theHour
order by s.theHour;


This query takes 20 seconds to populate.

Table cc_agents_tier_status_log contains log entries of agent_id 
login/logout per queue per phone. status_id can have the value 1 (logged 
out) or 2 (logged in) at the date_log datetime.


The resulting table must contain the average number of agents logged in at 
each hour between startdate and enddate.


Hope for some hints. Thank you.
--
Mimiko desu.
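
A sketch of the row-generation hint from the reply above: the 24-way UNION of
hours can live in a small persistent helper table instead of being rebuilt on
every run. The table name hour_series is an assumption, not part of the
original schema:

-- one-time setup: a 24-row helper table for the hour grid (hypothetical name)
CREATE TABLE hour_series (h TINYINT UNSIGNED NOT NULL PRIMARY KEY);
INSERT INTO hour_series (h)
VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),
       (12),(13),(14),(15),(16),(17),(18),(19),(20),(21),(22),(23);

-- the date x hour grid then becomes a plain cross join
SELECT CONCAT(d.theDate, ' ', hs.h, ':0:0') AS theDateHour
FROM ( SELECT DATE(DATE_ADD(DATE(@startdate), INTERVAL @i:=@i+1 DAY)) AS theDate
       FROM (SELECT @i:=-1) AS t1
       INNER JOIN cc_member_queue_end_log b            -- any table with ids 1..N
               ON b.id <= DATEDIFF(@enddate, @startdate) + 1 ) AS d
CROSS JOIN hour_series AS hs;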

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Query with variable number of columns?

2014-10-08 Thread Jan Steinman
I often use CASE WHEN ... to pivot tables. For example, splitting sales data by 
year:

  SELECT
    s_product.name AS `Product`,
    SUM(CASE WHEN YEAR(sales.Date)='2007' THEN sales.Total ELSE NULL END) AS `2007`,
    SUM(CASE WHEN YEAR(sales.Date)='2008' THEN sales.Total ELSE NULL END) AS `2008`,
    SUM(CASE WHEN YEAR(sales.Date)='2009' THEN sales.Total ELSE NULL END) AS `2009`,
    ...
  WHERE dynamic predicate that only has results in one year

However, this pattern will often result in numerous empty columns -- empties 
that would not be there had the table not been pivoted.

What techniques do *you* use for avoiding this anti-pattern? Am I limited to 
using a separate programming language (PHP, in this case) with a separate 
COUNT(*) query for each possible column, then CASEing the generation of the 
column SQL? Seems awfully ugly!

Thanks in advance for any insight offered!

(And the following came up at random... perhaps I'll just live with a bunch of 
empty columns...)

 In attempting to fix any system, we may damage another that is working 
perfectly well. -- David Holmgren
 Jan Steinman, EcoReality Co-op 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query with variable number of columns?

2014-10-08 Thread Peter Brawley

On 2014-10-08 1:38 PM, Jan Steinman wrote:

I often use CASE WHEN ... to pivot tables. For example, splitting sales data by 
year:

   SELECT
     s_product.name AS `Product`,
     SUM(CASE WHEN YEAR(sales.Date)='2007' THEN sales.Total ELSE NULL END) AS `2007`,
     SUM(CASE WHEN YEAR(sales.Date)='2008' THEN sales.Total ELSE NULL END) AS `2008`,
     SUM(CASE WHEN YEAR(sales.Date)='2009' THEN sales.Total ELSE NULL END) AS `2009`,
     ...
   WHERE dynamic predicate that only has results in one year

However, this pattern will often result in numerous empty columns -- empties 
that would not be there had the table not been pivoted.

What techniques do *you* use for avoiding this anti-pattern?


Non-procedural SQL is an incomplete computer language; it can't do that. 
MySQL stored procedures are less incomplete, and can do it, but they're 
awkward. I use the app language (eg PHP) to implement such logic.


PB

-


Am I limited to using a separate programming language (PHP, in this case) with 
a separate COUNT(*) query for each possible column, then CASEing the generation 
of the column SQL? Seems awfully ugly!

Thanks in advance for any insight offered!

(And the following came up at random... perhaps I'll just live with a bunch of 
empty columns...)

 In attempting to fix any system, we may damage another that is working 
perfectly well. -- David Holmgren
 Jan Steinman, EcoReality Co-op 





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query with variable number of columns?

2014-10-08 Thread hsv
 2014/10/08 11:38 -0700, Jan Steinman 
However, this pattern will often result in numerous empty columns -- empties 
that would not be there had the table not been pivoted. 


 2014/10/08 16:42 -0500, Peter Brawley 
MySQL stored procedures are less incomplete, and can do it, but they're 
awkward. 

From a webpage-link on this very list posted, I learnt of a means of (yes, 
clumsily) using SQL procedure to build PREPAREd statements that pivot. It 
entails twice reckoning, once to find good fields, once to pivot and show them.

One selects from a virtual table:
(SELECT DISTINCT YEAR(sales.Date) AS Y FROM ...) AS g

For each good Y one wants this generated (I use ANSI mode, with more PL1 than 
C):
'SUM(IF(YEAR(sales.Date)=' || Y || ', sales.Total, NULL)) AS `' || Y || '`'

The outcome is something like this:
set @yearSal = (SELECT 'SELECT s_product.name AS Product, ' ||
GROUP_CONCAT('SUM(IF(YEAR(sales.Date)=' || Y || ', sales.Total, NULL)) AS `' ||
Y || '`') || '
FROM ...'
FROM (SELECT DISTINCT YEAR(sales.Date) AS Y FROM ...) AS g);
PREPARE yearSal FROM @yearSal;
EXECUTE yearSal;

Unhappily, PREPARE takes only user-defined variables, and its prepared 
statement, too, is exposed to the procedure's caller. If the prepared statement 
is SELECT ... INTO ..., only user-defined variables are allowed after INTO. 
One who knows the names can learn something about the procedure's working.
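
Spelled out with CONCAT() rather than ANSI ||, a minimal runnable sketch of the
same two-pass pivot; the tables sales(Date, Total, product_id) and
s_product(id, name) are assumptions based on the thread:

-- pass 1: find the good years and build the pivot SELECT as a string
SELECT CONCAT(
         'SELECT p.name AS Product, ',
         GROUP_CONCAT(
           CONCAT('SUM(IF(YEAR(s.Date)=', Y, ', s.Total, NULL)) AS `', Y, '`')
         ),
         ' FROM sales s JOIN s_product p ON p.id = s.product_id GROUP BY p.name')
INTO @yearSal
FROM (SELECT DISTINCT YEAR(Date) AS Y FROM sales) AS g;

-- pass 2: run it
PREPARE yearSal FROM @yearSal;
EXECUTE yearSal;
DEALLOCATE PREPARE yearSal;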


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query regarding implementation of parallel-replication

2014-09-10 Thread wagnerbianchi.com
It's good to know. Keep up with good work, cheers!!



--
*Wagner Bianchi, MySQL Database Specialist*
Mobile:  +55.31.8654.9510
E-mail:  m...@wagnerbianchi.com
Twitter: @wagnerbianchijr

2014-09-06 3:01 GMT-03:00 Ajay Garg ajaygargn...@gmail.com:

 Hi Wagner.

 That is what I did as the last resort, and that is the only thing that
 solved the issue.


 Thanks.

 On Fri, Sep 5, 2014 at 1:52 AM, wagnerbianchi.com m...@wagnerbianchi.com
 wrote:
  You can try these steps:
 
  1-) Stop slave and write down the replication coordinates getting that in
  MySQL's error log (*very important step*);
  2-) Issue the `reset slave` command on MySQL Slave;
  3-) Issue the CHANGE MASTER TO considering the replication coordinates
  you've just written down on step 1;
  4-) Give replication a start;
  5-) Check if the issue has gone away.
 
  If you're not comfortable to do that, just share the SHOW SLAVE STATUS
  output with us.
 
  Let us know how's it going, cheers!!
 
 
 
 
  --
  Wagner Bianchi, MySQL Database Specialist
  Mobile:  +55.31.8654.9510
  E-mail:  m...@wagnerbianchi.com
  Twitter: @wagnerbianchijr
 
 
  2014-09-04 7:24 GMT-03:00 Ajay Garg ajaygargn...@gmail.com:
 
  Hi all.
 
  Unfortunately, I have run into the logs, as described at
  http://bugs.mysql.com/bug.php?id=71495
 
  Unfortunately, the issue does not go away, even after reverting back
  to slave-parallel-workers=0 in my.cnf, and restarting the mysql
  instance.
 
 
  Any quick idea, as to how we may get the mysql+replication up and
  running (even with the plain old non-multi-threaded mode)?
 
 
 
 
  On Tue, Sep 2, 2014 at 12:57 PM, Ajay Garg ajaygargn...@gmail.com
 wrote:
   Thanks Akshay for the reply.
  
   On Tue, Sep 2, 2014 at 12:25 PM, Akshay Suryavanshi
   akshay.suryavansh...@gmail.com wrote:
   Hello Ajay,
  
   I tried testing the slave-parallel-workers few months ago, what I can
   surely
   tell you its still under development, and at that time needed some
   critical
   bug fixing.
  
   It is helpful in situations where each schema has even workload. The
   case
   you mentioned above doesnt have so. DB2 is getting different type of
   load
   than the others, in that case the other slave workers should be able
 to
   proceed with their workload as opposed to db2 which is still
 executing
   the
   long running statement. Now just imagine what happens if we try to
 take
   a
   backup, what binlog position should be captured ? the show slave
 status
   will
   print what ? this is where it needs development, I tried testing
   backups on
   it, but there is no concrete documentation on what position it would
   fetch.
  
   db2-statement-1 (very, very long-running)
   db2-statement-2 (short-running)
  
   about the above scenario, the next db2-statement-2 it will wait for
 the
   long
   running statement-1 to complete.
  
   Surely.. !! :)
  
  
   However, my concern is how this tracking is done.
   That is, how is the db-wise segregation of statements done (from a
   single-binlog-file originally coming onto the slave) ?
  
   If this segregation is not done, then I cannot think of a way on how
   things would scale up, like for example, when the slave-relay-log-file
   contains a random mix of statements from tens of different databases.
  
  
  
   Any pointers on the actual current implementation of this db-wise
   statements-segregation will be a great confidence-booster !!  :)
  
  
  
   Thanks and Regards,
   Ajay
  
  
However db2-statement-2 can be picked up by
   any other sql worker thread.
  
   This is a good feature added in mysql, however still needs to go
   through lot
   of testing. Please share your observation and findings in case it
   differs
   from the above.
  
   Cheers!!!
   Akshay
  
  
   On Mon, Sep 1, 2014 at 8:27 AM, Ajay Garg ajaygargn...@gmail.com
   wrote:
  
   Hi all.
  
  
   We have replication set-up, where we cater to HUUGEE amounts of
 data.
   Since quite some time, we have been facing issues wherein the slave
   lags behind master quite a lot.
  
  
   So, yesterday we were able to setup parallel replication, by
   incorporating the following changes ::
  
   a)
   To begin with, partitioned some tables into dedicated databases.
  
   b)
   Set up the slave-parallel-workers parameter.
  
  
   The above seems to work functionally fine, but we have one
 doubt/query
   about the scalability of this solution.
  
  
  
  
   First, I will jot down the flow as far as I understand (please
 correct
   if wrong) ::
  
   
   Even in parallel-replication scenario, the master writes all the
   binlog (combined for all databases) in just one file, which then
 gets
   passed onto the slave as single-file itself. Thereafter, all the
   replication commands (combined for all databases) are written
   sequentially onto one slave-relay file.
  
   Thereafter, as per the documentation, the slave-SQL-Thread acts as
 the
   manager, handing over commands to worker-threads depending upon the
   databases

Re: Query on some MySQL-internals

2014-09-07 Thread Ajay Garg
Hi Martin.

Thanks for the reply.


As I had mentioned, we have been running both instances for the last 6
years or so, and records are inserted/deleted on both.

So, we ran show table status like 'XX' \G on both
instances, and following are the outputs (here XX is the table
upon which the OPTIMIZE command was run).

Also note that the outputs are after the OPTIMIZE command had been run
on the respective instance-tables ::


1)
Instance 1, which showed massive improvement in INSERT query
completion times after OPTIMIZE command was run on table XX::

db1> show table status like 'XX' \G;
*** 1. row ***
   Name: XX
 Engine: InnoDB
Version: 10
 Row_format: Compact
   Rows: 12380147
 Avg_row_length: 473
Data_length: 5865701376
Max_data_length: 0
   Index_length: 522043392
  Data_free: 91226112
 Auto_increment: NULL
Create_time: NULL
Update_time: NULL
 Check_time: NULL
  Collation: latin1_swedish_ci
   Checksum: NULL
 Create_options: partitioned
Comment:
1 row in set (0.08 sec)


2)
Instance 2, which showed no improvement in INSERT query completion
times, after running OPTIMIZE command on table XX ::


db2> show table status like 'XX' \G;
*** 1. row ***
   Name: XX
 Engine: InnoDB
Version: 10
 Row_format: Compact
   Rows: 13189570
 Avg_row_length: 407
Data_length: 5376540672
Max_data_length: 0
   Index_length: 518553600
  Data_free: 36700160
 Auto_increment: NULL
Create_time: NULL
Update_time: NULL
 Check_time: NULL
  Collation: latin1_swedish_ci
   Checksum: NULL
 Create_options: partitioned
Comment:
1 row in set (0.24 sec)



Thanks and Regards,
Ajay

On Sat, Sep 6, 2014 at 8:06 PM, Martin Gainty mgai...@hotmail.com wrote:




 Date: Sat, 6 Sep 2014 14:26:22 +0530
 Subject: Query on some MySQL-internals
 From: ajaygargn...@gmail.com
 To: mysql@lists.mysql.com

 Hi all.


 We are facing a very strange scenario.

 We have two mysql-instances running on the same machine, and they had
 been running functionally fine since about 6 years or so (catering to
 millions of records per day).

 However, since last few days, we were experiencing some elongated
 slowness on both the instances.
 So, we decided to OPTIMIZE TABLE slow_table on both the instances.

 We first ran the command on one instance.
 That speeded up things massively (select count(*) that was earlier
 taking 45 minutes was now running in less than 3 minutes).


 We then ran the command on the second instance. However, that seemed
 to have no effect.
 We ran the command again (on the same instance); again it had no effect.



 What could be the reason of this strange behavior?
 Both the instances run under fairly the same load
 MG>How do you extract the metrics to determine whether the second instance is
 handling the same load as the first instance?
 MG>vmstat?
 MG>iostat?
 MG>SHOW GLOBAL STATUS?

 and both instances
 are mounted on the same partition (obviously, all the directories are
 different).


 Hoping for some light on this strange issue.



 Thanks and Regards,
 Ajay

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql




-- 
Regards,
Ajay

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Query on some MySQL-internals

2014-09-07 Thread Martin Gainty
Good Afternoon Ajay

 

I'm not seeing any giant deltas between the two sets of metrics, except I did 
notice that the elapsed time to run the query on the second instance was 3 
times slower (0.24 sec vs 0.08 sec).

Any chance we can do pathping from your present location:
1)pathping SQLServerInstance1

2)pathping SQLServerInstance2

to determine if there is an intervening router that is slowing down the second 
instance?


Does anyone have advice to help Ajay track down why his second instance is 
non-performant?
Martin 


 Date: Sun, 7 Sep 2014 23:06:09 +0530
 Subject: Re: Query on some MySQL-internals
 From: ajaygargn...@gmail.com
 To: mgai...@hotmail.com
 CC: mysql@lists.mysql.com
 
 Hi Martin.
 
 Thanks for the reply.
 
 
 As I had mentioned, we are running both the instances since last 6
 years or so, and the records are inserted/deleted on both the
 instances.
 
 So, we did a show table status like 'XX' \G; on both the
 instances, and following are the outputs (here XX is the table
 upon which the OPTIMIZE command was run).
 
 Also note that the outputs are after the OPTIMIZE command had been run
 on the respective instance-tables ::
 
 
 1)
 Instance 1, which showed massive improvement in INSERT query
 completion times after OPTIMIZE command was run on table XX::
 
 db1> show table status like 'XX' \G;
 *** 1. row ***
 Name: XX
 Engine: InnoDB
 Version: 10
 Row_format: Compact
 Rows: 12380147
 Avg_row_length: 473
 Data_length: 5865701376
 Max_data_length: 0
 Index_length: 522043392
 Data_free: 91226112
 Auto_increment: NULL
 Create_time: NULL
 Update_time: NULL
 Check_time: NULL
 Collation: latin1_swedish_ci
 Checksum: NULL
 Create_options: partitioned
 Comment:
 1 row in set (0.08 sec)
 
 
 2)
 Instance 2, which showed no improvement in INSERT query completion
 times, after running OPTIMIZE command on table XX ::
 
 
 db2> show table status like 'XX' \G;
 *** 1. row ***
 Name: XX
 Engine: InnoDB
 Version: 10
 Row_format: Compact
 Rows: 13189570
 Avg_row_length: 407
 Data_length: 5376540672
 Max_data_length: 0
 Index_length: 518553600
 Data_free: 36700160
 Auto_increment: NULL
 Create_time: NULL
 Update_time: NULL
 Check_time: NULL
 Collation: latin1_swedish_ci
 Checksum: NULL
 Create_options: partitioned
 Comment:
 1 row in set (0.24 sec)
 
 
 
 Thanks and Regards,
 Ajay
 
 On Sat, Sep 6, 2014 at 8:06 PM, Martin Gainty mgai...@hotmail.com wrote:
 
 
 
 
  Date: Sat, 6 Sep 2014 14:26:22 +0530
  Subject: Query on some MySQL-internals
  From: ajaygargn...@gmail.com
  To: mysql@lists.mysql.com
 
  Hi all.
 
 
  We are facing a very strange scenario.
 
  We have two mysql-instances running on the same machine, and they had
  been running functionally fine since about 6 years or so (catering to
  millions of records per day).
 
  However, since last few days, we were experiencing some elongated
  slowness on both the instances.
  So, we decided to OPTIMIZE TABLE slow_table on both the instances.
 
  We first ran the command on one instance.
  That speeded up things massively (select count(*) that was earlier
  taking 45 minutes was now running in less than 3 minutes).
 
 
  We then ran the command on the second instance. However, that seemed
  to have no effect.
  We ran the command again (on the same instance); again it had no effect.
 
 
 
  What could be the reason of this strange behavior?
  Both the instances run under fairly the same load
  MG>How do you extract the metrics to determine whether the second instance is
  handling the same load as the first instance?
  MG>vmstat?
  MG>iostat?
  MG>SHOW GLOBAL STATUS?
 
  and both instances
  are mounted on the same partition (obviously, all the directories are
  different).
 
 
  Hoping for some light on this strange issue.
 
 
 
  Thanks and Regards,
  Ajay
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe: http://lists.mysql.com/mysql
 
 
 
 
 -- 
 Regards,
 Ajay
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql
 
  

Re: Query regarding implementation of parallel-replication

2014-09-06 Thread Ajay Garg
Hi Wagner.

That is what I did as the last resort, and that is the only thing that solved the issue.


Thanks.

On Fri, Sep 5, 2014 at 1:52 AM, wagnerbianchi.com m...@wagnerbianchi.com 
wrote:
 You can try these steps:

 1-) Stop slave and write down the replication coordinates getting that in
 MySQL's error log (*very important step*);
 2-) Issue the `reset slave` command on MySQL Slave;
 3-) Issue the CHANGE MASTER TO considering the replication coordinates
 you've just written down on step 1;
 4-) Give replication a start;
 5-) Check if the issue has gone away.

 If you're not comfortable to do that, just share the SHOW SLAVE STATUS
 output with us.

 Let us know how's it going, cheers!!




 --
 Wagner Bianchi, MySQL Database Specialist
 Mobile:  +55.31.8654.9510
 E-mail:  m...@wagnerbianchi.com
 Twitter: @wagnerbianchijr


 2014-09-04 7:24 GMT-03:00 Ajay Garg ajaygargn...@gmail.com:

 Hi all.

 Unfortunately, I have run into the logs, as described at
 http://bugs.mysql.com/bug.php?id=71495

 Unfortunately, the issue does not go away, even after reverting back
 to slave-parallel-workers=0 in my.cnf, and restarting the mysql
 instance.


 Any quick idea, as to how we may get the mysql+replication up and
 running (even with the plain old non-multi-threaded mode)?




 On Tue, Sep 2, 2014 at 12:57 PM, Ajay Garg ajaygargn...@gmail.com wrote:
  Thanks Akshay for the reply.
 
  On Tue, Sep 2, 2014 at 12:25 PM, Akshay Suryavanshi
  akshay.suryavansh...@gmail.com wrote:
  Hello Ajay,
 
  I tried testing the slave-parallel-workers few months ago, what I can
  surely
  tell you its still under development, and at that time needed some
  critical
  bug fixing.
 
  It is helpful in situations where each schema has even workload. The
  case
  you mentioned above doesnt have so. DB2 is getting different type of
  load
  than the others, in that case the other slave workers should be able to
  proceed with their workload as opposed to db2 which is still executing
  the
  long running statement. Now just imagine what happens if we try to take
  a
  backup, what binlog position should be captured ? the show slave status
  will
  print what ? this is where it needs development, I tried testing
  backups on
  it, but there is no concrete documentation on what position it would
  fetch.
 
  db2-statement-1 (very, very long-running)
  db2-statement-2 (short-running)
 
  about the above scenario, the next db2-statement-2 it will wait for the
  long
  running statement-1 to complete.
 
  Surely.. !! :)
 
 
  However, my concern is how this tracking is done.
  That is, how is the db-wise segregation of statements done (from a
  single-binlog-file originally coming onto the slave) ?
 
  If this segregation is not done, then I cannot think of a way on how
  things would scale up, like for example, when the slave-relay-log-file
  contains a random mix of statements from tens of different databases.
 
 
 
  Any pointers on the actual current implementation of this db-wise
  statements-segregation will be a great confidence-booster !!  :)
 
 
 
  Thanks and Regards,
  Ajay
 
 
   However db2-statement-2 can be picked up by
  any other sql worker thread.
 
  This is a good feature added in mysql, however still needs to go
  through lot
  of testing. Please share your observation and findings in case it
  differs
  from the above.
 
  Cheers!!!
  Akshay
 
 
  On Mon, Sep 1, 2014 at 8:27 AM, Ajay Garg ajaygargn...@gmail.com
  wrote:
 
  Hi all.
 
 
  We have replication set-up, where we cater to HUUGEE amounts of data.
  Since quite some time, we have been facing issues wherein the slave
  lags behind master quite a lot.
 
 
  So, yesterday we were able to setup parallel replication, by
  incorporating the following changes ::
 
  a)
  To begin with, partitioned some tables into dedicated databases.
 
  b)
  Set up the slave-parallel-workers parameter.
 
 
  The above seems to work functionally fine, but we have one doubt/query
  about the scalability of this solution.
 
 
 
 
  First, I will jot down the flow as far as I understand (please correct
  if wrong) ::
 
  
  Even in parallel-replication scenario, the master writes all the
  binlog (combined for all databases) in just one file, which then gets
  passed onto the slave as single-file itself. Thereafter, all the
  replication commands (combined for all databases) are written
  sequentially onto one slave-relay file.
 
  Thereafter, as per the documentation, the slave-SQL-Thread acts as the
  manager, handing over commands to worker-threads depending upon the
  databases on which the commands run.
  
 
 
 
  So far, so good.
  However, what would happen if the slave-relay file contains the
  following
  ::
 
 
  db1-statement-1 (short-running)
  db2-statement-1 (very, very long-running)
  db2-statement-2 (short-running)
  db1-statement-2 (short-running)
  db1-statement-3 (short-running)
 
 
  We will be grateful if someone could please clarifiy, as to how the
  above statements

Query on some MySQL-internals

2014-09-06 Thread Ajay Garg
Hi all.


We are facing a very strange scenario.

We have two mysql-instances running on the same machine, and they had
been running functionally fine for about 6 years or so (catering to
millions of records per day).

However, for the last few days, we have been experiencing prolonged
slowness on both the instances.
So, we decided to OPTIMIZE TABLE slow_table on both the instances.

We first ran the command on one instance.
That speeded up things massively (select count(*) that was earlier
taking 45 minutes was now running in less than 3 minutes).


We then ran the command on the second instance. However, that seemed
to have no effect.
We ran the command again (on the same instance); again it had no effect.



What could be the reason of this strange behavior?
Both the instances run under fairly the same load, and both instances
are mounted on the same partition (obviously, all the directories are
different).


Hoping for some light on this strange issue.
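
For reference, the per-table numbers we are watching come from
information_schema; a minimal sketch of the check (run in the schema that
holds the table):

-- data, index, and reclaimable space per table, largest Data_free first
SELECT table_name,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb,
       ROUND(data_free    / 1024 / 1024) AS free_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
ORDER BY data_free DESC;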



Thanks and Regards,
Ajay

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Query on some MySQL-internals

2014-09-06 Thread Martin Gainty




 Date: Sat, 6 Sep 2014 14:26:22 +0530
 Subject: Query on some MySQL-internals
 From: ajaygargn...@gmail.com
 To: mysql@lists.mysql.com
 
 Hi all.
 
 
 We are facing a very strange scenario.
 
 We have two mysql-instances running on the same machine, and they had
 been running functionally fine since about 6 years or so (catering to
 millions of records per day).
 
 However, since last few days, we were experiencing some elongated
 slowness on both the instances.
 So, we decided to OPTIMIZE TABLE slow_table on both the instances.
 
 We first ran the command on one instance.
 That speeded up things massively (select count(*) that was earlier
 taking 45 minutes was now running in less than 3 minutes).
 
 
 We then ran the command on the second instance. However, that seemed
 to have no effect.
 We ran the command again (on the same instance); again it had no effect.
 
 
 
 What could be the reason of this strange behavior?
 Both the instances run under fairly the same load
MG>How do you extract the metrics to determine whether the second instance is
handling the same load as the first instance?
MG>vmstat?
MG>iostat?
MG>SHOW GLOBAL STATUS?
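
For the SHOW GLOBAL STATUS route, a sketch of counters worth sampling twice
on each instance, a minute apart, comparing the deltas (standard status
variable names):

SHOW GLOBAL STATUS LIKE 'Com_insert';
SHOW GLOBAL STATUS LIKE 'Com_select';
SHOW GLOBAL STATUS LIKE 'Innodb_data_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';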
 and both instances
 are mounted on the same partition (obviously, all the directories are
 different).
 
 
 Hoping for some light on this strange issue.
 
 
 
 Thanks and Regards,
 Ajay
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 
  

Re: Query regarding implementation of parallel-replication

2014-09-04 Thread Ajay Garg
Hi all.

Unfortunately, I have run into the logs, as described at
http://bugs.mysql.com/bug.php?id=71495

Unfortunately, the issue does not go away, even after reverting back
to slave-parallel-workers=0 in my.cnf, and restarting the mysql
instance.


Any quick idea, as to how we may get the mysql+replication up and
running (even with the plain old non-multi-threaded mode)?




On Tue, Sep 2, 2014 at 12:57 PM, Ajay Garg ajaygargn...@gmail.com wrote:
 Thanks Akshay for the reply.

 On Tue, Sep 2, 2014 at 12:25 PM, Akshay Suryavanshi
 akshay.suryavansh...@gmail.com wrote:
 Hello Ajay,

 I tried testing the slave-parallel-workers few months ago, what I can surely
 tell you its still under development, and at that time needed some critical
 bug fixing.

 It is helpful in situations where each schema has even workload. The case
 you mentioned above doesnt have so. DB2 is getting different type of load
 than the others, in that case the other slave workers should be able to
 proceed with their workload as opposed to db2 which is still executing the
 long running statement. Now just imagine what happens if we try to take a
 backup, what binlog position should be captured ? the show slave status will
 print what ? this is where it needs development, I tried testing backups on
 it, but there is no concrete documentation on what position it would fetch.

 db2-statement-1 (very, very long-running)
 db2-statement-2 (short-running)

 about the above scenario, the next db2-statement-2 it will wait for the long
 running statement-1 to complete.

 Surely.. !! :)


 However, my concern is how this tracking is done.
 That is, how is the db-wise segregation of statements done (from a
 single-binlog-file originally coming onto the slave) ?

 If this segregation is not done, then I cannot think of a way on how
 things would scale up, like for example, when the slave-relay-log-file
 contains a random mix of statements from tens of different databases.



 Any pointers on the actual current implementation of this db-wise
 statements-segregation will be a great confidence-booster !!  :)



 Thanks and Regards,
 Ajay


  However db2-statement-2 can be picked up by
 any other sql worker thread.

 This is a good feature added in mysql, however still needs to go through lot
 of testing. Please share your observation and findings in case it differs
 from the above.

 Cheers!!!
 Akshay


 On Mon, Sep 1, 2014 at 8:27 AM, Ajay Garg ajaygargn...@gmail.com wrote:

 Hi all.


 We have replication set-up, where we cater to HUUGEE amounts of data.
 Since quite some time, we have been facing issues wherein the slave
 lags behind master quite a lot.


 So, yesterday we were able to setup parallel replication, by
 incorporating the following changes ::

 a)
 To begin with, partitioned some tables into dedicated databases.

 b)
 Set up the slave-parallel-workers parameter.


 The above seems to work functionally fine, but we have one doubt/query
 about the scalability of this solution.




 First, I will jot down the flow as far as I understand (please correct
 if wrong) ::

 
 Even in parallel-replication scenario, the master writes all the
 binlog (combined for all databases) in just one file, which then gets
 passed onto the slave as single-file itself. Thereafter, all the
 replication commands (combined for all databases) are written
 sequentially onto one slave-relay file.

 Thereafter, as per the documentation, the slave-SQL-Thread acts as the
 manager, handing over commands to worker-threads depending upon the
 databases on which the commands run.
 



 So far, so good.
 However, what would happen if the slave-relay file contains the following
 ::


 db1-statement-1 (short-running)
 db2-statement-1 (very, very long-running)
 db2-statement-2 (short-running)
 db1-statement-2 (short-running)
 db1-statement-3 (short-running)


 We will be grateful if someone could please clarify how the
 above statements will be managed amongst the Manager and the
 Worker-Threads (let's say there is just one worker-thread-per-db) ?

 In particular, does the Manager thread create internal
 slave-relay-log-files, one per database's statements?



 Thanks and Regards,
 Ajay

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql





 --
 Regards,
 Ajay



-- 
Regards,
Ajay

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query regarding implementation of parallel-replication

2014-09-04 Thread wagnerbianchi.com
You can try these steps:

1-) Stop slave and write down the replication coordinates getting that in
MySQL's error log (*very important step*);
2-) Issue the `reset slave` command on MySQL Slave;
3-) Issue the CHANGE MASTER TO considering the replication coordinates
you've just written down on step 1;
4-) Give replication a start;
5-) Check if the issue has gone away.

If you're not comfortable to do that, just share the SHOW SLAVE STATUS
output with us.

Let us know how's it going, cheers!!
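
In SQL, steps 2-4 look roughly like this; the log file name and position
below are placeholders for the coordinates written down in step 1:

STOP SLAVE;                               -- step 1 also stops the slave
RESET SLAVE;                              -- step 2
CHANGE MASTER TO                          -- step 3
  MASTER_LOG_FILE = 'mysql-bin.000123',   -- placeholder: file from step 1
  MASTER_LOG_POS  = 120;                  -- placeholder: position from step 1
START SLAVE;                              -- step 4
SHOW SLAVE STATUS\G                       -- step 5: check the issue is gone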




--
*Wagner Bianchi, MySQL Database Specialist*
Mobile:  +55.31.8654.9510
E-mail:  m...@wagnerbianchi.com
Twitter: @wagnerbianchijr


2014-09-04 7:24 GMT-03:00 Ajay Garg ajaygargn...@gmail.com:

 Hi all.

 Unfortunately, I have run into the logs, as described at
 http://bugs.mysql.com/bug.php?id=71495

 Unfortunately, the issue does not go away, even after reverting back
 to slave-parallel-workers=0 in my.cnf, and restarting the mysql
 instance.


 Any quick idea, as to how we may get the mysql+replication up and
 running (even with the plain old non-multi-threaded mode)?




 On Tue, Sep 2, 2014 at 12:57 PM, Ajay Garg ajaygargn...@gmail.com wrote:
  Thanks Akshay for the reply.
 
  On Tue, Sep 2, 2014 at 12:25 PM, Akshay Suryavanshi
  akshay.suryavansh...@gmail.com wrote:
  Hello Ajay,
 
  I tried testing the slave-parallel-workers few months ago, what I can
 surely
  tell you its still under development, and at that time needed some
 critical
  bug fixing.
 
  It is helpful in situations where each schema has even workload. The
 case
  you mentioned above doesnt have so. DB2 is getting different type of
 load
  than the others, in that case the other slave workers should be able to
  proceed with their workload as opposed to db2 which is still executing
 the
  long running statement. Now just imagine what happens if we try to take
 a
  backup, what binlog position should be captured ? the show slave status
 will
  print what ? this is where it needs development, I tried testing
 backups on
  it, but there is no concrete documentation on what position it would
 fetch.
 
  db2-statement-1 (very, very long-running)
  db2-statement-2 (short-running)
 
  about the above scenario, the next db2-statement-2 it will wait for the
 long
  running statement-1 to complete.
 
  Surely.. !! :)
 
 
  However, my concern is how this tracking is done.
  That is, how is the db-wise segregation of statements done (from a
  single-binlog-file originally coming onto the slave) ?
 
  If this segregation is not done, then I cannot think of a way on how
  things would scale up, like for example, when the slave-relay-log-file
  contains a random mix of statements from tens of different databases.
 
 
 
  Any pointers on the actual current implementation of this db-wise
  statements-segregation will be a great confidence-booster !!  :)
 
 
 
  Thanks and Regards,
  Ajay
 
 
   However db2-statement-2 can be picked up by
  any other sql worker thread.
 
  This is a good feature added in mysql, however still needs to go
 through lot
  of testing. Please share your observation and findings in case it
 differs
  from the above.
 
  Cheers!!!
  Akshay
 
 
  On Mon, Sep 1, 2014 at 8:27 AM, Ajay Garg ajaygargn...@gmail.com
 wrote:
 
  Hi all.
 
 
  We have replication set-up, where we cater to HUUGEE amounts of data.
  Since quite some time, we have been facing issues wherein the slave
  lags behind master quite a lot.
 
 
  So, yesterday we were able to setup parallel replication, by
  incorporating the following changes ::
 
  a)
  To begin with, partitioned some tables into dedicated databases.
 
  b)
  Set up the slave-parallel-workers parameter.
 
 
  The above seems to work functionally fine, but we have one doubt/query
  about the scalability of this solution.
 
 
 
 
  First, I will jot down the flow as far as I understand (please correct
  if wrong) ::
 
  
  Even in parallel-replication scenario, the master writes all the
  binlog (combined for all databases) in just one file, which then gets
  passed onto the slave as single-file itself. Thereafter, all the
  replication commands (combined for all databases) are written
  sequentially onto one slave-relay file.
 
  Thereafter, as per the documentation, the slave-SQL-Thread acts as the
  manager, handing over commands to worker-threads depending upon the
  databases on which the commands run.
  
 
 
 
  So far, so good.
  However, what would happen if the slave-relay file contains the
 following
  ::
 
 
  db1-statement-1 (short-running)
  db2-statement-1 (very, very long-running)
  db2-statement-2 (short-running)
  db1-statement-2 (short-running)
  db1-statement-3 (short-running)
 
 
   We will be grateful if someone could please clarify how the
  above statements will be managed amongst the Manager and the
  Worker-Threads (let's say there is just one worker-thread-per-db) ?
 
   In particular, does the Manager thread create internal
   slave-relay-log-files, one per database

Re: Query regarding implementation of parallel-replication

2014-09-02 Thread Ajay Garg
Ping !! :)

On Mon, Sep 1, 2014 at 8:27 AM, Ajay Garg ajaygargn...@gmail.com wrote:
 Hi all.


 We have replication set-up, where we cater to HUUGEE amounts of data.
 Since quite some time, we have been facing issues wherein the slave
 lags behind master quite a lot.


 So, yesterday we were able to setup parallel replication, by
 incorporating the following changes ::

 a)
 To begin with, partitioned some tables into dedicated databases.

 b)
 Set up the slave-parallel-workers parameter.


 The above seems to work functionally fine, but we have one doubt/query
 about the scalability of this solution.




 First, I will jot down the flow as far as I understand (please correct
 if wrong) ::

 
 Even in parallel-replication scenario, the master writes all the
 binlog (combined for all databases) in just one file, which then gets
 passed onto the slave as single-file itself. Thereafter, all the
 replication commands (combined for all databases) are written
 sequentially onto one slave-relay file.

 Thereafter, as per the documentation, the slave-SQL-Thread acts as the
 manager, handing over commands to worker-threads depending upon the
 databases on which the commands run.
 



 So far, so good.
 However, what would happen if the slave-relay file contains the following ::


 db1-statement-1 (short-running)
 db2-statement-1 (very, very long-running)
 db2-statement-2 (short-running)
 db1-statement-2 (short-running)
 db1-statement-3 (short-running)


  We will be grateful if someone could please clarify how the
 above statements will be managed amongst the Manager and the
 Worker-Threads (let's say there is just one worker-thread-per-db) ?

  In particular, does the Manager thread create internal
  slave-relay-log-files, one per database's statements?



 Thanks and Regards,
 Ajay



-- 
Regards,
Ajay

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query regarding implementation of parallel-replication

2014-09-02 Thread Ajay Garg
Thanks Akshay for the reply.

On Tue, Sep 2, 2014 at 12:25 PM, Akshay Suryavanshi
akshay.suryavansh...@gmail.com wrote:
 Hello Ajay,

 I tried testing the slave-parallel-workers few months ago, what I can surely
 tell you its still under development, and at that time needed some critical
 bug fixing.

 It is helpful in situations where each schema has even workload. The case
 you mentioned above doesnt have so. DB2 is getting different type of load
 than the others, in that case the other slave workers should be able to
 proceed with their workload as opposed to db2 which is still executing the
 long running statement. Now just imagine what happens if we try to take a
 backup, what binlog position should be captured ? the show slave status will
 print what ? this is where it needs development, I tried testing backups on
 it, but there is no concrete documentation on what position it would fetch.

 db2-statement-1 (very, very long-running)
 db2-statement-2 (short-running)

 about the above scenario, the next db2-statement-2 it will wait for the long
 running statement-1 to complete.

Surely.. !! :)


However, my concern is how this tracking is done.
That is, how is the db-wise segregation of statements done (from a
single-binlog-file originally coming onto the slave) ?

If this segregation is not done, then I cannot think of a way on how
things would scale up, like for example, when the slave-relay-log-file
contains a random mix of statements from tens of different databases.



Any pointers on the actual current implementation of this db-wise
statements-segregation will be a great confidence-booster !!  :)



Thanks and Regards,
Ajay


 However db2-statement-2 can be picked up by
 any other sql worker thread.

 This is a good feature added in mysql, however still needs to go through lot
 of testing. Please share your observation and findings in case it differs
 from the above.

 Cheers!!!
 Akshay


 On Mon, Sep 1, 2014 at 8:27 AM, Ajay Garg ajaygargn...@gmail.com wrote:

 Hi all.


 We have replication set-up, where we cater to HUUGEE amounts of data.
 Since quite some time, we have been facing issues wherein the slave
 lags behind master quite a lot.


 So, yesterday we were able to setup parallel replication, by
 incorporating the following changes ::

 a)
 To begin with, partitioned some tables into dedicated databases.

 b)
 Set up the slave-parallel-workers parameter.


 The above seems to work functionally fine, but we have one doubt/query
 about the scalability of this solution.




 First, I will jot down the flow as far as I understand (please correct
 if wrong) ::

 
 Even in parallel-replication scenario, the master writes all the
 binlog (combined for all databases) in just one file, which then gets
 passed onto the slave as single-file itself. Thereafter, all the
 replication commands (combined for all databases) are written
 sequentially onto one slave-relay file.

 Thereafter, as per the documentation, the slave-SQL-Thread acts as the
 manager, handing over commands to worker-threads depending upon the
 databases on which the commands run.
 



 So far, so good.
 However, what would happen if the slave-relay file contains the following
 ::


 db1-statement-1 (short-running)
 db2-statement-1 (very, very long-running)
 db2-statement-2 (short-running)
 db1-statement-2 (short-running)
 db1-statement-3 (short-running)


  We will be grateful if someone could please clarify how the
 above statements will be managed amongst the Manager and the
 Worker-Threads (let's say there is just one worker-thread-per-db) ?

  In particular, does the Manager thread create internal
  slave-relay-log-files, one per database's statements?



 Thanks and Regards,
 Ajay

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql





-- 
Regards,
Ajay

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Query regarding implementation of parallel-replication

2014-08-31 Thread Ajay Garg
Hi all.


We have replication set-up, where we cater to HUUGEE amounts of data.
For quite some time, we have been facing issues wherein the slave
lags behind the master quite a lot.


So, yesterday we were able to setup parallel replication, by
incorporating the following changes ::

a)
To begin with, partitioned some tables into dedicated databases.

b)
Set up the slave-parallel-workers parameter.
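
For (b), a minimal sketch of the runtime change on the slave, assuming MySQL
5.6 (the worker count of 4 is only an example):

STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_workers = 4;  -- 0 restores the single-threaded applier
START SLAVE SQL_THREAD;
SHOW SLAVE STATUS\G                     -- verify replication resumes cleanly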


The above seems to work functionally fine, but we have one doubt/query
about the scalability of this solution.




First, I will jot down the flow as far as I understand (please correct
if wrong) ::


Even in parallel-replication scenario, the master writes all the
binlog (combined for all databases) in just one file, which then gets
passed onto the slave as single-file itself. Thereafter, all the
replication commands (combined for all databases) are written
sequentially onto one slave-relay file.

Thereafter, as per the documentation, the slave-SQL-Thread acts as the
manager, handing over commands to worker-threads depending upon the
databases on which the commands run.




So far, so good.
However, what would happen if the slave-relay file contains the following ::


db1-statement-1 (short-running)
db2-statement-1 (very, very long-running)
db2-statement-2 (short-running)
db1-statement-2 (short-running)
db1-statement-3 (short-running)


We will be grateful if someone could please clarify how the
above statements will be managed amongst the Manager and the
Worker-Threads (let's say there is just one worker-thread-per-db)?

In particular, does the Manager thread create internal
slave-relay-log-files, one per database's statements?



Thanks and Regards,
Ajay

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



how to improve mysql's query speed in table 'token' of keystone, when using in openstack with lots of VMs?

2014-08-02 Thread 曾国仕
When I used MySQL as the Keystone backend in OpenStack, I found that the 
'token' table held 29 million records (using MyISAM as the engine; the size of 
token.MYD is 100G), with 4 new tokens saved per second. That results in slow 
queries for a token. Since new tokens are inserted frequently, how could I 
adjust the configuration to speed up the query operation?


The token table's structure is id, expires, extra, valid, user_id, with an
index on {expires, valid}, and the select SQL is:

select id,expires,extra,valid,user_id from token where
valid=1 and expires >= '-XX-XX XX:XX:XX' and user_id
= 'XXX';

which often returns 2 results.


Here is some db status data in a real  openstack environment with 381 active 
VMs: 
+-----------------------+-------------+
| Variable_name         | Value       |
+-----------------------+-------------+
| Handler_read_first    | 259573419   |
| Handler_read_key      | 1344821219  |
| Handler_read_next     | 3908969530  |
| Handler_read_prev     | 1235        |
| Handler_read_rnd      | 1951101     |
| Handler_read_rnd_next | 48777237518 |
+-----------------------+-------------+

and 
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| Qcache_free_blocks      | 498        |
| Qcache_free_memory      | 1192512    |
| Qcache_hits             | 1122242834 |
| Qcache_inserts          | 352700155  |
| Qcache_lowmem_prunes    | 34145019   |
| Qcache_not_cached       | 1529123943 |
| Qcache_queries_in_cache | 1681       |
| Qcache_total_blocks     | 4949       |
+-------------------------+------------+


It seems that the 'insert' operation of saving new tokens invalidates the 
query cache, resulting in a low query-cache hit rate.


Please give me some help, thanks.
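
Given the quoted SELECT, one hedged starting point besides configuration is an
index that leads with the equality columns and ends with the range column; the
index name below is made up, and on a 100G MyISAM table the build will take a
while:

-- equality columns first, range column last, a sketch:
ALTER TABLE token
  ADD INDEX idx_user_valid_expires (user_id, valid, expires);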

Query time taken on disk

2014-07-14 Thread Satendra
Hi there, I'm struggling to find the total time taken by a database query
on the disk. As I understand it, when a database query starts execution it
takes some time inside the database engine & some time to seek the result
from disk (if that is not in cache/buffer).

Can anybody from the group please suggest any clue about the execution time
on the disk?

Many thanks
Stdranwl


Re: Query time taken on disk

2014-07-14 Thread Reindl Harald


On 14.07.2014 12:48, Satendra wrote:
 Hi there, I'm struggling to find the total time taken by a database query
 on the disk? As I understand it, when a database query starts execution it
 takes some time inside the database engine and some time to seek the result
 from disk (if that is not in cache/buffer).

 Can anybody from the group please suggest any clue about the execution time
 on the disk?

mysql can't know this in any useful way

disk can be anything: real disk access, VFS and OS caches
the application layer knows nothing about





Re: Query time taken on disk

2014-07-14 Thread Keith Murphy
Satendra,

Google SHOW PROFILE, as it may give you all the information that you need.
There is a lot more detail in the performance_schema if you want to dig
into it, but it can be quite difficult to get out. Here is one place to
start if you want to pursue that angle:
http://www.markleith.co.uk/2011/05/23/monitoring-mysql-io-latency-with-performance_schema/
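A minimal SHOW PROFILE sketch (the table name is illustrative; note that
SHOW PROFILE is deprecated in later releases in favour of performance_schema):

SET profiling = 1;            -- enable profiling for this session
SELECT COUNT(*) FROM mytable; -- the query to inspect
SHOW PROFILES;                -- list recent statements with their Query_IDs
SHOW PROFILE FOR QUERY 1;     -- per-stage timings for statement 1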

keith


On Mon, Jul 14, 2014 at 5:59 AM, Reindl Harald h.rei...@thelounge.net
wrote:



  On 14.07.2014 12:48, Satendra wrote:
   Hi there, I'm struggling to find the total time taken by a database query
   on the disk? As I understand it, when a database query starts execution it
   takes some time inside the database engine and some time to seek the result
   from disk (if that is not in cache/buffer).

   Can anybody from the group please suggest any clue about the execution time
   on the disk?

  mysql can't know this in any useful way

  disk can be anything: real disk access, VFS and OS caches
  the application layer knows nothing about




-- 



(c) 850-449-1912
(f)  423-930-8646


Re: Query time taken on disk

2014-07-14 Thread Morgan Tocker
Hi Satendra,

On Jul 14, 2014, at 3:48 AM, Satendra stdra...@gmail.com wrote:

 Hi there, I'm struggling to find the total time taken by a database query
 on the disk? As I understand it, when a database query starts execution it
 takes some time inside the database engine and some time to seek the result
 from disk (if that is not in cache/buffer).

 Can anybody from the group please suggest any clue about the execution time
 on the disk?

I have a performance_schema example demonstrating total IO wait time for a 
workload:
http://www.tocker.ca/2014/02/18/todays-practical-use-case-for-performance-schema.html

To prepare this data non-aggregated (per query) is always a little bit
difficult (see the sketch after this list):
- With select statements there is read ahead.
- With write statements there is redo logging (which is grouped together with 
other statements).
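For instance, a rough sketch of the aggregated file-I/O view
(SUM_TIMER_WAIT is reported in picoseconds, hence the divisor):

SELECT EVENT_NAME,
       SUM_TIMER_WAIT / 1000000000000 AS total_wait_seconds
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/io/file/%'
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 5;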

Maybe someone else on the list has better ideas on how to accommodate this?

- Morgan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query time taken on disk

2014-07-14 Thread greg . lane

Hi Satendra,


On 7/14/2014 5:48 AM, Satendra wrote:

Hi there, I'm struggling to find the total time taken by a database query
on the disk? As I understand it, when a database query starts execution it takes
some time inside the database engine and some time to seek the result from
disk (if that is not in cache/buffer).

Can anybody from the group please suggest any clue about the execution time
on the disk?

Many thanks
Stdranwl



As stated in other emails, you can use the performance_schema. Mark
Leith has provided a stable version of ps_helper, now called the sys
schema, which can be obtained on GitHub here:
https://github.com/MarkLeith/mysql-sys . There is quite a bit of help
and examples in the README.md. You can also check the experimental
portion of it, called dbahelper, also on GitHub:
https://github.com/MarkLeith/dbahelper . I think you will be able to
find what you are looking for using these tools, which make
performance_schema much easier to use and understand.


Greg

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How to write a multi query in mysqltest framework?

2014-07-10 Thread Sergey Petrunia
On Thu, Jul 10, 2014 at 10:33:04AM +0800, 娄帅 wrote:
 In the C API, we can call mysql_query("select 1; select 2"),
 which just sends the command once to the server, and the server
 returns two result sets. So I want to know if there is a command in the
 mysqltest framework to do the job?
 I want to write a test case like that.

The client knows about statement boundaries from the query delimiter. By
default the delimiter is a semicolon. You can change it to something else
with the 'delimiter' command:

delimiter |;
select 1; select 2;|


BR
 Sergei
-- 
Sergei Petrunia, Software Developer
MariaDB | Skype: sergefp | Blog: http://s.petrunia.net/blog



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



How to write a multi query in mysqltest framework?

2014-07-09 Thread 娄帅
Hi, all,

In the C API, we can call mysql_query("select 1; select 2"),
which just sends the command once to the server, and the server
returns two result sets. So I want to know if there is a command in the
mysqltest framework to do the job?
I want to write a test case like that.

Thank you for your reply!


Re: Decode Json in MySQL query

2014-03-21 Thread Sukhjinder K. Narula
Many thanks for the kind replies.

I have decoded it in my code, but was just wondering in case I missed any
solution to decode via query.


On Thu, Mar 20, 2014 at 3:05 PM, Michael Dykman mdyk...@gmail.com wrote:

 Short answer, no.  There is nothing in MySQL to facilitate this. In
 general, storing structured data as a blob (JSON, CSV, XML-fragment,
 etc..) is an anti-pattern in a relational environment.  There are
 NoSQL solutions that provide the facility: Mongo comes to mind; there
 are some others, I am sure.



 On Thu, Mar 20, 2014 at 2:59 PM, Karr Abgarian a...@apple.com wrote:
  Hi, you probably want to perform this conversion on your client.   There
 are JSON parser libraries available for Java, PHP and the like.   Cheers,
 Karr
 
  On Mar 20, 2014, at 11:35 AM, Sukhjinder K. Narula narula...@gmail.com
 wrote:
 
   Hello,
   I would like to know if there is a way to decode the JSON string stored in
   one of the fields as text without using triggers or stored procedures.
   What I want to do is, within the query, get one row per element within
   the JSON string.
   For example, the JSON string is as follows:

   [
     {
       "name" : "Abc",
       "age" : 20
     },
     {
       "name" : "Xyz",
       "age" : 18
     }
   ]

   and after the query, I want the result as:
   Name   Age
   Abc    20
   Xyz    18
 
 
  Would this be possible, I greatly appreciate any help regarding this
  matter.
 
  Many Thanks,
  Sukhjinder
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 



 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Decode Json in MySQL query

2014-03-21 Thread Andrew Moore
This may also be of interest:

http://www.slideshare.net/blueskarlsson/using-json-with-mariadb-and-mysql




On Fri, Mar 21, 2014 at 12:27 PM, Sukhjinder K. Narula
narula...@gmail.comwrote:

 Many thanks for the kind replies.

 I have decoded it in my code, but was just wondering in case I missed any
 solution to decode via query.


 On Thu, Mar 20, 2014 at 3:05 PM, Michael Dykman mdyk...@gmail.com wrote:

  Short answer, no.  There is nothing in MySQL to facilitate this. In
  general, storing structured data as a blob (JSON, CSV, XML-fragment,
  etc..) is an anti-pattern in a relational environment.  There are
  NoSQL solutions that provide the facility: Mongo comes to mind; there
  are some others, I am sure.
 
 
 
  On Thu, Mar 20, 2014 at 2:59 PM, Karr Abgarian a...@apple.com wrote:
   Hi, you probably want to perform this conversion on your client.
 There
  are JSON parser libraries available for Java, PHP and the like.   Cheers,
  Karr
  
   On Mar 20, 2014, at 11:35 AM, Sukhjinder K. Narula 
 narula...@gmail.com
  wrote:
  
    Hello,
    I would like to know if there is a way to decode the JSON string stored
    in one of the fields as text without using triggers or stored procedures.
    What I want to do is, within the query, get one row per element within
    the JSON string.
    For example, the JSON string is as follows:

    [
      {
        "name" : "Abc",
        "age" : 20
      },
      {
        "name" : "Xyz",
        "age" : 18
      }
    ]

    and after the query, I want the result as:
    Name   Age
    Abc    20
    Xyz    18
  
  
   Would this be possible, I greatly appreciate any help regarding this
   matter.
  
   Many Thanks,
   Sukhjinder
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
 
 
 
  --
   - michael dykman
   - mdyk...@gmail.com
 
   May the Source be with you.
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 



Decode Json in MySQL query

2014-03-20 Thread Sukhjinder K. Narula
Hello,
I would like to know if there is a way to decode a JSON string stored in
one of the fields as text, without using triggers or stored procedures.
What I want to do is, within the query, get one row per element within
the JSON string.
For example, the JSON string is as follows:

[
  {
    "name" : "Abc",
    "age" : 20
  },
  {
    "name" : "Xyz",
    "age" : 18
  }
]

and after the query, I want the result as:
Name   Age
Abc    20
Xyz    18


Would this be possible, I greatly appreciate any help regarding this
matter.

Many Thanks,
Sukhjinder


Re: Decode Json in MySQL query

2014-03-20 Thread Karr Abgarian
Hi, you probably want to perform this conversion on your client.   There are 
JSON parser libraries available for Java, PHP and the like.   Cheers, Karr

On Mar 20, 2014, at 11:35 AM, Sukhjinder K. Narula narula...@gmail.com wrote:

 Hello,
 I would like to know if there is a way to decode the json string stored in
 one of the fields as text without using triggers or stored procedures.
 What I want to do is, within the query, get one row per element within
 the JSON string.
 For example, the JSON string is as follows:

 [
   {
     "name" : "Abc",
     "age" : 20
   },
   {
     "name" : "Xyz",
     "age" : 18
   }
 ]

 and after the query, I want the result as:
 Name   Age
 Abc    20
 Xyz    18
 
 
 Would this be possible, I greatly appreciate any help regarding this
 matter.
 
 Many Thanks,
 Sukhjinder


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Decode Json in MySQL query

2014-03-20 Thread Kishore Vaishnav
Hi,

http://blog.ulf-wendel.de/2013/mysql-5-7-sql-functions-for-json-udf/

This is not the exact solution for your query, but it might help you if
you add the libraries.


*thanks,*
*-- *Kishore Kumar Vaishnav

On Thu, Mar 20, 2014 at 11:35 AM, Sukhjinder K. Narula
narula...@gmail.comwrote:

 Hello,
 I would like to know if there is a way to decode the json string stored in
 one of the fields as text without using triggers or stored procedures.
 What I want to do is, within the query, get one row per element within
 the JSON string.
 For example, the JSON string is as follows:

 [
   {
     "name" : "Abc",
     "age" : 20
   },
   {
     "name" : "Xyz",
     "age" : 18
   }
 ]

 and after the query, I want the result as:
 Name   Age
 Abc    20
 Xyz    18


 Would this be possible, I greatly appreciate any help regarding this
 matter.

 Many Thanks,
 Sukhjinder



Re: Decode Json in MySQL query

2014-03-20 Thread Michael Dykman
Short answer, no.  There is nothing in MySQL to facilitate this. In
general, storing structured data as a blob (JSON, CSV, XML-fragment,
etc..) is an anti-pattern in a relational environment.  There are
NoSQL solutions that provide the facility: Mongo comes to mind; there
are some others, I am sure.
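(A note for readers on newer versions: MySQL 5.7 later added JSON functions,
and 8.0 added JSON_TABLE, which produces exactly the row-per-element result
asked for. A minimal sketch using the sample document inline:)

SELECT jt.name, jt.age
FROM JSON_TABLE(
       '[{"name": "Abc", "age": 20}, {"name": "Xyz", "age": 18}]',
       '$[*]' COLUMNS (
         name VARCHAR(50) PATH '$.name',
         age  INT         PATH '$.age'
       )
     ) AS jt;

-- returns:
-- name  age
-- Abc   20
-- Xyz   18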



On Thu, Mar 20, 2014 at 2:59 PM, Karr Abgarian a...@apple.com wrote:
 Hi, you probably want to perform this conversion on your client.   There are 
 JSON parser libraries available for Java, PHP and the like.   Cheers, Karr

 On Mar 20, 2014, at 11:35 AM, Sukhjinder K. Narula narula...@gmail.com 
 wrote:

 Hello,
 I would like to know if there is a way to decode the json string stored in
 one of the fields as text without using triggers or stored procedures.
 What I want to do is, within the query, get one row per element within
 the JSON string.
 For example, the JSON string is as follows:

 [
   {
     "name" : "Abc",
     "age" : 20
   },
   {
     "name" : "Xyz",
     "age" : 18
   }
 ]

 and after the query, I want the result as:
 Name   Age
 Abc    20
 Xyz    18


 Would this be possible, I greatly appreciate any help regarding this
 matter.

 Many Thanks,
 Sukhjinder


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



mySQL Query support/assistance...

2014-02-11 Thread Don Wieland
Hi gang,

I am looking for someone I can pay for a few hours to work with me on coming
up with a few needed QUERIES for a large MySQL database. The queries will span
multiple tables, so a good knowledge of JOINs will most likely be necessary. We
will work using Skype and GoToMeeting.

Please contact me, privately, if you can assist. I am located in California, 
USA (PST). Thanks!
 
Don Wieland
d...@pointmade.net
http://www.pointmade.net





Index Query Tuning

2014-01-29 Thread Anupam Karmarkar

Hi All,

I have a situation here with InnoDB locking.
In a transaction, we select values from the XYZ transaction table and then
update them, like below:

SESSION 1:

START TRANSACTION;
SELECT ID INTO vID FROM XYZ WHERE FLAG = 1 ORDER BY TIME LIMIT 1 FOR UPDATE;
UPDATE XYZ SET FLAG=0 WHERE ID = vID;
COMMIT;

SESSION 2:

UPDATE XYZ SET FLAG=1 WHERE ID = 2;


We keep getting deadlocks due to index locking; there is an index on
FLAG. We can allow phantom reads in session 1. We tried READ
COMMITTED but got the same result; I think the issue is next-key locking.

If I do the following in SESSION 1, would that help with locking, or would it
still lock the index? Any suggestions?

SESSION 1:

START TRANSACTION;
UPDATE XYZ SET FLAG=0, ID=(@oID:=ID) WHERE FLAG=1 ORDER BY TIME LIMIT 1;

SELECT @oID AS ID;
COMMIT;

--Anupam
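(A note for readers on newer versions: MySQL 8.0 added SKIP LOCKED, aimed at
exactly this pick-one-row queue pattern. A minimal sketch based on the
statements above, with the original column names:)

START TRANSACTION;
-- Skip rows already locked by other sessions instead of waiting on them.
SELECT ID INTO @vID FROM XYZ
WHERE FLAG = 1
ORDER BY TIME LIMIT 1
FOR UPDATE SKIP LOCKED;
UPDATE XYZ SET FLAG = 0 WHERE ID = @vID;
COMMIT;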


Re: Index Query Tuning

2014-01-29 Thread Morgan Tocker

Hi Anupam,


We keep getting deadlocks due to index locking; there is an index on
FLAG. We can allow phantom reads in session 1. We tried READ
COMMITTED but got the same result; I think the issue is next-key locking.


Did you try setting binlog-format=ROW as well?

I have a brief explanation of this here under 'Write scalability of 
certain statements':


http://www.tocker.ca/2013/09/04/row-based-replication.html
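A minimal sketch of trying that (assumes binary logging is enabled and you
have the privilege to change the setting; existing sessions keep their old
format):

SET GLOBAL binlog_format = 'ROW';
-- or set binlog-format=ROW in my.cnf and restart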


If I do the following in SESSION 1, would that help with locking, or would it
still lock the index? Any suggestions?

SESSION 1:

START TRANSACTION;
 UPDATE XYZ SET FLAG=0, ID=(@oID:=ID) WHERE FLAG=1 ORDER BY TIME LIMIT 1;

SELECT @oID AS ID;
COMMIT;


Locks are held for the duration of the transaction, so I don't think it 
will help here.


- Morgan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Index locking Query

2014-01-17 Thread Anupam Karmarkar
Hi All,

I have a situation here with InnoDB locking.
In a transaction, we select values from the XYZ transaction table and then
update them, like below:

SESSION 1:

START TRANSACTION;
SELECT ID INTO vID FROM XYZ WHERE FLAG = 1 ORDER BY TIME LIMIT 1 FOR UPDATE;
UPDATE XYZ SET FLAG=0 WHERE ID = vID;
COMMIT;

SESSION 2:

UPDATE XYZ SET FLAG=1 WHERE ID = 2;


We keep getting deadlocks due to index locking; there is an index on FLAG.
We can allow phantom reads in session 1. We tried READ COMMITTED but got the
same result; I think the issue is next-key locking.

If I do the following in SESSION 1, would that help with locking, or would it
still lock the index? Any suggestions?

SESSION 1:

START TRANSACTION;
UPDATE XYZ SET FLAG=0, ID=(@oID:=ID) WHERE FLAG=1 ORDER BY TIME LIMIT 1;

SELECT @oID AS ID;
COMMIT;

Re: ERROR 2013 (HY000): Lost connection to MySQL server during query

2014-01-08 Thread Manuel Arostegui
2014/1/7 h...@tbbs.net

  2014/01/06 17:07 +0100, Reindl Harald 
 what about looking in the server's logfiles?
 most likely max_allowed_packet is laughably low
 
 Is this then, too, likely when the server and the client are the same
 machine?

 I left this out: it only happens when the client has been idle, and right
 afterwards the client repeats the request and all goes well. The message is
 no more than an irritating break between request and fulfillment.

 Hello,

That happens when you're trying to re-use an existing connection which
wasn't properly closed and, as you said, has been idle. When you repeat the
operation, the thread is created again and thus everything goes normally.

Review the following variables (a quick way to check them is sketched below):

wait_timeout
net_write_timeout
net_read_timeout
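For example:

SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('wait_timeout', 'net_write_timeout', 'net_read_timeout');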


Manu


Re: ERROR 2013 (HY000): Lost connection to MySQL server during query

2014-01-07 Thread hsv
 2014/01/06 17:07 +0100, Reindl Harald 
what about looking in the server's logfiles?
most likely max_allowed_packet is laughably low

Is this then, too, likely when the server and the client are the same machine?

I left this out: it only happens when the client has been idle, and right
afterwards the client repeats the request and all goes well. The message is
no more than an irritating break between request and fulfillment.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



ERROR 2013 (HY000): Lost connection to MySQL server during query

2014-01-06 Thread hsv
Now that I have installed 5.6.14 on our Vista machine, when using mysql I often
see that error message, which under 5.5.8 I never saw. What is going on?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


