Re: auto_increment

2008-04-22 Thread Sebastian Mendel

Hiep Nguyen wrote:

hi list,

reading the manual on mysql regarding auto_increment with a
multiple-column index:


CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
);

INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');

SELECT * FROM animals ORDER BY grp,id;

+--------+----+---------+
| grp    | id | name    |
+--------+----+---------+
| fish   |  1 | lax     |
| mammal |  1 | dog     |
| mammal |  2 | cat     |
| mammal |  3 | whale   |
| bird   |  1 | penguin |
| bird   |  2 | ostrich |
+--------+----+---------+

my question is: what would the id be if I ran:

UPDATE `animals` SET `grp`='mammal' WHERE `grp`='bird' AND `id`='2' AND 
`name`='ostrich' LIMIT 1;


you do not need `name`='ostrich' in the WHERE clause, because (grp,id) is
already UNIQUE

your key is (grp,id) = (bird,2)

but your query will fail, because (grp,id) = (mammal,2) already exists,
and there cannot be two identical UNIQUE (PRIMARY) keys


--
Sebastian

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: auto_increment

2008-04-22 Thread Sebastian Mendel

Sebastian Mendel wrote:

Hiep Nguyen wrote:

hi list,

reading the manual on mysql regarding auto_increment with a
multiple-column index:


CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
);

INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');

SELECT * FROM animals ORDER BY grp,id;

+--------+----+---------+
| grp    | id | name    |
+--------+----+---------+
| fish   |  1 | lax     |
| mammal |  1 | dog     |
| mammal |  2 | cat     |
| mammal |  3 | whale   |
| bird   |  1 | penguin |
| bird   |  2 | ostrich |
+--------+----+---------+

my question is: what would the id be if I ran:

UPDATE `animals` SET `grp`='mammal' WHERE `grp`='bird' AND `id`='2' 
AND `name`='ostrich' LIMIT 1;


you do not need `name`='ostrich' in the WHERE clause, because (grp,id) is
already UNIQUE

your key is (grp,id) = (bird,2)

but your query will fail, because (grp,id) = (mammal,2) already exists,
and there cannot be two identical UNIQUE (PRIMARY) keys


auto_increment only takes effect when inserting NULL (or 0, unless the
NO_AUTO_VALUE_ON_ZERO SQL mode is set) or nothing at all (relying on the
column default, which should always be the case for auto_increment
columns)


your query should look like this:

UPDATE `animals`
   SET `grp` = 'mammal',
   `id`  = NULL
 WHERE `grp` = 'bird'
   AND `id`  = '2'
 LIMIT 1;

--
Sebastian Mendel




Re: How hard is it to move from on server to another?

2008-04-22 Thread Sebastian Mendel

B. Keith Murphy wrote:
A simple rsync should do the trick.  How long will depend on how much 
data you have.  I would just shut down the server, copy over the data 
directory and start the new server up.  Should be a piece of cake.


Keith

David Ruggles wrote:

I have a MySQL 5.x box and I am thinking about moving it to another more
powerful server. I would be able to schedule some downtime so that's 
not an
issue. How complicated a process would this be? I don't want to 
upgrade the
software or anything, just move the existing tables, users and 
permissions

to another physical server. It would even have the same IP address.


additionally, depending on the data size, it could be much easier to just 
install your old HDD in the new server ... either copy the data over to the 
new HDD or keep using the old one ...



--
Sebastian Mendel




Re: auto_increment

2008-04-22 Thread Ben Clewett

Are you sure? I just get:

CREATE TABLE ...

ERROR 1075 (42000): Incorrect table definition; there can be only one 
auto column and it must be defined as a key


On version 5.0.41.  What version are you using?



Hiep Nguyen wrote:

hi list,

reading the manual on mysql regarding auto_increment with a
multiple-column index:


CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
);

INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');

SELECT * FROM animals ORDER BY grp,id;

+--------+----+---------+
| grp    | id | name    |
+--------+----+---------+
| fish   |  1 | lax     |
| mammal |  1 | dog     |
| mammal |  2 | cat     |
| mammal |  3 | whale   |
| bird   |  1 | penguin |
| bird   |  2 | ostrich |
+--------+----+---------+

my question is: what would the id be if I ran:

UPDATE `animals` SET `grp`='mammal' WHERE `grp`='bird' AND `id`='2' AND 
`name`='ostrich' LIMIT 1;


I'm confused about auto_increment now.

thanks






Re: auto_increment

2008-04-22 Thread Sebastian Mendel

Ben Clewett wrote:

Are you sure? I just get:

CREATE TABLE ...

ERROR 1075 (42000): Incorrect table definition; there can be only one 
auto column and it must be defined as a key


the mentioned CREATE TABLE is fine and works



On version 5.0.41.  What version are you using?


this works on all versions, and the example is from the MySQL manual

http://dev.mysql.com/doc/refman/5.1/en/example-auto-increment.html

--
Sebastian Mendel




Re: problems w/ Replication over the Internet

2008-04-22 Thread Jan Kirchhoff
Eric Bergen wrote:
 TCP checksums aren't as strong as encryption. It's rare but corruption
 can happen.
   
But it happens every other day? That means at least one error per 4GB of
data (I have around 2GB of binlogs/day)?
Every DVD ISO you download would (statistically) be corrupt?

 Where are you reading the positions from and how are you taking the
 snapshot to restore the slave?
   
From the log file:

080415  6:39:20 [ERROR] Error
running query, slave SQL thread aborted. Fix the problem, and restart
the slave SQL thread with SLAVE START. We stopped at log
'mysql-bin.045709' position 172


I use rsync to set up the slave...



 On Mon, Apr 21, 2008 at 12:30 AM, Jan Kirchhoff [EMAIL PROTECTED] wrote:
   
 Eric Bergen wrote:

 
 Hi Jan,
   
  
   You have two separate issues here. First the issue with the link
   between the external slave and the master. Running mysql through
   something like stunnel may help with the connection and data loss
   issues.
  
  I wonder how any corruption could happen on a TCP connection as TCP has
  its own checksums and a connection would break down in case of a missing
  packet?

 
 The second problem is that your slave is corrupt. Duplicate key errors
   
   are sometimes caused by a corrupt table but more often by restarting
   replication from an incorrect binlog location. Try recloning the slave
   and starting replication again through stunnel.
  
  The duplicate key errors happen after I start at the beginning of a
  logfile (master_log_pos=0), when the position that mysql reports as its
  last position does not work.

  I think I have 2 issues:
  #1: how can this kind of binlog corruption happen on a TCP link although
  TCP has its checksums and resends lost packets?

  #2: why does mysql report a master log position that is obviously wrong?
  mysql reports log-position 172, which does not work at all in a CHANGE
  MASTER TO command; my only option is to start with master_log_pos=0, and
  the number of duplicate key errors and such that I have to skip after
  starting from master_log_pos=0 shows me that the real position where
  mysql stopped processing the binlog must be in the thousands or
  ten-thousands, not 172?!

  Jan

 



   





Re: auto_increment

2008-04-22 Thread Ben Clewett


You are right; I've tried 5.0.18 and 5.0.45, which both work.

There must have been a bug in 5.0.41, with which I tested the question...

I believe the question has been answered by now anyway :)

Ben


Sebastian Mendel wrote:

Ben Clewett wrote:

Are you sure? I just get:

CREATE TABLE ...

ERROR 1075 (42000): Incorrect table definition; there can be only one 
auto column and it must be defined as a key


the mentioned CREATE TABLE is fine and works



On version 5.0.41.  What version are you using?


this works on all versions, and the example is from the MySQL manual

http://dev.mysql.com/doc/refman/5.1/en/example-auto-increment.html






Re: auto_increment

2008-04-22 Thread Hiep Nguyen

On Tue, 22 Apr 2008, Sebastian Mendel wrote:


Sebastian Mendel wrote:

Hiep Nguyen wrote:

hi list,

reading the manual on mysql regarding auto_increment with a
multiple-column index:


CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
);

INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');

SELECT * FROM animals ORDER BY grp,id;

+--------+----+---------+
| grp    | id | name    |
+--------+----+---------+
| fish   |  1 | lax     |
| mammal |  1 | dog     |
| mammal |  2 | cat     |
| mammal |  3 | whale   |
| bird   |  1 | penguin |
| bird   |  2 | ostrich |
+--------+----+---------+

my question is: what would the id be if I ran:

UPDATE `animals` SET `grp`='mammal' WHERE `grp`='bird' AND `id`='2' AND 
`name`='ostrich' LIMIT 1;


you do not need `name`='ostrich' in the WHERE clause, because (grp,id) is
already UNIQUE

your key is (grp,id) = (bird,2)

but your query will fail, because (grp,id) = (mammal,2) already exists,
and there cannot be two identical UNIQUE (PRIMARY) keys


auto_increment only takes effect when inserting NULL (or 0, unless the
NO_AUTO_VALUE_ON_ZERO SQL mode is set) or nothing at all (relying on the
column default, which should always be the case for auto_increment
columns)


your query should look like this:

UPDATE `animals`
  SET `grp` = 'mammal',
  `id`  = NULL
WHERE `grp` = 'bird'
  AND `id`  = '2'
LIMIT 1;

--
Sebastian Mendel



Thanks, I got it.




Re: Working with Images

2008-04-22 Thread Sebastian Mendel

Victor Subervi wrote:

Hi;
The python code works properly, so I assume this is a strictly MySQL
question now :)
If I grab an image in the database thus:

  sql = "select pic1 from products where id='" + str(id) + "';"
  cursor.execute(sql)
  pic1 = cursor.fetchall()[0][0].tostring()
#  pic1 = cursor.fetchall()[0][0]  # either this or the above line

and try and re-insert it thus:

  cursor.execute('update products set pic1=%s where id=%s, ;',
(pic1, id))


i am not familiar with this Python DB abstraction layer, but

the last comma seems to be wrong

try
'update products set pic1=%s where id=%s;'
instead of
'update products set pic1=%s where id=%s, ;'

or even try it without the ';'

and is this some sort of prepared statement, or are the parameters
escaped somewhere else?


--
Sebastian Mendel




SQL question: find items tagged with specific tags

2008-04-22 Thread Ingo Weiss

Hi all,

I have an application where items can be tagged. There are three tables,
'items', 'taggings' and 'tags', joined together like this:

 items inner join taggings on (items.id = taggings.item_id) inner
join tags on (tags.id = taggings.tag_id)

Now I have been struggling for some time with coming up with the SQL to
find the items whose tags include a specified list of tag names. Example:


I am looking for items tagged with 'blue' and 'red'. This should find  
me:


- items tagged with 'blue' and 'red'
- items tagged with 'blue', 'red' and 'green'

Any help is very much appreciated!
Ingo





Re: SQL question: find items tagged with specific tags

2008-04-22 Thread Sebastian Mendel

Ingo Weiss wrote:

Hi all,

I have an application where items can be tagged. There are three tables 
'items', 'taggings'  and 'tags' joined together like this:


 items inner join taggings on (items.id = taggings.item_id) inner join 
tags on (tags.id = taggings.tag_id)


Now I have been struggling for some time with coming up with the SQL to
find the items whose tags include a specified list of tag names. Example:


I am looking for items tagged with 'blue' and 'red'. This should find me:

- items tagged with 'blue' and 'red'
- items tagged with 'blue', 'red' and 'green'


SELECT DISTINCT items.*
FROM [your join above]
WHERE tags.name IN ('blue', 'red');
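
Note that the IN (...) filter above matches items tagged with either
color. If the goal is items carrying all of the listed tags, a grouped
variant is the usual approach (a sketch, assuming the same join and a
tags.name column):

SELECT items.*
  FROM items
 INNER JOIN taggings ON items.id = taggings.item_id
 INNER JOIN tags ON tags.id = taggings.tag_id
 WHERE tags.name IN ('blue', 'red')
 GROUP BY items.id
HAVING COUNT(DISTINCT tags.name) = 2;  -- = number of required tags

This returns items tagged 'blue' and 'red', with or without further
tags such as 'green'.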

--
Sebastian Mendel




Performance

2008-04-22 Thread Bruno B . B . Magalhães

Hi everybody,

I am back to this list after a long period away due to work time
restrictions... I have great news and a few interesting applications
that I will release to the mysql community very soon, most probably as
open source.


But now I have a performance problem with a client of mine that I was
not able to solve... The problem is that I have a very large table in
terms of data, about 7,000,000 financial transaction records, with the
following structure (translated from Portuguese):


CREATE TABLE `transactions` (
 `client_id` int(5) unsigned zerofill NOT NULL default '00000',
 `client_unit_id` int(4) unsigned zerofill NOT NULL default '0000',
 `client_property_id` int(6) unsigned zerofill NOT NULL default '000000',
 `transaction_id` int(6) unsigned zerofill NOT NULL default '000000',
 `transaction_account_id` int(3) unsigned zerofill NOT NULL default '000',
 `transaction_classification_id` int(3) unsigned NOT NULL default '0',
 `transaction_category_id` int(4) unsigned zerofill NOT NULL default '0000',
 `transaction_complement` varchar(200) NOT NULL,
 `transaction_date` date default NULL,
 `transaction_amount` decimal(16,2) NOT NULL,
 `transaction_parcel` varchar(8) NOT NULL,
 `transaction_nature` varchar(1) NOT NULL,
 KEY `transactions_idx_1` (`client_id`,`client_unit_id`,`client_property_id`,`transaction_account_id`,`transaction_classification_id`,`transaction_category_id`,`transaction_id`,`transaction_date`,`transaction_nature`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

And most of the queries are similar to this one:

 SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
        transactions.transaction_complement AS complement,
        transactions.transaction_parcel AS parcel,
        transactions.transaction_amount AS amount,
        transactions.transaction_nature AS nature,
        transactions_categories.transaction_category_description AS category_description
   FROM transactions AS transactions
 LEFT JOIN transactions_categories AS transactions_categories
        ON transactions.transaction_category_id = transactions_categories.transaction_category_id
  WHERE transactions.client_id = :client
    AND transactions.client_unit_id = :unit
    AND transactions.transaction_date >= :start_date
    AND transactions.transaction_date <= :stop_date
  ORDER BY transactions.transaction_date,
        transactions.transaction_id ASC

So the most important indexes are client_id, client_unit_id,
client_property_id, transaction_account_id, transaction_classification_id,
transaction_category_id, transaction_id, transaction_date and
transaction_nature, and most of the time they are used together. I think
the most problematic part of those queries is the date range; should I
use a separate index for just that column, to keep the index small?


Most of the financial reports today take about 8 to 12 seconds to be
generated for one month (of course, I have to sum previous months'
totals to give the balance).


Thanks in advance...

Regards,
Bruno B B Magalhães




Re: Performance

2008-04-22 Thread Phil
I'm sure if you created an index on
client_id,client_unit_id,transaction_date (optionally with something else
to make it unique) it would increase performance.

What does an EXPLAIN give you?

Phil

On Tue, Apr 22, 2008 at 11:41 AM, Bruno B. B. Magalhães 
[EMAIL PROTECTED] wrote:

 Hi everybody,

 I am back to this list after a long period away due to work time
 restrictions... I have great news and a few interesting applications that I
 will release to the mysql community very soon, most probably as open source.

 But now I have a performance problem with a client of mine that I was not
 able to solve... The problem is that I have a very large table in terms of
 data, about 7,000,000 financial transaction records, with the following
 structure (translated from Portuguese):

 CREATE TABLE `transactions` (
  `client_id` int(5) unsigned zerofill NOT NULL default '00000',
  `client_unit_id` int(4) unsigned zerofill NOT NULL default '0000',
  `client_property_id` int(6) unsigned zerofill NOT NULL default '000000',
  `transaction_id` int(6) unsigned zerofill NOT NULL default '000000',
  `transaction_account_id` int(3) unsigned zerofill NOT NULL default '000',
  `transaction_classification_id` int(3) unsigned NOT NULL default '0',
  `transaction_category_id` int(4) unsigned zerofill NOT NULL default '0000',
  `transaction_complement` varchar(200) NOT NULL,
  `transaction_date` date default NULL,
  `transaction_amount` decimal(16,2) NOT NULL,
  `transaction_parcel` varchar(8) NOT NULL,
  `transaction_nature` varchar(1) NOT NULL,
  KEY `transactions_idx_1` (`client_id`,`client_unit_id`,`client_property_id`,`transaction_account_id`,`transaction_classification_id`,`transaction_category_id`,`transaction_id`,`transaction_date`,`transaction_nature`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8

 And most of the queries are similar to this one:

  SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
   transactions.transaction_complement AS complement,
   transactions.transaction_parcel AS parcel,
   transactions.transaction_amount AS amount,
   transactions.transaction_nature AS nature,
   transactions_categories.transaction_category_description
 AS category_description
  FROM transactions AS transactions
 LEFT JOIN transactions_categories AS transactions_categories
ON transactions.transaction_category_id =
 transactions_categories.transaction_category_id
 WHERE transactions.client_id = :client
   AND transactions.client_unit_id = :unit
   AND transactions.transaction_date >= :start_date
   AND transactions.transaction_date <= :stop_date
 ORDER BY transactions.transaction_date,
   transactions.transaction_id ASC

 So the most important indexes are client_id, client_unit_id,
 client_property_id, transaction_account_id, transaction_classification_id,
 transaction_category_id, transaction_id, transaction_date and
 transaction_nature, and most of the time they are used together. I think
 the most problematic part of those queries is the date range; should I
 use a separate index for just that column, to keep the index small?

 Most of the financial reports today take about 8 to 12 seconds to be
 generated for one month (of course, I have to sum previous months'
 totals to give the balance).

 Thanks in advance...

 Regards,
 Bruno B B Magalhães





-- 
Help build our city at http://free-dc.myminicity.com !


Restarting/Rebuilding Slave

2008-04-22 Thread Ashley M. Kirchner


   I'm rebuilding a server that is a slave to our primary.  At this 
point in the game, I can no longer access anything on the slave server 
(its main drive is being rebuilt).  However, I do have a full rsync of 
the /var/lib/mysql folder that was on it (this rsync was done AFTER 
MySQL was properly shut down on this server.)  And I have all the info 
from the master server as well.


   So the question is, once I finish rebuilding this machine and 
reinstall MySQL on it, what's the proper or safest way to rebuild the 
slave setup and get it going again (and pick up where it left off before 
the machine went down)?  Can I simply rsync all the files back and start 
MySQL?


   Looking at the rsynced data, I can see the -relay-bin files, all the 
mysql-bin files, and all the individual databases in their respective 
folders:


drwx--  2 mysql mysql12288 Sep 11  2007 lif1/
drwx--  2 mysql mysql 4096 Apr  3 10:00 lif2/
drwx--  2 mysql mysql16384 Mar  4 19:00 lotro/
drwx--x--x  2 mysql mysql 4096 Apr 14  2007 mysql/
drwxr-xr-x  2 mysql mysql 4096 Apr 13 11:59 test/
drwx--  2 mysql mysql 4096 Jan 28 10:54 wonderland/
-rw-rw  1 mysql mysql 10485760 Apr 22 10:30 ibdata1
-rw-rw  1 mysql mysql  5242880 Apr 22 10:30 ib_logfile0
-rw-rw 14 mysql mysql  5242880 May 11  2007 ib_logfile1
-rw-rw  1 mysql root 25480 Apr 22 10:30 lilpusher.err
-rw-rw  1 mysql mysql  4607313 Apr 22 10:30 lilpusher-relay-bin.005216
-rw-rw  3 mysql mysql   29 Apr 20 05:42 lilpusher-relay-bin.index
-rw-rw  1 mysql mysql23176 Apr 22 10:30 log.01
-rw-rw  1 mysql mysql   84 Apr 22 10:28 master.info
-rw-rw 14 mysql mysql  117 May 11  2007 mysql-bin.01
-rw-rw 14 mysql mysql  117 May 11  2007 mysql-bin.02
-rw-rw 14 mysql mysql  117 May 21  2007 mysql-bin.03
-rw-rw 14 mysql mysql   98 May 21  2007 mysql-bin.04
-rw-rw 14 mysql mysql  117 Jun 15  2007 mysql-bin.05
-rw-rw 14 mysql mysql  117 Aug 11  2007 mysql-bin.06
-rw-rw 14 mysql mysql  117 Sep 25  2007 mysql-bin.07
-rw-rw 14 mysql mysql  117 Oct 24 10:06 mysql-bin.08
-rw-rw 14 mysql mysql  117 Oct 28 09:50 mysql-bin.09
-rw-rw 14 mysql mysql  117 Nov  6 12:26 mysql-bin.10
-rw-rw 14 mysql mysql  117 Dec 23 03:15 mysql-bin.11
-rw-rw  1 mysql mysql  117 Apr 22 10:30 mysql-bin.12
-rw-rw 14 mysql mysql  228 Dec 23 03:18 mysql-bin.index
-rw-rw  1 mysql mysql   66 Apr 22 10:28 relay-log.info


--
W | It's not a bug - it's an undocumented feature.
 +
 Ashley M. Kirchner mailto:[EMAIL PROTECTED]   .   303.442.6410 x130
 IT Director / SysAdmin / Websmith . 800.441.3873 x130
 Photo Craft Imaging   . 3550 Arapahoe Ave. #6
 http://www.pcraft.com . .  ..   Boulder, CO 80303, U.S.A. 






Re: Performance

2008-04-22 Thread Rob Wultsch
On Tue, Apr 22, 2008 at 8:41 AM, Bruno B. B. Magalhães 
[EMAIL PROTECTED] wrote:

 Hi everybody,

 I am back to this list after a long period away due to work time
 restrictions... I have great news and a few interesting applications that I
 will release to the mysql community very soon, most probably as open source.

 But now I have a performance problem with a client of mine that I was not
 able to solve... The problem is that I have a very large table in terms of
 data, about 7,000,000 financial transaction records, with the following
 structure (translated from Portuguese):

 CREATE TABLE `transactions` (
  `client_id` int(5) unsigned zerofill NOT NULL default '00000',
  `client_unit_id` int(4) unsigned zerofill NOT NULL default '0000',
  `client_property_id` int(6) unsigned zerofill NOT NULL default '000000',
  `transaction_id` int(6) unsigned zerofill NOT NULL default '000000',
  `transaction_account_id` int(3) unsigned zerofill NOT NULL default '000',
  `transaction_classification_id` int(3) unsigned NOT NULL default '0',
  `transaction_category_id` int(4) unsigned zerofill NOT NULL default '0000',
  `transaction_complement` varchar(200) NOT NULL,
  `transaction_date` date default NULL,
  `transaction_amount` decimal(16,2) NOT NULL,
  `transaction_parcel` varchar(8) NOT NULL,
  `transaction_nature` varchar(1) NOT NULL,
  KEY `transactions_idx_1` (`client_id`,`client_unit_id`,`client_property_id`,`transaction_account_id`,`transaction_classification_id`,`transaction_category_id`,`transaction_id`,`transaction_date`,`transaction_nature`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8

 And most of the queries are similar to this one:

  SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
   transactions.transaction_complement AS complement,
   transactions.transaction_parcel AS parcel,
   transactions.transaction_amount AS amount,
   transactions.transaction_nature AS nature,
   transactions_categories.transaction_category_description
 AS category_description
  FROM transactions AS transactions
 LEFT JOIN transactions_categories AS transactions_categories
ON transactions.transaction_category_id =
 transactions_categories.transaction_category_id
 WHERE transactions.client_id = :client
   AND transactions.client_unit_id = :unit
   AND transactions.transaction_date >= :start_date
   AND transactions.transaction_date <= :stop_date
 ORDER BY transactions.transaction_date,
   transactions.transaction_id ASC

 So the most important indexes are client_id, client_unit_id,
 client_property_id, transaction_account_id, transaction_classification_id,
 transaction_category_id, transaction_id, transaction_date and
 transaction_nature, and most of the time they are used together. I think
 the most problematic part of those queries is the date range; should I
 use a separate index for just that column, to keep the index small?

 Most of the financial reports today take about 8 to 12 seconds to be
 generated for one month (of course, I have to sum previous months'
 totals to give the balance).

 Thanks in advance...

 Regards,
 Bruno B B Magalhães


 I would think probably not on the date range. As Phil says, an EXPLAIN
would probably be helpful. Particularly of note would be the key_len. I
would imagine an optimal index would be something like
KEY `transactions_idx_1` (`client_id`,`client_unit_id`,`transaction_date`)

If memory serves, MySQL does not use the parts of a composite index to
the right of a range scan.
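
As DDL, that suggestion would look roughly like this (the index name is
an assumption; try it on a copy of the table first):

ALTER TABLE transactions
  ADD KEY transactions_idx_2 (client_id, client_unit_id, transaction_date);

With equality predicates on client_id and client_unit_id, the range on
transaction_date is then the right-most index part the optimizer can
still use, which matches the WHERE clause of the query above.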


-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)


Re: Performance

2008-04-22 Thread Rob Wultsch
On Tue, Apr 22, 2008 at 8:41 AM, Bruno B. B. Magalhães 
[EMAIL PROTECTED] wrote:

 Hi everybody,

 I am back to this list after a long period away due to work time
 restrictions... I have great news and a few interesting applications that I
 will release to the mysql community very soon, most probably as open source.

 But now I have a performance problem with a client of mine that I was not
 able to solve... The problem is that I have a very large table in terms of
 data, about 7,000,000 financial transaction records, with the following
 structure (translated from Portuguese):

 CREATE TABLE `transactions` (
  `client_id` int(5) unsigned zerofill NOT NULL default '00000',
  `client_unit_id` int(4) unsigned zerofill NOT NULL default '0000',
  `client_property_id` int(6) unsigned zerofill NOT NULL default '000000',
  `transaction_id` int(6) unsigned zerofill NOT NULL default '000000',
  `transaction_account_id` int(3) unsigned zerofill NOT NULL default '000',
  `transaction_classification_id` int(3) unsigned NOT NULL default '0',
  `transaction_category_id` int(4) unsigned zerofill NOT NULL default '0000',
  `transaction_complement` varchar(200) NOT NULL,
  `transaction_date` date default NULL,
  `transaction_amount` decimal(16,2) NOT NULL,
  `transaction_parcel` varchar(8) NOT NULL,
  `transaction_nature` varchar(1) NOT NULL,
  KEY `transactions_idx_1` (`client_id`,`client_unit_id`,`client_property_id`,`transaction_account_id`,`transaction_classification_id`,`transaction_category_id`,`transaction_id`,`transaction_date`,`transaction_nature`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8

 And most of the queries are similar to this one:

  SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
   transactions.transaction_complement AS complement,
   transactions.transaction_parcel AS parcel,
   transactions.transaction_amount AS amount,
   transactions.transaction_nature AS nature,
   transactions_categories.transaction_category_description
 AS category_description
  FROM transactions AS transactions
 LEFT JOIN transactions_categories AS transactions_categories
ON transactions.transaction_category_id =
 transactions_categories.transaction_category_id
 WHERE transactions.client_id = :client
   AND transactions.client_unit_id = :unit
   AND transactions.transaction_date >= :start_date
   AND transactions.transaction_date <= :stop_date
 ORDER BY transactions.transaction_date,
   transactions.transaction_id ASC

 So the most important indexes are client_id, client_unit_id,
 client_property_id, transaction_account_id, transaction_classification_id,
 transaction_category_id, transaction_id, transaction_date and
 transaction_nature, and most of the time they are used together. I think
 the most problematic part of those queries is the date range; should I
 use a separate index for just that column, to keep the index small?

 Most of the financial reports today take about 8 to 12 seconds to be
 generated for one month (of course, I have to sum previous months'
 totals to give the balance).

 Thanks in advance...

 Regards,
 Bruno B B Magalhães


 I would think probably not on the date range. As Phil says, an EXPLAIN
would probably be helpful. Particularly of note would be the key_len. I
would imagine an optimal index would be something like
(`client_id`,`client_unit_id`,`transaction_date`)

If memory serves, MySQL does not use the parts of a composite index to
the right of a range scan.

PS I apologize if this gets sent twice... had an issue with the mail client.

-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)


Re: Performance

2008-04-22 Thread Perrin Harkins
On Tue, Apr 22, 2008 at 11:41 AM, Bruno B.  B.  Magalhães
[EMAIL PROTECTED] wrote:
 I think
 the most problematic part of those queries is the date range; should I
 use a separate index for just that column, to keep the index small?

My experience with doing data warehousing in MySQL was that when all
you need is day granularity, you are much better off having a
de-normalized 'days_since_epoch' column or a date dimension table with
a column like that.  Then your date math becomes simple integer
comparisons which are much faster.
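
In MySQL, TO_DAYS() produces exactly such an integer day number. A
minimal sketch of the de-normalized column idea for the transactions
table discussed above (the new column name is an assumption):

ALTER TABLE transactions ADD COLUMN transaction_day INT UNSIGNED NOT NULL DEFAULT 0;
UPDATE transactions SET transaction_day = TO_DAYS(transaction_date);

-- a report's date range then becomes a plain integer comparison:
--   WHERE transaction_day BETWEEN TO_DAYS('2008-03-01') AND TO_DAYS('2008-03-31')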

- Perrin




a strange problem

2008-04-22 Thread liaojian_163
hi all,
In my mysql server I have a strange problem.
Can someone help me?
Thank you.

mysql> select id,classid,newstime from phome_ecms_zhichang where classid=41
and id > 2500 order by id desc limit 10;
+------+---------+---------------------+
| id   | classid | newstime            |
+------+---------+---------------------+
| 2543 |      41 | 2008-04-22 21:55:22 |
| 2540 |      41 | 2008-04-19 12:29:30 |
| 2537 |      41 | 2008-04-18 17:38:50 |
| 2536 |      41 | 2008-04-18 17:37:56 |
| 2534 |      41 | 2008-04-18 12:22:24 |
| 2533 |      41 | 2008-04-18 01:19:49 |
| 2532 |      41 | 2008-04-18 01:18:42 |
| 2527 |      41 | 2008-04-16 18:45:34 |
| 2526 |      41 | 2008-04-16 18:43:03 |
| 2523 |      41 | 2008-04-16 08:47:16 |
+------+---------+---------------------+
10 rows in set (0.00 sec)

mysql> select id,classid,newstime from phome_ecms_zhichang where classid=41
order by id desc limit 10;
+------+---------+---------------------+
| id   | classid | newstime            |
+------+---------+---------------------+
| 2540 |      41 | 2008-04-19 12:29:30 |
| 2537 |      41 | 2008-04-18 17:38:50 |
| 2536 |      41 | 2008-04-18 17:37:56 |
| 2534 |      41 | 2008-04-18 12:22:24 |
| 2533 |      41 | 2008-04-18 01:19:49 |
| 2532 |      41 | 2008-04-18 01:18:42 |
| 2527 |      41 | 2008-04-16 18:45:34 |
| 2526 |      41 | 2008-04-16 18:43:03 |
| 2523 |      41 | 2008-04-16 08:47:16 |
| 2522 |      41 | 2008-04-15 15:34:55 |
+------+---------+---------------------+


mysql> select id,classid,newstime from phome_ecms_zhichang  where classid=41
and id > 0 order by id desc  limit 10;
Empty set (0.00 sec)


desc  phome_ecms_zhichang;
+-------------+--------------+------+-----+---------------------+----------------+
| Field       | Type         | Null | Key | Default             | Extra          |
+-------------+--------------+------+-----+---------------------+----------------+
| id          | int(11)      | NO   | PRI | NULL                | auto_increment |
| classid     | smallint(6)  | NO   | MUL | 0                   |                |
| onclick     | int(11)      | NO   |     | 0                   |                |
| newspath    | varchar(50)  | NO   |     |                     |                |
| keyboard    | varchar(255) | NO   |     |                     |                |
| keyid       | varchar(255) | NO   |     |                     |                |
| userid      | int(11)      | NO   |     | 0                   |                |
| username    | varchar(30)  | NO   |     |                     |                |
| ztid        | varchar(255) | NO   |     |                     |                |
| checked     | tinyint(1)   | NO   | MUL | 0                   |                |
| istop       | tinyint(4)   | NO   |     | 0                   |                |
| truetime    | int(11)      | NO   | MUL | 0                   |                |
| ismember    | tinyint(1)   | NO   |     | 0                   |                |
| dokey       | tinyint(1)   | NO   |     | 0                   |                |
| userfen     | int(11)      | NO   |     | 0                   |                |
| isgood      | tinyint(1)   | NO   |     | 0                   |                |
| titlefont   | varchar(50)  | NO   |     |                     |                |
| titleurl    | varchar(200) | NO   |     |                     |                |
| filename    | varchar(60)  | NO   |     |                     |                |
| filenameqz  | varchar(28)  | NO   |     |                     |                |
| fh          | tinyint(1)   | NO   |     | 0                   |                |
| groupid     | smallint(6)  | NO   |     | 0                   |                |
| newstempid  | smallint(6)  | NO   |     | 0                   |                |
| plnum       | int(11)      | NO   |     | 0                   |                |
| firsttitle  | tinyint(1)   | NO   |     | 0                   |                |
| isqf        | tinyint(1)   | NO   |     | 0                   |                |
| totaldown   | int(11)      | NO   |     | 0                   |                |
| title       | varchar(200) | NO   |     |                     |                |
| newstime    | datetime     | NO   | MUL | 0000-00-00 00:00:00 |                |
| titlepic    | varchar(200) | NO   |     |                     |                |
| closepl     | tinyint(1)   | NO   |     | 0                   |                |
| havehtml    | tinyint(1)   | NO   |     | 0                   |                |
| lastdotime  | int(11)      | NO   |     | 0                   |                |
| haveaddfen  | tinyint(1)   | NO   |     | 0                   |                |
| infopfen    | int(11)      | NO   |     | 0                   |                |
| infopfennum | int(11)      | NO   |     | 0                   |                |
| votenum |

Weird result on max compared to order by

2008-04-22 Thread jmacaranas
Hi,

  I did a select on a primary key:
Select max(account_id) from mytable;
   -- it gave me a value X

  I did a select with order by:
Select account_id from mytable order by account_id desc limit 3;
   -- it gave me a value of Y (Y is the right value)

  I was wondering why it didn't give me the same value, and after some
time doing a select max gave me the right value Y.

Tia,







Re: a strange problem

2008-04-22 Thread Phil
Not knowing your MySQL version, perhaps it's the form of your LIMIT clause.

try LIMIT 0,10 instead.

Phil

2008/4/22 liaojian_163 [EMAIL PROTECTED]:

 hi,all.
 In my mysql server,I have a strange problem.
 can someone help me?
 Thank you.

 [quoted queries and table definition snipped; duplicated in the original
 message above]

Re: a strange problem

2008-04-22 Thread liaojian_163
I don't think the LIMIT causes the strange problem.

Thank you anyway.
- Original Message - 
From: Phil [EMAIL PROTECTED]
To: liaojian_163 [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, April 23, 2008 3:46 AM
Subject: Re: a strange problem


 Not knowing your MySQL version, perhaps it's the form of your LIMIT clause.
 
 try LIMIT 0,10 instead.
 
 Phil
 
 2008/4/22 liaojian_163 [EMAIL PROTECTED]:
 
 hi,all.
 In my mysql server,I have a strange problem.
 can someone help me?
 Thank you.

 [quoted queries and table definition snipped; duplicated in the original
 message above]

Re: Performance

2008-04-22 Thread Bruno B. B. Magalhães

Hi Phill, Rob and Perrin,

I forgot to attach the EXPLAIN output from MySQL, 'cause it's one of
the most important things... Sorry!!!


EXPLAIN SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
       transactions.transaction_complement AS complement,
       transactions.transaction_value AS value,
       transactions.transaction_nature AS nature,
       transactions_categories.transaction_category_description AS category_description,
       clients_units.client_unit_complement AS unit_complement
  FROM transactions AS transactions
  LEFT JOIN transactions_categories AS transactions_categories
    ON transactions.transaction_category_id = transactions_categories.transaction_category_id
  LEFT JOIN clients_units AS clients_units
    ON transactions.client_id = clients_units.client_id
   AND transactions.client_unit_id = clients_units.client_unit_id
 WHERE transactions.client_id = '00379'
   AND transactions.transaction_account_id = '001'
   AND transactions.transaction_classification_id = '101'
   AND transactions.transaction_date >= '2008-03-01'
   AND transactions.transaction_date <= '2008-03-31'
 ORDER BY transactions.transaction_date ASC


1  SIMPLE  transactions  ref  transactions_idx_1  transactions_idx_1  4  const  9582  Using where; Using temporary; Using filesort
1  SIMPLE  transactions_classifications  ALL  NULL  NULL  NULL  NULL  1660
1  SIMPLE  clients_units  ref  clients_units_idx_1  clients_units_idx_1  8  bap_sat.transactions.client_id,bap_sat.transactions.client_unit_id  1


It seems the transactions table is the source of the problem, as it's
doing a filesort into a temporary table... But my MyISAM sort buffer is
big; I think it's about 80MB or so...
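Worth noting as an assumption-check: the filesort shown in EXPLAIN is governed by the per-session sort_buffer_size, not the MyISAM key sort buffer (myisam_sort_buffer_size), which only applies to index creation and REPAIR TABLE. A quick way to see both values:

```sql
SHOW VARIABLES LIKE 'sort_buffer_size';
SHOW VARIABLES LIKE 'myisam_sort_buffer_size';
```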


Thank you everybody for your help!!!

Best Regards,
Bruno B B Magalhaes


On Apr 22, 2008, at 3:21 PM, Perrin Harkins wrote:


On Tue, Apr 22, 2008 at 11:41 AM, Bruno B.  B.  Magalhães
[EMAIL PROTECTED] wrote:

I think the most problematic part of those queries is the date range
part; should I use a different index only for this column to keep the
index small?


My experience with doing data warehousing in MySQL was that when all
you need is day granularity, you are much better off having a
de-normalized 'days_since_epoch' column or a date dimension table with
a column like that.  Then your date math becomes simple integer
comparisons which are much faster.

- Perrin




Re: Performance

2008-04-22 Thread Rob Wultsch
On Tue, Apr 22, 2008 at 1:13 PM, Bruno B. B. Magalhães
[EMAIL PROTECTED] wrote:

 Hi Phill, Rob and Perrin,

 I forgot to attach the EXPLAIN output from MySQL, 'cause it's one of the most
 important things... Sorry!!!

 EXPLAIN SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
        transactions.transaction_complement AS complement,
        transactions.transaction_value AS value,
        transactions.transaction_nature AS nature,
        transactions_categories.transaction_category_description AS category_description,
        clients_units.client_unit_complement AS unit_complement
   FROM transactions AS transactions
   LEFT JOIN transactions_categories AS transactions_categories
     ON transactions.transaction_category_id = transactions_categories.transaction_category_id
   LEFT JOIN clients_units AS clients_units
     ON transactions.client_id = clients_units.client_id
    AND transactions.client_unit_id = clients_units.client_unit_id
  WHERE transactions.client_id = '00379'
    AND transactions.transaction_account_id = '001'
    AND transactions.transaction_classification_id = '101'
    AND transactions.transaction_date >= '2008-03-01'
    AND transactions.transaction_date <= '2008-03-31'
  ORDER BY transactions.transaction_date ASC


 1  SIMPLE  transactions  ref  transactions_idx_1  transactions_idx_1  4  const  9582  Using where; Using temporary; Using filesort
 1  SIMPLE  transactions_classifications  ALL  NULL  NULL  NULL  NULL  1660
 1  SIMPLE  clients_units  ref  clients_units_idx_1  clients_units_idx_1  8  bap_sat.transactions.client_id,bap_sat.transactions.client_unit_id  1

 It seems the transactions table is the source of the problem, as it's doing
 a filesort into a temporary table... But my MyISAM sort buffer is big; I
 think it's about 80MB or so...

 Thank you everybody for your help!!!

 Best Regards,
 Bruno B B Magalhaes





 On Apr 22, 2008, at 3:21 PM, Perrin Harkins wrote:


 
 
 
  On Tue, Apr 22, 2008 at 11:41 AM, Bruno B.  B.  Magalhães
  [EMAIL PROTECTED] wrote:
 
   I think the most problematic part of those queries is the date range
   part; should I use a different index only for this column to keep the
   index small?
  
 
  My experience with doing data warehousing in MySQL was that when all
  you need is day granularity, you are much better off having a
  de-normalized 'days_since_epoch' column or a date dimension table with
  a column like that.  Then your date math becomes simple integer
  comparisons which are much faster.
 
  - Perrin
 

About how many rows actually get returned? If it is significantly less
than 9582, you are not well indexed for the query. If you are only
getting back a modest number of rows, then the filesort would not
significantly impact performance. Judging by the key length shown in
the EXPLAIN, I think the giant composite index is not being used well.
I would try a new index on something like:
 client_id
 transaction_account_id
 transaction_classification_id
 transaction_date
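Rob's suggestion translated into DDL might look like this (a sketch; the index name is made up):

```sql
ALTER TABLE transactions
  ADD INDEX idx_client_acct_class_date
    (client_id, transaction_account_id,
     transaction_classification_id, transaction_date);
```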

 1  SIMPLE  transactions_classifications  ALL  NULL  NULL  NULL  NULL  1660

That sucks. I am not seeing that table in the query you sent (I may be going blind...)

-- 
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)
