Re: [plug] Large MySQL table

2008-01-17 Thread Orlando Andico
Yes, because it's on the right side of the equality. There are other benefits to storing dates as integers -- it's how astronomers do it (Julian day). Date arithmetic becomes trivial. There is no notion of "months" or "weeks" or "years", which are artificial constructs. You can use a float and als
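
A minimal sketch of the kind of integer-date arithmetic being described, assuming a hypothetical datedue_int column holding UNIX_TIMESTAMP() values (seconds since the epoch); the table1 name is borrowed from Roger Filomeno's post below, and none of this is from the original schema:

    -- days overdue becomes plain subtraction and division, no date functions
    SELECT id, (UNIX_TIMESTAMP(NOW()) - datedue_int) / 86400 AS days_overdue
    FROM table1
    WHERE datedue_int < UNIX_TIMESTAMP(NOW());

    -- "due within the next 7 days" is a simple integer range
    SELECT id, datedue_int
    FROM table1
    WHERE datedue_int BETWEEN UNIX_TIMESTAMP(NOW())
                          AND UNIX_TIMESTAMP(NOW()) + 7 * 86400;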

Re: [plug] Large MySQL table

2008-01-17 Thread Michael Tinsay
amp() only once for the query? --- mike t.

Re: [plug] Large MySQL table

2008-01-17 Thread Roger Filomeno
Actually, I had this same problem before when I worked with date-based alerts (scheduling) for mobile, and the best advice I got was to use a TIMESTAMP date field, since it's supposed to be closer to an INT-type data field, which is optimized for how MySQL implements indexing (search it out, I think it

Re: [plug] Large MySQL table

2008-01-17 Thread Robert Locke
I think Tiger's dissertation on the subject said it all. =) I'm not sure what using an int datatype would buy you above and beyond just indexing datedue. I suppose using int *might* be slightly faster, but you would lose date functionality. And, from a purist's perspective, call a spade a spa

Re: [plug] Large MySQL table

2008-01-17 Thread Roger Filomeno
Extreme solution: 1. Change the date field from DATETIME or TIMESTAMP to its INT equivalent. Warning: do not use ALTER TABLE, data will be lost; make a script to re-create the date data as INT using UNIX_TIMESTAMP() instead. 2. Index the field. 3. Rewrite the SQL as select id, datedue from table1 where dat
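
A rough sketch of those three steps, using the table1/datedue names from the message plus a made-up datedue_int column. This variant adds a new column and backfills it rather than converting the existing one in place (which is where the data-loss warning applies), so treat it as an outline, not a tested migration:

    -- 1. re-create the date data as an integer (Unix timestamp) column
    ALTER TABLE table1 ADD COLUMN datedue_int INT UNSIGNED;
    UPDATE table1 SET datedue_int = UNIX_TIMESTAMP(datedue);

    -- 2. index the field
    CREATE INDEX idx_datedue_int ON table1 (datedue_int);

    -- 3. rewrite the SQL to compare against integer bounds
    SELECT id, datedue_int
    FROM table1
    WHERE datedue_int >= UNIX_TIMESTAMP('2008-01-01 00:00:00');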

Re: [plug] Large MySQL table

2008-01-16 Thread joebert jacaba
All, I have indexed due_date and account and got much better performance already. I don't really want to split the date into 1 or 2 ints. I use Hibernate and I want to maximize GregorianCalendar. I need to perform complex date operations and I don't want to rewrite these functions. I rely heavily on C

Re: [plug] Large MySQL table

2008-01-16 Thread Gerald Timothy Quimpo
On Wed, 2008-01-16 at 20:55 -0800, Michael Tinsay wrote: > --- On Thu, 17/1/08, joebert jacaba <[EMAIL PROTECTED]> wrote: > > Common queries: So Michael and Rob were right, then. And you won't need that integer month field. Michael has given you a bunch of SQL. That should be sufficient. If

Re: [plug] Large MySQL table

2008-01-16 Thread Michael Tinsay
--- On Thu, 17/1/08, joebert jacaba <[EMAIL PROTECTED]> wrote:

Re: [plug] Large MySQL table

2008-01-16 Thread Michael Tinsay
--- On Wed, 16/1/08, Zak B. Elep <[EMAIL PROTECTED]> wrote:

Re: [plug] Large MySQL table

2008-01-16 Thread joebert jacaba
My query should have an AND on the WHERE clause for year(due_date)=year(now()). Sorry. Common queries: 1. Get all due dates of an account code before a specified date. 2. Get all due dates of an account code after a specified date. 3. Get the most recent due date or the next due date of an account
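
One way those three query shapes could be written so that a composite index on the account code and due_date stays usable; table1 is borrowed from elsewhere in the thread, while account and 'ACCT-001' are made-up placeholders for the real column and value:

    -- 1. all due dates of an account code before a specified date
    SELECT due_date FROM table1
    WHERE account = 'ACCT-001' AND due_date < '2008-01-17';

    -- 2. all due dates of an account code after a specified date
    SELECT due_date FROM table1
    WHERE account = 'ACCT-001' AND due_date > '2008-01-17';

    -- 3. most recent due date so far, and the next upcoming due date
    SELECT MAX(due_date) FROM table1
    WHERE account = 'ACCT-001' AND due_date <= NOW();
    SELECT MIN(due_date) FROM table1
    WHERE account = 'ACCT-001' AND due_date > NOW();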

Re: [plug] Large MySQL table

2008-01-16 Thread Gerald Timothy Quimpo
On Wed, 2008-01-16 at 17:30 +0800, joebert jacaba wrote: > So in my case since I have a lot of functions that deal with dates i > would need at least two int columns to replace the date column. Is > this the widely accepted industry practice? What are examples of what you're trying to do exactly?

Re: [plug] Large MySQL table

2008-01-16 Thread Zak B. Elep
On Jan 16, 2008 5:40 PM, Robert Locke <[EMAIL PROTECTED]> wrote: > > So in my case since I have a lot of functions that deal with dates i > > would need at least two int columns to replace the date column. Is > > this the widely accepted industry practice? > > I can't speak authoritatively on indus

Re: [plug] Large MySQL table

2008-01-16 Thread Robert Locke
> So in my case since I have a lot of functions that deal with dates i > would need at least two int columns to replace the date column. Is > this the widely accepted industry practice? I can't speak authoritatively on industry practice but we've been able to get away with just using indexed date

Re: [plug] Large MySQL table

2008-01-16 Thread joebert jacaba
All, thanks! So in my case, since I have a lot of functions that deal with dates, I would need at least two int columns to replace the date column. Is this the widely accepted industry practice? Joebert

Re: [plug] Large MySQL table

2008-01-14 Thread Michael Tinsay
> select count(*) from table where month(due_date)=month(now());

Re: [plug] Large MySQL table

2008-01-14 Thread Robert Locke
Hi, Bear in mind that your query will include that given month across multiple years. For example, with month = 5, you would be including data from May of 2001, 2002, 2003, 2004, etc. Is that your intention? Assuming you have an index on due_date, your database will not be able to take advan
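
In concrete terms, the point is that wrapping the column in month() forces MySQL to evaluate the function for every row, while an explicit range on due_date lets the index do the work and also pins the year. A sketch of the rewrite, with table1 standing in for the real table name and January 2008 as the current month:

    -- original shape: index on due_date cannot be used, and all years are mixed together
    SELECT COUNT(*) FROM table1 WHERE MONTH(due_date) = MONTH(NOW());

    -- range shape: the due_date index is usable and only January 2008 is counted
    SELECT COUNT(*) FROM table1
    WHERE due_date >= '2008-01-01' AND due_date < '2008-02-01';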

Re: [plug] Large MySQL table

2008-01-14 Thread joebert jacaba
I was thinking I may need this query in the future. What I do now is use paging and only fetch 10 rows. How can you speed up this type of query? select count(*) from table where month(due_date)=month(now());

Re: [plug] Large MySQL table

2008-01-14 Thread joebert jacaba
All, thanks guys, my query execution time is 0.06 seconds, down from almost 2 minutes. Apparently the indexing worked. I don't have any joins or subselects on this table. Regards, Joebert

Re: [plug] Large MySQL table

2008-01-14 Thread Michael Tinsay
Post your SQL query statement(s). What index you provide would be determined by what the query needs. Also, additional considerations may be needed for joins and subselects. --- mike t.

Re: [plug] Large MySQL table

2008-01-14 Thread Ariz Jacinto
Joebert, you might want to share the performance data of the machine hosting your database, the output of the EXPLAIN command, the SQL query that you're running, the storage engine you're using, the filesystem you're using, etc. Let me guess, are you querying based on dates (e.g. WHERE date = )? O
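
For reference, that output comes from prefixing the problem query with EXPLAIN in the mysql client; a sketch against the count query shown above, with table1 standing in for the real table name:

    EXPLAIN SELECT COUNT(*) FROM table1 WHERE MONTH(due_date) = MONTH(NOW())\G
    -- "type: ALL" with a rows estimate near 130 million would point to a full table scan

    SHOW CREATE TABLE table1\G
    -- shows the storage engine and the indexes currently defined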

Re: [plug] Large MySQL table

2008-01-14 Thread Alvin Delagon
Partitioning your table might also help, or split the database across different machines with horizontal partitioning. You can also further improve your query speed by issuing SELECT-only statements against a MySQL replica. On Jan 14, 2008 11:35 PM, Orlando Andico <[EMAIL PROTECTED]> wrote: > It depend
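
A rough sketch of range partitioning by date, assuming MySQL 5.1 or later (where partitioning first appeared); the column names and types are guesses based on the five columns described in the original post, and the yearly boundaries are made up for illustration:

    CREATE TABLE table1_part (
        id         INT UNSIGNED NOT NULL,
        account    VARCHAR(32)  NOT NULL,
        or_number  VARCHAR(32),
        due_date   DATE         NOT NULL,
        status_id  TINYINT,
        PRIMARY KEY (id, due_date)   -- the partitioning column must be part of every unique key
    )
    PARTITION BY RANGE (TO_DAYS(due_date)) (
        PARTITION p2006 VALUES LESS THAN (TO_DAYS('2007-01-01')),
        PARTITION p2007 VALUES LESS THAN (TO_DAYS('2008-01-01')),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );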

Re: [plug] Large MySQL table

2008-01-14 Thread Orlando Andico
It depends also on how much speed you want. Generally primary-key or indexed-key queries will be quite fast, roughly O(log N). However, if you are doing range queries, that will be slower. 130M rows is a large-ish table by most measures; depending on your row size and result set size, you might be

Re: [plug] Large MySQL table

2008-01-14 Thread Gerald Timothy Quimpo
On Mon, 2008-01-14 at 17:46 +0800, joebert jacaba wrote: > Please give some pointers to speed up query to a 130 million ++ rows > mysql table. This table has 5 columns. A primary key, an account code, > an OR number, a date and a status id. I am currently indexing the > table on the account code.

[plug] Large MySQL table

2008-01-14 Thread joebert jacaba
Please give some pointers to speed up queries on a 130-million++ row MySQL table. This table has 5 columns: a primary key, an account code, an OR number, a date, and a status ID. I am currently indexing the table on the account code. Thanks! Joebert
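
Based on where the replies above ended up (indexing the date, or a composite index covering both filter columns), a minimal sketch; table1, account_code, and due_date are stand-ins for whatever the real table and column names are:

    -- composite index: narrows by account code first, then by date range
    CREATE INDEX idx_account_due ON table1 (account_code, due_date);

    -- a typical lookup that can use it
    SELECT due_date FROM table1
    WHERE account_code = 'ACCT-001' AND due_date >= '2008-01-01';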