Re: trouble with group by and similar tables

2008-06-05 Thread ddevaudreuil
Eben <[EMAIL PROTECTED]> wrote on 06/05/2008 02:44:42 PM:

> I have the following tables:
> 
> table1
> ---
> id
> some_field
> 
> table2
> ---
> id
> id1
> score
> 
> table3
> ---
> id
> id1
> score
> 
> I then have the following query:
> SELECT table1.id,SUM(table2.score) as table2_score
> FROM table1, table2
> WHERE table1.some_field = 'value'
> AND table2.id1 = table1.id
> GROUP BY table1.id
> 
> This works fine and returns each record in table1, grouped by id, with 
> the sum of scores from table2.  However, when I do this query:
> 
> SELECT table1.id,SUM(table2.score) as table2_score, SUM(table3.score) as
> table3_score
> FROM table1, table2, table3
> WHERE table1.some_field = 'value'
> AND table2.id1 = table1.id
> AND table3.id1 = table1.id
> GROUP BY table1.id
> 
> The summed score values go crazy, reflecting numbers that aren't logical.
> Is the issue that table2 and table3 are identical table structures, or that
> I simply don't understand how the GROUP BY is really working here?
> 
> Any advice is appreciated,
> Eben
> 

Try taking away the SUM and the GROUP BY and just SELECT * from your 
query.  You'll see the problem is with the join, not the GROUP BY: each 
matching row of table2 gets paired with each matching row of table3, so 
every score is counted multiple times.

There are probably several solutions.  Here's one way (untested).  It 
requires derived-table (subquery in the FROM clause) support, which MySQL 
added in version 4.1.

SELECT table1.id, t2sum.table2_score, t3sum.table3_score
FROM table1
INNER JOIN (SELECT id1, SUM(score) AS table2_score
            FROM table2 GROUP BY id1) AS t2sum
        ON table1.id = t2sum.id1
INNER JOIN (SELECT id1, SUM(score) AS table3_score
            FROM table3 GROUP BY id1) AS t3sum
        ON table1.id = t3sum.id1


Donna
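
Note that the rewrite above drops the original filter on some_field; a
version with it restored (equally untested):

SELECT table1.id, t2sum.table2_score, t3sum.table3_score
FROM table1
INNER JOIN (SELECT id1, SUM(score) AS table2_score
            FROM table2 GROUP BY id1) AS t2sum
        ON table1.id = t2sum.id1
INNER JOIN (SELECT id1, SUM(score) AS table3_score
            FROM table3 GROUP BY id1) AS t3sum
        ON table1.id = t3sum.id1
WHERE table1.some_field = 'value'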

MYSQL HIGH

2008-06-05 Thread Daniel Dias Gonçalves

Hi,

I have the following problem:

mysql  Ver 14.12 Distrib 5.0.51a, for portbld-freebsd6.3 (i386) using  5.0

080605 15:33:26 - mysqld got signal 10;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, something is
definitely wrong and this may fail.

key_buffer_size=268435456
read_buffer_size=1044480
max_used_connections=92
max_connections=1000
threads_connected=42
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections
= 2306136 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.


How do I solve this?

Thanks.
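
For reference: signal 10 is SIGBUS on FreeBSD, and the crash-report formula
above works out to roughly

    256 MB (key_buffer_size)
  + (~1 MB read_buffer_size + ~1 MB sort_buffer_size) * 1000 max_connections
  = ~2.2 GB (the 2306136 K above)

which is right at the edge of what a 32-bit (i386) mysqld can address, so a
burst of connections can exhaust the address space. Since
max_used_connections is only 92, lowering max_connections costs little. A
hedged my.cnf sketch that shrinks the equation (values illustrative, not
taken from the post):

[mysqld]
key_buffer_size  = 128M
max_connections  = 200
read_buffer_size = 512K
sort_buffer_size = 512K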





Re: trouble with group by and similar tables

2008-06-05 Thread Arthur Fuller
What you need, basically, is to combine two queries, one of which addresses
t1 and t2 while the other addresses t1 and t3. Create two views similar to
your first query and then join the two views on t1.id and you'll get what
you're after (see the sketch below).

hth,
Arthur


On Thu, Jun 5, 2008 at 3:44 PM, Eben <[EMAIL PROTECTED]> wrote:

> I have the following tables:
>
> table1
> ---
> id
> some_field
>
> table2
> ---
> id
> id1
> score
>
> table3
> ---
> id
> id1
> score
>
> I then have the following query:
> SELECT table1.id,SUM(table2.score) as table2_score
> FROM table1, table2
> WHERE table1.some_field = 'value'
> AND table2.id1 = table1.id
> GROUP BY table1.id
>
> This works fine and returns each record in table1, grouped by id, with the
> sum of scores from table2.  However, when I do this query:
>
> SELECT table1.id,SUM(table2.score) as table2_score, SUM(table3.score) as
> table3_score
> FROM table1, table2, table3
> WHERE table1.some_field = 'value'
> AND table2.id1 = table1.id
> AND table3.id1 = table1.id
> GROUP BY table1.id
>
> The summed score values go crazy, reflecting numbers that aren't logical.  Is the
> issue that table2 and table3 are identical table structures, or that I
> simply don't understand how the GROUP BY is really working here?
>
> Any advice is appreciated,
> Eben
>
>
>
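
A sketch of Arthur's suggestion (untested; CREATE VIEW needs MySQL 5.0 or
later):

CREATE VIEW t2sum AS
    SELECT id1, SUM(score) AS table2_score FROM table2 GROUP BY id1;

CREATE VIEW t3sum AS
    SELECT id1, SUM(score) AS table3_score FROM table3 GROUP BY id1;

SELECT t1.id, t2sum.table2_score, t3sum.table3_score
FROM table1 t1
INNER JOIN t2sum ON t2sum.id1 = t1.id
INNER JOIN t3sum ON t3sum.id1 = t1.id
WHERE t1.some_field = 'value';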


trouble with group by and similar tables

2008-06-05 Thread Eben

I have the following tables:

table1
---
id
some_field

table2
---
id
id1
score

table3
---
id
id1
score

I then have the following query:
SELECT table1.id,SUM(table2.score) as table2_score
FROM table1, table2
WHERE table1.some_field = 'value'
AND table2.id1 = table1.id
GROUP BY table1.id

This works fine and returns each record in table1, grouped by id, with 
the sum of scores from table2.  However, when I do this query:


SELECT table1.id,SUM(table2.score) as table2_score, SUM(table3.score) as 
table3_score

FROM table1, table2, table3
WHERE table1.some_field = 'value'
AND table2.id1 = table1.id
AND table3.id1 = table1.id
GROUP BY table1.id

The summed score values go crazy, reflecting numbers that aren't logical.  Is 
the issue that table2 and table3 are identical table structures, or that 
I simply don't understand how the GROUP BY is really working here?


Any advice is appreciated,
Eben




Re: Large import into MYISAM - performance problems

2008-06-05 Thread Olaf Stein
Even more so when you compare it to a script executing the inserts instead
of the mysql client...

Olaf



On 6/5/08 12:06 PM, "mos" <[EMAIL PROTECTED]> wrote:

> At 10:30 AM 6/5/2008, you wrote:
>> Simon,
>> 
>> In my experience load data infile is a lot faster than an sql file through
>> the client.
>> I would parse the sql file and create a csv file with just the columns of
>> your table and then use load data infile using the created csv file
>> 
>> Olaf
> 
> Olaf,
> Using LOAD DATA into an empty table is at least 10x faster than
> using SQL inserts. But I thought his blob field was binary (images) in
> which case Load Data won't work. If his blob field is plain text, then of
> course Load Data will work nicely. :)
> 
> Mike 
> 




Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins

Olaf, Mike

Thanks for the input; the blob data is just text, so I'll have a go at 
using the LOAD DATA command.


Regards

Simon

mos wrote:

At 10:30 AM 6/5/2008, you wrote:

Simon,

In my experience load data infile is a lot faster than an sql file
through

the client.
I would parse the sql file and create a csv file with just the columns of
your table and then use load data infile using the created csv file

Olaf


Olaf,
   Using LOAD DATA into an empty table is at least 10x faster than 
using SQL inserts. But I thought his blob field was binary (images) in 
which case Load Data won't work. If his blob field is plain text, then 
of course Load Data will work nicely. :)


Mike




--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


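
A minimal sketch of the LOAD DATA INFILE approach being discussed
(untested; the table name, column names and file path are hypothetical,
and the format clauses must match however the dump gets parsed to CSV):

ALTER TABLE enwiki_page DISABLE KEYS;

LOAD DATA INFILE '/data/enwiki.csv'
INTO TABLE enwiki_page
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, page_text);

ALTER TABLE enwiki_page ENABLE KEYS;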



Re: Large import into MYISAM - performance problems

2008-06-05 Thread mos

At 10:30 AM 6/5/2008, you wrote:

Simon,

In my experience load data infile is a lot faster than an sql file through
the client.
I would parse the sql file and create a csv file with just the columns of
your table and then use load data infile using the created csv file

Olaf


Olaf,
   Using LOAD DATA into an empty table is at least 10x faster than 
using SQL inserts. But I thought his blob field was binary (images) in 
which case Load Data won't work. If his blob field is plain text, then of 
course Load Data will work nicely. :)


Mike 






Re: Large import into MYISAM - performance problems

2008-06-05 Thread Olaf Stein
Simon,

In my experience LOAD DATA INFILE is a lot faster than an SQL file through
the client.
I would parse the SQL file, create a CSV file with just the columns of
your table, and then use LOAD DATA INFILE with the created CSV file.

Olaf


On 6/5/08 4:52 AM, "Simon Collins" <[EMAIL PROTECTED]> wrote:

> I can do that - if the load data infile command definitely improves
> performance and splitting the file does the same, I have no problem with
> doing this. It just seems strange that the problem is with the way the
> import file is configured. I thought the problem would be somehow with
> the table getting bigger.
> 
> Regards
> 
> Simon
> 
> 
> Ananda Kumar wrote:
>> Simon,
>> Why don't you split the file and use the LOAD DATA INFILE command, which would
>> improve the performance while loading into an empty table with keys
>> disabled.
>> 
>> regards
>> anandkl
>> 
>> 
>> On 6/5/08, Simon Collins <[EMAIL PROTECTED]> wrote:
>>   
>>> I'm loading the data through the command below mysql -f -u root -p enwiki <
>>> enwiki.sql
>>> 
>>> The version is MySQL 5.0.51a-community
>>> 
>>> I've disabled the primary key, so there are no indexes. The CPU has 2 cores
>>> and 2 Gigs memory.
>>> 
>>> The import fell over overnight with a "table full" error as it hit 1T (I
>>> think this may be a file system problem). As it's no longer importing,
>>> show status isn't going to provide any interesting info; however, I
>>> did notice that mysql was not consuming much CPU time (~10%).
>>> 
>>> I wouldn't like to split the data up into separate tables as it would
>>> change the schema and I'm not in charge of the schema design - just the DBA
>>> at the backend.
>>> 
>>> Cheers
>>> 
>>> Simon
>>> 
>>> mos wrote:
>>> 
>>> 
 Simon,
 As someone else mentioned, how are you loading the data? Can you post the
 SQL?
 
 You have an Id field, so is that not the primary key? If so, the slowdown
 could be maintaining the index. If so, add up to 30% of your available ram
 to your key_buffer_size in your my.cnf file and restart the server. How much
 RAM do you have on your machine and how many CPU's do you have? What
 version
 of MySQL are you using? Also can you post your "Show Status" output after
 it
 has started to slow down? How much CPU is being used after the import slows
 down?
 
 Now from what you've said, it looks like you are using this table as a
 lookup table, so if it just has an id and a blob field, you probably return
 the blob field for a given id, correct? If it were up to me, I would break
 the data into more manageable tables. If you have 100 million rows, then
 I'd
 break it into 10x10 million row tables. Table_1 would have id's from 1 to
 9,999,999, and table_2 with id's from 10 million to 10,999,999 etc. Your
 lookup would call a stored procedure which determines which table to use
 based on the Id it was given. If you really had to search all the tables
 you
 can then use a Merge table based on those 10 tables. I use Merge tables
 quite a bit and the performance is quite good.
 
 Mike
 
 At 11:42 AM 6/4/2008, you wrote:
 
   
> Dear all,
> 
> I'm presently trying to import the full wikipedia dump for one of our
> research users. Unsurprisingly it's a massive import file (2.7T)
> 
> Most of the data is importing into a single MyISAM table which has an id
> field and a blob field. There are no constraints / indexes on this table.
> We're using an XFS filesystem.
> 
> The import starts off quickly but gets increasingly slower as it
> progresses, starting off at about 60 G per hour but now the MyISAM table
> is
> ~1TB it's slowed to a load of about 5G per hour. At this rate the import
> will not finish for a considerable time, if at all.
> 
> Can anyone suggest to me why this is happening and if there's a way to
> improve performance. If there's a more suitable list to discuss this,
> please
> let me know.
> 
> Regards
> 
> Simon
> 
> 
 
   
>>> --
>>> Dr Simon Collins
>>> Data Grid Consultant
>>> National Grid Service
>>> University of Manchester
>>> Research Computing Services
>>> Kilburn Building
>>> Oxford Road
>>> Manchester
>>> M13 9PL
>>> 
>>> Tel 0161 275 0604
>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>>   
> 


Re: mysqld-nt Windows service: delay btwn svc running and accepting conx

2008-06-05 Thread Les Schaffer

Sebastian Mendel wrote:


did you take a look at the MySQL log?
there you can see what MySQL is doing, with times


Perhaps I wasn't clear earlier. We checked the logs, and we see that 
InnoDB is started and accepting connections sometimes 1-6 seconds AFTER 
we see the Windows ServiceManager report that the MySQL service is in 
state SERVICE_RUNNING.

We had thought that SERVICE_RUNNING meant we could now connect, and 99% 
of the time we have no problem making a connection.

It looks like we just have to poll for a connection while the service goes 
from RUNNING to accepting connections.


Les



You could also prioritize background processes instead of desktop 
processes to speed up the MySQL start.








Re: JAVA UDF HOW

2008-06-05 Thread Antony T Curtis

Hi!

MySQL is an open-source product. That allows anyone to contribute  
patches and new features from the community.
In time, when the feature is mature, it may be included in some future  
version.


Documentation on this feature is very limited because the (2) people  
behind the project have only pursued it during their personal free  
time. The code is of alpha quality and the target audience is mostly  
people with expert knowledge who can largely figure out how to  
use it without the aid of verbose documentation.


Of course, if anyone is willing to contribute documentation, it would  
be greatly appreciated. The forge site is a wiki: Anyone may add stuff  
to it.


Regards,
Antony

On 5 Jun 2008, at 05:05, Abhayjeet Singh Grewal wrote:


Thanks Everybody,

I went through the project and was a little disappointed with MySQL  
not supporting Java natively as yet. Let's hope this support is  
included in the future.

Also, there is not enough documentation for the project mentioned:
http://forge.mysql.com/wiki/ProjectPage_External_Language_Stored_Procedures

Thanks again !!

Abhay Grewal

On Thu, Jun 5, 2008 at 11:04 AM, Antony T Curtis <[EMAIL PROTECTED] 
> wrote:

Hi,

Check out this link for Java stored procedures with MySQL

http://forge.mysql.com/wiki/ProjectPage_External_Language_Stored_Procedures

It is probably what you are looking for. I have some more info on my  
blog at http://antbits.blogspot.com/


For more information, you can email Eric Herman <[EMAIL PROTECTED]> or  
you can email me.
Currently, you will not see any Java Stored Procedure functionality  
in any official MySQL release but I plan to maintain a fork with  
this feature.


Soon, I plan to be in a position to build 'preview' binaries for  
Linux, FreeBSD, Windows and MacOS.



Regards,
Antony Curtis,

(Not speaking on behalf of my current or any prior employer)



On 4 Jun 2008, at 01:39, Abhayjeet Singh Grewal wrote:

Thanks Martin,
I looked at the link, but I guess I was not able to put my question in the
right way.

Basically I have a Java package and I want to call that package from a
MySQL function or procedure.

Any help would be much appreciated.

Thanks,
Abhay

On Tue, Jun 3, 2008 at 9:32 PM, Martin <[EMAIL PROTECTED]> wrote:

Abhay-

It seems you are referring to the UDF Oracle specific User-defined  
forms

Have you looked at

http://download.oracle.com/docs/cd/E10391_01/doc.910/e10363/oimadm.htm#CIAEFFAD

?
Martin
- Original Message - From: "Abhayjeet Singh Grewal" <
[EMAIL PROTECTED]>
To: 
Sent: Tuesday, June 03, 2008 9:50 AM
Subject: JAVA UDF HOW



Hi,

I am working on an Oracle to MySQL migration and am really stuck on Java
UDFs. I did not find any documentation, and the ones which I did find do
not work.

Please help me at the earliest. I am using MySQL 5.0.51B community  
server

with Java JDK1.6 on windows 2003.

Regards,

Abhay Grewal









RE: mysqld-nt Windows service: delay btwn svc running and accepting conx

2008-06-05 Thread Jerry Schwartz
>-Original Message-
>From: Les Schaffer [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, June 04, 2008 9:23 PM
>To: mysql@lists.mysql.com
>Subject: mysqld-nt Windows service: delay btwn svc running and accepting
>conx
>
>We are having a small technical glitch about which we would like to have
>some insight:
>
>our application needs to start mysqld-nt as a Windows service after
>which we fairly quickly try to make connections to the Server. we are
>using the python wrappers, MySQLdb, and we successfully bring up the
>service 99.9% of the time. this means we get
>win32service.SERVICE_RUNNING before trying to make connections.  and
>most other times, we have no problem making connections immediately
>after SERVICE_RUNNING is achieved.
[JS] We have a similar problem (not related to MySQL). When WinXP starts, it
appears that the networking isn't fully operational for a variable period of
time. What we did is set up a link that pings the thing we are trying to
connect to. Only after the ping succeeds do we actually try to connect. You
could do that.

I have no idea why this would happen when XP has been running for a while,
just because you bounced the MySQL server, but you could use the same basic
procedure.
>
>however, if we try to start the app right after Windows XP boots, or if
>XP has been sitting idle for a long while after the service had been
>brought up and down,  we get OperationalError 2003, can't connect to
>server. when i check the server logs, i see that Windows ServiceManager
>tells us that we are SERVICE_RUNNING several seconds before the log says
>" ... mysqld-nt.exe: ready for connections ".
>
>we could put a 4-6 seconds on the connect_timeout, but we would like to
>at least understand what we are seeing. are there  other states the
>ServiceManager would report for mysqld-nt, or are we limited to
>STOPPED/STARTING/RUNNING/STOPPING?
>
>more to the point, what determines the time delay between
>SERVICE_RUNNING and " ... mysqld-nt.exe: ready for connections " and is
>there a tried and true way to detect "ready for connections" without
>simply making a connection attempt???
>
>thanks
>
>Les Schaffer
>








Re: JAVA UDF HOW

2008-06-05 Thread Abhayjeet Singh Grewal
Thanks Everybody,

I went through the project and was a little disappointed with MySQL not
supporting Java natively as yet. Let's hope this support is included in
the future.
Also, there is not enough documentation for the project mentioned:
http://forge.mysql.com/wiki/ProjectPage_External_Language_Stored_Procedures

Thanks again !!

Abhay Grewal

On Thu, Jun 5, 2008 at 11:04 AM, Antony T Curtis <[EMAIL PROTECTED]>
wrote:

> Hi,
>
> Check out this link for Java stored procedures with MySQL
>
> http://forge.mysql.com/wiki/ProjectPage_External_Language_Stored_Procedures
>
> It is probably what you are looking for. I have some more info on my blog
> at http://antbits.blogspot.com/
>
> For more information, you can email Eric Herman <[EMAIL PROTECTED]> or you
> can email me.
> Currently, you will not see any Java Stored Procedure functionality in any
> official MySQL release but I plan to maintain a fork with this feature.
>
> Soon, I plan to be in a position to build 'preview' binaries for Linux,
> FreeBSD, Windows and MacOS.
>
>
> Regards,
> Antony Curtis,
>
> (Not speaking on behalf of my current or any prior employer)
>
>
>
> On 4 Jun 2008, at 01:39, Abhayjeet Singh Grewal wrote:
>
> Thanks Martin,
>> I looked at the link, but I guess I was not able to put my question in the
>> right way.
>>
>> Basically I have a Java Package and I want to call that package from MYSQL
>> function or procedure.
>>
>> Any help would be much appreciated.
>>
>> Thanks,
>> Abhay
>>
>> On Tue, Jun 3, 2008 at 9:32 PM, Martin <[EMAIL PROTECTED]> wrote:
>>
>> Abhay-
>>>
>>> It seems you are referring to the UDF Oracle specific User-defined forms
>>> Have you looked at
>>>
>>>
>>> http://download.oracle.com/docs/cd/E10391_01/doc.910/e10363/oimadm.htm#CIAEFFAD
>>>
>>> ?
>>> Martin
>>> - Original Message - From: "Abhayjeet Singh Grewal" <
>>> [EMAIL PROTECTED]>
>>> To: 
>>> Sent: Tuesday, June 03, 2008 9:50 AM
>>> Subject: JAVA UDF HOW
>>>
>>>
>>>
>>> Hi,
>>>

 I am working on an Oracle to MySQL migration and am really stuck on Java
 UDFs. I did not find any documentation, and the ones which I did find do
 not work.

 Please help me at the earliest. I am using MySQL 5.0.51B community server
 with Java JDK1.6 on windows 2003.

 Regards,

 Abhay Grewal



>>>
>


Re: Large import into MYISAM - performance problems

2008-06-05 Thread Saravanan
Hi Simon,

What kind of table are you using? If it is MyISAM you can increase the max
size of the table by changing the following variable:

myisam_data_pointer_size = 7

The default is 6.

Please let me know if that helps you.

Thanks,
Saravanan



--- On Thu, 6/5/08, Simon Collins <[EMAIL PROTECTED]> wrote:
From: Simon Collins <[EMAIL PROTECTED]>
Subject: Re: Large import into MYISAM - performance problems
To: mysql@lists.mysql.com
Date: Thursday, June 5, 2008, 3:05 PM

I'm loading the data through the command below mysql -f -u root -p 
enwiki < enwiki.sql

The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 
cores and 2 Gigs memory.

The import fell over overnight with a "table full" error as it hit 1T (I
think this may be a file system problem). As it's no longer importing,
show status isn't going to provide any interesting info; however,
I did notice that mysql was not consuming much CPU time (~10%).

I wouldn't like to split the data up into separate tables as it would 
change the schema and I'm not in charge of the schema design - just the 
DBA at the backend.

Cheers

Simon

mos wrote:
> Simon,
> As someone else mentioned, how are you loading the data? Can you post 
> the SQL?
>
> You have an Id field, so is that not the primary key? If so, the 
> slowdown could be maintaining the index. If so, add up to 30% of your 
> available ram to your key_buffer_size in your my.cnf file and restart 
> the server. How much RAM do you have on your machine and how many 
> CPU's do you have? What version of MySQL are you using? Also can you 
> post your "Show Status" output after it has started to slow down? How 
> much CPU is being used after the import slows down?
>
> Now from what you've said, it looks like you are using this table as a
> lookup table, so if it just has an id and a blob field, you probably 
> return the blob field for a given id, correct? If it were up to me, I 
> would break the data into more manageable tables. If you have 100 
> million rows, then I'd break it into 10x10 million row tables. Table_1
> would have id's from 1 to 9,999,999, and table_2 with id's from 10
> million to 10,999,999 etc. Your lookup would call a stored procedure 
> which determines which table to use based on the Id it was given. If 
> you really had to search all the tables you can then use a Merge table 
> based on those 10 tables. I use Merge tables quite a bit and the 
> performance is quite good.
>
> Mike
>
> At 11:42 AM 6/4/2008, you wrote:
>> Dear all,
>>
>> I'm presently trying to import the full wikipedia dump for one of our 
>> research users. Unsurprisingly it's a massive import file (2.7T).
>>
>> Most of the data is importing into a single MyISAM table which has an 
>> id field and a blob field. There are no constraints / indexes on this 
>> table. We're using an XFS filesystem.
>>
>> The import starts off quickly but gets increasingly slower as it 
>> progresses, starting off at about 60G per hour, but now the MyISAM 
>> table is ~1TB it's slowed to a load of about 5G per hour. At this 
>> rate the import will not finish for a considerable time, if at all.
>>
>> Can anyone suggest to me why this is happening and if there's a way 
>> to improve performance? If there's a more suitable list to discuss
>> this, please let me know.
>>
>> Regards
>>
>> Simon
>
>


-- 
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


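
For the "table full" error itself, the per-table route is an ALTER TABLE
with a larger MAX_ROWS (a hedged sketch; the table name and values are
illustrative, and note the ALTER rewrites the whole table):

ALTER TABLE enwiki_page MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 16384;

-- Or server-wide in my.cnf, as Saravanan suggests, which applies to
-- tables created afterwards:
-- myisam_data_pointer_size = 7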


  

analyze table

2008-06-05 Thread Ananda Kumar
Hi All,
We have a proc which creates around 7 tables, inserts data, creates indexes
and then runs the ANALYZE TABLE command.

The ANALYZE TABLE command on the 7 tables takes around 18 hrs. They are
MyISAM tables.
The db server has 8 CPUs and 16GB RAM, with both InnoDB and MyISAM.

Is there a way we can improve the ANALYZE TABLE command?

regards
anandkl
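
Since ANALYZE TABLE on MyISAM reads through the indexes, giving the key
cache more room for the duration of the job may help (a hedged sketch;
the size and table names are illustrative, not from the post):

SET GLOBAL key_buffer_size = 4294967296;   -- e.g. 4G of the 16G RAM
ANALYZE TABLE t1, t2, t3, t4, t5, t6, t7;  -- one statement can cover
                                           -- several tables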


Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins
I can do that - if the LOAD DATA INFILE command definitely improves 
performance and splitting the file does the same, I have no problem with 
doing this. It just seems strange that the problem is with the way the 
import file is configured. I thought the problem would be somehow with 
the table getting bigger.


Regards

Simon


Ananda Kumar wrote:

Simon,
Why don't you split the file and use the LOAD DATA INFILE command, which would
improve the performance while loading into an empty table with keys
disabled.

regards
anandkl


On 6/5/08, Simon Collins <[EMAIL PROTECTED]> wrote:
  

I'm loading the data through the command below mysql -f -u root -p enwiki <
enwiki.sql

The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 cores
and 2 Gigs memory.

The import fell over overnight with a "table full" error as it hit 1T (I
think this may be a file system problem). As it's no longer importing,
show status isn't going to provide any interesting info; however, I
did notice that mysql was not consuming much CPU time (~10%).

I wouldn't like to split the data up into separate tables as it would
change the schema and I'm not in charge of the schema design - just the DBA
at the backend.

Cheers

Simon

mos wrote:



Simon,
As someone else mentioned, how are you loading the data? Can you post the
SQL?

You have an Id field, so is that not the primary key? If so, the slowdown
could be maintaining the index. If so, add up to 30% of your available ram
to your key_buffer_size in your my.cnf file and restart the server. How much
RAM do you have on your machine and how many CPU's do you have? What version
of MySQL are you using? Also can you post your "Show Status" output after it
has started to slow down? How much CPU is being used after the import slows
down?

Now from what you've said, it looks like you are using this table as a
lookup table, so if it just has an id and a blob field, you probably return
the blob field for a given id, correct? If it were up to me, I would break
the data into more manageable tables. If you have 100 million rows, then I'd
break it into 10x10 million row tables. Table_1 would have id's from 1 to
9,999,999, and table_2 with id's from 10 million to 10,999,999 etc. Your
lookup would call a stored procedure which determines which table to use
based on the Id it was given. If you really had to search all the tables you
can then use a Merge table based on those 10 tables. I use Merge tables
quite a bit and the performance is quite good.

Mike

At 11:42 AM 6/4/2008, you wrote:

  

Dear all,

I'm presently trying to import the full wikipedia dump for one of our
research users. Unsurprisingly it's a massive import file (2.7T)

Most of the data is importing into a single MyISAM table which has an id
field and a blob field. There are no constraints / indexes on this table.
We're using an XFS filesystem.

The import starts off quickly but gets increasingly slower as it
progresses, starting off at about 60 G per hour but now the MyISAM table is
~1TB it's slowed to a load of about 5G per hour. At this rate the import
will not finish for a considerable time, if at all.

Can anyone suggest to me why this is happening and if there's a way to
improve performance. If there's a more suitable list to discuss this, please
let me know.

Regards

Simon




  

--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604







  



--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604





Re: Large import into MYISAM - performance problems

2008-06-05 Thread Adrian Bruce
You could load the data into several smaller tables and combine them 
into a MERGE table, which would have no real effect on the schema (a 
sketch follows below).


Ade

Simon Collins wrote:
I'm loading the data through the command below mysql -f -u root -p 
enwiki < enwiki.sql


The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 
cores and 2 Gigs memory.


The import fell over overnight with a "table full" error as it hit 1T 
(I think this may be a file system problem). As it's no longer importing, 
show status isn't going to provide any interesting info; 
however, I did notice that mysql was not consuming much CPU time (~10%).


I wouldn't like to split the data up into separate tables as it would 
change the schema and I'm not in charge of the schema design - just 
the DBA at the backend.


Cheers

Simon

mos wrote:

Simon,
As someone else mentioned, how are you loading the data? Can you post 
the SQL?


You have an Id field, so is that not the primary key? If so, the 
slowdown could be maintaining the index. If so, add up to 30% of your 
available ram to your key_buffer_size in your my.cnf file and restart 
the server. How much RAM do you have on your machine and how many 
CPU's do you have? What version of MySQL are you using? Also can you 
post your "Show Status" output after it has started to slow down? How 
much CPU is being used after the import slows down?


Now from what you've said, it looks like you are using this table as 
a lookup table, so if it just has an id and a blob field, you 
probably return the blob field for a given id, correct? If it were up 
to me, I would break the data into more manageable tables. If you 
have 100 million rows, then I'd break it into 10x10 million row 
tables. Table_1 would have id's from 1 to 9,999,999, and table_2 with 
id's from 10 million to 10,999,999 etc. Your lookup would call a 
stored procedure which determines which table to use based on the Id 
it was given. If you really had to search all the tables you can then 
use a Merge table based on those 10 tables. I use Merge tables quite 
a bit and the performance is quite good.


Mike

At 11:42 AM 6/4/2008, you wrote:

Dear all,

I'm presently trying to import the full wikipedia dump for one of 
our research users. Unsurprisingly it's a massive import file (2.7T)


Most of the data is importing into a single MyISAM table which has 
an id field and a blob field. There are no constraints / indexes on 
this table. We're using an XFS filesystem.


The import starts off quickly but gets increasingly slower as it 
progresses, starting off at about 60 G per hour but now the MyISAM 
table is ~1TB it's slowed to a load of about 5G per hour. At this 
rate the import will not finish for a considerable time, if at all.


Can anyone suggest to me why this is happening and if there's a way 
to improve performance. If there's a more suitable list to discuss 
this, please let me know.


Regards

Simon









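
A sketch of the merge-table idea (untested; the table names and column
types are hypothetical):

CREATE TABLE enwiki_1 (id INT NOT NULL, page_text LONGTEXT) ENGINE=MyISAM;
CREATE TABLE enwiki_2 LIKE enwiki_1;
-- ... load each chunk of the dump into its own table, then:
CREATE TABLE enwiki_all (id INT NOT NULL, page_text LONGTEXT)
    ENGINE=MERGE UNION=(enwiki_1, enwiki_2) INSERT_METHOD=LAST;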



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Ananda Kumar
Simon,
Why don't you split the file and use the LOAD DATA INFILE command, which would
improve the performance while loading into an empty table with keys
disabled.

regards
anandkl


On 6/5/08, Simon Collins <[EMAIL PROTECTED]> wrote:
>
> I'm loading the data through the command below mysql -f -u root -p enwiki <
> enwiki.sql
>
> The version is MySQL 5.0.51a-community
>
> I've disabled the primary key, so there are no indexes. The CPU has 2 cores
> and 2 Gigs memory.
>
> The import fell over overnight with a "table full" error as it hit 1T (I
> think this may be a file system problem). As it's no longer importing,
> show status isn't going to provide any interesting info; however, I
> did notice that mysql was not consuming much CPU time (~10%).
>
> I wouldn't like to split the data up into separate tables as it would
> change the schema and I'm not in charge of the schema design - just the DBA
> at the backend.
>
> Cheers
>
> Simon
>
> mos wrote:
>
>> Simon,
>> As someone else mentioned, how are you loading the data? Can you post the
>> SQL?
>>
>> You have an Id field, so is that not the primary key? If so, the slowdown
>> could be maintaining the index. If so, add up to 30% of your available ram
>> to your key_buffer_size in your my.cnf file and restart the server. How much
>> RAM do you have on your machine and how many CPU's do you have? What version
>> of MySQL are you using? Also can you post your "Show Status" output after it
>> has started to slow down? How much CPU is being used after the import slows
>> down?
>>
>> Now from what you've said, it looks like you are using this table as a
>> lookup table, so if it just has an id and a blob field, you probably return
>> the blob field for a given id, correct? If it were up to me, I would break
>> the data into more manageable tables. If you have 100 million rows, then I'd
>> break it into 10x10 million row tables. Table_1 would have id's from 1 to
>> 9,999,999, and table_2 with id's from 10 million to 10,999,999 etc. Your
>> lookup would call a stored procedure which determines which table to use
>> based on the Id it was given. If you really had to search all the tables you
>> can then use a Merge table based on those 10 tables. I use Merge tables
>> quite a bit and the performance is quite good.
>>
>> Mike
>>
>> At 11:42 AM 6/4/2008, you wrote:
>>
>>> Dear all,
>>>
>>> I'm presently trying to import the full wikipedia dump for one of our
>>> research users. Unsurprisingly it's a massive import file (2.7T)
>>>
>>> Most of the data is importing into a single MyISAM table which has an id
>>> field and a blob field. There are no constraints / indexes on this table.
>>> We're using an XFS filesystem.
>>>
>>> The import starts off quickly but gets increasingly slower as it
>>> progresses, starting off at about 60 G per hour but now the MyISAM table is
>>> ~1TB it's slowed to a load of about 5G per hour. At this rate the import
>>> will not finish for a considerable time, if at all.
>>>
>>> Can anyone suggest to me why this is happening and if there's a way to
>>> improve performance. If there's a more suitable list to discuss this, please
>>> let me know.
>>>
>>> Regards
>>>
>>> Simon
>>>
>>
>>
>>
>
> --
> Dr Simon Collins
> Data Grid Consultant
> National Grid Service
> University of Manchester
> Research Computing Services
> Kilburn Building
> Oxford Road
> Manchester
> M13 9PL
>
> Tel 0161 275 0604
>
>
>
>


Re: sort_buffer_size

2008-06-05 Thread Ananda Kumar
Hi Martijn,
Thanks for the info; I had read this doc, and just wanted to confirm.
So, based on the doc, sort_buffer_size is also used by MyISAM tables for
GROUP BY or ORDER BY in their SELECT statements. Does it also help in
creating indexes for MyISAM tables?

regards
anandkl


On 6/5/08, Martijn Tonies <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> > Does sort_buffer_size work only for InnoDB, or for both MyISAM and InnoDB?
> >
> > Or should I use myisam_sort_buffer_size for MyISAM?
>
> According to:
> http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
>
> myisam_sort_buffer_size:
> The size of the buffer that is allocated when sorting MyISAM indexes during
> a REPAIR TABLE or when creating indexes with CREATE INDEX or ALTER TABLE.
>
> sort_buffer_size:
> Each thread that needs to do a sort allocates a buffer of this size.
> Increase this value for faster ORDER BY or GROUP BY operations
>
> Martijn Tonies
> Database Workbench - development tool for MySQL, and more!
> Upscene Productions
> http://www.upscene.com
> My thoughts:
> http://blog.upscene.com/martijn/
> Database development questions? Check the forum!
> http://www.databasedevelopmentforum.com
>
>
>
>


Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins
I'm loading the data through the command below:

mysql -f -u root -p enwiki < enwiki.sql


The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 
cores and 2 Gigs memory.


The import fell over overnight with a "table full" error as it hit 1T (I 
think this may be a file system problem). As it's no longer importing, 
show status isn't going to provide any interesting info; however, 
I did notice that mysql was not consuming much CPU time (~10%).


I wouldn't like to split the data up into separate tables as it would 
change the schema and I'm not in charge of the schema design - just the 
DBA at the backend.


Cheers

Simon

mos wrote:

Simon,
As someone else mentioned, how are you loading the data? Can you post 
the SQL?


You have an Id field, so is that not the primary key? If so, the 
slowdown could be maintaining the index. If so, add up to 30% of your 
available ram to your key_buffer_size in your my.cnf file and restart 
the server. How much RAM do you have on your machine and how many 
CPU's do you have? What version of MySQL are you using? Also can you 
post your "Show Status" output after it has started to slow down? How 
much CPU is being used after the import slows down?


Now from what you've said, it looks like you are using this table as a 
lookup table, so if it just has an id and a blob field, you probably 
return the blob field for a given id, correct? If it were up to me, I 
would break the data into more manageable tables. If you have 100 
million rows, then I'd break it into 10x10 million row tables. Table_1 
would have id's from 1 to 9,999,999, and table_2 with id's from 10 
million to 10,999,999 etc. Your lookup would call a stored procedure 
which determines which table to use based on the Id it was given. If 
you really had to search all the tables you can then use a Merge table 
based on those 10 tables. I use Merge tables quite a bit and the 
performance is quite good.


Mike

At 11:42 AM 6/4/2008, you wrote:

Dear all,

I'm presently trying to import the full wikipedia dump for one of our 
research users. Unsurprisingly it's a massive import file (2.7T)


Most of the data is importing into a single MyISAM table which has an 
id field and a blob field. There are no constraints / indexes on this 
table. We're using an XFS filesystem.


The import starts off quickly but gets increasingly slower as it 
progresses, starting off at about 60 G per hour but now the MyISAM 
table is ~1TB it's slowed to a load of about 5G per hour. At this 
rate the import will not finish for a considerable time, if at all.


Can anyone suggest to me why this is happening and if there's a way 
to improve performance. If there's a more suitable list to discuss 
this, please let me know.


Regards

Simon






--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604





Re: Incorrect information in file: './maindb/users.frm'

2008-06-05 Thread Stut
Sorry, should've sent an update last night. That was indeed the  
problem. It's fixed now.


Thanks to everyone who helped.

-Stut

--
http://stut.net/

On 5 Jun 2008, at 09:04, Ben Clewett wrote:

It could be the size of your InnoDB log files.  Or something else  
which stops InnoDB running.


I once changed server and by mistake changed the size of the  
innodb_log_file_size.  The result was that InnoDB could not start  
and instead disabled itself.  Therefore every table I created  
became MyISAM.


Check the error.log.  If this is the case it will say

Also do a SHOW ENGINES, it may show something...

Ben


Stut wrote:

On 5 Jun 2008, at 00:41, Phil wrote:

Just a very quick guess, but is the innodb engine running?

SHOW STATUS like '%inno%'
Output shown below. I've just finished importing the backup file  
again (into a different DB) and it's ignored the engine=innodb on  
each create table and has used MyISAM instead. Trying to alter a  
table yields the following...

mysql> alter table users engine=InnoDB;
Query OK, 853097 rows affected, 1 warning (1 min 35.71 sec)
Records: 853097  Duplicates: 0  Warnings: 0
mysql> show warnings;
+---------+------+-----------------------------------------------+
| Level   | Code | Message                                       |
+---------+------+-----------------------------------------------+
| Warning | 1266 | Using storage engine MyISAM for table 'users' |
+---------+------+-----------------------------------------------+
1 row in set (0.00 sec)
Show status output...
mysql> SHOW STATUS like '%inno%';
+---++
| Variable_name | Value  |
+---++
| Com_show_innodb_status| 0  |
| Innodb_buffer_pool_pages_data | 0  |
| Innodb_buffer_pool_pages_dirty| 0  |
| Innodb_buffer_pool_pages_flushed  | 0  |
| Innodb_buffer_pool_pages_free | 196608 |
| Innodb_buffer_pool_pages_latched  | 0  |
| Innodb_buffer_pool_pages_misc | 0  |
| Innodb_buffer_pool_pages_total| 196608 |
| Innodb_buffer_pool_read_ahead_rnd | 0  |
| Innodb_buffer_pool_read_ahead_seq | 0  |
| Innodb_buffer_pool_read_requests  | 0  |
| Innodb_buffer_pool_reads  | 0  |
| Innodb_buffer_pool_wait_free  | 0  |
| Innodb_buffer_pool_write_requests | 0  |
| Innodb_data_fsyncs| 0  |
| Innodb_data_pending_fsyncs| 0  |
| Innodb_data_pending_reads | 0  |
| Innodb_data_pending_writes| 0  |
| Innodb_data_read  | 0  |
| Innodb_data_reads | 1  |
| Innodb_data_writes| 0  |
| Innodb_data_written   | 0  |
| Innodb_dblwr_pages_written| 0  |
| Innodb_dblwr_writes   | 0  |
| Innodb_log_waits  | 0  |
| Innodb_log_write_requests | 0  |
| Innodb_log_writes | 0  |
| Innodb_os_log_fsyncs  | 0  |
| Innodb_os_log_pending_fsyncs  | 0  |
| Innodb_os_log_pending_writes  | 0  |
| Innodb_os_log_written | 0  |
| Innodb_page_size  | 16384  |
| Innodb_pages_created  | 0  |
| Innodb_pages_read | 0  |
| Innodb_pages_written  | 0  |
| Innodb_row_lock_current_waits | 0  |
| Innodb_row_lock_time  | 0  |
| Innodb_row_lock_time_avg  | 0  |
| Innodb_row_lock_time_max  | 0  |
| Innodb_row_lock_waits | 0  |
| Innodb_rows_deleted   | 0  |
| Innodb_rows_inserted  | 0  |
| Innodb_rows_read  | 0  |
| Innodb_rows_updated   | 0  |
+---++
44 rows in set (0.00 sec)
Anyone have any ideas?
Thanks.
-Stut






Re: Incorrect information in file: './maindb/users.frm'

2008-06-05 Thread Ben Clewett
It could be the size of your InnoDB log files.  Or something else which 
stops InnoDB running.


I once changed server and by mistake changed the size of the 
innodb_log_file_size.  The result was that InnoDB could not start and 
instead disabled itself.  Therefore every table I created became MyISAM.


Check the error.log.  If this is the case it will say

Also do a SHOW ENGINES, it may show something...

Ben


Stut wrote:

On 5 Jun 2008, at 00:41, Phil wrote:

Just a very quick guess, but is the innodb engine running?

SHOW STATUS like '%inno%'


Output shown below. I've just finished importing the backup file again 
(into a different DB) and it's ignored the engine=innodb on each create 
table and has used MyISAM instead. Trying to alter a table yields the 
following...


mysql> alter table users engine=InnoDB;
Query OK, 853097 rows affected, 1 warning (1 min 35.71 sec)
Records: 853097  Duplicates: 0  Warnings: 0

mysql> show warnings;
+---------+------+-----------------------------------------------+
| Level   | Code | Message                                       |
+---------+------+-----------------------------------------------+
| Warning | 1266 | Using storage engine MyISAM for table 'users' |
+---------+------+-----------------------------------------------+
1 row in set (0.00 sec)

Show status output...

mysql> SHOW STATUS like '%inno%';
+---++
| Variable_name | Value  |
+---++
| Com_show_innodb_status| 0  |
| Innodb_buffer_pool_pages_data | 0  |
| Innodb_buffer_pool_pages_dirty| 0  |
| Innodb_buffer_pool_pages_flushed  | 0  |
| Innodb_buffer_pool_pages_free | 196608 |
| Innodb_buffer_pool_pages_latched  | 0  |
| Innodb_buffer_pool_pages_misc | 0  |
| Innodb_buffer_pool_pages_total| 196608 |
| Innodb_buffer_pool_read_ahead_rnd | 0  |
| Innodb_buffer_pool_read_ahead_seq | 0  |
| Innodb_buffer_pool_read_requests  | 0  |
| Innodb_buffer_pool_reads  | 0  |
| Innodb_buffer_pool_wait_free  | 0  |
| Innodb_buffer_pool_write_requests | 0  |
| Innodb_data_fsyncs| 0  |
| Innodb_data_pending_fsyncs| 0  |
| Innodb_data_pending_reads | 0  |
| Innodb_data_pending_writes| 0  |
| Innodb_data_read  | 0  |
| Innodb_data_reads | 1  |
| Innodb_data_writes| 0  |
| Innodb_data_written   | 0  |
| Innodb_dblwr_pages_written| 0  |
| Innodb_dblwr_writes   | 0  |
| Innodb_log_waits  | 0  |
| Innodb_log_write_requests | 0  |
| Innodb_log_writes | 0  |
| Innodb_os_log_fsyncs  | 0  |
| Innodb_os_log_pending_fsyncs  | 0  |
| Innodb_os_log_pending_writes  | 0  |
| Innodb_os_log_written | 0  |
| Innodb_page_size  | 16384  |
| Innodb_pages_created  | 0  |
| Innodb_pages_read | 0  |
| Innodb_pages_written  | 0  |
| Innodb_row_lock_current_waits | 0  |
| Innodb_row_lock_time  | 0  |
| Innodb_row_lock_time_avg  | 0  |
| Innodb_row_lock_time_max  | 0  |
| Innodb_row_lock_waits | 0  |
| Innodb_rows_deleted   | 0  |
| Innodb_rows_inserted  | 0  |
| Innodb_rows_read  | 0  |
| Innodb_rows_updated   | 0  |
+---++
44 rows in set (0.00 sec)

Anyone have any ideas?

Thanks.

-Stut



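
A quick check along the lines Ben describes (standard statements, nothing
version-specific):

SHOW ENGINES;
-- If the Support column for InnoDB says DISABLED, check the error log;
-- an innodb_log_file_size that no longer matches the existing ib_logfile*
-- files is a classic cause, as described above.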



Re: sort_buffer_size

2008-06-05 Thread Martijn Tonies
Hi,

> Does sort_buffer_size work only for InnoDB, or for both MyISAM and InnoDB?
>
> Or should I use myisam_sort_buffer_size for MyISAM?

According to:
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html

myisam_sort_buffer_size:
 The size of the buffer that is allocated when sorting MyISAM indexes during
a REPAIR TABLE or when creating indexes with CREATE INDEX or ALTER TABLE.

sort_buffer_size:
 Each thread that needs to do a sort allocates a buffer of this size.
Increase this value for faster ORDER BY or GROUP BY operations.

Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com


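
A hedged illustration of where each variable applies (the values are
arbitrary examples, not recommendations):

SET SESSION sort_buffer_size = 4194304;         -- 4M, per thread; used for
                                                -- ORDER BY / GROUP BY sorts
SET GLOBAL myisam_sort_buffer_size = 67108864;  -- 64M; used when REPAIR TABLE,
                                                -- CREATE INDEX or ALTER TABLE
                                                -- rebuild MyISAM indexes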



sort_buffer_size

2008-06-05 Thread Ananda Kumar
Hi All,
Does sort_buffer_size work only for InnoDB, or for both MyISAM and InnoDB?

Or should I use myisam_sort_buffer_size for MyISAM?


date

2008-06-05 Thread Krishna Chandra Prajapati
Hi,

I am looking for a query which can give all the dates of a month. The
inputs are year and month.

-- 
Krishna Chandra Prajapati
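
One way to do this on MySQL 5.0 without a calendar table (an untested
sketch; '2008-06-01' stands in for the year/month inputs): cross-join two
small derived digit tables to generate day offsets 0-39 and keep only the
offsets that stay inside the month.

SELECT DATE_ADD('2008-06-01', INTERVAL ones.n + tens.n * 10 DAY) AS day_of_month
FROM (SELECT 0 AS n UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
      UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) AS ones,
     (SELECT 0 AS n UNION SELECT 1 UNION SELECT 2 UNION SELECT 3) AS tens
WHERE DATE_ADD('2008-06-01', INTERVAL ones.n + tens.n * 10 DAY)
      < DATE_ADD('2008-06-01', INTERVAL 1 MONTH)
ORDER BY day_of_month;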