Re: tuning suggestion for large query

2004-09-06 Thread Egor Egorov
Sun, Jennifer [EMAIL PROTECTED] wrote:

 Seems like even with handler, the big query process is still consuming all
 my RAM and swap and being killed with error
 'VM: killing process mysql
 __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)'
 
 I would like to find a startup parameter either for client or server to
 limit per thread memory usage.

First, ensure that key_buffer is reasonable and takes, say, 50% of RAM.

Then, for each new thread, MySQL will try to allocate bulk_insert_buffer_size,
read_rnd_buffer_size and sort_buffer_size.
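
For example, something like this in my.cnf (purely a sketch for a box with
1 GB of RAM; the exact values are illustrative, not recommendations):

  [mysqld]
  key_buffer              = 512M   # shared by all threads (MyISAM index cache)
  sort_buffer_size        = 2M     # allocated per thread that sorts
  read_rnd_buffer_size    = 1M     # allocated per thread reading sorted rows
  bulk_insert_buffer_size = 8M     # allocated per thread doing bulk inserts

Rough total memory is then key_buffer plus (number of threads times the
per-thread buffers), which is what you want to keep under control.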

See also http://dev.mysql.com/doc/mysql/en/Server_system_variables.html 





-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: tuning suggestion for large query

2004-09-02 Thread mos
At 04:13 PM 9/1/2004, Sun, Jennifer wrote:
Thanks Mike.
Seems like even with handler, the big query process is still consuming all 
my RAM and swap and being killed with error
'VM: killing process mysql
__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)'

I would like to find a startup parameter either for client or server to 
limit per thread memory usage.
Can't help you there, sorry. I don't think that would work, because MySQL is
putting the retrieved rows into memory to get a snapshot of the table, so
free memory has to decrease. If you cap the memory for the thread, it will
just hit a wall.

I thought Handler would work because it uses a cursor into the table and
shouldn't be consuming much memory. You fetch 10,000 rows at a time, then
fetch another 10,000 rows, etc. It should only be consuming enough memory
to hold 10,000 rows. Can you post the code showing how you're using
Handler?

The only other thing I can suggest is to put a select statement inside a
loop and select on the primary key, say rcd_id 1 to 10,000, export those
rows, then select 10,001 to 20,000 and export them, and so on until you
reach EOF. This should be quite fast and consumes only a small amount of
memory. I would recommend using the primary key rather than 'select ...
LIMIT offset,limit', because the offset will need to read over the first
'n' offset rows and this can be quite slow. Because you're doing only part
of the table at a time, you won't get an exact snapshot if people are
updating the table as you are exporting it, but it will be very low on
memory.
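
If it helps, here is a rough sketch of that loop using just the mysql
command-line client from a shell script. Everything here is made up for
illustration: the staging database name, the big_table and rcd_id names,
and the 10,000-row chunk size.

  #!/bin/sh
  # Export big_table in primary-key ranges so only one chunk is handled at a time.
  DB=staging
  CHUNK=10000
  MAX=`mysql -N -e "SELECT MAX(rcd_id) FROM big_table" $DB`
  START=1
  > export.txt
  while [ $START -le $MAX ]; do
      END=`expr $START + $CHUNK - 1`
      mysql -q -N -e "SELECT * FROM big_table WHERE rcd_id BETWEEN $START AND $END" $DB >> export.txt
      START=`expr $END + 1`
  done

Gaps in rcd_id just produce smaller (or empty) chunks, which is harmless.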

Mike

-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 01, 2004 4:39 PM
To: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query
At 10:40 AM 9/1/2004, you wrote:
Hi,

We have a job that does 'select * from big-table' on a staging MySQL
database, then dumps to a data warehouse; it is scheduled to run once a day,
but may be run manually. We also have several other small OLTP databases on
the same server.
When the big job runs, it uses all the physical memory and swap, and all
other processes slow down because of this.

I would like to limit the resource usage for each MySQL client so that
it can use only a certain maximum amount of RAM, and doesn't select everything
into memory before displaying it to users. However, I couldn't find any
parameter that would let me implement this.
Has anybody encountered the same issue before? Please share your experience.

Thanks.

-
Try the Handler command with an Index if you need the results sorted. It
may do the trick. :)
Mike

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 Hi,
 
 We have a job that does 'select * from big-table' on a staging MySQL database, then
 dumps to a data warehouse; it is scheduled to run once a day, but may be run manually.
 We also have several other small OLTP databases on the same server.
 When the big job runs, it uses all the physical memory and swap, and all other
 processes slow down because of this.
 
 I would like to limit the resource usage for each MySQL client so that it can use
 only a certain maximum amount of RAM, and doesn't select everything into memory before
 displaying it to users. However, I couldn't find any parameter that would let me
 implement this.
 Has anybody encountered the same issue before? Please share your experience.

How exactly are you running this select * from big-table?  From the
mysql command line client?  Is that what is using memory?  It isn't
clear from your post if it is the server or something else using
memory.

If it is the mysql command line client that is the issue, try adding a
-q parameter.  If you are using myisam tables, however, keep in mind
that table will be effectively locked for the whole duration... but if
it is the mysql command line client (which defaults to buffering
everything in memory), it may be faster to use -q anyway.
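
For example (host, user, database and table names are placeholders):

  mysql -q -h staging-host -u someuser -p -e "SELECT * FROM big_table" staging_db > big_table.txt

With -q (--quick) the client writes each row out as it arrives instead of
buffering the entire result set in client memory first.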

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: tuning suggestion for large query

2004-09-02 Thread Sun, Jennifer
The command is issued from the mysql command line. Are there any parameters or options I
can use without locking the table?


-Original Message-
From: Marc Slemko [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 02, 2004 2:24 PM
To: Sun, Jennifer
Cc: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query


On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 Hi,
 
 We have a job that does 'select * from big-table' on a staging MySQL database, then
 dumps to a data warehouse; it is scheduled to run once a day, but may be run manually.
 We also have several other small OLTP databases on the same server.
 When the big job runs, it uses all the physical memory and swap, and all other
 processes slow down because of this.
 
 I would like to limit the resource usage for each MySQL client so that it can use
 only a certain maximum amount of RAM, and doesn't select everything into memory before
 displaying it to users. However, I couldn't find any parameter that would let me
 implement this.
 Has anybody encountered the same issue before? Please share your experience.

How exactly are you running this select * from big-table?  From the
mysql command line client?  Is that what is using memory?  It isn't
clear from your post if it is the server or something else using
memory.

If it is the mysql command line client that is the issue, try adding a
-q parameter.  If you are using myisam tables, however, keep in mind
that table will be effectively locked for the whole duration... but if
it is the mysql command line client (which defaults to buffering
everything in memory), it may be faster to use -q anyway.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: tuning suggestion for large query

2004-09-02 Thread Sun, Jennifer
I did 'handler table_name read limit large_numbers'. Is there a way I can use a lower
number, but automatically loop through the table and display all of its
records? Thanks.

-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 02, 2004 10:37 AM
To: [EMAIL PROTECTED]
Subject: RE: tuning suggestion for large query


At 04:13 PM 9/1/2004, Sun, Jennifer wrote:
Thanks Mike.
Seems like even with handler, the big query process is still consuming all 
my RAM and swap and being killed with error
'VM: killing process mysql
__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)'

I would like to find a startup parameter either for client or server to 
limit per thread memory usage.

Can't help you there, sorry. I don't think that would work, because MySQL is
putting the retrieved rows into memory to get a snapshot of the table, so
free memory has to decrease. If you cap the memory for the thread, it will
just hit a wall.

I thought Handler would work because it uses a cursor into the table and
shouldn't be consuming much memory. You fetch 10,000 rows at a time, then
fetch another 10,000 rows, etc. It should only be consuming enough memory
to hold 10,000 rows. Can you post the code showing how you're using
Handler?

The only other thing I can suggest is to put a select statement inside a
loop and select on the primary key, say rcd_id 1 to 10,000, export those
rows, then select 10,001 to 20,000 and export them, and so on until you
reach EOF. This should be quite fast and consumes only a small amount of
memory. I would recommend using the primary key rather than 'select ...
LIMIT offset,limit', because the offset will need to read over the first
'n' offset rows and this can be quite slow. Because you're doing only part
of the table at a time, you won't get an exact snapshot if people are
updating the table as you are exporting it, but it will be very low on
memory.

Mike


-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 01, 2004 4:39 PM
To: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query


At 10:40 AM 9/1/2004, you wrote:
 Hi,
 
 We have a job that does 'select * from big-table' on a staging MySQL
 database, then dumps to a data warehouse; it is scheduled to run once a day,
 but may be run manually. We also have several other small OLTP databases on
 the same server.
 When the big job runs, it uses all the physical memory and swap, and all
 other processes slow down because of this.
 
 I would like to limit the resource usage for each MySQL client so that
 it can use only a certain maximum amount of RAM, and doesn't select everything
 into memory before displaying it to users. However, I couldn't find any
 parameter that would let me implement this.
 Has anybody encountered the same issue before? Please share your experience.
 
 Thanks.
 
 -


Try the Handler command with an Index if you need the results sorted. It
may do the trick. :)

Mike




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
Due to the nature of myisam tables, when you are doing a query then
the table will be locked for writes.  Reads will still be permitted
until another write request is made, at which time all further reads
and writes will be blocked until the query completes.

This, however, is already happening even without -q and adding the
-q will likely significantly shorten the time to execute, depending
on exactly how large this table is.

myisam is a very limiting table type as soon as you want to do
anything more than read from or write to a single row at a time using
indexed lookups.  innodb tables do not have this problem, although
they have limitations of their own.

On Thu, 2 Sep 2004 14:30:24 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 The command is issued from the mysql command line. Are there any parameters or options I
 can use without locking the table?
 
 
 
 
 -Original Message-
 From: Marc Slemko [mailto:[EMAIL PROTECTED]
 Sent: Thursday, September 02, 2004 2:24 PM
 To: Sun, Jennifer
 Cc: [EMAIL PROTECTED]
 Subject: Re: tuning suggestion for large query
 
 On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
 [EMAIL PROTECTED] wrote:
  Hi,
 
  We have a job that does 'select * from big-table' on a staging MySQL database, then
  dumps to a data warehouse; it is scheduled to run once a day, but may be run
  manually. We also have several other small OLTP databases on the same server.
  When the big job runs, it uses all the physical memory and swap, and all other
  processes slow down because of this.
 
  I would like to limit the resource usage for each MySQL client so that it can
  use only a certain maximum amount of RAM, and doesn't select everything into memory before
  displaying it to users. However, I couldn't find any parameter that would let me
  implement this.
  Has anybody encountered the same issue before? Please share your experience.
 
 How exactly are you running this select * from big-table?  From the
 mysql command line client?  Is that what is using memory?  It isn't
 clear from your post if it is the server or something else using
 memory.
 
 If it is the mysql command line client that is the issue, try adding a
 -q parameter.  If you are using myisam tables, however, keep in mind
 that table will be effectively locked for the whole duration... but if
 it is the mysql command line client (which defaults to buffering
 everything in memory), it may be faster to use -q anyway.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: tuning suggestion for large query

2004-09-02 Thread Sun, Jennifer
Thanks Marc,

What version of MyISAM tables are you talking about? We are on 4.0.20, and while the
big-table query was running, I inserted into it twice without any issues.
The -q option worked well for the mysql client. Thanks.


Jennifer 

-Original Message-
From: Marc Slemko [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 02, 2004 2:41 PM
To: Sun, Jennifer
Cc: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query


Due to the nature of myisam tables, when you are doing a query then
the table will be locked for writes.  Reads will still be permitted
until another write request is made, at which time all further reads
and writes will be blocked until the query completes.
This, however, is already happening even without -q and adding the
-q will likely significantly shorten the time to execute, depending
on exactly how large this table is.

myisam is a very limiting table type as soon as you want to do
anything more than read from or write to a single row at a time using
indexed lookups.  innodb tables do not have this problem, although
they have limitations of their own.

On Thu, 2 Sep 2004 14:30:24 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 The command is issued from the mysql command line. Are there any parameters or options I
 can use without locking the table?
 
 
 
 
 -Original Message-
 From: Marc Slemko [mailto:[EMAIL PROTECTED]
 Sent: Thursday, September 02, 2004 2:24 PM
 To: Sun, Jennifer
 Cc: [EMAIL PROTECTED]
 Subject: Re: tuning suggestion for large query
 
 On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
 [EMAIL PROTECTED] wrote:
  Hi,
 
  We have a job that does 'select * from big-table' on a staging MySQL database, then
  dumps to a data warehouse; it is scheduled to run once a day, but may be run
  manually. We also have several other small OLTP databases on the same server.
  When the big job runs, it uses all the physical memory and swap, and all other
  processes slow down because of this.
 
  I would like to limit the resource usage for each MySQL client so that it can
  use only a certain maximum amount of RAM, and doesn't select everything into memory before
  displaying it to users. However, I couldn't find any parameter that would let me
  implement this.
  Has anybody encountered the same issue before? Please share your experience.
 
 How exactly are you running this select * from big-table?  From the
 mysql command line client?  Is that what is using memory?  It isn't
 clear from your post if it is the server or something else using
 memory.
 
 If it is the mysql command line client that is the issue, try adding a
 -q parameter.  If you are using myisam tables, however, keep in mind
 that table will be effectively locked for the whole duration... but if
 it is the mysql command line client (which defaults to buffering
 everything in memory), it may be faster to use -q anyway.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
On Thu, 2 Sep 2004 15:19:44 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 Thanks Marc,
 
 What version of MyISAM tables are you talking about? We are on 4.0.20, and while the
 big-table query was running, I inserted into it twice without any issues.
 The -q option worked well for the mysql client. Thanks.

There is an optimization that can allow inserts (note: not updates)
and selects to happen at the same time, which may be what you are
seeing.  There are lots of corner cases, etc. so your best bet is to
check out the documentation which does a reasonable job of explaining
them, in particular:

http://dev.mysql.com/doc/mysql/en/Internal_locking.html
http://dev.mysql.com/doc/mysql/en/Table_locking.html

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: tuning suggestion for large query

2004-09-02 Thread mos
At 01:33 PM 9/2/2004, Sun, Jennifer wrote:
I did 'handler table_name read limit large_numbers'. Is there a way I can
use a lower number, but automatically loop through the table and display
all of its records? Thanks.
If large_numbers is the number of rows in the table, then of course it
will consume a lot of memory and makes it the equivalent of a plain Select
statement. To use Handler properly you need two loops. The outer loop
executes the Handler statement to get the next 10,000 rows. Then the
inner loop goes through those 10,000 rows and exports them to the other
table. You exit the outer loop when you get EOF. So you only have 10,000
rows in memory at one time (or however many you want to process at a
time). You need some sort of programming language to write the code to do
the looping; you can't do this sort of looping at the MySQL prompt, I'm
afraid.
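
For what it's worth, the statement sequence such a driver program would send
looks roughly like this (the table name and the 10,000-row chunk size are
just examples):

  HANDLER big_table OPEN;
  HANDLER big_table READ FIRST LIMIT 10000;  -- first chunk, in natural row order
  HANDLER big_table READ NEXT LIMIT 10000;   -- repeat this from the client code
  HANDLER big_table READ NEXT LIMIT 10000;   -- ...until it returns an empty result
  HANDLER big_table CLOSE;

The client code wraps the READ NEXT in a loop, exports each batch, and stops
when a batch comes back empty.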

Mike

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: tuning suggestion for large query

2004-09-01 Thread mos
At 10:40 AM 9/1/2004, you wrote:
Hi,
We have a job that does 'select * from big-table' on a staging MySQL
database, then dumps to a data warehouse; it is scheduled to run once a day,
but may be run manually. We also have several other small OLTP databases on
the same server.
When the big job runs, it uses all the physical memory and swap, and all
other processes slow down because of this.

I would like to limit the resource usage for each MySQL client so that
it can use only a certain maximum amount of RAM, and doesn't select everything
into memory before displaying it to users. However, I couldn't find any
parameter that would let me implement this.
Has anybody encountered the same issue before? Please share your experience.

Thanks.
-

Try the Handler command with an Index if you need the results sorted. It 
may do the trick. :)
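
Something along these lines, assuming the table has a usable index (the
table name, index name and row count are only placeholders):

  HANDLER big_table OPEN;
  HANDLER big_table READ `PRIMARY` FIRST LIMIT 10000;  -- rows come back in index order
  HANDLER big_table READ `PRIMARY` NEXT LIMIT 10000;   -- continues where the last read stopped
  HANDLER big_table CLOSE;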

Mike
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: tuning suggestion for large query

2004-09-01 Thread Sun, Jennifer
Thanks Mike.
Seems like even with handler, the big query process is still consuming all my RAM and 
swap and being killed with error 
'VM: killing process mysql
__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)'

I would like to find a startup parameter either for client or server to limit per 
thread memory usage.  

-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 01, 2004 4:39 PM
To: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query


At 10:40 AM 9/1/2004, you wrote:
Hi,

We have a job that does 'select * from big-table' on a staging MySQL
database, then dumps to a data warehouse; it is scheduled to run once a day,
but may be run manually. We also have several other small OLTP databases on
the same server.
When the big job runs, it uses all the physical memory and swap, and all
other processes slow down because of this.

I would like to limit the resource usage for each MySQL client so that
it can use only a certain maximum amount of RAM, and doesn't select everything
into memory before displaying it to users. However, I couldn't find any
parameter that would let me implement this.
Has anybody encountered the same issue before? Please share your experience.

Thanks.

-


Try the Handler command with an Index if you need the results sorted. It 
may do the trick. :)

Mike




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]