"Sun, Jennifer" <[EMAIL PROTECTED]> wrote:
> ...still consuming all my RAM and swap and being killed with error
> 'VM: killing process mysql
> __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)'
>
> I would like to find a startup parameter either for client or server
> to limit per-thread memory usage.
At 01:33 PM 9/2/2004, Sun, Jennifer wrote:
I did 'handler table_name read limit large_numbers'. Is there a way I can
use a lower number but automatically loop through the table in batches until
all of the records are displayed? Thanks.
If "large_numbers" is the number of rows in the table, then of course it...
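A chunked read along those lines might look like the sketch below; the table
name and batch size are placeholders, and the "repeat until done" loop has to
live in whatever script or client drives the session, since the open handler
is tied to that one connection:

  HANDLER big_table OPEN;
  -- first batch, in natural row order
  HANDLER big_table READ FIRST LIMIT 1000;
  -- following batches: repeat until fewer than 1000 rows come back
  HANDLER big_table READ NEXT LIMIT 1000;
  HANDLER big_table READ NEXT LIMIT 1000;
  HANDLER big_table CLOSE;

Each READ NEXT picks up where the previous one stopped, so only one batch of
rows is ever held in memory at a time.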
On Thu, 2 Sep 2004 15:19:44 -0400, Sun, Jennifer
<[EMAIL PROTECTED]> wrote:
> Thanks Marc,
>
> What version of myisam table are you talking about? We are on 4.0.20; when I ran the
> big-table query, I tried to insert into it twice without any issues.
> The -q option worked well for the mysql client. Thanks.
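For reference: -q (--quick) tells the mysql command-line client not to buffer
the whole result set; rows are written out as they arrive from the server. An
export along the lines of mysql -q -e "select * from big_table" staging_db >
big_table.txt (database, table, and file names are only placeholders here)
therefore keeps client memory flat no matter how large the table is.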
Sent: Thursday, September 02, 2004 2:41 PM
To: Sun, Jennifer
Cc: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query
Due to the nature of myisam tables, while you are running a query the
table will be locked for writes. Reads will still be permitted
until another write request is...
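One way to watch that locking happen (just a sketch; table and column names
are made up) is from a few separate connections while the big read is running:

  -- connection 1: the long read holds a READ lock on the MyISAM table
  SELECT * FROM big_table;

  -- connection 2: this write has to wait until the read above finishes
  UPDATE big_table SET some_col = some_col WHERE id = 1;

  -- connection 3: the waiting threads show up with State = 'Locked'
  SHOW FULL PROCESSLIST;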
... can use without locking the table?
>
>
>
>
> -Original Message-
> From: Marc Slemko [mailto:[EMAIL PROTECTED]
> Sent: Thursday, September 02, 2004 2:24 PM
> To: Sun, Jennifer
> Cc: [EMAIL PROTECTED]
> Subject: Re: tuning suggestion for large query
>
> On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
> <[EMAIL PROTECTED]> wrote:
0:37 AM
To: [EMAIL PROTECTED]
Subject: RE: tuning suggestion for large query
At 04:13 PM 9/1/2004, Sun, Jennifer wrote:
>Thanks Mike.
>Seems like even with handler, the big query process is still consuming all
>my RAM and swap and being killed with error
>'VM: killing process mysql __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)'
On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> We have a job that does 'select * from big-table' on a staging mysql database, then
> dumps to a data warehouse; it is scheduled to run once a day, but may be run manually.
> Also we have several other small OLTP databases on the same server.
...won't get an exact snapshot if people are updating
the table as you are exporting it, but it will be very low on memory.
Mike
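The snippet above is cut off, so it doesn't show exactly which export method
Mike suggested; one low-memory approach that fits his description is to have
the server write the rows straight to a file (the path and delimiters below
are only an example):

  -- rows are streamed into a tab-delimited file on the database host;
  -- neither the server nor the client buffers the full result set
  SELECT *
    INTO OUTFILE '/tmp/big_table.txt'
    FIELDS TERMINATED BY '\t'
    LINES TERMINATED BY '\n'
    FROM big_table;

Note that the file is created on the server host and requires the FILE
privilege, so it only helps if the warehouse load can pick the file up from
there.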
I would like to find a startup parameter either for client or server to limit
per-thread memory usage.
-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 01, 2004 4:39 PM
To: [EMAIL PROTECTED]
Subject: Re: tuning suggestion for large query
At 10:40 AM 9/1/2004, you wrote:
Hi,
We have a job that does 'select * from big-table' on a staging mysql
database, then dumps to a data warehouse; it is scheduled to run once a day,
but may be run manually. Also we have several other small OLTP databases on
the same server.
When the big job runs, it would use all the physical mem and swap...