On Mon, 2006-09-11 at 17:07 -0500, Scott Marlowe wrote:
Also, you should be running at LEAST 7.4.13, the latest release of
7.4. It's possible there's a fix between 7.4.7 and 7.4.13 that fixes
your problem. Doubt it, but it could be. However, the more important
point is that
Hello,
I have a big table called products. Table size: 1123MB. Toast table
size: 32MB. Indexes size: 380MB.
I try to do a query like this:
select id,name from products where name like '%Mug%';
Yes, I know that tsearch2 is better for this, but please read on. The
above query gives this
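The snippet is cut off before the plan, but the underlying behaviour is easy to reproduce. A minimal sketch, using Python's sqlite3 as a stand-in for the poster's PostgreSQL table (the sample data is invented): a pattern with a leading wildcard such as '%Mug%' cannot be satisfied from a btree index, so every row's name has to be examined.

```python
import sqlite3

# Hypothetical miniature version of the products table, built in
# sqlite3 purely to illustrate the planner behaviour.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO products (name) VALUES (?)",
                 [("Coffee Mug",), ("Mug Holder",), ("Plate",)])
conn.execute("CREATE INDEX products_name_idx ON products (name)")

# A leading-wildcard pattern cannot use the btree index: every name
# must be inspected, so the whole table is scanned.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id, name FROM products WHERE name LIKE '%Mug%'"
).fetchall()
print(plan)  # the plan reports a full scan of products

rows = conn.execute(
    "SELECT id, name FROM products WHERE name LIKE '%Mug%'"
).fetchall()
print(rows)
```

The same full-scan behaviour is what makes tsearch2 (or trigram indexing) the usual recommendation for substring search.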
Lazlo,
Meanwhile, iostat 5 gives something like this:
tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
This is your problem. Do the following and report the
Luke Lonergan wrote:
Lazlo,
Meanwhile, iostat 5 gives something like this:
tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
This is your problem. Do
Heikki Linnakangas wrote:
Are there any other columns besides id and name in the table? How big
is products.txt compared to the heap file?
Yes, many other columns. The products.txt is only 59MB. That is close
to the size of the index (66MB).
Another question: I have a btree index on
Laszlo Nagy gandalf 'at' designaproduct.biz writes:
This is a mirrored disk with two SATA disks. In theory, writing should
be slower than reading. Is this a hardware problem? Or is it that
sync did not do the sync?
SATA disks are supposed to be capable of lying to pg's fsync (pg
asking the
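For readers who want to check the write-cache behaviour alluded to here, a hedged sketch follows; the exact knob names vary by OS and kernel version, and the device name is hypothetical.

```shell
# Checking whether the on-drive write cache is absorbing fsync.

# FreeBSD (the poster's platform): the ATA write cache can be disabled
# via a loader tunable, at the cost of write throughput.
#   /boot/loader.conf:
#   hw.ata.wc="0"

# Linux equivalent for comparison (hypothetical device name):
# hdparm -W0 /dev/sda    # turn off the on-drive write cache
```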
Laszlo Nagy gandalf 'at' designaproduct.biz writes:
Probably, but PostgreSQL doesn't know how to do that. Even if it
did, it depends on how many matches there are. If you scan the index
and then fetch the matching rows from the heap, you're doing random
I/O to the heap. That becomes slower
Heikki Linnakangas wrote:
Guillaume Cottenceau wrote:
Laszlo Nagy gandalf 'at' designaproduct.biz writes:
Probably, but PostgreSQL doesn't know how to do that. Even if it
did, it depends on how many matches there are. If you scan the index
and then fetch the matching rows from the heap,
Alvaro Herrera wrote:
Are you saying that an indexscan Filter only acts after getting the
heap tuple? If that's the case, then there's room for optimization
here, namely if the affected column is part of the index key, then we
could do the filtering before fetching the heap tuple.
That's
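The optimization Alvaro describes can be sketched as a toy model (plain Python, not PostgreSQL internals; the data is invented): when the filtered column is part of the index key, the pattern can be tested against the index entry, and the heap fetch skipped for entries that cannot match.

```python
# Toy model of an index scan with a filter. Each index entry is
# (key, tid); the heap maps tid -> full row.
index = [("Coffee Mug", 0), ("Mug Holder", 1), ("Plate", 2), ("Spoon", 3)]
heap = {0: (0, "Coffee Mug"), 1: (1, "Mug Holder"),
        2: (2, "Plate"), 3: (3, "Spoon")}

heap_fetches = 0

def scan_filter_on_heap(pattern):
    """Current behaviour: fetch every heap tuple, then apply the filter."""
    global heap_fetches
    out = []
    for _, tid in index:
        heap_fetches += 1          # random I/O to the heap for every entry
        row = heap[tid]
        if pattern in row[1]:
            out.append(row)
    return out

def scan_filter_on_index(pattern):
    """Proposed behaviour: test the indexed key first, fetch only matches."""
    global heap_fetches
    out = []
    for key, tid in index:
        if pattern in key:         # filter applied to the index entry
            heap_fetches += 1      # heap fetched only for candidates
            out.append(heap[tid])
    return out

scan_filter_on_heap("Mug")
fetches_heap_side = heap_fetches   # one fetch per index entry

heap_fetches = 0
scan_filter_on_index("Mug")
fetches_index_side = heap_fetches  # fetches only for matching entries
```

As Tom notes below, this only works when the index can disgorge the original indexed value, which is not true of every index type.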
Guillaume Cottenceau wrote:
Laszlo Nagy gandalf 'at' designaproduct.biz writes:
Probably, but PostgreSQL doesn't know how to do that. Even if it
did, it depends on how many matches there are. If you scan the index
and then fetch the matching rows from the heap, you're doing random
I/O to the
On Tue, 2006-09-12 at 02:18, Piñeiro wrote:
On Mon, 2006-09-11 at 17:07 -0500, Scott Marlowe wrote:
Also also, you should be running at LEAST 7.4.13, the latest release of
7.4. It's possible there's a fix between 7.4.7 and 7.4.13 that fixes
your problem. Doubt it, but it could be.
On Mon, 2006-09-11 at 20:53, Merlin Moncure wrote:
On 9/11/06, Scott Marlowe [EMAIL PROTECTED] wrote:
I'd suggest two things.
one: Get a better ERP... :) or at least one you can inject some
intelligence into, and two: upgrade to postgresql 8.1, or even 8.2 which
will be released
Alvaro Herrera [EMAIL PROTECTED] writes:
Are you saying that an indexscan Filter only acts after getting the
heap tuple?
Correct.
If that's the case, then there's room for optimization
here, namely if the affected column is part of the index key, then we
could do the filtering before
Piñeiro [EMAIL PROTECTED] writes:
* About the Postgres version: you advise me to upgrade from 7.4.7 (the
version in sarge) to 8.2. Well, I don't want to be a troll, but I
upgraded from 7.2.1 (woody) to 7.4.7 and it got worse; do you really think
that upgrading to 8.1 will
Hi All
I have installed an application with postgres-8.1.4, and I have to
optimize its performance. As a measure I thought of enabling
autocommit. Is that the right decision to take? If so, please suggest
the steps that I need to follow in order to implement the Auto
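One point worth illustrating for this question: in PostgreSQL 8.1 autocommit is a client-side setting, and what usually matters for performance is batching statements into one transaction instead of committing each statement separately. A minimal sketch using Python's sqlite3 as a stand-in (it exposes the same autocommit-versus-batch distinction; the table name is invented):

```python
import os
import sqlite3
import tempfile

# A file-backed database, so each commit actually hits storage.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (n INTEGER)")

# Autocommit: one commit per statement, each paying the sync cost.
for n in range(100):
    conn.execute("INSERT INTO t VALUES (?)", (n,))

# Explicit transaction: one commit for the whole batch.
conn.execute("BEGIN")
for n in range(100, 200):
    conn.execute("INSERT INTO t VALUES (?)", (n,))
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

Both paths insert the same rows; the batched form simply amortizes the commit overhead across many statements.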
Sorry, I answered the message only to Scott Marlowe. I am re-sending the response
- Forwarded message
From: Piñeiro [EMAIL PROTECTED]
To: Scott Marlowe [EMAIL PROTECTED]
Subject: Re: [PERFORM] Performance problem with Sarge compared with Woody
Date: Tue, 12 Sep 2006 17:36:41 +0200
On Tue,
On Tue, 2006-09-12 at 11:06, Piñeiro wrote:
- Forwarded message
From: Piñeiro [EMAIL PROTECTED]
To: Scott Marlowe [EMAIL PROTECTED]
Subject: Re: [PERFORM] Performance problem with Sarge compared with Woody
Date: Tue, 12 Sep 2006 17:36:41 +0200
On Tue, 2006-09-12 at 09:27
Laszlo Nagy [EMAIL PROTECTED] writes:
Meanwhile, iostat 5 gives something like this:
tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
0 12 125.66 128
Tom Lane wrote:
Only if the index is capable of disgorging the original value of the
indexed column, a fact not in evidence in general (counterexample:
polygons indexed by their bounding boxes in an r-tree). But yeah,
it's interesting to think about applying filters at the index fetch
step for
Tom Lane wrote:
Why is that showing 85+ percent *system* CPU time?? I could believe a
lot of idle CPU if the query is I/O bound, or a lot of user time if PG
was being a hog about doing the ~~ comparisons (not too unlikely BTW).
I'm sorry, this was really confusing. I don't know what it was -
On Tue, 2006-09-12 at 11:20 -0500, Scott Marlowe wrote:
As Tom asked, post the explain analyze output for this query. I'm
guessing there'll be a stage that is creating millions (possibly upon
millions) of rows from a cross product.
Lazlo,
On 9/12/06 10:01 AM, Laszlo Nagy [EMAIL PROTECTED] wrote:
zeusd1= explain analyze select id from product_search where name_desc
like '%Mug%';
QUERY PLAN
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Piñeiro
TIP 6: explain analyze is your friend
Well, yes, it is a friend, but as the select never finished on the
Sarge postgres version, I can't use explain analyze. I show you the plain
explain, with the hope that someone has any
Piñeiro wrote:
On Tue, 2006-09-12 at 11:20 -0500, Scott Marlowe wrote:
As Tom asked, post the explain analyze output for this query. I'm
guessing there'll be a stage that is creating millions (possibly upon
millions) of rows from a cross product.
Well, yes, it is a friend, but
Hello All
I am getting this message in my log files for my database.
LOG: out of file descriptors: Too many open files; release and retry.
At some point the memory didn't get released and the postmaster reset itself
terminating all client connections. I am not sure what direction to go. I can
Laszlo Nagy [EMAIL PROTECTED] writes:
Tom Lane wrote:
Why is that showing 85+ percent *system* CPU time??
I'm sorry, this was really confusing. I don't know what it was -
probably a background system process, started from cron (?). I retried
the same query and I got this:
[ around 80%
PG 8.0.3 is choosing a bad plan for a query.
I'm going to force the plan (by making one join into a function).
I'd like to know if this is unexpected; in general,
can PG see that a join on a grouped-by field
can be pushed down into the query as an indexable filter?
The query below joins a
tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
0 12 125.66 128 15.75 125.26 128 15.68 10 0 85 6 0
0 12 124.66 129 15.67 124.39 129 15.64 12 0
Craig A. James wrote:
There IS a bug for SATA disk drives in some versions of the Linux
kernel. On a lark I ran some of the I/O tests in this thread, and
much to my surprise discovered my write speed was 6 MB/sec ... ouch!
On an identical machine, different kernel, the write speed was 54
I am having problems performing a join on two large tables. It seems to only
want to use a sequential scan on the join, but that method seems to be slower
than an index scan. I've never actually had it complete the sequential scan
because I stop it after 24+ hours. I've run joins against large
Lazlo,
On 9/12/06 2:49 PM, Laszlo Nagy [EMAIL PROTECTED] wrote:
I'm using FreeBSD, and not just because it dynamically alters the
priority of long running processes. :-)
Understood.
Linux and FreeBSD often share some driver technology.
I have had extremely bad performance historically with
Mischa Sandberg [EMAIL PROTECTED] writes:
can PG see that a join on a grouped-by field
can be pushed down into the query as an indexable filter?
No. The GROUP BY serves as a partial optimization fence. If you're
concerned about the speed of this query, I recommend making a different
view in
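Tom's point about the GROUP BY fence can be sketched concretely (sqlite3 via Python as a stand-in for PostgreSQL; the table and column names are invented): pushing the restriction below the GROUP BY by hand yields the same answer as filtering the grouped result afterwards, which is why a purpose-built view can be faster.

```python
import sqlite3

# Invented example tables, only to show the two query shapes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer_id INTEGER, amount INTEGER);
INSERT INTO orders VALUES (1, 10), (1, 20), (2, 5), (3, 7);
""")

# The fenced form: group everything, then filter the grouped result.
fenced = conn.execute("""
    SELECT customer_id, SUM(amount) FROM orders
    GROUP BY customer_id
""").fetchall()
fenced = [r for r in fenced if r[0] == 1]

# Pushing the filter below the GROUP BY by hand, as one might do in a
# purpose-built view: the restriction is applied before aggregation.
pushed = conn.execute("""
    SELECT customer_id, SUM(amount) FROM orders
    WHERE customer_id = 1
    GROUP BY customer_id
""").fetchall()
```

Both forms return the same rows; the second aggregates only the rows that survive the filter.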
Laszlo Nagy wrote:
Craig A. James wrote:
There IS a bug for SATA disk drives in some versions of the Linux
kernel. On a lark I ran some of the I/O tests in this thread, and
much to my surprise discovered my write speed was 6 MB/sec ... ouch!
On an identical machine, different kernel, the
Tom Lane wrote:
Mischa Sandberg [EMAIL PROTECTED] writes:
can PG see that a join on a grouped-by field
can be pushed down into the query as an indexable filter?
No. The GROUP BY serves as a partial optimization fence. If you're
concerned about the speed of this query, I recommend making a