On Tue, Sep 12, 2006 at 04:17:34PM -0600, Joshua Marsh wrote:
> data=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM
> v.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH
> FROM s.custdate) FROM view_505 v INNER JOIN r3s169 s ON v.dsiacctno =
> s.dsiacctno;
>
On Tue, Sep 12, 2006 at 03:33:08PM -0400, [EMAIL PROTECTED] wrote:
> Hello All
>
> I am getting this message in my log files for my database.
>
> LOG: out of file descriptors: Too many open files; release and retry.
>
At some point the memory didn't get released and the postmaster reset itself
Tom Lane wrote:
Mischa Sandberg <[EMAIL PROTECTED]> writes:
can PG see that a join on a grouped-by field
can be pushed down into the query as an indexable filter?
No. The GROUP BY serves as a partial optimization fence. If you're
concerned about the speed of this query, I recommend making a
Laszlo Nagy wrote:
Craig A. James wrote:
There IS a bug for SATA disk drives in some versions of the Linux
kernel. On a lark I ran some of the I/O tests in this thread, and
much to my surprise discovered my write speed was 6 MB/sec ... ouch!
On an identical machine, different kernel, the w
Mischa Sandberg <[EMAIL PROTECTED]> writes:
> can PG see that a join on a grouped-by field
> can be pushed down into the query as an indexable filter?
No. The GROUP BY serves as a partial optimization fence. If you're
concerned about the speed of this query, I recommend making a different
view
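Tom's suggested workaround can be sketched as follows (all table and column names here are hypothetical; the point is that a restriction written below the GROUP BY can use an index, while one joined against the grouped view cannot):

```sql
-- Fenced version: the join condition cannot be pushed past the GROUP BY
--   SELECT a.acctno, v.total
--   FROM accounts a JOIN v_totals v ON v.acctno = a.acctno;
-- Rewritten so the restriction sits below the aggregate:
SELECT t.acctno, sum(t.amount) AS total
FROM transactions t
WHERE t.acctno IN (SELECT acctno FROM accounts)  -- indexable filter
GROUP BY t.acctno;
```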
Laszlo,
On 9/12/06 2:49 PM, "Laszlo Nagy" <[EMAIL PROTECTED]> wrote:
> I'm using FreeBSD, and not just because it dynamically alters the
> priority of long running processes. :-)
Understood.
Linux and FreeBSD often share some driver technology.
I have had extremely bad performance historically
I am having problems performing a join on two large tables. It seems to only
want to use a sequential scan on the join, but that method seems to be slower
than an index scan. I've never actually had it complete the sequential scan
because I stop it after 24+ hours. I've run joins against large
Craig A. James wrote:
There IS a bug for SATA disk drives in some versions of the Linux
kernel. On a lark I ran some of the I/O tests in this thread, and
much to my surprise discovered my write speed was 6 MB/sec ... ouch!
On an identical machine, different kernel, the write speed was 54 MB
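A sequential-write check of the kind run in this thread can be sketched with dd (GNU dd assumed; the path and sizes are arbitrary). The MB/s figure dd reports is the number to compare across kernels:

```shell
# Time a 128 MB sequential write; conv=fsync flushes the data to disk
# before dd exits, so a write cache can't inflate the MB/s figure.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=128 conv=fsync
rm -f /tmp/ddtest
```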
tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
0 12 125.66 128 15.75 125.26 128 15.68 10 0 85 6 0
0 12 124.66 129 15.67 124.39 129 15.64 12 0
PG 8.0.3 is choosing a bad plan for a query.
I'm going to force the plan (by making one join into a function).
I'd like to know if this is unexpected; in general,
can PG see that a join on a grouped-by field
can be pushed down into the query as an indexable filter?
The query below joins a t
Laszlo Nagy <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Why is that showing 85+ percent *system* CPU time??
> I'm sorry, this was really confusing. I don't know what it was -
> probably a background system process, started from cron (?). I retried
> the same query and I got this:
> [ around
Hello All
I am getting this message in my log files for my database.
LOG: out of file descriptors: Too many open files; release and retry.
At some point the memory didn't get released and the postmaster reset itself
terminating all client connections. I am not sure what direction to go. I can
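The usual suspects for this error are the kernel's file-descriptor limits and PostgreSQL's own max_files_per_process setting; a quick way to inspect them (exact values and sysctl names vary by OS, so treat these as illustrative):

```shell
# Per-process open-file limit inherited by the postmaster
ulimit -n
# System-wide ceiling: kern.maxfiles on FreeBSD, fs.file-max on Linux
# (uncomment the one that matches your OS)
#sysctl kern.maxfiles
#sysctl fs.file-max
# If raising OS limits is not an option, lower max_files_per_process
# in postgresql.conf so each backend keeps fewer files open.
```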
Piñeiro wrote:
> On Tue, 2006-09-12 at 11:20 -0500, Scott Marlowe wrote:
> > As Tom asked, post the explain analyze output for this query. I'm
> > guessing there'll be a stage that is creating millions (possibly upon
> > millions) of rows from a cross product.
> >
> Well, yes, it is a frie
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Piñeiro
> > TIP 6: explain analyze is your friend
> Well, yes, it is a friend, but as the SELECT on the Sarge Postgres version
> never finished, I can't use an explain analyze. I show you the explain,
> with the hope that someone has an
Laszlo,
On 9/12/06 10:01 AM, "Laszlo Nagy" <[EMAIL PROTECTED]> wrote:
> zeusd1=> explain analyze select id from product_search where name_desc
> like '%Mug%';
>QUERY PLAN
> -
On Tue, 2006-09-12 at 11:20 -0500, Scott Marlowe wrote:
> As Tom asked, post the explain analyze output for this query. I'm
> guessing there'll be a stage that is creating millions (possibly upon
> millions) of rows from a cross product.
>
Tom Lane wrote:
Why is that showing 85+ percent *system* CPU time?? I could believe a
lot of idle CPU if the query is I/O bound, or a lot of user time if PG
was being a hog about doing the ~~ comparisons (not too unlikely BTW).
I'm sorry, this was really confusing. I don't know what it was -
Tom Lane wrote:
Only if the index is capable of disgorging the original value of the
indexed column, a fact not in evidence in general (counterexample:
polygons indexed by their bounding boxes in an r-tree). But yeah,
it's interesting to think about applying filters at the index fetch
step for i
Laszlo Nagy <[EMAIL PROTECTED]> writes:
> Meanwhile, "iostat 5" gives something like this:
> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
>1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
>0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
>0 12 125.66
Laszlo,
You can stop tuning Postgres and trying to use indexes; your problem is a bad
hardware / OS configuration. The disks you are using should read 4-5 times
faster than they currently do. Look at the SATA chipset driver in your FreeBSD
config - perhaps upgrading your kernel would help.
St
On Tue, 2006-09-12 at 11:06, Piñeiro wrote:
> - Forwarded message
> From: Piñeiro <[EMAIL PROTECTED]>
> To: Scott Marlowe <[EMAIL PROTECTED]>
> Subject: Re: [PERFORM] Performance problem with Sarge compared with Woody
> Date: Tue, 12 Sep 2006 17:36:41 +0200
> On Tue, 2006-09-12 at
Sorry, I answered the message only to Scott Marlowe. I re-send the response.
- Forwarded message
From: Piñeiro <[EMAIL PROTECTED]>
To: Scott Marlowe <[EMAIL PROTECTED]>
Subject: Re: [PERFORM] Performance problem with Sarge compared with Woody
Date: Tue, 12 Sep 2006 17:36:41 +0200
On Tue, 2006-09-12 at
On 9/12/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Mon, 2006-09-11 at 20:53, Merlin Moncure wrote:
> for 3 months I ran a 400M$ manufacturing company's erp off of a
> pre-beta 8.0 windows pg server converted from cobol using some hacked
> out c++ middleware. I remember having to change how
> Hi All
> I have installed an application with postgres-8.1.4, and I
> have to optimize the performance. As a measure I thought of enabling
> autocommit. Is it the right decision to take? If correct, please suggest
> the steps that I need to follow in order to implement the Auto
Piñeiro <[EMAIL PROTECTED]> writes:
>* About the Postgres version: you advise me to upgrade from 7.4.7 (the
> Postgres version in Sarge) to 8.2. Well, I don't want to be a troll, but I
> upgraded from 7.2.1 (Woody) to 7.4.7 and things got worse; do you really
> think that upgrading to 8.1 will
Hi All
I have installed an application with postgres-8.1.4, and I
have to optimize the performance. As a measure I thought of enabling
autocommit. Is it the right decision to take? If correct, please suggest
the steps that I need to follow in order to implement the Auto Vacuum.
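(Note that the question mixes up autocommit, which is a client-side setting, with autovacuum.) For what it's worth, enabling the integrated autovacuum daemon in 8.1 takes three postgresql.conf settings; the two stats options are prerequisites:

```
# postgresql.conf (PostgreSQL 8.1)
stats_start_collector = on   # autovacuum needs the stats collector
stats_row_level = on         # ...and row-level statistics
autovacuum = on
```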
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Are you saying that an indexscan "Filter" only acts after getting the
> heap tuple?
Correct.
> If that's the case, then there's room for optimization
> here, namely if the affected column is part of the index key, then we
> could do the filtering befor
On Mon, 2006-09-11 at 20:53, Merlin Moncure wrote:
> On 9/11/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> > I'd suggest two things.
> >
> > one: Get a better ERP... :) or at least one you can inject some
> > intelligence into, and two: upgrade to postgresql 8.1, or even 8.2 which
> > will be re
On Tue, 2006-09-12 at 02:18, Piñeiro wrote:
> On Mon, 2006-09-11 at 17:07 -0500, Scott Marlowe wrote:
>
> > Also also, you should be running at LEAST 7.4.13, the latest release of
> > 7.4. It's possible there's a fix between 7.4.7 and 7.4.13 that fixes
> > your problem. Doubt it, but it co
Guillaume Cottenceau wrote:
Laszlo Nagy writes:
Probably, but PostgreSQL doesn't know how to do that. Even if it
did, it depends on how many matches there are. If you scan the index
and then fetch the matching rows from the heap, you're doing random
I/O to the heap. That becomes slower than s
Alvaro Herrera wrote:
Are you saying that an indexscan "Filter" only acts after getting the
heap tuple? If that's the case, then there's room for optimization
here, namely if the affected column is part of the index key, then we
could do the filtering before fetching the heap tuple.
That's righ
Heikki Linnakangas wrote:
> Guillaume Cottenceau wrote:
> >Laszlo Nagy writes:
> >
> >>>Probably, but PostgreSQL doesn't know how to do that. Even if it
> >>>did, it depends on how many matches there are. If you scan the index
> >>>and then fetch the matching rows from the heap, you're doing rand
Laszlo Nagy writes:
> > Probably, but PostgreSQL doesn't know how to do that. Even if it
> > did, it depends on how many matches there are. If you scan the index
> > and then fetch the matching rows from the heap, you're doing random
> > I/O to the heap. That becomes slower than scanning the heap
Laszlo Nagy writes:
> This is a mirrored disk with two SATA disks. In theory, writing should
> be slower than reading. Is this a hardware problem? Or is it that
> "sync" did not do the sync?
SATA disks are supposed to be capable of lying to pg's fsync (pg
asking the kernel to synchronize a write
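A crude way to check whether a drive is lying about write completion (file name is arbitrary; GNU dd assumed) is to count synchronous 8 kB writes per second. A 7200 rpm disk can honour at most roughly 120 true synchronous writes per second, so a much larger number means the write cache is acknowledging writes before the data reaches the platter:

```shell
# Count fsync'd single-block writes completed in ~2 seconds.
start=$(date +%s)
count=0
while [ $(( $(date +%s) - start )) -lt 2 ]; do
    # one 8 kB write, flushed to disk before dd exits
    dd if=/dev/zero of=synctest.tmp bs=8k count=1 conv=fsync 2>/dev/null
    count=$((count + 1))
done
echo "~$((count / 2)) synchronous writes/sec"
rm -f synctest.tmp
```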
Heikki Linnakangas wrote:
Are there any other columns besides id and name in the table? How big
is products.txt compared to the heap file?
Yes, many other columns. The products.txt is only 59MB. It is similar
to the index size (66MB).
Another question: I have a btree index on prod
Luke Lonergan wrote:
Laszlo,
Meanwhile, "iostat 5" gives something like this:
tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
This is your problem. Do t
Laszlo,
> Meanwhile, "iostat 5" gives something like this:
>
> tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
>1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
>0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
This is your problem. Do the following and repo
Laszlo Nagy wrote:
I made another test. I created a file with the identifiers and names of
the products:
psql#\o products.txt
psql#select id,name from product;
Then I can search using grep:
grep "Mug" products.txt | cut -f1 -d\|
There is a huge difference. This command runs within 0.5 seconds
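The test is easy to reproduce on a toy file (three made-up rows; '|' as the field separator, as in psql's unaligned output):

```shell
# Tiny stand-in for products.txt: one "id|name" row per line
printf '%s\n' '1|Big Coffee Mug' '2|Dinner Plate' '3|Travel Mug' > products.txt
# Print the ids of matching rows, as in the original test
grep "Mug" products.txt | cut -d'|' -f1    # prints 1 and 3, one per line
rm -f products.txt
```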
Hello,
I have a big table called products. Table size: 1123MB. Toast table
size: 32MB. Indexes size: 380MB.
I try to do a query like this:
select id,name from products where name like '%Mug%';
Yes, I know that tsearch2 is better for this, but please read on. The
above query gives this plan
On Mon, 2006-09-11 at 17:07 -0500, Scott Marlowe wrote:
> Also also, you should be running at LEAST 7.4.13, the latest release of
> 7.4. It's possible there's a fix between 7.4.7 and 7.4.13 that fixes
> your problem. Doubt it, but it could be. However, the more important
> point is that t