Re: [PERFORM] Can the V7.3 EXPLAIN ANALYZE be trusted?

2005-02-06 Thread Leeuw van der, Tim


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Steven Rosenstein
Sent: Sunday, February 06, 2005 8:51 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Can the V7.3 EXPLAIN ANALYZE be trusted?






While working on a previous question I posed to this group, I ran a number
of EXPLAIN ANALYZE's to provide as examples.  After sending up my last
email, I ran the same query *without* EXPLAIN ANALYZE.  The runtimes were
vastly different.  In the following example, I ran two identical queries
one right after the other.  The runtimes for both were very close (44.77
sec).  I then immediately ran the exact same query, but without EXPLAIN
ANALYZE.  The same number of rows was returned, but the runtime was only
8.7 sec.  I don't think EXPLAIN ANALYZE puts that much overhead on a query.
Does anyone have any idea what is going on here?

--- Steve


Caching by the OS?

(Did you try to *first* run the query w/o EXPLAIN ANALYZE, and then with? 
What's the timing if you do that?)

--Tim



Re: [PERFORM] [SQL] OFFSET impact on Performance???

2005-01-26 Thread Leeuw van der, Tim
Hi,

What you could do is create a table containing all the fields from your SELECT, 
plus a per-session unique ID. Then you can store the query results in there, 
and use SELECT with OFFSET / LIMIT on that table. The WHERE clause for this 
temp-results table only needs to contain the per-session unique id.
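
A minimal sketch of the idea (all table and column names are made up, of course):

-- One shared results table; session_id identifies each user's result set.
CREATE TABLE temp_results (
    session_id  text NOT NULL,
    col1        integer,        -- ...all the fields from your SELECT...
    col2        text
);
CREATE INDEX temp_results_session_idx ON temp_results (session_id);

-- Run the expensive query once per session and store its output:
INSERT INTO temp_results (session_id, col1, col2)
SELECT 'session-42', t.col1, t.col2
FROM some_table t
WHERE t.col1 > 0;                -- the user's original WHERE conditions

-- Paging through the stored results is then cheap:
SELECT col1, col2
FROM temp_results
WHERE session_id = 'session-42'
ORDER BY col1
LIMIT 20 OFFSET 40;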

This of course gives you a new problem: cleaning stale data out of the 
temp-results table. And another new problem is that users will not see new data 
appear on their screen until somehow the query is re-run (... but that might 
even be desirable, actually, depending on how your users do their work and what 
their work is).

And of course better performance cannot be guaranteed until you try it.


Would such a scheme give you any hope of improved performance, or would it be 
too much of a nightmare?

cheers,

--Tim




-Original Message-
From: [EMAIL PROTECTED] on behalf of Andrei Bintintan
Sent: Wed 1/26/2005 11:11 AM
To: [EMAIL PROTECTED]; Greg Stark
Cc: Richard Huxton; pgsql-sql@postgresql.org; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???
 
The problem still stays open.

The thing is that I have about 20 - 30 clients that are using that SQL query
where the offset and limit are involved. So, I cannot create a temp table,
because that means that I'll have to make a temp table for each session...
which is a very bad idea. Cursors have much the same problem. In my application the
WHERE conditions can be very different for each user (session).

The only solution that I see at the moment is to work on the query, or to
write a more complex WHERE function to limit the results output. So there is no
replacement for OFFSET/LIMIT.

Best regards,
Andy.


- Original Message - 
From: Greg Stark [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Richard Huxton dev@archonet.com; Andrei Bintintan 
[EMAIL PROTECTED]; pgsql-sql@postgresql.org; 
pgsql-performance@postgresql.org
Sent: Tuesday, January 25, 2005 8:28 PM
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???



> Alex Turner [EMAIL PROTECTED] writes:
>
> > I am also very interested in this very question... Is there any way to
> > declare a persistent cursor that remains open between pg sessions?
> > This would be better than a temp table because you would not have to
> > do the initial select and insert into a fresh table and incur those IO
> > costs, which are often very heavy, and the reason why one would want
> > to use a cursor.
>
> TANSTAAFL. How would such a persistent cursor be implemented if not by
> building a temporary table somewhere behind the scenes?
>
> There could be some advantage if the data were stored in a temporary table
> marked as not having to be WAL logged. Instead it could be automatically
> cleared on every database start.
>
> --
> greg




Re: [PERFORM] FW: Index usage

2004-11-23 Thread Leeuw van der, Tim
Well you just selected a whole lot more rows... What's the total number of rows 
in the table?

In general, what I remember from reading on the list is that when there's no
upper bound on a query like this, the planner is more likely to choose a seq. 
scan than an index scan.
Try to give your query an upper bound like:

select date from chatlogs where date>='11/23/04' and date<'12/31/99';

select date from chatlogs where date>='10/23/04' and date<'12/31/99';

This should make it easier for the planner to give a proper estimate of the 
number of rows returned. If it doesn't help yet, please post 'explain analyze' 
output rather than 'explain' output, for it allows much better investigation 
into why the planner chooses what it chooses.

cheers,

--Tim


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of BBI Edwin Punzalan
Sent: Wednesday, November 24, 2004 7:52 AM
To: [EMAIL PROTECTED]
Subject: [PERFORM] FW: Index usage



Hi everyone,

Can anyone please explain postgres' behavior on our index?

I did the following query tests on our database:


db=# create index chatlogs_date_idx on chatlogs (date);
CREATE
db=# explain select date from chatlogs where date>='11/23/04';
NOTICE:  QUERY PLAN:

Index Scan using chatlogs_date_idx on chatlogs  (cost=0.00..144.11 rows=36
width=4)

EXPLAIN
db=# explain select date from chatlogs where date>='10/23/04';
NOTICE:  QUERY PLAN:

Seq Scan on chatlogs  (cost=0.00..23938.06 rows=253442 width=4)

EXPLAIN

Date's datatype is date.  It's just odd that when I just change the actual date of
search, the index is not being used anymore.




Re: [PERFORM] How to speed-up inserts with jdbc

2004-11-10 Thread Leeuw van der, Tim
Hi,

Try using parametrized prepared statements; does that make a difference? Or
does PGSQL jdbc not support them in your version?
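
For illustration, this is roughly what a parametrized statement amounts to at the
SQL level (PostgreSQL's PREPARE/EXECUTE; a JDBC PreparedStatement with ? markers
gets you there from Java). The parameter types are guesses based on your snippet:

-- Parsed and planned once:
PREPARE hist_insert (text, text, timestamp, text, text, text) AS
    INSERT INTO history (uuid, coni, date, direction, partner, type)
    VALUES ($1, $2, $3, $4, $5, $6);

-- Each execution only ships the values:
EXECUTE hist_insert('uuid', 'content', '2004-11-10 14:52:00', 'dir', 'partner', 'type');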

--Tim


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Michael Kleiser
Sent: Wednesday, November 10, 2004 2:52 PM
To: Jeff
Cc: Shane|SkinnyCorp; [EMAIL PROTECTED]
Subject: [PERFORM] How to speed-up inserts with jdbc


[...]
   Statement  st = con.createStatement();
[...]
st.executeUpdate("insert into " +
    "history(uuid,coni,date,direction,partner,type) " +
    "values('uuid','content','" + datum + "','dir','partner','type')");
[...]



Re: [PERFORM] Restricting Postgres

2004-11-05 Thread Leeuw van der, Tim
To what extent would your problems be solved by having a 2nd server, a replication
system (such as Slony-I, but there are others), and some sort of load-balancer in
front of it? The load-balancing could be as simple as a round-robin DNS server,
perhaps...

Then when you need to do maintenance such as a vacuum full, you can temporarily take 1
server out of the load-balancer (I hope) and do maintenance, and then the other.
I don't know what that does to replication, but I would venture that replication 
systems should be designed to handle a node going offline.

Load balancing could also help to protect against server-overload and 1 server 
toppling over.

Of course, I don't know to what extent having another piece of hardware is an option, 
for you.

cheers,

--Tim

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Martin Foster
Sent: Friday, November 05, 2004 3:50 AM
To: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Restricting Postgres

[...]

Now is there an administrative command in PostgreSQL that will cause it 
to move into some sort of maintenance mode?   For me that could be 
exceedingly useful as it would still allow for an admin connection to be 
made and run a VACUUM FULL and such.

Martin Foster
Creator/Designer Ethereal Realms
[EMAIL PROTECTED]



Re: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?

2004-10-21 Thread Leeuw van der, Tim
Hiya,

Looking at that list, I got the feeling that you'd want to push that PG-awareness down
into the block-IO layer as well, then, so as to be able to optimise for (perhaps)
conflicting goals depending on what the app does. For the IO system to be able to read
the app's mind, it needs to have some knowledge of what the app is / needs / wants, and I
get the impression that this awareness needs to go deeper than the FS only.

--Tim

(But you might have time to rewrite Linux/BSD as a PG-OS? just kidding!)

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Matt Clark
Sent: Thursday, October 21, 2004 9:58 AM
To: [EMAIL PROTECTED]
Subject: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?


I suppose I'm just idly wondering really.  Clearly it's against PG
philosophy to build an FS or direct IO management into PG, but now it's so
relatively easy to plug filesystems into the main open-source Oses, It
struck me that there might be some useful changes to, say, XFS or ext3, that
could be made that would help PG out.

I'm thinking along the lines of an FS that's aware of PG's strategies and
requirements and therefore optimised to make those activities as efficient
as possible - possibly even being aware of PG's disk layout and treating
files differently on that basis.

Not being an FS guru I'm not really clear on whether this would help much
(enough to be worth it anyway) or not - any thoughts?  And if there were
useful gains to be had, would it need a whole new FS or could an existing
one be modified?

So there might be (as I said, I'm not an FS guru...):
* great append performance for the WAL?
* optimised scattered writes for checkpointing?
* Knowledge that FSYNC is being used for preserving ordering a lot of the
time, rather than requiring actual writes to disk (so long as the writes
eventually happen in order...)?


Matt



Matt Clark
Ymogen Ltd
P: 0845 130 4531
W: https://ymogen.net/
M: 0774 870 1584
 




Re: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?

2004-10-21 Thread Leeuw van der, Tim
Hi,

I guess the difference is in 'severe hacking inside PG' vs. 'some unknown amount of 
hacking that doesn't touch PG code'.

Hacking PG internally to handle raw devices will meet with strong resistance from 
large portions of the development team. I don't expect (m)any core devs of PG will be 
excited about rewriting the entire I/O architecture of PG and duplicating large 
amounts of OS type of code inside the application, just to try to attain an unknown 
performance benefit.

PG doesn't use one big file, as some databases do, but many small files. PG would then
need to be able to do its own file management if you put the PG database on a raw disk
partition! That's icky stuff, and you'll find much resistance against putting such
code inside PG.
So why not try to have the external FS know a bit about PG, its directory layout,
and its IO requirements? Such code can then at least be maintained outside
the application, and will not be as much of a burden to the rest of the application.

(I'm not sure if it's a good idea to create a PG-specific FS in your OS of choice, but 
it's certainly gonna be easier than getting FS code inside of PG)

cheers,

--Tim

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Steinar H. Gunderson
Sent: Thursday, October 21, 2004 12:27 PM
To: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Anything to be gained from a 'Postgres Filesystem'?


On Thu, Oct 21, 2004 at 08:58:01AM +0100, Matt Clark wrote:
 I suppose I'm just idly wondering really.  Clearly it's against PG
 philosophy to build an FS or direct IO management into PG, but now it's so
 relatively easy to plug filesystems into the main open-source OSes, it
 struck me that there might be some useful changes to, say, XFS or ext3, that
 could be made that would help PG out.

This really sounds like a poor replacement for just making PostgreSQL use raw
devices to me. (I have no idea why that isn't done already, but presumably it
isn't all that easy to get right. :-) )

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [PERFORM] Select with qualified join condition / Batch inserts

2004-10-15 Thread Leeuw van der, Tim
But he's testing with v8 beta3, so you'd expect the typecast problem not to appear?

Are all tables fully vacuumed? Should the statistics-target be raised for some 
columns, perhaps? What about the config file?
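
If raising it seems warranted, the statements look like this (hypothetical target
value; barcode is just one candidate column from the query below):

ALTER TABLE scr_well_compound ALTER COLUMN barcode SET STATISTICS 100;
ANALYZE scr_well_compound;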

--Tim

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Matt Clark
Sent: Friday, October 15, 2004 12:37 PM
To: 'Bernd'; [EMAIL PROTECTED]
Subject: Re: [PERFORM] Select with qualified join condition / Batch inserts


 SELECT cmp.WELL_INDEX, cmp.COMPOUND, con.CONCENTRATION 
   FROM SCR_WELL_COMPOUND cmp, SCR_WELL_CONCENTRATION con 
   WHERE cmp.BARCODE=con.BARCODE 
   AND cmp.WELL_INDEX=con.WELL_INDEX 
   AND cmp.MAT_ID=con.MAT_ID 
   AND cmp.MAT_ID = 3 
   AND cmp.BARCODE='910125864' 
   AND cmp.ID_LEVEL = 1;

Quick guess - type mismatch forcing sequential scan.  Try some quotes:
AND cmp.MAT_ID = '3' 
AND cmp.BARCODE='910125864' 
AND cmp.ID_LEVEL = '1';

M




Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --

2004-09-15 Thread Leeuw van der, Tim
Hi,

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Michael Kleiser
Sent: Tuesday, September 14, 2004 4:23 PM
To: Leeuw van der, Tim
Cc: Steinar H. Gunderson; PostgreSQL Performance List
Subject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --


 What MySQL table type did you use?
 Was it MyISAM, which doesn't support transactions?
 Yes, I read that bulk inserts with this table type are very fast.
 In data warehousing one often doesn't need transactions.

Although totally beyond the scope of this thread, we used InnoDB tables with MySQL 
because of the transaction-support.

regards,

--Tim



Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --

2004-09-14 Thread Leeuw van der, Tim
Hi,

I found bulk inserts to perform very slowly when compared to MySQL / Oracle. All inserts
were done in 1 transaction. However, mitigating factors here were:
- Application was a .Net application using ODBC drivers
- PostgreSQL 7.3 running on CYGWIN with cygipc daemon
- Probably very bad tuning in the config file, if any tuning done at all
- The application was issuing 'generic' SQL since it was generally used with Oracle 
and MySQL databases. So no tricks like using COPY or multiple rows with 1 INSERT 
statement. No stored procedures either.
- When doing queries, most of the time the results were comparable to or better than 
MySQL (the only other database that I tested with myself).


So what I can say is that if you want fast INSERT performance from PostgreSQL then
you'll probably have to do some trickery that you wouldn't have to do with a default 
MySQL installation.
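
For instance, loading through COPY instead of individual INSERT statements (a
sketch; table and file names are made up):

-- Server-side file load; tab-delimited input by default:
COPY my_table (id, description) FROM '/tmp/my_data.txt';

From a client you would use COPY my_table FROM STDIN instead and stream the rows
over the connection.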

regards,

--Tim


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Markus Schaber
Sent: Tuesday, September 14, 2004 2:15 PM
To: PostgreSQL Performance List
Subject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --


Hi, Mischa,

On Sun, 12 Sep 2004 20:47:17 GMT
Mischa Sandberg [EMAIL PROTECTED] wrote:

 On the other hand, if you do warehouse-style loading (Insert, or PG 
 COPY, into a temp table; and then 'upsert' into the perm table), I can 
 guarantee 2500 inserts/sec is no problem.

As we can foresee that we'll have similar insert rates to cope with in
the not-so-far future, what do you mean by 'upsert'? Do you mean a
stored procedure that iterates over the temp table?

Generally, what is the fastest way for doing bulk processing of 
update-if-primary-key-matches-and-insert-otherwise operations?
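
(For reference, the usual two-statement pattern behind such an 'upsert' — a sketch
with made-up table and column names: update the rows whose keys already exist in the
permanent table, then insert the rest:)

UPDATE perm
SET val = t.val
FROM load_temp t
WHERE perm.id = t.id;

INSERT INTO perm (id, val)
SELECT t.id, t.val
FROM load_temp t
WHERE NOT EXISTS (SELECT 1 FROM perm WHERE perm.id = t.id);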

Thanks,
Markus Schaber

-- 
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:[EMAIL PROTECTED] | www.logi-track.com



Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --

2004-09-14 Thread Leeuw van der, Tim

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Steinar H. Gunderson
Sent: Tuesday, September 14, 2004 3:33 PM
To: PostgreSQL Performance List
Subject: Re: [PERFORM] Data Warehouse Reevaluation - MySQL vs Postgres --


> On Tue, Sep 14, 2004 at 02:42:20PM +0200, Leeuw van der, Tim wrote:
> > - PostgreSQL 7.3 running on CYGWIN with cygipc daemon
>
> Isn't this doomed to kill your performance anyhow?

Yes and no; therefore I mentioned it explicitly as one of the caveats. When doing
selects I could get performance very comparable to MySQL, so I don't want to blame
poor insert-performance on cygwin/cygipc per se.
I'm not working on this app anymore and no longer have a working test-environment for
it, so I cannot retest now with more recent versions.

regards,

--Tim

 
> /* Steinar */
> --
> Homepage: http://www.sesse.net/





OT: Network config (WAS: RE: [PERFORM] postgresql performance with multimedia)

2004-08-25 Thread Leeuw van der, Tim
Hi,

We're now getting very much off-topic about configuration of networking, but:

- What is your OS?
- What output do you get when you type 'ping localhost' in any command-prompt?


-Original Message-

[...]
 I tried to put my_ip instead of localhost in
 bufmng.c and it seems to work (no more complaining).
[...]

regards,

--Tim



Re: [PERFORM] What is the best way to do attribute/values?

2004-08-25 Thread Leeuw van der, Tim
Hi,

On Aug 25, 2004, at 4:22 AM, Mark Kirkwood wrote:

 select
   pav1.person_id
 from
   person_attributes_vertical pav1
 where
  (pav1.attribute_id = 1
   and pav1.value_id in (2,3))
   or (pav1.attribute_id = 2
   and pav1.value_id in (2,3))


[...]

Why not combine attribute_id and value_id? Then you have nothing but an OR (or IN).

It should, AFAICS, give you much better selectivity on your indexes:

There will be a lot of attributes with the same ID; there will also be a lot of
attributes with the same value. However, there should be far fewer attributes with a
specific combination of (ID/Value).
Right now I think it will be very hard to determine which field has a better 
selectivity: attribute_id or value_id.


The combined attribute/value field could be an int8 or so, where the upper 4 bytes are 
for attribute_id and the lower 4 bytes for value_id.
Depending on the number of attributes and possible values a smaller datatype and / or 
a different split can be made. A smaller datatype will result in faster access.

What difference does that make?
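
If you want to try it, a sketch of such a combined key (assuming both ids fit in
32 bits; the int8 shift-and-or packing is just one way to do the combining):

ALTER TABLE person_attributes_vertical ADD COLUMN attr_value int8;

UPDATE person_attributes_vertical
SET attr_value = (attribute_id::int8 << 32) | value_id::int8;

CREATE INDEX pav_attr_value_idx ON person_attributes_vertical (attr_value);

-- The original OR query then becomes a single IN list:
SELECT pav1.person_id
FROM person_attributes_vertical pav1
WHERE pav1.attr_value IN ((1::int8 << 32) | 2::int8, (1::int8 << 32) | 3::int8,
                          (2::int8 << 32) | 2::int8, (2::int8 << 32) | 3::int8);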

regards,

--Tim



Re: [PERFORM] I could not get postgres to utilizy indexes

2004-08-19 Thread Leeuw van der, Tim
Hi,

You asked the very same question yesterday, and I believe you got some useful answers. 
Why do you post the question again?

You don't even mention your previous post, and you didn't continue the thread which 
you started yesterday.

Did you try out any of the suggestions which you got yesterday? Do you have further 
questions about, for instance, how to do casting of values? If so, please continue 
posting with the previous thread, rather than reposting the same question with a 
different subject.

regards,

--Tim


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Igor
Artimenko
Sent: Tuesday, August 17, 2004 16:23
To: [EMAIL PROTECTED]
Subject: [PERFORM] I could not get postgres to utilizy indexes


Hi everybody!

I can't make use of indexes even though I tried the same test by changing different
settings in postgres.conf, like geqo to off/on & geqo-related parameters,
enable_seqscan off/on & so on. The result is the same.

Here is the test itself:

I've created the simplest table 'test' and executed the same statement 'explain analyze
select id from test where id = 5;'. A few times I added 100,000 records, applied
'vacuum full;' and issued the above explain command.
Postgres uses a sequential scan instead of an index scan.
Of course the time to execute the same statement constantly grows. In my mind an index
should not allow the time to grow so much.

Why does Postgres not utilize the primary unique index?
What am I missing? It continues growing even when there are 1,200,000 records. It should
at least start using the index at some point.


Details are below:
100,000 records:
QUERY PLAN

 Seq Scan on test  (cost=0.00..2427.00 rows=2 width=8) (actual time=99.626..199.835 
rows=1 loops=1)
   Filter: (id = 5)
 Total runtime: 199.990 ms

200,000 records:
QUERY PLAN
-
 Seq Scan on test  (cost=0.00..4853.00 rows=2 width=8) (actual time=100.389..402.770 
rows=1 loops=1)
   Filter: (id = 5)
 Total runtime: 402.926 ms


300,000 records:
QUERY PLAN
-
 Seq Scan on test  (cost=0.00..7280.00 rows=1 width=8) (actual time=100.563..616.064 
rows=1 loops=1)
   Filter: (id = 5)
 Total runtime: 616.224 ms
(3 rows)

I've created the test table with this script:

CREATE TABLE test
(
  id int8 NOT NULL DEFAULT nextval('next_id_seq'::text) UNIQUE,
  description char(50),
  CONSTRAINT users_pkey PRIMARY KEY (id)
);

CREATE SEQUENCE next_id_seq
  INCREMENT 1
  MINVALUE 1
  MAXVALUE 100
  START 1
  CACHE 5
  CYCLE;

I use postgres 7.4.2






Re: [PERFORM] General performance problem!

2004-08-17 Thread Leeuw van der, Tim



Hi,

Make multi-column indexes, using the columns from your most typical queries, putting
the most selective columns first (i.e. you don't need to make indexes with columns
in the same order as they are used in the query).

For instance, an index on (cp, effectif) could likely benefit both queries; same for
an index on (cp, effectif, naf). (You'd need only one of these indexes I think, not
both. Experiment to find out which one gives you most benefit in your queries, vs.
the slowdown in inserts.)
Perhaps some of the single-column indexes can be dropped.
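
In DDL terms (one of the two, not both):

CREATE INDEX base_aveugle_cp_effectif_key ON base_aveugle USING btree (cp, effectif);
CREATE INDEX base_aveugle_cp_effectif_naf_key ON base_aveugle USING btree (cp, effectif, naf);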


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of olivier HARO
Sent: Tuesday, August 17, 2004 15:30
To: [EMAIL PROTECTED]
Subject: [PERFORM] General performance problem!

Hello,

I have a dedicated server for my postgresql database:

P4 2.4 GHZ
HDD IDE 7200 rpm
512 DDR 2700

I have a problem with one table of my database:

CREATE SEQUENCE "base_aveugle_seq" START 1;
CREATE TABLE "base_aveugle" (
    "record_id" integer DEFAULT nextval('"base_aveugle_seq"'::text) NOT NULL,
    "dunsnumber" integer NOT NULL,
    "cp" text NOT NULL,
    "tel" text NOT NULL,
    "fax" text NOT NULL,
    "naf" text NOT NULL,
    "siege/ets" text NOT NULL,
    "effectif" integer NOT NULL,
    "ca" integer NOT NULL,
    Constraint "base_aveugle_pkey" Primary Key ("record_id")
);
CREATE INDEX base_aveugle_dunsnumber_key ON base_aveugle USING btree (dunsnumber);
CREATE INDEX base_aveugle_cp_key ON base_aveugle USING btree (cp);
CREATE INDEX base_aveugle_naf_key ON base_aveugle USING btree (naf);
CREATE INDEX base_aveugle_effectif_key ON base_aveugle USING btree (effectif);

This table contains 5 000 000 records

I have a PHP application which often makes queries on this table (especially on
the "cp", "naf", "effectif" fields)

Queries are like:

select (distinct cp) from base_aveugle where cp='201A' and effectif between 1 and 150

select (*) from base_aveugle where naf in ('721A','213F','421K') and cp in
('54210','21459','201A') and effectif > 150

I think it is possible to optimize the performance of these queries before changing
the hardware (I know I will...) but I don't know how, even after having read lots
of things about postgresql...

Thanks ;)
  


Re: [PERFORM] No index usage with left join

2004-08-02 Thread Leeuw van der, Tim
Can't you do a cast in your query? Does that help with using the indexes?
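
Something like this, I mean (a sketch; it assumes all sid values are in fact
numeric, so the cast cannot fail):

SELECT count(*) FROM contacts
LEFT JOIN companies ON contacts.sid::bigint = companies.intfield01;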

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of
[EMAIL PROTECTED]
Sent: Monday, August 2, 2004 14:09
To: [EMAIL PROTECTED]
Subject: [PERFORM] No index usage with left join


We have a companies and a contacts table with about 3000 records
each.

We run the following SQL command, which runs about 2 MINUTES!:

SELECT count(*) FROM contacts LEFT JOIN companies ON contacts.sid =
companies.intfield01

contacts.sid (type text, b-tree index on it)
companies.intfield01 (type bigint, b-tree index on it)

comfire= explain analyze SELECT count(*) FROM prg_contacts LEFT JOIN
prg_addresses ON prg_contacts.sid=prg_addresses.intfield01;
NOTICE:  QUERY PLAN:

Aggregate  (cost=495261.02..495261.02 rows=1 width=15) (actual
time=40939.38..40939.38 rows=1 loops=1)
  ->  Nested Loop  (cost=0.00..495253.81 rows=2885 width=15) (actual
time=0.05..40930.14 rows=2866 loops=1)
    ->  Seq Scan on prg_contacts  (cost=0.00..80.66 rows=2866
width=7) (actual time=0.01..18.10 rows=2866 loops=1)
    ->  Seq Scan on prg_addresses  (cost=0.00..131.51 rows=2751
width=8) (actual time=0.03..6.25 rows=2751 loops=2866)
Total runtime: 40939.52 msec

EXPLAIN

Note:
- We need the left join because we need all contacts even if they are
not assigned to a company
- We are not able to change the datatypes of the joined fields
because we use a standard software (btw who cares: SuSE Open Exchange
Server)
- When we use a normal join (without LEFT or a where clause) the SQL
runs immediately using the indexes

How can I force the usage of the indexes when using a left join? Or is there
any other SQL construct that does the same!? Can anybody please give
us a hint!?

Thanks in forward.

Greetings
Achim



Re: [PERFORM] PostgreSQL on VMWare vs Windows vs CoLinux

2004-06-02 Thread Leeuw van der, Tim
Hi All,

I think it would actually be interesting to see the performance of the Cygwin version 
for these same benchmarks; then we've covered all ways to run PostgreSQL on Windows
systems. (I expect though that performance of Cygwin-PostgreSQL will improve 
considerably when an updated version is released that uses Cygwin native IPC instead 
of the ipc-daemon.)

regards,

--Tim



Re: [PERFORM] single index on more than two coulumns a bad thing?

2004-04-04 Thread Leeuw van der, Tim
Hi Aaron,

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of 
 Aaron Werman
 Sent: Friday, April 2, 2004 13:57
 
 
 another thing that I have all over the place is a hierarchy:
 index on grandfather_table(grandfather)
 index on father_table(grandfather, father)
 index on son_table(grandfather, father, son)
 

It depends on your data-distribution, but I find that in almost all cases it's 
beneficial to have your indexes the other way round in such cases:

index on grandfather_table(grandfather)
index on father_table(father, grandfather)
index on son_table(son, father, grandfather)

That usually gives a less common, more selective value at the start of the index, 
making the initial selection in the index smaller.

And AFAIK I don't have to rewrite my queries for that; the planner doesn't care about 
the order of expressions in the query that are on the same level.

That said, I tend to use 'surrogate keys'; keys generated from sequences or 
auto-number columns for my tables. It makes the tables less readable, but the indexes 
remain smaller.


Greetings,

--Tim





Re: [PERFORM] UPDATE with subquery too slow

2004-02-17 Thread Leeuw van der, Tim
Hi,

This is not going to answer your question of course, but did you already try to do this
in 2 steps?

You said that the subquery itself doesn't take very long, so perhaps you can create a 
temporary table based on the subquery, then in the update do a join with the temporary 
table?

This might not be desirable in the end, but it might be useful just to check the 
performance of it.
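
A sketch of the two-step variant, using your query:

CREATE TEMP TABLE single_sessions AS
SELECT session
FROM requests
GROUP BY session
HAVING COUNT(*) = 1;

UPDATE requests
SET session = NULL
FROM single_sessions s
WHERE requests.session = s.session;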

And - isn't it an option to upgrade to 7.4.1 instead?


regards,

--Tim




-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Eric Jain
Sent: Tuesday, February 17, 2004 13:38
To: pgsql-performance
Subject: [PERFORM] UPDATE with subquery too slow


I can't get the following statement to complete with reasonable time.
I've had it running for over ten hours without getting anywhere. I
suspect (hope) there may be a better way to accomplish what I'm trying
to do (set fields containing unique values to null):

  UPDATE requests
  SET session = NULL
  WHERE session IN
  (
SELECT session
FROM requests
GROUP BY session
HAVING COUNT(*) = 1
  );

[...]




Re: [PERFORM] Insert Times

2004-01-27 Thread Leeuw van der, Tim
Hi,

My personal feeling on this is that the long time taken for the first query
is due to loading all sorts of libraries, JVM startup overhead, etc.

What if you first do some SELECT (whatever) on a different table, to warm
up the JVM and the database?

regards,

--Tim


