Re: [GENERAL] Weird query plan

2003-09-17 Thread Dmitry Tkach
P.S. I also tried to look at the stats of that other database I 
mentioned... The stats for b look similar:
stavalues1  | 
{1028104,25100079,50685614,78032989,105221902,135832793,199827486,611968165,807597786,884897604,969971779} 

But the stats for a are just *not there at all* (is that even possible?)
Could that be the reason why it works on that database (because it uses
the default stats instead of the real thing)?



I ran 'analyze a;' on that database...
I now have the same problem with it, but the other way around - the
query with a condition on a runs quickly, and the one with a condition on b
does not... and the query plans are the same, with a as the outer table...
The new stats for a look like:

stavalues1  | 
{1003284,61663485,126262679,211106732,300624079,392709544,469196539,572479496,697890767,842087009,989170923}

... but actually, I never tried it with a condition on b on that
database before the analyze :-(
I just tried the "a-variant" (the one that wasn't working originally),
and it worked...
Now, I tried deleting all the entries for a from pg_statistic, and 
running the b-variant... and it still doesn't work.

So, it probably has nothing to do with that analyze I ran...
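
For reference, that pg_statistic cleanup was along these lines (a sketch;
'a' is the literal table name, and a later ANALYZE rebuilds the rows):

  -- remove the cached statistics rows for table 'a'
  DELETE FROM pg_statistic
   WHERE starelid = (SELECT oid FROM pg_class WHERE relname = 'a');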

Dima



Re: [GENERAL] psql and blob

2003-09-17 Thread Jonathan Bartlett
> if you, then write a program for that. psql is a database server, not a
> client program.

No, psql is a client program.  postmaster is the database server.

Jon


>
> C.
>
> Daniel Schuchardt wrote, On 9/17/2003 8:37 PM:
> > Hm, but lo_import/lo_export imports/exports from the local filesystem.
> > I want to upload/download a client file to/from the server database with
> > psql. I think this is a feature in psql. But I can't remember...
> >
> > Daniel
> >




Re: [GENERAL] Weird query plan

2003-09-17 Thread Dmitry Tkach
Tom Lane wrote:

> Dmitry Tkach <[EMAIL PROTECTED]> writes:
>> Also, I have another copy (not exact copy, but identical schema, and
>> similar content... but about twice smaller) of the original database...
>> I tried my query on it, and it works right too.
>
>> So, there must be something wrong with that particular database I suppose...
>
> Hmm.  Take a look at the pg_stats statistics for the id columns in each
> case.  Could the ones for the misbehaving tables be out of whack
> somehow?  I'm wondering for example if the planner discounted the >=
> condition because it thought it would match all the rows.

Well... It *does* match (almost) all the rows (there are about a million
rows before that key, and the remaining 79 million after)...
The stats look in sync with that:

for a:
stavalues1  | 
{1000488,33495482,69111011,99286820,129611281,204441828,331968789,508451171,782660252,869480434,989787700}
for b:
stavalues1  | 
{1008692,54892364,110119463,192551141,300490851,389609207,465139533,570442801,706876547,849087358,989851076}

(The key in the criteria was 7901288 - somewhere in the first bucket)
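
For anyone following along, those histograms can also be read from the
pg_stats view - a sketch of the lookup:

  SELECT tablename, attname, histogram_bounds
    FROM pg_stats
   WHERE tablename IN ('a', 'b') AND attname = 'id';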

*But* isn't  my 'limit' clause supposed to affect that decision? I mean, 
even though the filter isn't very selective, it should still speed up 
getting the *first* match...

Thanks!

Dima

P.S. I also tried to look at the stats of that other database I 
mentioned... The stats for b look similar:
stavalues1  | 
{1028104,25100079,50685614,78032989,105221902,135832793,199827486,611968165,807597786,884897604,969971779}

But the stats for a are just *not there at all* (is that even possible?)
Could that be the reason why it works on that database (because it uses
the default stats instead of the real thing)?





[GENERAL] State of Beta (2)

2003-09-17 Thread Joshua D. Drake
Hello,

 O.k., here are my thoughts on how this could work:

 Command Prompt will set up an escrow account online at www.escrow.com.
 When the escrow account totals $2,000.00 and is released, Command Prompt
 will dedicate a programmer for one month to debugging, documenting,
 reviewing, digging, crying, screaming, begging and bleeding with the code.
 At the end of the month, and probably during, depending on how everything
 goes, Command Prompt will release its findings.  The findings will include
 a project plan on moving forward over the next 5 months (if that is what
 it takes) to produce the first functional pg_upgrade.

 If the project is deemed to be moving in the right direction by the
 community members, and specifically the core members, we will set up
 milestone payments for the project.

  What does everyone think?

  Sincerely,

  Joshua D. Drake

Dennis Gearon wrote:

> I had already committed $50/mo.
>
> Robert Creager wrote:
>
>> Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
>> Dennis Gearon <[EMAIL PROTECTED]> uttered something amazingly similar to:
>>
>>> Robert Creager wrote:
>>>
>>>> Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>>>> "Joshua D. Drake" <[EMAIL PROTECTED]> uttered something amazingly similar
>>>> to:
>>>>
>>>>> If someone is willing to pony up 2000.00 per month for a period of at
>>>>
>>>> Well, if you're willing to set up some sort of escrow, I'll put in $100.  I
>>>
>>> Is that $100 times once, or $100 X 6mos anticipated develop time.
>>
>> That's $100 once.  And last I looked, there are well over 1800 subscribers on
>> this list alone.  On the astronomically small chance every one of them did
>> what I'm doing, it would cover more than 6 months of development time ;-)  This
>> strikes me as like supporting public radio.  The individuals do some, and the
>> corporations do a bunch.
>>
>> I'm just putting my money toward a great product, rather than complaining that
>> it's not done.  Just like Joshua is doing.  You cannot hire a competent
>> programmer for $24k a year, so he is putting up some money on this also.
>>
>> There have been a couple of other bytes from small businesses, so who knows!
>>
>> You game?
>>
>> Cheers,
>> Rob

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.




Re: [GENERAL] psql and blob

2003-09-17 Thread Nigel J. Andrews
On Wed, 17 Sep 2003, Daniel Schuchardt wrote:

> Hm, but lo_import/lo_export imports/exports from the local filesystem.
> I want to upload/download a client file to/from the server database with
> psql. I think this is a feature in psql. But I can't remember...


Yes it is. You want to look up \lo_import in the help/manpage/manual of psql.
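
A minimal sketch of the client-side flow (the file path is hypothetical):
\lo_import reads the file on the *client* and sets psql's :LASTOID
variable to the OID of the new large object, so the UPDATE from the
original script becomes:

  \lo_import '/tmp/blob.dat'
  UPDATE xy SET z = :LASTOID;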


Nigel Andrews





[GENERAL] psql and blob

2003-09-17 Thread Daniel Schuchardt
Hi @ all,

I'm sure there was a psql function to transfer my blob data to the
server, but I can't remember.

I have a script like this :

UPDATE xy SET z = lo_import('localpath_and_file');

but I want to execute this script from the client, and so my blob data is
on the client and lo_import fails (the server doesn't have this file).
Hm.

Thanks for tips - Daniel.




[GENERAL] Job opportunity for PostgreSQL developer

2003-09-17 Thread Kyle
We may be hiring a developer in the next few months to work on our
current ERP, which is implemented in Tcl/Tk and PostgreSQL.  Anyone
interested could send a resume to me.  You might also look at an older
iteration of part of our work at www.wyatterp.com.

Kyle Bateman
President, Action Target Inc.
PO Box 636
Provo, UT 84603




[GENERAL] Where are PL/pgSQL functions stored?

2003-09-17 Thread David Shadovitz
Where are PL/pgSQL functions stored?  I want to retrieve the text of a
function that I've created.

Thanks.
-David
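
For the archives: the source text of a PL/pgSQL function lives in the
pg_proc system catalog - a sketch, assuming a function named my_function:

  SELECT prosrc FROM pg_proc WHERE proname = 'my_function';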



Re: [GENERAL] psql and blob

2003-09-17 Thread CoL
Hi,

if you, then write a program for that. psql is a database server, not a 
client program.

C.

Daniel Schuchardt wrote, On 9/17/2003 8:37 PM:

> Hm, but lo_import/lo_export imports/exports from the local filesystem.
> I want to upload/download a client file to/from the server database with
> psql. I think this is a feature in psql. But I can't remember...
>
> Daniel





Re: [GENERAL] Weird query plan

2003-09-17 Thread Tom Lane
Dmitry Tkach <[EMAIL PROTECTED]> writes:
> Also, I have another copy (not exact copy, but identical schema, and 
> similar content... but about twice smaller) of the original database... 
> I tried my query on it, and it works right too.

> So, there must be something wrong with that particular database I suppose...

Hmm.  Take a look at the pg_stats statistics for the id columns in each
case.  Could the ones for the misbehaving tables be out of whack
somehow?  I'm wondering for example if the planner discounted the >=
condition because it thought it would match all the rows.

regards, tom lane



Re: [GENERAL] State of Beta 2

2003-09-17 Thread Dennis Gearon
I had already committed $50/mo.

Robert Creager wrote:

> Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
> Dennis Gearon <[EMAIL PROTECTED]> uttered something amazingly similar to:
>
>> Robert Creager wrote:
>>
>>> Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>>> "Joshua D. Drake" <[EMAIL PROTECTED]> uttered something amazingly similar
>>> to:
>>>
>>>> If someone is willing to pony up 2000.00 per month for a period of at
>>>
>>> Well, if you're willing to set up some sort of escrow, I'll put in $100.  I
>>
>> Is that $100 times once, or $100 X 6mos anticipated develop time.
>
> That's $100 once.  And last I looked, there are well over 1800 subscribers on
> this list alone.  On the astronomically small chance every one of them did
> what I'm doing, it would cover more than 6 months of development time ;-)  This
> strikes me as like supporting public radio.  The individuals do some, and the
> corporations do a bunch.
> I'm just putting my money toward a great product, rather than complaining that
> it's not done.  Just like Joshua is doing.  You cannot hire a competent
> programmer for $24k a year, so he is putting up some money on this also.
> There have been a couple of other bytes from small businesses, so who knows!
>
> You game?
>
> Cheers,
> Rob





Re: [GENERAL] Visual database structure browser for postgresql?

2003-09-17 Thread Vaclav Frolik
Hello Bart,

please feel free to try our CASE Studio 2

http://www.casestudio.com

this database design tool allows you to reverse
engineer already existing database structures
and display all the tables, relationships etc.
Of course, you can also generate SQL scripts and/or
very detailed HTML/RTF reports.
The demo version has no time limitation, and
you can load and browse the whole structure
at once.

Regards,

Vaclav Frolik
CHARONWARE, s.r.o.



Re: [GENERAL] Weird query plan

2003-09-17 Thread Dmitry Tkach
Hmmm...
Indeed. I tried it in 7.2.4 on a couple of empty tables, and it does do 
the right thing...
Also, I have another copy (not exact copy, but identical schema, and 
similar content... but about twice smaller) of the original database... 
I tried my query on it, and it works right too.
So, there must be something wrong with that particular database I suppose...

Any ideas what I should look at?

Thanks a lot!

Dima

Tom Lane wrote:

> Dmitry Tkach <[EMAIL PROTECTED]> writes:
>> The query plan looks identical in both cases:
>>
>> Limit  (cost=0.00..12.51 rows=1 width=8)
>>   ->  Nested Loop  (cost=0.00..1009772807.91 rows=80740598 width=8)
>>         ->  Index Scan using b_pkey on b  (cost=0.00..375410773.29 rows=80740598 width=4)
>>         ->  Index Scan using a_pkey on a  (cost=0.00..6.85 rows=1 width=4)
>>
>> ... which makes me think that it decides to use b as the outer table
>> for both cases (which would obviously make it suck in the first one)... :-(
>
> That's what it says, all right, which seems odd to me.  Are you sure you
> looked at the right plans?
>
>> This happens on 7.2.4... I have a 7.3 database with the same schema, but
>> it is not populated with data, so I could not test it on 7.3...
>
> I could not reproduce a problem on 7.2.4.  I get (using toy tables, and
> suppressing the planner's urge to use mergejoin instead)
>
> lo=# explain select * from a, b where a.id >=  7901288 and a.id=b.id limit 1;
> NOTICE:  QUERY PLAN:
>
> Limit  (cost=0.00..4.97 rows=1 width=8)
>   ->  Nested Loop  (cost=0.00..1657.34 rows=333 width=8)
>         ->  Index Scan using a_pkey on a  (cost=0.00..45.50 rows=333 width=4)
>         ->  Index Scan using b_pkey on b  (cost=0.00..4.82 rows=1 width=4)
> EXPLAIN
>
> lo=# explain select * from a, b where b.id >=  7901288 and a.id=b.id limit 1;
> NOTICE:  QUERY PLAN:
>
> Limit  (cost=0.00..4.97 rows=1 width=8)
>   ->  Nested Loop  (cost=0.00..1657.34 rows=333 width=8)
>         ->  Index Scan using b_pkey on b  (cost=0.00..45.50 rows=333 width=4)
>         ->  Index Scan using a_pkey on a  (cost=0.00..4.82 rows=1 width=4)
> EXPLAIN
>
> which looks like the right thing.
>
> 	regards, tom lane





Re: [GENERAL] Database Recovery Procedures

2003-09-17 Thread Network Administrator
Quoting Tom Lane <[EMAIL PROTECTED]>:

> Network Administrator <[EMAIL PROTECTED]> writes:
> >>> PANIC:  read of clog file 5, offset 16384 failed: Success
> 
> >> Hm, not good :-(.  What files actually exist in $PGDATA/pg_clog/ (names
> >> and sizes)?
> 
> > -rw---1 postgres users  262144 Jul 20 15:53 
> > -rw---1 postgres users  262144 Jul 31 12:57 0001
> > -rw---1 postgres users  262144 Aug 12 17:32 0002
> > -rw---1 postgres users  262144 Aug 26 00:15 0003
> > -rw---1 postgres users  262144 Sep  9 23:44 0004
> > -rw---1 postgres users   16384 Sep 10 21:21 0005
> 
> Okay, it's trying to read off the end of the clog, no doubt looking for
> a transaction number just slightly larger than what's known to clog.
> This probably indicates more serious problems (because WAL replay really
> should have prevented such an inconsistency), but you can get past the
> immediate panic relatively easily: just append an 8k page of zeroes to
> clog.  Assuming your system has /dev/zero, something like this should
> do it:
> 
>   dd bs=8k count=1 < /dev/zero >> $PGDATA/pg_clog/0005
> 
> (do this with care of course, and you should probably shut down the
> postmaster first).  You might possibly have to add more than one page,
> if you then get similar PANICs with larger offsets, but try one page
> for starters.

Well whatdoyaknow!  That did it - EVERYTHING is there!  I only needed the one
page.  I'm going to have to read up on pg_clog (WAL) so that I understand what
it does a little better.

It makes total sense too, because obviously if there is no more space to write
to, anything on disk gets "frozen", whereas the application is just going to
keep moving along.  At least for a time.

> If this does suppress the failure messages, you are still not really out
> of the woods; you should do what you can to check for data consistency.
> A paranoid person would probably take a complete pg_dump and try to diff
> it against the last known good dump.  At the very least, I'd treat the
> table involved in the problem with great suspicion.
> 
>   regards, tom lane

Well, I fortunately was on the side of the road and not in the woods in this
case.  I'm assuming the next thing to do is run a vacuum analyze and then a dump
all.  I'll see how things perform over the next 48 hours or so.

Now that we've done that, how should this surgery be documented?  I would
think a "tip" like this should be somewhere in the docs (not just the archive).
I'd be more than willing to write up this case, but I'm trying to establish a
long-term goal here - if I may be so bold as to suggest one.

Thanks again.

-- 
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] Weird query plan

2003-09-17 Thread Tom Lane
Dmitry Tkach <[EMAIL PROTECTED]> writes:
> The query plan looks identical in both cases:

> Limit  (cost=0.00..12.51 rows=1 width=8)
>   ->  Nested Loop  (cost=0.00..1009772807.91 rows=80740598 width=8)
> ->  Index Scan using b_pkey on b  (cost=0.00..375410773.29 
> rows=80740598 width=4)
> ->  Index Scan using a_pkey on a  (cost=0.00..6.85 rows=1 width=4)

> ... which makes me think that it decides to use  b as the outer table 
> for both cases (which would obviously make it suck in the first one)... :-(

That's what it says, all right, which seems odd to me.  Are you sure you
looked at the right plans?

> This happens on 7.2.4... I have a 7.3 database with the same schema, but 
> it is not populated with data, so I could not test it on 7.3...

I could not reproduce a problem on 7.2.4.  I get (using toy tables, and
suppressing the planner's urge to use mergejoin instead)

lo=# explain select * from a, b where a.id >=  7901288 and a.id=b.id limit 1;
NOTICE:  QUERY PLAN:

Limit  (cost=0.00..4.97 rows=1 width=8)
  ->  Nested Loop  (cost=0.00..1657.34 rows=333 width=8)
->  Index Scan using a_pkey on a  (cost=0.00..45.50 rows=333 width=4)
->  Index Scan using b_pkey on b  (cost=0.00..4.82 rows=1 width=4)

EXPLAIN
lo=# explain select * from a, b where b.id >=  7901288 and a.id=b.id limit 1;
NOTICE:  QUERY PLAN:

Limit  (cost=0.00..4.97 rows=1 width=8)
  ->  Nested Loop  (cost=0.00..1657.34 rows=333 width=8)
->  Index Scan using b_pkey on b  (cost=0.00..45.50 rows=333 width=4)
->  Index Scan using a_pkey on a  (cost=0.00..4.82 rows=1 width=4)

EXPLAIN

which looks like the right thing.

regards, tom lane



Re: [GENERAL] psql and blob

2003-09-17 Thread Daniel Schuchardt
Hm, but lo_import/lo_export imports/exports from the local filesystem.
I want to upload/download a client file to/from the server database with
psql. I think this is a feature in psql. But I can't remember...

Daniel



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
rg] On behalf of Darko Prenosil
Sent: Wednesday, September 17, 2003 20:07
To: Daniel Schuchardt; [EMAIL PROTECTED]
Subject: Re: [GENERAL] psql and blob


http://www.postgresql.org/docs/7.3/interactive/largeobjects.html 
is what You need. 

Regards !






Re: [GENERAL] converting PostgreSQL timestamp to UNIX timestamp ?

2003-09-17 Thread Tom Lane
"Cornelia Boenigk" <[EMAIL PROTECTED]> writes:
> To convert a PostgreSQL timestamp to a UNIX timestamp I use
> select ((today::abstime)::int4) from table ...
> In the documentation there is a notice that abstime shouldn't be used.
> Is there another way to convert?

"SELECT EXTRACT(EPOCH FROM timestamp)" is the recommended way to do it.

> Also I tried
> select timetz_hash(today) from table ..
> and got negative values for some timestamps. Does anybody know why?

Why not?  If the hash function didn't use all 32 available bits, I'd
think it faulty ...

regards, tom lane



[GENERAL] Bug while installing in AIX 5.1

2003-09-17 Thread jason dang
I found a bug in the psqlODBC version 07.03.0100 installation package.

In the header file statement.h, lines 117 and 125 contain an extra ','.
Linux will ignore this, but it will cause an error when it's built on an
AIX machine.

Regards,





Re: [GENERAL] psql and blob

2003-09-17 Thread Darko Prenosil
On Wednesday 17 September 2003 18:32, Daniel Schuchardt wrote:
> Hi @ all,
>
> I'm sure there was a psql function to transfer my blob data to the
> server, but I can't remember.
>
> I have a script like this :
>
> UPDATE xy SET z = lo_import('localpath_and_file');
>
> but I want to execute this script from the client, and so my blob data is
> on the client and lo_import fails (the server doesn't have this file).
> Hm.
>
> Thanks for tips - Daniel.
>
http://www.postgresql.org/docs/7.3/interactive/largeobjects.html 
is what You need. 

Regards !



[GENERAL] Weird query plan

2003-09-17 Thread Dmitry Tkach
Hi, everybody!

Here is a weird problem I ran into...
I have two huge (80 million rows each) tables (a and b), with id as a PK 
on both of them and also an FK from b referencing a.
When I try to run a query like:

select * from a, b where a.id >=  7901288 and a.id=b.id limit 1;

The query takes *forever*.
If I do
select * from a,b where b.id >= 7901288 and a.id=b.id limit 1;
then it returns right away.

The query plan looks identical in both cases:

Limit  (cost=0.00..12.51 rows=1 width=8)
  ->  Nested Loop  (cost=0.00..1009772807.91 rows=80740598 width=8)
        ->  Index Scan using b_pkey on b  (cost=0.00..375410773.29 rows=80740598 width=4)
        ->  Index Scan using a_pkey on a  (cost=0.00..6.85 rows=1 width=4)

... which makes me think that it decides to use  b as the outer table 
for both cases (which would obviously make it suck in the first one)... :-(

This happens on 7.2.4... I have a 7.3 database with the same schema, but 
it is not populated with data, so I could not test it on 7.3...
I looked at the 7.3's query plans though, and they look better to me:

Limit  (cost=0.00..4.97 rows=1 width=8)
  ->  Nested Loop  (cost=0.00..1657.34 rows=333 width=8)
        ->  Index Scan using b_pkey on b  (cost=0.00..45.50 rows=333 width=4)
              Index Cond: (id >= 7901288)
        ->  Index Scan using a_pkey on a  (cost=0.00..4.82 rows=1 width=4)
              Index Cond: (a.id = "outer".id)

in the second case, and

Limit  (cost=0.00..4.97 rows=1 width=8)
  ->  Nested Loop  (cost=0.00..1657.34 rows=333 width=8)
        ->  Index Scan using a_pkey on a  (cost=0.00..45.50 rows=333 width=4)
              Index Cond: (id >= 7901288)
        ->  Index Scan using b_pkey on b  (cost=0.00..4.82 rows=1 width=4)
              Index Cond: ("outer".id = b.id)

in the first case... (looks like it does swap them around as I expected)...

Do you know of anything that got fixed between 7.2.4 and 7.3, related to 
this problem?

I also noticed that changing a,b to b,a in the from clause doesn't 
affect anything... and (what's even more weird) even using an explicit 
join doesn't help:

explain select a.duns from a natural join b dm where a.id >= 7901288 
limit 1;
NOTICE:  QUERY PLAN:

Limit  (cost=0.00..12.78 rows=1 width=8)
  ->  Nested Loop  (cost=0.00..1023061272.15 rows=80049919 width=8)
        ->  Index Scan using b_pkey on b  (cost=0.00..380070641.01 rows=81786784 width=4)
        ->  Index Scan using a_pkey on a  (cost=0.00..6.86 rows=1 width=4)

:-(

Any ideas?

Thanks a lot!

Dima







[GENERAL] Why does adding SUM and GROUP BY destroy performance?

2003-09-17 Thread David Link
Hi,

Why does adding SUM and GROUP BY destroy performance?
Details follow.
Thanks, David Link

s1.sql:
SELECT 
t.tid, t.title,
COALESCE(s0c100r100.units, 0) as w0c100r100units, 
(COALESCE(r1c2r100.units, 0) + COALESCE(y0c2r100.units, 0)) 
as r0c2r100units 
FROM 
title t 
JOIN upc u1 ON t.tid = u1.tid 
LEFT OUTER JOIN sale_200331 s0c100r100 ON u1.upc = s0c100r100.upc 
AND s0c100r100.week = 200331 AND s0c100r100.channel = 100 
AND s0c100r100.region = 100 
LEFT OUTER JOIN rtd r1c2r100 ON u1.upc = r1c2r100.upc 
AND r1c2r100.year = 2002 AND r1c2r100.channel = 2 
AND r1c2r100.region = 100 
LEFT OUTER JOIN ytd_200331 y0c2r100 ON u1.upc = y0c2r100.upc 
AND y0c2r100.week = 200331 AND y0c2r100.channel = 2 
AND y0c2r100.region = 100 
LEFT OUTER JOIN media m ON t.media = m.key 
LEFT OUTER JOIN screen_format sf ON t.screen_format = sf.key 
WHERE 
t.distributor != 'CONTROL LABEL' 
ORDER BY 
t.title ASC 
LIMIT 50
;


s2.sql:
SELECT
t.tid, t.title,
SUM(COALESCE(s0c100r100.units, 0)) as w0c100r100units, 
SUM((COALESCE(r1c2r100.units, 0) + COALESCE(y0c2r100.units, 0))) 
as r0c2r100units 
FROM 
title t 
JOIN upc u1 ON t.tid = u1.tid 
LEFT OUTER JOIN sale_200331 s0c100r100 ON u1.upc = s0c100r100.upc 
AND s0c100r100.week = 200331 AND s0c100r100.channel = 100 
AND s0c100r100.region = 100 
LEFT OUTER JOIN rtd r1c2r100 ON u1.upc = r1c2r100.upc 
AND r1c2r100.year = 2002 AND r1c2r100.channel = 2 
AND r1c2r100.region = 100 
LEFT OUTER JOIN ytd_200331 y0c2r100 ON u1.upc = y0c2r100.upc 
AND y0c2r100.week = 200331 AND y0c2r100.channel = 2 
AND y0c2r100.region = 100 
LEFT OUTER JOIN media m ON t.media = m.key 
LEFT OUTER JOIN screen_format sf ON t.screen_format = sf.key 
WHERE 
t.distributor != 'CONTROL LABEL' 
GROUP BY 
t.tid, t.title
ORDER BY 
t.title ASC 
LIMIT 50
;


Times:
 s1.sql takes 0m0.124s
 s2.sql takes 1m1.450s

Stats:
 title table: 68,000 rows
 sale_200331 table: 150,000 rows
 ytd_200331 table: 0 rows
 rtd table: 650,000 rows

Indexes are in place.

s1 explain plan:   
                                  QUERY PLAN
--------------------------------------------------------------------------------
 Limit  (cost=0.00..65105.51 rows=50 width=132)
   ->  Nested Loop  (cost=0.00..91726868.54 rows=70445 width=132)
         Join Filter: ("outer".screen_format = "inner"."key")
         ->  Nested Loop  (cost=0.00..91651668.74 rows=70445 width=127)
               Join Filter: ("outer".media = "inner"."key")
               ->  Nested Loop  (cost=0.00..91578053.95 rows=70445 width=122)
                     ->  Nested Loop  (cost=0.00..91236359.89 rows=70445 width=98)
                           ->  Nested Loop  (cost=0.00..90894665.82 rows=70445 width=74)
                                 ->  Nested Loop  (cost=0.00..90539626.76 rows=70445 width=50)
                                       ->  Index Scan using title_title_ind on title t  (cost=0.00..193051.67 rows=68775 width=38)
                                             Filter: (distributor <> 'CONTROL LABEL'::character varying)
                                       ->  Index Scan using davids_tid_index on upc u1  (cost=0.00..1309.24 rows=353 width=12)
                                             Index Cond: ("outer".tid = u1.tid)
                                 ->  Index Scan using sale_200331_upc_wk_chl_reg_ind on sale_200331 s0c100r100  (cost=0.00..5.02 rows=1 width=24)
                                       Index Cond: (("outer".upc = s0c100r100.upc) AND (s0c100r100.week = 200331) AND (s0c100r100.channel = 100) AND (s0c100r100.region = 100))
                           ->  Index Scan using rtd_upc_year_chl_reg_ind on rtd r1c2r100  (cost=0.00..4.83 rows=1 width=24)
                                 Index Cond: (("outer".upc = r1c2r100.upc) AND (r1c2r100."year" = 2002) AND (r1c2r100.channel = 2) AND (r1c2r100.region = 100))
                     ->  Index Scan using ytd_200331_upc_wkchlreg_ind on ytd_200331 y0c2r100  (cost=0.00..4.83 rows=1 width=24)
                           Index Cond: (("outer".upc = y0c2r100.upc) AND (y0c2r100.week = 200331) AND (y0c2r100.channel = 2) AND (y0c2r100.region = 100))
               ->  Seq Scan on media m  (cost=0.00..1.02 rows=2 width=5)
         ->  Seq Scan on screen_format sf  (cost=0.00..1.03 rows=3 width=5)
(21 rows)


s2 explain plan:
   

Re: [GENERAL] Database Recovery Procedures

2003-09-17 Thread Tom Lane
Network Administrator <[EMAIL PROTECTED]> writes:
>>> PANIC:  read of clog file 5, offset 16384 failed: Success

>> Hm, not good :-(.  What files actually exist in $PGDATA/pg_clog/ (names
>> and sizes)?

> -rw---1 postgres users  262144 Jul 20 15:53 
> -rw---1 postgres users  262144 Jul 31 12:57 0001
> -rw---1 postgres users  262144 Aug 12 17:32 0002
> -rw---1 postgres users  262144 Aug 26 00:15 0003
> -rw---1 postgres users  262144 Sep  9 23:44 0004
> -rw---1 postgres users   16384 Sep 10 21:21 0005

Okay, it's trying to read off the end of the clog, no doubt looking for
a transaction number just slightly larger than what's known to clog.
This probably indicates more serious problems (because WAL replay really
should have prevented such an inconsistency), but you can get past the
immediate panic relatively easily: just append an 8k page of zeroes to
clog.  Assuming your system has /dev/zero, something like this should
do it:

dd bs=8k count=1 < /dev/zero >> $PGDATA/pg_clog/0005

(do this with care of course, and you should probably shut down the
postmaster first).  You might possibly have to add more than one page,
if you then get similar PANICs with larger offsets, but try one page
for starters.

If this does suppress the failure messages, you are still not really out
of the woods; you should do what you can to check for data consistency.
A paranoid person would probably take a complete pg_dump and try to diff
it against the last known good dump.  At the very least, I'd treat the
table involved in the problem with great suspicion.

regards, tom lane



[GENERAL] converting PostgreSQL timestamp to UNIX timestamp ?

2003-09-17 Thread Cornelia Boenigk
Hi everybody

To convert a PostgreSQL timestamp to a UNIX timestamp I use

select ((today::abstime)::int4) from table ...

In the documentation there is a notice that abstime shouldn't be used.
Is there another way to convert?

Also I tried

select timetz_hash(today) from table ..

and got negative values for some timestamps. Does anybody know why?

RedHat 7.3, PostgreSQL 7.2.1


Regards
Conni




Re: [GENERAL] Database Recovery Procedures

2003-09-17 Thread Network Administrator
Quoting Tom Lane <[EMAIL PROTECTED]>:

> Network Administrator <[EMAIL PROTECTED]> writes:
> > ..if I'm using the psql client, and issue a "select * from <table name>
> > limit 5" I get this...
> 
> > PANIC:  read of clog file 5, offset 16384 failed: Success
> 
> Hm, not good :-(.  What files actually exist in $PGDATA/pg_clog/ (names
> and sizes)?

Here you go...

-rw---1 postgres users  262144 Jul 20 15:53 
-rw---1 postgres users  262144 Jul 31 12:57 0001
-rw---1 postgres users  262144 Aug 12 17:32 0002
-rw---1 postgres users  262144 Aug 26 00:15 0003
-rw---1 postgres users  262144 Sep  9 23:44 0004
-rw---1 postgres users   16384 Sep 10 21:21 0005


> > So back to my original question.  What are the recover procedures (if
> > any) that should be tried before I grab my PGDATA path from tape?
> 
> You may be up against having to do that, but some investigation first
> seems called for.

Yep, it's ready to go, when and if nothing surgical can be done.

>   regards, tom lane
> 


-- 
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com





Re: [GENERAL] State of Beta 2

2003-09-17 Thread Joshua D. Drake

Tom Lane wrote:

> I have no doubt that a competent programmer could learn the Postgres
> innards well enough to do the job; as someone pointed out earlier in
> this thread, none of the core committee was born knowing Postgres.
> I do, however, doubt that it can be done in six months if one has
> any significant learning curve to climb up first.

Hello,

 This is a completely reasonable statement. However, we have
three full-time programmers right now who are fairly familiar with
the internals of PostgreSQL. They are the programmers who
are currently coding our transactional replication engine (which
is going beta in about 3 weeks) and plPHP, and who also did the work on
S/ODBC, S/JDBC and PgManage.
 I am not going to say that we are necessarily Tom Lane material ;)
but my programmers are quite good and learning more every day. They
have been in the guts of PostgreSQL for 9 months straight, 40 hours
a week now.
Sincerely,

Joshua Drake





--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.




Re: [GENERAL] Trying to create a GiST index in 7.3

2003-09-17 Thread Tom Lane
Christopher Murtagh <[EMAIL PROTECTED]> writes:
> On Fri, 2003-08-08 at 16:08, Tom Lane wrote:
>> You should not be using the 7.2 methods anymore --- there is a CREATE
>> OPERATOR CLASS, use that instead.  (See the contrib gist classes for
>> examples.)

>  I'm having the same problem as Dmitry, but I've been unable to find a
> solution. I've looked everywhere googleable for info on setting up GiST
> indexes, but haven't found any info that doesn't look like post-doc
> papers on the theory of indexability. 

That's about what there is AFAIK :-(.  GiST suffers from a horrible lack
of documentation other than the original academic papers, which is one
of the reasons it's still not mainstream (although I'm not entirely sure
which is cause and which is effect here...)

However, if you have a working 7.2 opclass definition, it shouldn't be
that hard to make it into a 7.3 CREATE OPERATOR CLASS command.  Compare
the 7.2 and 7.3 versions of any of the contrib GiST modules' sql.in
files, and all should become reasonably clear.  The same basic
information is being supplied in both cases (operator names and strategy
numbers), 7.3 just does it with a much higher-level notation.  For
instance, this part of 7.2's contrib/cube/cube.sql.in:

-- cube_left
INSERT INTO pg_amop (amopclaid, amopstrategy, amopreqcheck, amopopr)
   SELECT opcl.oid, 1, false, c.opoid
   FROM pg_opclass opcl, gist_cube_ops_tmp c
   WHERE
  opcamid = (SELECT oid FROM pg_am WHERE amname = 'gist')
  and opcname = 'gist_cube_ops' 
  and c.oprname = '<<';

is replaced by a single line in 7.3's CREATE OPERATOR CLASS command:

OPERATOR        1       << ,

In particular look at this diff:
http://developer.postgresql.org/cvsweb.cgi/pgsql-server/contrib/cube/cube.sql.in.diff?r1=1.4&r2=1.5

You might also want to study the docs for the pg_opclass, pg_amop,
and pg_amproc system catalogs, to clarify what the original code was
doing.
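
For reference, a skeletal 7.3-style definition (the type, operator, and
support-function names below are placeholders, not the actual cube ones;
GiST opclasses need the seven support functions shown):

CREATE OPERATOR CLASS gist_mytype_ops
    DEFAULT FOR TYPE mytype USING gist AS
        OPERATOR        1       << ,
        OPERATOR        3       && ,
        FUNCTION        1       my_consistent (internal, mytype, int4),
        FUNCTION        2       my_union (internal, internal),
        FUNCTION        3       my_compress (internal),
        FUNCTION        4       my_decompress (internal),
        FUNCTION        5       my_penalty (internal, internal, internal),
        FUNCTION        6       my_picksplit (internal, internal),
        FUNCTION        7       my_same (mytype, mytype, internal);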

regards, tom lane



Re: [GENERAL] Trying to create a GiST index in 7.3

2003-09-17 Thread Oleg Bartunov
Hi there,

I'm back from vacation and clearing my mbox.
I intended to write documentation about GiST, but other things grabbed my attention :)
There is quite short intro in Russian
http://www.sai.msu.su/~megera/postgres/gist/doc/gist-inteface-r.shtml
and a bunch of GiST modules
http://www.sai.msu.su/~megera/postgres/gist/
so you may learn by examples.

Oleg

On Wed, 17 Sep 2003, Christopher Murtagh wrote:

> On Fri, 2003-08-08 at 16:08, Tom Lane wrote:
> > Dmitry Tkach <[EMAIL PROTECTED]> writes:
> > > I am trying to create a custom GiST index in 7.3, but getting an error,
> > > ...
> > > I have done all the setup that was required in 7.2.4:
> >
> > You should not be using the 7.2 methods anymore --- there is a CREATE
> > OPERATOR CLASS, use that instead.  (See the contrib gist classes for
> > examples.)
>
>  I'm having the same problem as Dmitry, but I've been unable to find a
> solution. I've looked everywhere googleable for info on setting up GiST
> indexes, but haven't found any info that doesn't look like post-doc
> papers on the theory of indexability.
>
> I'd be happy with an RTFM response, if I could just find TFM. :-)
>
> Any info would be much appreciated.
>
> Cheers,
>
> Chris
>
>

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



PHP db support was Re: [GENERAL] Dreamweaver

2003-09-17 Thread Christopher Murtagh
On Sat, 2003-08-16 at 12:34, Ken Harris wrote:
> The closest thing PHP has is PEAR - it has the DataObject concept and is as
> close to "neutral" as PHP gets.  A long time ago, I wrote a "DB Neutral" set of
> routines which works with Oracle, PostgreSQL and, untested, MySQL.  That just
> seemed to be the easiest thing to do, but I have not tried to plug it into
> Dreamweaver.

 PHP also has a module called dbx that allows you to do this as well. It
has some serious performance advantages over the PEAR library because it
is a module (written in C) rather than a bunch of PHP classes.

 More on dbx here:

 http://ca.php.net/manual/en/ref.dbx.php

Cheers,

Chris

-- 
Christopher Murtagh
Enterprise Systems Administrator
ISR / Web Communications Group 
McGill University
Montreal, Quebec
Canada

Tel.: (514) 398-3122
Fax:  (514) 398-2017



Re: [GENERAL] Database Recovery Procedures

2003-09-17 Thread Tom Lane
Network Administrator <[EMAIL PROTECTED]> writes:
> ..if I'm using the psql client, and issue a "select * from <table name>
> limit 5" I get this...

> PANIC:  read of clog file 5, offset 16384 failed: Success

Hm, not good :-(.  What files actually exist in $PGDATA/pg_clog/ (names
and sizes)?

> So back to my original question.  What are the recover procedures (if
> any) that should be tried before I grab my PGDATA path from tape?

You may be up against having to do that, but some investigation first
seems called for.

regards, tom lane



Re: [GENERAL] CONCAT function

2003-09-17 Thread Dennis Gearon
You're not from Texas, are you? :-)

Christopher Browne wrote:

> In an attempt to throw the authorities off his trail, [EMAIL PROTECTED]
> ("scott.marlowe") transmitted:
>
>> Given the third choice, I'd throw the switch.
>
> Why is it that I think of something _completely_ different when you
> use the phrase "throw the switch?"
> :-)


Re: [GENERAL] Graphical Mapping a Database

2003-09-17 Thread Christopher Murtagh
On Fri, 2003-08-15 at 00:17, David Fetter wrote:
> "Tim Edwards" <[EMAIL PROTECTED]> wrote:
> > Does anyone have a utility to map out database structures and put it
> > in a printable format ? PDF, GIF, JPG, etc.
> 
> Try DBVisualizer at .

 FWIW, we bought a site license for DBVis and it has been pretty great.
It runs on my YellowDog Linux (PPC) and RedHat desktops as well as our
MacOS X machines. Well worth the $$ IMO.

Cheers,

Chris

-- 
Christopher Murtagh
Enterprise Systems Administrator
ISR / Web Communications Group 
McGill University
Montreal, Quebec
Canada

Tel.: (514) 398-3122
Fax:  (514) 398-2017



Re: [GENERAL] State of Beta 2

2003-09-17 Thread Mark Cave-Ayland
> Date: Tue, 16 Sep 2003 14:39:47 -0700
> From: "Joshua D. Drake" <[EMAIL PROTECTED]>
> To: Andrew Rawnsley <[EMAIL PROTECTED]>
> Cc: "Marc G. Fournier" <[EMAIL PROTECTED]>,
>   PgSQL General ML <[EMAIL PROTECTED]>
> Subject: Re: State of Beta 2
> Message-ID: <[EMAIL PROTECTED]>
>
> >
> > Tying to my last post, concerning Joshua's offer to put up the labor
> > if we can put up the dough, given the
> > fact that Postgres is still in flux, do you think its even possible to
> > do some sort of in-place upgrade, not knowing
> > what may come up when you're writing 7.6?
> >
> > In other words, if we pony up and get something written now, will it
> > need further development every time an x.y release comes up.
>
> There is probably no question that it will need further development. 
> However, I would imagine that once the initial grunt work is done it 
> would be much easier to migrate the code (especially if it is 
> continually maintained) to newer releases.
>
> My thought process is that we would start with 7.4 codebase and as it 
> migrates to 7.5 move the work directly to 7.5 and if possible release 
> for 7.5 (although that really may be pushing it).
>
> J

While everyone is throwing around ideas on this one:

Would it not be possible to reserve the first few pages of each file
that stores tuples to store some metadata that describes the on-disk
structure and the DB version? If the DB version in the existing files
doesn't match the current version of the postmaster then it
automatically launches pg_upgrade on startup.

Hopefully this would minimise the work that would need to be done to
pg_upgrade between versions, since the only changes between versions
would be to provide the mappings between the on-disk structures of the
existing files (which could easily be determined by parsing the metadata
from the existing files) and the modified on-disk structure required by
the new version. (Ok I know this doesn't deal with the catalog issues
but hopefully it would be a step in the right direction).


Cheers,

Mark.

---

Mark Cave-Ayland
Webbased Ltd.
Tamar Science Park
Derriford
Plymouth
PL6 8BX
England

Tel: +44 (0)1752 764445
Fax: +44 (0)1752 764446







Re: [GENERAL] char o varchar

2003-09-17 Thread Andreas Fromm
To my knowledge, varchar is the choice when you have varying-length
strings, because only the real string length is stored in the db, while
char is useful when most of your records are exactly the string length
defined by the column, because no overhead needs to be kept for storing the
actual string length.
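
A quick way to see the difference on disk (a sketch; the table and values
are hypothetical):

  CREATE TABLE t (c char(8), v varchar(8));
  INSERT INTO t VALUES ('abc', 'abc');
  SELECT octet_length(c) AS char_bytes,    -- 8: blank-padded to the declared width
         octet_length(v) AS varchar_bytes  -- 3: only the actual characters are stored
    FROM t;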

Regards

Edwin Quijada wrote:





Hoii!!

I am creating my table structure and I have a question:
What should I use for character fields, char or varchar?
What are the differences, and which is better?
TIA
*---*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-747-2787
* " Si deseas lograr cosas excepcionales debes de hacer cosas fuera de 
lo comun"
*---*




--
Andreas Fromm
-
Drink wet cement...
  ... and get stoned

