[GENERAL] Create a table B with data coming from table A

2007-06-11 Thread [EMAIL PROTECTED]
My original table looks like this:

ID    A1  A2  A3  cnt
1234  1   0   0   4
1234  1   0   1   8
1234  1   1   1   5
1235  1   0   0   6
1235  1   0   1   7
1235  1   1   1   12

I have to create a new table B:

ID    B1  B2  B3  S
1234  4   8   5   17
1235  6   7   12  25

The combination (A1=1,A2=0,A3=0) gives B1
The combination (A1=1,A2=0,A3=1) gives B2
The combination (A1=1,A2=1,A3=1) gives B3

S = B1+B2+B3

I think it's a classical problem, but I can't see the key to the problem.

Thx,
Lhaj
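
This is a classic pivot by conditional aggregation. A minimal sketch, assuming
table A has columns (id, a1, a2, a3, cnt) as shown above:

-- Build table B by pivoting the (a1,a2,a3) combinations into columns.
-- Sketch only: it assumes the three combinations cover every row, so s = b1+b2+b3.
CREATE TABLE b AS
SELECT id,
       sum(CASE WHEN a1 = 1 AND a2 = 0 AND a3 = 0 THEN cnt ELSE 0 END) AS b1,
       sum(CASE WHEN a1 = 1 AND a2 = 0 AND a3 = 1 THEN cnt ELSE 0 END) AS b2,
       sum(CASE WHEN a1 = 1 AND a2 = 1 AND a3 = 1 THEN cnt ELSE 0 END) AS b3,
       sum(cnt) AS s
FROM a
GROUP BY id;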




Re: [GENERAL] Suppress checking of chmod 700 on data-dir?

2007-06-11 Thread Johannes Konert

Oliver Elphick wrote:

You could run the backup as postgres and pipe the output to another
program owned by the other user and with suid set in its permissions.
The suid means that the receiving program would have access where you
don't want postgres to go.

Thanks Oliver,
that was a good hint. Setuid has no effect on bash scripts, but with a 
restricted entry in /etc/sudoers the backup user can now execute a 
copy-and-access script to get the files from within the PGDATA dir.

Regards Johannes
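
For reference, such a restricted sudoers entry might look like this (a sketch;
the user name and script path are hypothetical):

# /etc/sudoers -- allow the backup user to run one specific script as postgres
backup  ALL = (postgres) NOPASSWD: /usr/local/bin/pgdata-copy.sh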



Re: [GENERAL] When should I worry?

2007-06-11 Thread Greg Smith

On Mon, 11 Jun 2007, Tom Allison wrote:

All of this was run on a Pentium II 450 MHz with 412MB RAM and a software 
linear-0 pair of UDMA 66 7200RPM 8MB cache drives (really old) on separate 
IDE channels with ReiserFS disk format.


Sometimes it's not clear if someone can speed up what they're doing simply 
by using more expensive hardware.  In your case, I think it's safe to say 
you've got quite a bit of margin for improvement that way when you run 
into a problem.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] When should I worry?

2007-06-11 Thread Tom Allison

Greg Smith wrote:


On Mon, 11 Jun 2007, Steve Crawford wrote:


In my experience the more common situation is to "go off a cliff."


Yeah, I think the idea that you'll notice performance degrading and be 
able to extrapolate future trends using statistical techniques is a 
bit...optimistic.


Anyway, back to the original question here.  If you're worried about 
catching when performance starts becoming an issue, you need to do some 
sort of logging of how long statements are taking to execute.  The main 
choice is whether to log everything, at which point the logging and 
sorting through all the data generated may become its own performance 
concern, or whether to just log statements that take a long time and 
then count how many of them show up.  Either way will give you some sort 
of early warning once you get a baseline; it may take a bit of tweaking 
to figure out where to draw the line for what constitutes a "long" 
statement if you only want to see how many of those you get.


There are two tools you should look at initially to help process the 
logging information you get back:  pgFouine and PQA.  Here are intros to 
each that also mention how to configure the postgresql.conf file:


http://pgfouine.projects.postgresql.org/tutorial.html
http://www.databasejournal.com/features/postgresql/article.php/3323561

As they're similar programs, which would work better for you is hard to 
say; check out both and see which seems more practical or easier to get 
running.  For example, if you only have one of PHP/Ruby installed, that 
may make one tool or the other preferred.


If you can get yourself to the point where you can confidently say 
something like "yesterday we had 346 statements that took more than 
200ms to execute, which is 25% above this month's average", you'll be in 
a position to catch performance issues before they completely 
blindside you; makes you look good in meetings, too.




Starting to sound like a sane idea.
I've been running a test job for almost 24 hours and have accumulated only 8 
million rows.  That's another 125 days to get to the big 'B'.  I think by then 
I'll have blown a hard drive or worse.  I'm running this on some very old 
hardware that I have available (more of this at the bottom).


However, at this point the machine is running all of the SQL at < 0.2 seconds 
each, which I consider just fine for 7,599,519 rows.


Here's some specifics about the tables:
count() from headers: 890300
count() from tokens:  89
count() from header_token: 7599519


CREATE TABLE header_token (
header_idx integer NOT NULL,
token_idx integer NOT NULL
);

CREATE TABLE headers (
idx serial NOT NULL,
hash character varying(64) NOT NULL
);

CREATE TABLE tokens (
idx bigserial NOT NULL,
hash character varying(64) NOT NULL
);

ALTER TABLE ONLY headers
ADD CONSTRAINT headers_hash_key UNIQUE (hash);
ALTER TABLE ONLY headers
ADD CONSTRAINT headers_pkey PRIMARY KEY (idx);
ALTER TABLE ONLY header_token
ADD CONSTRAINT pkey_header_token PRIMARY KEY (header_idx, token_idx);
ALTER TABLE ONLY tokens
ADD CONSTRAINT tokens_hash_key UNIQUE (hash);
ALTER TABLE ONLY tokens
ADD CONSTRAINT tokens_pkey PRIMARY KEY (idx);
ALTER TABLE ONLY header_token
ADD CONSTRAINT header_token_header_idx_fkey FOREIGN KEY (header_idx) 
REFERENCES headers(idx) ON DELETE CASCADE;

ALTER TABLE ONLY header_token
ADD CONSTRAINT header_token_token_idx_fkey FOREIGN KEY (token_idx) 
REFERENCES tokens(idx) ON DELETE CASCADE;




The SQL statements I was timing were:
select t.hash, h.hash
from headers h, header_token ht, tokens t
where h.idx = ht.header_idx
and ht.token_idx = t.idx
and h.idx = ?


insert into header_token
select $header, idx from tokens where idx in (...)

The SELECT was <0.2s.
The INSERT was easily <0.7s most of the time; it ranged because the idx IN (...) 
list varied from 200 to 700 entries.  The min was <0.2s and the max was >1.0s over 
a few minutes of observation.



All of this was run on a Pentium II 450 MHz with 412MB RAM and a software linear-0 
pair of UDMA 66 7200RPM 8MB cache drives (really old) on separate IDE channels 
with ReiserFS disk format.  The actual script was running on a separate machine 
across a 100-base-T full duplex network through a firewall machine between the 
two subnets.


I can't imagine how long it would take to run:
delete from tokens;
with the CASCADE option...



Re: [GENERAL] Schema as versioning strategy

2007-06-11 Thread Angelo

What about creating, now, empty schemas all the way until 2038?
Your application will then move automatically to the new empty schema each new
year without any changes to the db structure.

On 4/26/07, Owen Hartnett <[EMAIL PROTECTED]> wrote:


At 9:23 AM +0100 4/26/07, Richard Huxton wrote:
>Jonathan Vanasco wrote:
>>
>>On Apr 25, 2007, at 2:05 PM, Richard Huxton wrote:
>>
>>>Owen Hartnett wrote:
I want to "freeze" a snapshot of the database every year (think
of end of year tax records).  However, I want this frozen version
(and all the previous frozen versions) available to the database
user as read-only.  My thinking is to copy the entire public
schema (which is where all the current data lives) into a new
schema, named 2007 (2008, etc.)
>>>
>>>Sounds perfectly reasonable. You could either do it as a series of:
>>>   CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
>>>or do a pg_dump of schema "public", tweak the file to change the
>>>schema names and restore it.
>>
>>the create table method won't copy the constraints + fkeys .
>
>Shouldn't matter for an archive though, since you'd not want anyone
>to have permissions. Still, pg_dump is my preference. Apart from
>anything else, you can keep a copy of the dump around too.


Thanks to everyone for all the replies.  You've been most helpful.
It looks like pg_dump is the way to go, though I'll have to think
about it because I'm ultimately looking for a mechanical process that
will automatically tweak the schema names.  I don't want to have to
visit clients every year to archive their data.  Since the pg_dump
file might change, my program may have to be version dependent.

-Owen
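
One mechanical approach is to script the dump-and-rename step. A rough sketch
(not production-ready: the naive sed assumes the word "public" appears in the
dump only as the schema name):

YEAR=2007
pg_dump --schema=public mydb > /tmp/snapshot.sql
sed "s/\bpublic\b/archive${YEAR}/g" /tmp/snapshot.sql | psql mydb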






--
Angelo Rossi
Bluemetrix Ltd
Northpoint House
Northpoint Business Park
Mallow Road
Cork
Ireland

Ph: +353 021 4640107
Fax: +353 21 4309131
Web: www.bluemetrix.com



Re: [GENERAL] trying to track down postgres deaths

2007-06-11 Thread Geoffrey

Tom Lane wrote:

Geoffrey <[EMAIL PROTECTED]> writes:

Question is, when I see:
#17 0x08151bc5 in ClosePostmasterPorts ()
in the stack trace of the process, is this an indicator that the backend 
process was trying to shutdown?


No; that's a function that's executed immediately after fork() to close
postmaster sockets that the backend doesn't need to have open.  It's
highly unlikely that you'd get a crash in there, and even more improbable
that it'd be 17 levels down the call stack.  My bet is that you are
trying to debug with a non-debug-enabled build, and gdb is giving you a
less than accurate stack trace.


This is highly likely as you are correct, we are not running with debug 
turned on.  It's on the list, but high availability for these services 
has not permitted us to do this.


We know we have a problem with third-party vendor code that we have 
built into the backend.  It doesn't properly terminate when you run 
their application from a shell, so we are pretty certain this is 
causing the problem with the backend processes not properly terminating 
when our application terminates.


Rebuild with --enable-debug, and maybe also --enable-cassert, and see
what you can learn.


Thanks, as noted, that is the plan.



regards, tom lane




--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] trying to track down postgres deaths

2007-06-11 Thread Tom Lane
Geoffrey <[EMAIL PROTECTED]> writes:
> Question is, when I see:
> #17 0x08151bc5 in ClosePostmasterPorts ()
> in the stack trace of the process, is this an indicator that the backend 
> process was trying to shutdown?

No; that's a function that's executed immediately after fork() to close
postmaster sockets that the backend doesn't need to have open.  It's
highly unlikely that you'd get a crash in there, and even more improbable
that it'd be 17 levels down the call stack.  My bet is that you are
trying to debug with a non-debug-enabled build, and gdb is giving you a
less than accurate stack trace.

Rebuild with --enable-debug, and maybe also --enable-cassert, and see
what you can learn.

regards, tom lane
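
For reference, those flags go to configure when building from source; a
typical invocation (the prefix path is hypothetical):

./configure --prefix=/usr/local/pgsql-debug --enable-debug --enable-cassert
make && make install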



Re: [GENERAL] how to speed up query

2007-06-11 Thread Erwin Brandstetter
On Jun 11, 2:23 pm, "Andrus" <[EMAIL PROTECTED]> wrote:
> I tried
>
> CREATE TEMP TABLE mydel AS
>  SELECT r.dokumnr
>  FROM rid r
>  LEFT JOIN dok d USING (dokumnr)
>  WHERE d.dokumnr IS NULL;
> DELETE FROM rid USING mydel WHERE rid.dokumnr =mydel.dokumnr;
> drop table mydel;
>
> and this runs in 1 second instead of 2.2 hours.
>
> Thank you very much.
> This works!
>
> It's sad that PostgreSQL cannot optimize this delete statement
> automatically.


1 second does sound a lot better than 2 hours, doesn't it? :)
As to why Postgres seems to fail, I cannot say any more, as your
description is unclear. I am pretty sure there is some
misunderstanding, though.


Regards
Erwin
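
For the record, the same cleanup can also be written as one statement (a
sketch; whether it beats the temp-table variant depends on the plan chosen):

DELETE FROM rid
WHERE NOT EXISTS (SELECT 1 FROM dok d WHERE d.dokumnr = rid.dokumnr);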




Re: [GENERAL] how to speed up query

2007-06-11 Thread Erwin Brandstetter
On Jun 11, 2:01 pm, "Andrus" <[EMAIL PROTECTED]> wrote:
(...)
> > This index makes no sense at all:
> > CREATE INDEX dok_dokumnr_idx  ON firma1.dok  USING btree (dokumnr);
>
> I listed table structure and constraints partially.
> Theis is also primary key constraint in dok table:
>
>  CONSTRAINT dok_pkey PRIMARY KEY (dokumnr),


On a side note: this primary key implements a unique index anyway. The
additional index is useless; you can drop it to save time and
storage.
(Or maybe this is just another discrepancy between reality and the
problem description.)


Regards
Erwin
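
For concreteness, dropping the redundant index would be (index name taken
from the quoted DDL):

DROP INDEX firma1.dok_dokumnr_idx;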




Re: [GENERAL] list all columns in db

2007-06-11 Thread Jonathan Vanasco


Thank you Jon -- that's the exact sort of trick I was hoping for.

Cheers!

On Jun 7, 2007, at 6:36 PM, Jon Sime wrote:


Jonathan Vanasco wrote:

Does anyone have a trick to list all columns in a db ?


No trickery, just exploit the availability of the SQL standard  
information_schema views:


select table_schema, table_name, column_name
from information_schema.columns
where table_schema not in ('pg_catalog','information_schema')
order by 1,2,3

If you want an equivalent that uses pg_catalog (non-portable  
outside of PostgreSQL) you could instead do:


select n.nspname as table_schema, c.relname as table_name,
a.attname as column_name
from pg_catalog.pg_attribute a
join pg_catalog.pg_class c on (a.attrelid = c.oid)
join pg_catalog.pg_namespace n on (c.relnamespace = n.oid)
where c.relkind in ('r','v') and a.attnum > 0
and n.nspname not in ('pg_catalog','information_schema')
order by 1,2,3

-Jon

--
Senior Systems Developer
Media Matters for America
http://mediamatters.org/


// Jonathan Vanasco

|  CEO/Founder SyndiClick Networks
|  FindMeOn.com - The cure for Multiple Web Personality Disorder
|  Web Identity Management and 3D Social Networking
|  RoadSound.com - Tools For Bands, Stuff For Fans
|  Collaborative Online Management And Syndication Tools





[GENERAL] trying to track down postgres deaths

2007-06-11 Thread Geoffrey
We have a problem with the postgres backend process terminating and 
dropping core.  We believe it may be related to another problem we have 
where the postgres backend process cannot be properly terminated.


Question is, when I see:

#17 0x08151bc5 in ClosePostmasterPorts ()

in the stack trace of the process, is this an indicator that the backend 
process was trying to shutdown?


I'm assuming that if it's trying to 'close ports' then it would be 
shutting down the process.


Are there multiple scenarios where this routine would be called?

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] When should I worry?

2007-06-11 Thread Greg Smith

On Mon, 11 Jun 2007, Steve Crawford wrote:


In my experience the more common situation is to "go off a cliff."


Yeah, I think the idea that you'll notice performance degrading and be 
able to extrapolate future trends using statistical techniques is a 
bit...optimistic.


Anyway, back to the original question here.  If you're worried about 
catching when performance starts becoming an issue, you need to do some 
sort of logging of how long statements are taking to execute.  The main 
choice is whether to log everything, at which point the logging and 
sorting through all the data generated may become its own performance 
concern, or whether to just log statements that take a long time and then 
count how many of them show up.  Either way will give you some sort of 
early warning once you get a baseline; it may take a bit of tweaking to 
figure out where to draw the line for what constitutes a "long" 
statement if you only want to see how many of those you get.


There are two tools you should look at initially to help process the 
logging information you get back:  pgFouine and PQA.  Here are intros to 
each that also mention how to configure the postgresql.conf file:


http://pgfouine.projects.postgresql.org/tutorial.html
http://www.databasejournal.com/features/postgresql/article.php/3323561

As they're similar programs, which would work better for you is hard to 
say; check out both and see which seems more practical or easier to get 
running.  For example, if you only have one of PHP/Ruby installed, that 
may make one tool or the other preferred.


If you can get yourself to the point where you can confidently say 
something like "yesterday we had 346 statements that took more than 200ms 
to execute, which is 25% above this month's average", you'll be in a 
position to catch performance issues before they completely blindside 
you; makes you look good in meetings, too.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
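
The postgresql.conf setting behind the log-only-long-statements choice is
log_min_duration_statement; for example:

# postgresql.conf -- log every statement that runs longer than 200 ms
log_min_duration_statement = 200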



Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Tom Lane
Gregory Stark <[EMAIL PROTECTED]> writes:
> "Pit M." <[EMAIL PROTECTED]> writes:
>> The goal is to maintain a valid cursor that can be used to FETCH other 
>> records.
>> It would be ideal to skip this record and continue fetching the next record ->
>> but how can I achieve this?

> Well ideally the answer would be to wrap each FETCH in a savepoint and roll
> back to the savepoint if it fails. However I just tried it and this doesn't
> work. So I think the answer is you can't, sorry.

The problem is to distinguish failures that don't imply the cursor
itself is broken from failures that do.  What I'd suggest is reducing
the cursor definition to a simple "SELECT * FROM ..." and then doing
the failure-prone calculations separately.  The cursor won't get marked
PORTAL_FAILED unless the error occurs during a FETCH from it.

regards, tom lane
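
A sketch of that separation, using the CUSTOMERS example from this thread
(identifiers taken from the original post):

BEGIN;
-- Keep the cursor itself failure-free:
DECLARE c SCROLL CURSOR FOR
    SELECT "ID", "ZIP" FROM "CUSTOMERS";
FETCH FROM c;
-- Do the failure-prone cast per row, guarded by a savepoint:
SAVEPOINT s;
SELECT CAST('A1234' AS integer);   -- fails, but only aborts back to s
ROLLBACK TO s;
FETCH FROM c;                      -- the cursor is still usable
COMMIT;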



Re: [GENERAL] Materializing the relation

2007-06-11 Thread Gregory Stark

>>                            QUERY PLAN
>> ---------------------------------------------------------------
>>  Nested Loop  (cost=1.03..2.34 rows=3 width=24)
>>    Join Filter: (tb1.c1 = tb2.c1)
>>    ->  Seq Scan on tb2  (cost=0.00..1.04 rows=4 width=12)
>>    ->  Materialize  (cost=1.03..1.06 rows=3 width=12)
>>          ->  Seq Scan on tb1  (cost=0.00..1.03 rows=3 width=12)
>>
>> In the above plan, what does 'Materialize' mean?

It means it will read it all in to memory (or disk if it runs out of memory)
and use that instead of referring to the original table subsequently. In this
case it will only make a small difference in performance because the temporary
storage will be very similar to the original table. It just allows Postgres to
pack the data a bit more densely and skip MVCC visibility checks.

Materialize nodes make a bigger difference when the plan beneath them is
quite expensive and we don't want to have to re-execute it more than
necessary.

"Shoaib Mir" <[EMAIL PROTECTED]> writes:

> From /src/include/utils/tuplestore.h
>
> "The materialize shields the sort from the need to do mark/restore and
> thereby allows it to perform its final merge pass on-the-fly; while the
> materialize itself is normally cheap since it won't spill to disk unless the
> number of tuples with equal key values exceeds work_mem"

This comment actually refers to a new optimization which isn't in the released
8.2 yet. It introduces a Materialize node above a sort to allow the sort to
skip the final merge step. Instead it merges as the query runs and the
Materialize throws away data which isn't needed any more since merge joins
only need the current key value and they don't need to refer back to previous
key values.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [GENERAL] When should I worry?

2007-06-11 Thread Steve Crawford
Alexander Staubo wrote:
> 
> For the monitoring, however, you can log your queries along with
> timings and timestamps, and copy them into a tool like R to
> statistically analyze your performance over time. You will be able to
> predict the point at which your system will be too slow to use, if
> indeed the performance degradation is exponential.
> ...


In my experience the more common situation is to "go off a cliff."
Everything hums along fine and the increases in table-size and user-base
have very little impact on your response times. Then suddenly you run
out of some resource (usually memory first). You hit swap, and as your
few-millisecond query takes seconds or minutes, your request queue backs
up, new connections are denied, and everything goes downhill fast.

I think that keeping an eye on system resource trends via sar or similar
is more likely to provide the desired warnings of "sudden dropoff ahead".

Cheers,
Steve



Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Gregory Stark

"Pit M." <[EMAIL PROTECTED]> writes:

> The goal is to maintain a valid cursor that can be used to FETCH other 
> records.
> It would be ideal to skip this record and continue fetching the next record ->
> but how can I achieve this?

Well ideally the answer would be to wrap each FETCH in a savepoint and roll
back to the savepoint if it fails. However I just tried it and this doesn't
work. So I think the answer is you can't, sorry.

One alternative would be to store the result set in a temporary table with an
index on it. Then you could execute a separate query for each record you want
to look up and wrap that query in a savepoint if you want to do it in a
transaction and recover from errors.

It wouldn't have the same performance profile as using a cursor. It would use
more resources in most cases and the lookups will be slower. Though if you
keep this data around for a long time you might actually be better off since
you could use a real non-temporary table and not be forced to keep around
transactions for long periods of time tying up other resources.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
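
A sketch of the temporary-table alternative (identifiers reused from the
thread's example):

BEGIN;
-- Materialize the safe part of the result set once:
CREATE TEMP TABLE result_set AS
    SELECT "ID", "ZIP" FROM "CUSTOMERS";
CREATE INDEX result_set_id_idx ON result_set ("ID");
-- Look up one record at a time, each attempt guarded by a savepoint:
SAVEPOINT s;
SELECT * FROM result_set WHERE "ID" = 4
    AND CAST("ZIP" AS integer) >= 1;   -- may fail on bad data
ROLLBACK TO s;                         -- recover and keep fetching others
COMMIT;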




Re: [GENERAL] When should I worry?

2007-06-11 Thread Steve Crawford
Filip Rembiałkowski wrote:

>> For the monitoring, however, you can log your queries along with
>> timings and timestamps, and copy them into a tool like R to
>> statistically analyze your performance over time. You will be able to
>> predict the point at which your system will be too slow to use, if
>> indeed the performance degradation is exponential.
> 
> Could you please share some details about this "tool like R"? Maybe
> some links or usage examples?


Find R at http://www.r-project.org/. Or use any other analysis and/or
graphing tool of your choosing (gnumeric, OO-calc, gnuplot, roll-your-own).

Cheers,
Steve




Re: [GENERAL] When should I worry?

2007-06-11 Thread Filip Rembiałkowski

2007/6/10, Alexander Staubo <[EMAIL PROTECTED]>:

On 6/10/07, Tom Allison <[EMAIL PROTECTED]> wrote:
> The table itself is small (two columns: bigint, int) but I'm wondering when 
I'll
> start to hit a knee in performance and how I can monitor that.

You don't say anything about what the data is in the table or what
queries you run against it, so there's not much here to give advice
about.

For the monitoring, however, you can log your queries along with
timings and timestamps, and copy them into a tool like R to
statistically analyze your performance over time. You will be able to
predict the point at which your system will be too slow to use, if
indeed the performance degradation is exponential.


Could you please share some details about this "tool like R"? Maybe
some links or usage examples?

TIA.



--
Filip Rembiałkowski



Re: [GENERAL] Materializing the relation

2007-06-11 Thread Shoaib Mir

From /src/include/utils/tuplestore.h


"The materialize shields the sort from the need to do mark/restore and
thereby allows it to perform its final merge pass on-the-fly; while the
materialize itself is normally cheap since it won't spill to disk unless the
number of tuples with equal key values exceeds work_mem"

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 6/11/07, rupesh bajaj <[EMAIL PROTECTED]> wrote:


Hi,

What is the meaning of 'materializing' a relation after sequential scan?



explain select * from tb1, tb2  where tb1.c1 = tb2.c2;

                           QUERY PLAN
----------------------------------------------------------------
 Nested Loop  (cost=1.03..2.34 rows=3 width=24)
   Join Filter: (tb1.c1 = tb2.c1)
   ->  Seq Scan on tb2  (cost=0.00..1.04 rows=4 width=12)
   ->  Materialize  (cost=1.03..1.06 rows=3 width=12)
         ->  Seq Scan on tb1  (cost=0.00..1.03 rows=3 width=12)

In the above plan, what does 'Materialize' mean?

Thanks,
Rupesh



[GENERAL] Materializing the relation

2007-06-11 Thread rupesh bajaj

Hi,

What is the meaning of 'materializing' a relation after sequential scan?



explain select * from tb1, tb2  where tb1.c1 = tb2.c2;

                           QUERY PLAN
----------------------------------------------------------------
 Nested Loop  (cost=1.03..2.34 rows=3 width=24)
   Join Filter: (tb1.c1 = tb2.c1)
   ->  Seq Scan on tb2  (cost=0.00..1.04 rows=4 width=12)
   ->  Materialize  (cost=1.03..1.06 rows=3 width=12)
         ->  Seq Scan on tb1  (cost=0.00..1.03 rows=3 width=12)

In the above plan, what does 'Materialize' mean?

Thanks,
Rupesh


Re: [GENERAL] track ddl changes on single database

2007-06-11 Thread Rikard Pavelic
Tom Lane wrote:
> Rikard Pavelic <[EMAIL PROTECTED]> writes:
>   
>> I'm looking for recommendation for tracking DDL changes on
>> single database instance.
>> 
>
> Perhaps "ALTER DATABASE mydb SET log_statement = ddl" would do what
> you want.
>
>   regards, tom lane
>
>   
;(
I'm having trouble with this one.

If postgresql.conf has log_statement = none
and I execute "ALTER DATABASE mydb SET log_statement = ddl",
I don't see any DDL in pg_log.

So I tried setting log_statement = ddl in postgresql.conf,
and I see DDL from all databases in pg_log.
I then tried "ALTER DATABASE other_db SET log_statement = none",
but its DDL changes still appear in pg_log.

Anyway, it's not a big deal;
I can extract the DDL I need based on log_line_prefix and %d.

Regards,
Rikard



Re: [GENERAL] odbc with encrypted ssl key?

2007-06-11 Thread Andrei Kovalevski

Hi!

   You may try https://projects.commandprompt.com/public/odbcng/. This 
PostgreSQL ODBC driver's connection string can contain the parameters you need:

   SSL_CERTIFICATE=[string] - path to SSL certificate file
   SSL_PRIVATE_KEY=[string] - your SSL private key
   SSL_PASSPHRASE=[string] - your SSL password phrase

   You can either use these parameters in the connection string or 
configure a DSN.


   Andrei.

Andreas wrote:

Hi,

is there a way to have MS-Access use ODBC and still use a passphrase
encrypted private-key?

Right now ODBC works with unencrypted key.

For security reasons I'd rather have my private key stored encrypted.
I suppose to do this Access had to tell the odbc driver the passphrase
so that it can decrypt the key.

Until now I just found 2 connection string parameters "ssl" and
"sslmode" but nothing resembling ssl-passphrase.


Regards
A.
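
A hypothetical DSN-less connection string using those parameters (every value
here is a placeholder, including the driver name, which depends on how the
driver is registered on the system):

Driver={ODBCng};Server=db.example.com;Port=5432;Database=mydb;SSL_CERTIFICATE=C:\keys\client.crt;SSL_PRIVATE_KEY=C:\keys\client.key;SSL_PASSPHRASE=my-passphrase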




Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Pit M.



On Mon, Jun 11, 2007 at 03:28:22PM +0200, Pit M. wrote:
We handle exception errors through libpq, and if a FETCH leads to such 
a runtime error, we try to FETCH the first record again.
The problem is that we can't use this cursor any more -> it seems to be 
corrupt after that error.


An aborted transaction is an aborted transaction, you have to rollback
to a known state before you can go on.

I'm not sure why you can't just change the queries, but in the
particular case you give, why not just treat them as strings to start
with:

   WHERE "CUSTOMERS"."ZIP" >= '1'
AND "CUSTOMERS"."ZIP" < '2'

That will produce the same result, but without any chance of errors...



Thank you Martijn!

I know that CAST is not ideal for this query :-) , but my customers are 
free to define their own queries. PG has thousands of functions to use 
in queries. Which of them will lead to similar problems?


> An aborted transaction is an aborted transaction
Why was it completely aborted?

Transactions are a problem when using cursors; I think PG doesn't support 
this properly. We had to use savepoints to handle syntax errors in the 
cursor declaration:


SAVEPOINT tsp_020DE240

DECLARE c020DE860 SCROLL CURSOR FOR
   SELECT "CUSTOMERS"."ID","CUSTOMERS"."ZIP"
   FROM "CUSTOMERS"
   WHERE Cast("CUSTOMERS"."ZIP" as integer) >= 1
AND Cast("CUSTOMERS"."ZIP" as integer) < 2

FETCH  FROM c020DE860

RELEASE SAVEPOINT tsp_020DE240



Re: [GENERAL] Functions that return both Output Parameters and recordsets

2007-06-11 Thread Tom Lane
Jeremy Nix <[EMAIL PROTECTED]> writes:
> I see what you're doing, but I'm not quite sure how to adapt it to what 
> I'm doing.  Here's a simplified snippet of my code.  Can you elaborate on how 
> I can return a recordset and the output parameters?

I suppose what you need is something like

CREATE OR REPLACE FUNCTION Search (OUT TotalRecords int, OUT TotalPages int)
RETURNS SETOF record AS
$BODY$
DECLARE
    myRecord record;   -- wrapper filled in; searchResults/cols are placeholders
BEGIN
    FOR myRecord IN
        SELECT cols FROM searchResults
    LOOP
        TotalRecords := myRecord.TotalRecords;
        TotalPages := myRecord.TotalPages;
        RETURN NEXT;
    END LOOP;
END;
$BODY$ LANGUAGE plpgsql;

Anyway the point is that when you are using OUT parameters you do not
say anything in RETURN or RETURN NEXT.  Whatever you last assigned to
the parameter variables is what's returned.

regards, tom lane



Re: [GENERAL] track ddl changes on single database

2007-06-11 Thread Tom Lane
Rikard Pavelic <[EMAIL PROTECTED]> writes:
> I'm looking for recommendation for tracking DDL changes on
> single database instance.

Perhaps "ALTER DATABASE mydb SET log_statement = ddl" would do what
you want.

regards, tom lane



Re: [GENERAL] Distributing PostGres database to various customers

2007-06-11 Thread Sean Davis
Mike Gould wrote:
> All,
> 
> I am new to PostGres 8 (using 8.2.4 windows version).  We have for
> several years been using iAnywhere's SQL Anywhere product with our
> commercial transportation software.  With ASA there are 2 files that
> must be distributed for the database, a filename.db and a filename.log. 
> When we do a new installation we normally try and preload the database
> with data used for lookups, some registration data and if a customer is
> moving from another software where we've been contracted to convert
> their old data to our system we preload that.  Once that is done we can
> distribute the database as part of the setup process.
> 
> How can we do this with PostGres?  Other than backup and restore or
> creating SQL scripts I haven't been able to find another method.  Some
> of these tables may have over a million rows in them initially if we
> convert old data.

I would give backup/restore a try.  A million rows isn't very big in
postgres terms.  On relatively low-end hardware, I am routinely able to
backup about 300Gb in about 35 databases in under 3 hours.

Sean



Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Martijn van Oosterhout
On Mon, Jun 11, 2007 at 03:28:22PM +0200, Pit M. wrote:
> We handle exception errors through libpq, and if a FETCH leads to such 
> a runtime error, we try to FETCH the first record again.
> The problem is that we can't use this cursor any more -> it seems to be 
> corrupt after that error.

An aborted transaction is an aborted transaction, you have to rollback
to a known state before you can go on.

I'm not sure why you can't just change the queries, but in the
particular case you give, why not just treat them as strings to start
with:

   WHERE "CUSTOMERS"."ZIP" >= '1'
AND "CUSTOMERS"."ZIP" < '2'

That will produce the same result, but without any chance of errors...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Pit M.

Thank you Pavel!

The problem is, that the CAST function may be used in any SQL query, 
since our software allows using a free SQL-query. The query itself 
doesn't fail, because the syntax is correct.
The problem is that there may be other functions in a query that can 
lead to a runtime error during a FETCH.
The goal is to maintain a valid cursor that can be used to FETCH other 
records.
It would be ideal to skip this record and continue fetching the next 
record -> but how can I achieve this?


Thanks

Pit




It's not good advice. I tested it, and the problem is in the WHERE clause. I
don't understand the problem well, but one possibility is to change from CAST
to the to_number function, like:

postgres=# begin;
BEGIN
postgres=# declare c cursor for select * from fx where 
to_number(b,'9') > 0;

DECLARE CURSOR
postgres=# fetch from c;
 a  | b
----+----
 10 | 20
(1 row)

postgres=# fetch from c;
 a  |  b
----+-----
 10 | a20
(1 row)

But still this solution is +/- correct

Regards
Pavel



Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Pavel Stehule


Unfortunately this is not possible, because this should happen on the
client. The client calls FETCH for every record available in that cursor
when the user (application) wants to display the data (scrollable
list of records).
So do you think I should wrap each FETCH statement?
We handle exception errors through libpq, and if a FETCH leads to such
a runtime error, we try to FETCH the first record again.
The problem is that we can't use this cursor any more -> it seems to be
corrupt after that error.



Change your SQL statement. It is not correct with your real data.

Pavel



Re: [GENERAL] Functions that return both Output Parameters and recordsets

2007-06-11 Thread Michael Fuhr
On Mon, Jun 11, 2007 at 03:20:15PM +0200, Pavel Stehule wrote:
> it's not possible. PostgreSQL doesn't support multiple recordsets. You
> have to have two functions.

If you don't mind handling cursors then you could return multiple
cursors from one function.  See the PL/pgSQL documentation for an
example (the example is at the bottom of the page).

http://www.postgresql.org/docs/8.2/interactive/plpgsql-cursors.html

-- 
Michael Fuhr



Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Pavel Stehule



Yes, I know that I can't, but why does the transaction fail?



because the casting fails. The first FETCH was OK, and evaluating the next row
(which the second FETCH needs) was broken. When any statement in a
transaction fails, you have to roll back the current transaction.

Pavel



Re: [GENERAL] Functions that return both Output Parameters and recordsets

2007-06-11 Thread Alvaro Herrera
Pavel Stehule escribió:
> Hello
> 
> it's not possible. PostgreSQL doesn't support multiple recordsets. You
> have to have two functions.

The other idea is to declare the function to return SETOF refcursor (or
have an OUT refcursor param), and return two refcursors opened on the
different recordsets.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
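
A minimal sketch of that approach (function and cursor names are made up;
searchResults is the table from the thread):

CREATE OR REPLACE FUNCTION search_with_counts() RETURNS SETOF refcursor AS $$
DECLARE
    totals  refcursor := 'totals';
    results refcursor := 'results';
BEGIN
    OPEN totals FOR SELECT count(*) AS TotalRecords, 1 AS TotalPages
                    FROM searchResults;
    RETURN NEXT totals;
    OPEN results FOR SELECT * FROM searchResults;
    RETURN NEXT results;
END;
$$ LANGUAGE plpgsql;

-- Usage (inside one transaction):
--   BEGIN;
--   SELECT * FROM search_with_counts();
--   FETCH ALL FROM totals;
--   FETCH ALL FROM results;
--   COMMIT;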



Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Pit M.


I didn't try myself, but wrapping the whole into a PL/pgSQL function and
using exceptions might do the work;

http://www.postgresql.org/docs/8.2/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Ciao,
Thomas



Unfortunately this is not possible, because this should happen on the 
client. The client calls FETCH for every record available in that cursor 
when the user (application) wants to display the data (scrollable 
list of records).

So do you think I should wrap each FETCH statement?
We handle exception errors through libpq, and if a FETCH leads to such 
a runtime error, we try to FETCH the first record again.
The problem is that we can't use this cursor any more -> it seems to be 
corrupt after that error.


Pit



Re: [GENERAL] Functions that return both Output Parameters and recordsets

2007-06-11 Thread Pavel Stehule

Hello,

I forgot: you can do it via a recordset of cursors.

http://www.postgresql.org/docs/8.1/static/plpgsql-cursors.html
37.8.3.3. Returning Cursors

one cursor returns the TotalRecords and TotalPages columns and a second
cursor returns the searchResults.

Regards
Pavel



2007/6/11, Jeremy Nix <[EMAIL PROTECTED]>:

I see what you're doing, but I'm not quite sure how to adapt it to what
I'm doing.  Here's a simplified snippet of my code.  Can you elaborate on how
I can return a recordset and the output parameters?

CREATE OR REPLACE FUNCTION Search (OUT TotalRecords int, OUT TotalPages int)
RETURNS SETOF record AS
$BODY$
TotalRecords := 10;
TotalPages := 1;

FOR myRecord IN
SELECT cols FROM searchResults
LOOP
RETURN NEXT myRecord;
END LOOP;

Thanks,

__
Jeremy Nix
Senior Application Developer
Southwest Financial Services, Ltd.
(513) 621-6699



Pavel Stehule wrote:
> Hello
>
> it's possible, but it's probably somewhat different from what you expect
>
>
> CREATE OR REPLACE FUNCTION foo(OUT a integer, OUT b integer)
> RETURNS SETOF RECORD AS $$
> BEGIN
>  a := 10; b := 10;
>  RETURN NEXT;
>  a := 11; b := 20;
>  RETURN NEXT;
>  RETURN;
> END;
> $$ LANGUAGE plpgsql;
>
> postgres=# select * from foo();
>  a  | b
> ----+----
>  10 | 10
>  11 | 20
> (2 rows)
>
> Regards
> Pavel Stehule
>
>
> 2007/6/11, Jeremy Nix <[EMAIL PROTECTED]>:
>> Is this possible?  I'm attempting to create a function like this and I'm
>> getting the following error:
>>
>> ERROR: RETURN NEXT cannot have a parameter in function with OUT
>> parameters at or near "myRecord".
>>
>> --
>>
>> __
>> Jeremy Nix
>> Senior Application Developer
>> Southwest Financial Services, Ltd.
>> (513) 621-6699
>>
>>


Re: [GENERAL] Functions that return both Output Parameters and recordsets

2007-06-11 Thread Pavel Stehule

Hello

it's not possible. PostgreSQL doesn't support multiple recordsets. You
have to have two functions.

Regards
Pavel




2007/6/11, Jeremy Nix <[EMAIL PROTECTED]>:

I see what you're doing, but I'm not quite sure how to adapt it to what
I'm doing.  Here's a simplified snippet of my code.  Can you elaborate on how
I can return a recordset and the output parameters?

CREATE OR REPLACE FUNCTION Search (OUT TotalRecords int, OUT TotalPages int)
RETURNS SETOF record AS
$BODY$
TotalRecords := 10;
TotalPages := 1;

FOR myRecord IN
SELECT cols FROM searchResults
LOOP
RETURN NEXT myRecord;
END LOOP;

Thanks,

__
Jeremy Nix
Senior Application Developer
Southwest Financial Services, Ltd.
(513) 621-6699



Pavel Stehule wrote:
> Hello
>
> it's possible, but it's probably somewhat different from what you expect
>
>
> CREATE OR REPLACE FUNCTION foo(OUT a integer, OUT b integer)
> RETURNS SETOF RECORD AS $$
> BEGIN
>  a := 10; b := 10;
>  RETURN NEXT;
>  a := 11; b := 20;
>  RETURN NEXT;
>  RETURN;
> END;
> $$ LANGUAGE plpgsql;
>
> postgres=# select * from foo();
>  a  | b
> ----+----
>  10 | 10
>  11 | 20
> (2 rows)
>
> Regards
> Pavel Stehule
>
>
> 2007/6/11, Jeremy Nix <[EMAIL PROTECTED]>:
>> Is this possible?  I'm attempting to create a function like this and I'm
>> getting the following error:
>>
>> ERROR: RETURN NEXT cannot have a parameter in function with OUT
>> parameters at or near "myRecord".
>>
>> --
>>
>> __
>> Jeremy Nix
>> Senior Application Developer
>> Southwest Financial Services, Ltd.
>> (513) 621-6699
>>
>>


Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Pit M.



Pavel Stehule schrieb:

Hello

I think the problem is here:

   AND Cast("CUSTOMERS"."ZIP" as integer) < 2

You cannot cast 'A1234' to integer

Regards
Pavel Stehule




Yes, I know that I can't, but why does the transaction fail?

Pit



Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Pavel Stehule


I didn't try myself, but wrapping the whole into a PL/pgSQL function and
using exceptions might do the work;




It's not good advice. I tested it, and the problem is in the WHERE clause. I
don't understand the problem well, but one possibility is to change from CAST
to the to_number function, like:

postgres=# begin;
BEGIN
postgres=# declare c cursor for select * from fx where to_number(b,'9') > 0;
DECLARE CURSOR
postgres=# fetch from c;
 a  | b
----+----
 10 | 20
(1 row)

postgres=# fetch from c;
 a  |  b
----+-----
 10 | a20
(1 row)

But still this solution is +/- correct

Regards
Pavel



Re: [GENERAL] Distributing PostGres database to various customers

2007-06-11 Thread Alexander Staubo

On 6/11/07, Mike Gould <[EMAIL PROTECTED]> wrote:

How can we do this with PostGres?  Other than backup and restore or creating
SQL scripts I haven't been able to find another method.  Some of these
tables may have over a million rows in them initially if we convert old
data.


The most portable way is probably pg_dump as plaintext:

 pg_dump -Fp mydatabase >mydatabase.dump

Then restore with:

 cat mydatabase.dump | psql mydatabase

Alexander.



Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Thomas Pundt
Hi,

On Monday 11 June 2007 13:45, Pit M. wrote:
| Assume a user doing a query which shall select all customers having a
| ZIP in [1;2[ by using a CAST. If the cursor now fetches the row
| with ID 4 we get an error (invalid input syntax) as "A1234" cannot be
| cast as integer. This is OK, BUT now all further FETCH commands fail
| with "current transaction is aborted"!
|
| How can we resume from there with FETCH commands (e.g. at least FETCH
| FIRST)? We cannot avoid the (first) error itself, as the user can enter
| any WHERE condition at runtime. Therefore we must handle the error - if
| any - and resume from there properly.

I didn't try myself, but wrapping the whole into a PL/pgSQL function and
using exceptions might do the work;

http://www.postgresql.org/docs/8.2/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Ciao,
Thomas

-- 
Thomas Pundt <[EMAIL PROTECTED]>  http://rp-online.de/ 



Re: [GENERAL] track ddl changes on single database

2007-06-11 Thread Ray Stell
On Mon, Jun 11, 2007 at 12:55:08PM +0200, hubert depesz lubaczewski wrote:
> you can modify log_line_prefix to contain database name.

%d = database name
restart required

a trigger might be cleaner.
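
For example, a prefix matching the format Rikard asked for (a sketch):

# postgresql.conf -- prepend timestamp and database name to each log line
log_line_prefix = '%t DB: %d '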




> depesz
> 
> On 6/11/07, Rikard Pavelic <[EMAIL PROTECTED]> wrote:
> >
> >Hi!
> >
> >I'm looking for recommendation for tracking DDL changes on
> >single database instance.
> >
> >Currently I'm using pg_log to extract DDL changes, but those changes
> >are cluster wide.
> >
> >Ideally I would like to enable option in pg_log to give me info about
> >in which database changes were made.
> >Something like
> >timestamp DB: my_database LOG: statement: "DDL statement here..."
> >
> >Is there such an option, or any other suggestion how to track
> >these changes?
> >
> >Thanks,
> >Rikard
> >
> 
> 
> 
> -- 
> http://www.depesz.com/ - nowy, lepszy depesz



[GENERAL] Distributing PostGres database to various customers

2007-06-11 Thread Mike Gould
All,

I am new to PostGres 8 (using 8.2.4 windows version).  We have for several 
years been using iAnywhere's SQL Anywhere product with our commercial 
transportation software.  With ASA there are 2 files that must be distributed 
for the database, a filename.db and a filename.log.  When we do a new 
installation we normally try and preload the database with data used for 
lookups, some registration data and if a customer is moving from another 
software where we've been contracted to convert their old data to our system we 
preload that.  Once that is done we can distribute the database as part of the 
setup process.

How can we do this with PostGres?  Other than backup and restore or creating 
SQL scripts I haven't been able to find another method.  Some of these tables 
may have over a million rows in them initially if we convert old data.

Best Regards,

Michael Gould
All Coast Intermodal Services, Inc.
904-376-7030

Re: [GENERAL] Functions that return both Output Parameters and recordsets

2007-06-11 Thread Jeremy Nix
I see what you're doing, but I'm not quite sure how to adapt it to what 
I'm doing.  Here's a simplified snippet of my code.  Can you elaborate on how 
I can return a recordset and the output parameters?


CREATE OR REPLACE FUNCTION Search (OUT TotalRecords int, OUT TotalPages int)
RETURNS SETOF record AS
$BODY$
   TotalRecords := 10;
   TotalPages := 1;

   FOR myRecord IN
   SELECT cols FROM searchResults
   LOOP
   RETURN NEXT myRecord;
   END LOOP;

Thanks,

__
Jeremy Nix
Senior Application Developer
Southwest Financial Services, Ltd.
(513) 621-6699



Pavel Stehule wrote:

Hello

it's possible, but it's probably somewhat different from what you expect


CREATE OR REPLACE FUNCTION foo(OUT a integer, OUT b integer)
RETURNS SETOF RECORD AS $$
BEGIN
 a := 10; b := 10;
 RETURN NEXT;
 a := 11; b := 20;
 RETURN NEXT;
 RETURN;
END;
$$ LANGUAGE plpgsql;

postgres=# select * from foo();
 a  | b
----+----
 10 | 10
 11 | 20
(2 rows)

Regards
Pavel Stehule


2007/6/11, Jeremy Nix <[EMAIL PROTECTED]>:

Is this possible?  I'm attempting to create a function like this and I'm
getting the following error:

ERROR: RETURN NEXT cannot have a parameter in function with OUT
parameters at or near "myRecord".

--

__
Jeremy Nix
Senior Application Developer
Southwest Financial Services, Ltd.
(513) 621-6699




Re: [GENERAL] Functions that return both Output Parameters and recordsets

2007-06-11 Thread Pavel Stehule

Hello

it's possible, but it's probably somewhat different from what you expect


CREATE OR REPLACE FUNCTION foo(OUT a integer, OUT b integer)
RETURNS SETOF RECORD AS $$
BEGIN
 a := 10; b := 10;
 RETURN NEXT;
 a := 11; b := 20;
 RETURN NEXT;
 RETURN;
END;
$$ LANGUAGE plpgsql;

postgres=# select * from foo();
 a  | b
----+----
 10 | 10
 11 | 20
(2 rows)

Regards
Pavel Stehule


2007/6/11, Jeremy Nix <[EMAIL PROTECTED]>:

Is this possible?  I'm attempting to create a function like this and I'm
getting the following error:

ERROR: RETURN NEXT cannot have a parameter in function with OUT
parameters at or near "myRecord".

--

__
Jeremy Nix
Senior Application Developer
Southwest Financial Services, Ltd.
(513) 621-6699




Re: [GENERAL] transaction problem using cursors

2007-06-11 Thread Pavel Stehule

Hello

I think the problem is here:

   AND Cast("CUSTOMERS"."ZIP" as integer) < 2

You cannot cast 'A1234' to integer

Regards
Pavel Stehule


2007/6/11, Pit M. <[EMAIL PROTECTED]>:

We use PG 8.2.4 with cursors over libpq and get an error:

ERROR:  current transaction is aborted, commands ignored until end of
transaction block

How can we avoid this error or make the cursor still working afterwards?


Assume following contents of table CUSTOMERS:

ID |  ZIP  (varchar(5))
-- | ---
1   |  12345
2   |  12346
3   |  9
4   |  A1234


Assume a user doing a query which shall select all customers having a
ZIP in [1;2[ by using a CAST. If the cursor now fetches the row
with ID 4 we get an error (invalid input syntax) as "A1234" cannot be
cast as integer. This is OK, BUT now all further FETCH commands fail
with "current transaction is aborted"!

How can we resume from there with FETCH commands (e.g. at least FETCH
FIRST)? We cannot avoid the (first) error itself, as the user can enter
any WHERE condition at runtime. Therefore we must handle the error - if
any - and resume from there properly.

Refer to following log:

SAVEPOINT tsp_020DE240

DECLARE c020DE860 SCROLL CURSOR FOR
SELECT "CUSTOMERS"."ID","CUSTOMERS"."ZIP"
FROM "CUSTOMERS"
WHERE Cast("CUSTOMERS"."ZIP" as integer) >= 1
 AND Cast("CUSTOMERS"."ZIP" as integer) < 2

FETCH  FROM c020DE860

RELEASE SAVEPOINT tsp_020DE240

FETCH FIRST FROM c020DE860

FETCH FORWARD  FROM c020DE860 -> OK

FETCH FORWARD  FROM c020DE860 -> ERROR:  invalid input syntax for
integer: "A1234"

FETCH FORWARD  FROM c020DE860 -> ERROR:  current transaction is aborted,
commands ignored until end of transaction block




[GENERAL] Functions that return both Output Parameters and recordsets

2007-06-11 Thread Jeremy Nix
Is this possible?  I'm attempting to create a function like this and I'm 
getting the following error:


ERROR: RETURN NEXT cannot have a parameter in function with OUT 
parameters at or near "myRecord".


--

__
Jeremy Nix
Senior Application Developer
Southwest Financial Services, Ltd.
(513) 621-6699




[GENERAL] transaction problem using cursors

2007-06-11 Thread Pit M.

We use PG 8.2.4 with cursors over libpq and get an error:

ERROR:  current transaction is aborted, commands ignored until end of 
transaction block


How can we avoid this error or make the cursor still working afterwards?


Assume following contents of table CUSTOMERS:

ID |  ZIP  (varchar(5))
-- | ---
1   |  12345
2   |  12346
3   |  9
4   |  A1234


Assume a user doing a query which shall select all customers having a 
ZIP in [1;2[ by using a CAST. If the cursor now fetches the row 
with ID 4 we get an error (invalid input syntax) as "A1234" cannot be 
cast as integer. This is OK, BUT now all further FETCH commands fail 
with "current transaction is aborted"!


How can we resume from there with FETCH commands (e.g. at least FETCH 
FIRST)? We cannot avoid the (first) error itself, as the user can enter 
any WHERE condition at runtime. Therefore we must handle the error - if 
any - and resume from there properly.


Refer to following log:

SAVEPOINT tsp_020DE240

DECLARE c020DE860 SCROLL CURSOR FOR
   SELECT "CUSTOMERS"."ID","CUSTOMERS"."ZIP"
   FROM "CUSTOMERS"
   WHERE Cast("CUSTOMERS"."ZIP" as integer) >= 1
AND Cast("CUSTOMERS"."ZIP" as integer) < 2

FETCH  FROM c020DE860

RELEASE SAVEPOINT tsp_020DE240

FETCH FIRST FROM c020DE860

FETCH FORWARD  FROM c020DE860 -> OK

FETCH FORWARD  FROM c020DE860 -> ERROR:  invalid input syntax for 
integer: "A1234"


FETCH FORWARD  FROM c020DE860 -> ERROR:  current transaction is aborted, 
commands ignored until end of transaction block





Re: [GENERAL] track ddl changes on single database

2007-06-11 Thread hubert depesz lubaczewski

you can modify log_line_prefix to contain the database name.

depesz

On 6/11/07, Rikard Pavelic <[EMAIL PROTECTED]> wrote:


Hi!

I'm looking for recommendation for tracking DDL changes on
single database instance.

Currently I'm using pg_log to extract DDL changes, but those changes
are cluster wide.

Ideally I would like to enable option in pg_log to give me info about
in which database changes were made.
Something like
timestamp DB: my_database LOG: statement: "DDL statement here..."

Is there such an option, or any other suggestion how to track
these changes?

Thanks,
Rikard






--
http://www.depesz.com/ - nowy, lepszy depesz


[GENERAL] track ddl changes on single database

2007-06-11 Thread Rikard Pavelic
Hi!

I'm looking for recommendation for tracking DDL changes on
single database instance.

Currently I'm using pg_log to extract DDL changes, but those changes
are cluster wide.

Ideally I would like to enable option in pg_log to give me info about
in which database changes were made.
Something like
timestamp DB: my_database LOG: statement: "DDL statement here..."

Is there such an option, or any other suggestion how to track
these changes?

Thanks,
Rikard



Re: [GENERAL] performance problem with loading data

2007-06-11 Thread Alban Hertroys
Sergey Karin wrote:
> Hi all.
> 
> I use PG 8.1.8 and PostGIS 1.1.1
> vka1=# select version();

> I develop an application for loading geodata into a database. In a
> separate transaction the application inserts the data into a separate
> table that is created dynamically when the transaction starts. All
> tables have the same structure.

> I found a *VERY* strange problem: the speed of the loading process
> slows down (the first 10000 objects are loaded in 69792 ms, the last
> 10000 in 123737 ms). And furthermore, if I do NOT close the current
> session but start a new transaction, the first 10000 objects will be
> loaded in 192279 ms and the last 10000 in 251742 ms. And so on!! :-(
> 
> But if I drop the trigger, the speed of the loading process does NOT
> slow down.
> 
> Who can explain to me what I am doing wrong?

I think I can explain what happens (to my understanding, I'm "just a user").

When you start, you have an empty table (right?). If PG's statistics
match that situation at that point, it will know there are 0 records
in it. Querying a table with 0 records is fastest with a sequential
scan.

Next you put data into the table, and at a certain point a sequential
scan will not be optimal anymore. But PG doesn't know that, so it keeps
using sequential scans!

I think you'll see performance improve if you add periodic ANALYZE
statements. That way PG updates its stats on that table.
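
A minimal sketch of the idea, using a hypothetical load table geodata_1:

COPY geodata_1 FROM '/tmp/batch_0001.dat'; -- load one batch
ANALYZE geodata_1;                         -- refresh the planner statistics
COPY geodata_1 FROM '/tmp/batch_0002.dat'; -- trigger queries during this
                                           -- batch now see realistic stats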

I know my explanation is a bit simplified; there are other statistics in
play. That is why I didn't advise running ANALYZE just once after a
certain number of inserts ;)

Now this would be problematic with INSERT ... SELECT, as there's no room
to run periodic ANALYZEs (maybe if you'd put in LIMIT/OFFSET, but that
feels kludgy).

For the technical people: would it be possible to use the statistics on
the table(s) in the SELECT part of the statement to update the
statistics of the table being inserted into? Maybe they wouldn't be
entirely accurate, but wouldn't they be more accurate than statistics
that say the table is empty?

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] parametered views

2007-06-11 Thread Alban Hertroys
Rodrigo De León wrote:
> On Jun 8, 7:59 pm, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> wrote:
>> i have 4 tables :
>>
>> date_table (date_id,.)
>> A_table(A_table_id, something1,something2.)
>> A1_table(A1_table_id references A_Table(A_Table_id),A11,A12)
>> A2_table(A2_table_id references A_Table(A_table_id),A21,A22,...)
>>
>> so i want to create a view with date_id, A_table_id, something1;
>> if something2=x then my view contains
>>  (A1_table.A11 and A1_table.A12),
>>  else it contains (A2_table.A21 and A2_table.A22)
>>
>> so my view depends on the value of something2.
>>
>> I hope I'm being specific
>>
>> Thx
>> Lhaj
> 
> create view foobar as
> select date_id,A_table_id,something1
> ,case when something2=x then A1_table.A11 else A2_table.A21 end as foo
> ,case when something2=x then A1_table.A12 else A2_table.A22 end as bar
> from (... etc., etc. ...)

Not sure what you're trying to say here... You didn't formulate a question.

Maybe you mean this?:

CREATE VIEW foobar AS
SELECT date_id, A_table_id, A1_table.A11, A1_table.A12
  FROM A_table
  JOIN A1_table ON A1_table.A1_table_id = A_table.A_table_id
  -- join in date_table for date_id here; that relationship wasn't specified
 WHERE something2 = x
UNION
SELECT date_id, A_table_id, A2_table.A21, A2_table.A22
  FROM A_table
  JOIN A2_table ON A2_table.A2_table_id = A_table.A_table_id
 WHERE something2 <> x;

If the result sets matching something2 = x and the ones not matching are
distinct (here they are mutually exclusive by construction), or if you
don't care about duplicates, use UNION ALL instead of UNION; it skips the
duplicate-elimination step.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //




Re: [GENERAL] Postmaster processes taking all the CPU

2007-06-11 Thread Alban Hertroys
MC Moisei wrote:
> I'm not sure I understand the question. What else runs on it? I have an
> Apache that fronts a Tomcat (Java Enterprise App Server). In Tomcat I
> only run this application, which has a connection pool of 30 connections
> (if I remember correctly). Once the application starts to open
> connections it looks like each postmaster associated with a connection
> is not exiting as fast as before. I can follow up with a ps -aux capture
> if you think that's helpful. Until yesterday all was working smoothly
> for about 2 years. It looks like the postmasters are not finishing, or
> if they do it takes a good while. Also I've seen that swap usage
> increases; I never used to have swap used. I don't have space problems
> nor errors in the syslog. Am I running out of memory and everything gets
> delayed by the swap handling? I have the feeling that I'm spinning
> around my tail. These processes are taking all the CPU and memory and
> they hold on for too long just doing a select. The traffic didn't
> increase by any means, so one can't say that causes the problem; at one
> point it sustained 4 times more traffic without problems. Hope this
> provides more insight.
> MC
>
> > Date: Fri, 8 Jun 2007 16:35:40 -0400
> > From: [EMAIL PROTECTED]
> > To: pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] Postmaster processes taking all the CPU
> >
> > On Fri, Jun 08, 2007 at 03:20:28PM -0500, MC Moisei wrote:
> > >
> > > pack of postmaster(4-22) processes ran by postgres user are taking
> > > over almost all the CPU.
> >
> > What else is the box doing?  If it doesn't have any other work to do,
> > why shouldn't postgres use the CPU time?  (This is a way of saying,
> > "You didn't tell us anything that would allow us to help.")
> >
> > A
> >
> > --
> > Andrew Sullivan  | [EMAIL PROTECTED]
> > In the future this spectacle of the middle classes shocking the avant-
> > garde will probably become the textbook definition of Postmodernism.
> > --Brad Holland

Could you please send your messages as plain text? Your mail client is
doing something foul to the plain text alternative incorporated in the
multi-part message. It's almost unreadable this way.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Suppress checking of chmod 700 on data-dir?

2007-06-11 Thread Simon Riggs
On Mon, 2007-06-11 at 09:03 +0200, Johannes Konert wrote:
> Joshua D. Drake wrote:

> > Anything else in there you should be grabbing via pg_dump anyway.

> So you suggest not backing up the filesystem files, but doing a dump
> instead? Does this work together with PITR as described in 23.3
> (http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#BACKUP-BASE-BACKUP)?
> 
> I want to make a full backup of a heavily used DB every night while it
> is running, combined with short-term WAL archiving. Getting the WAL
> files is easy using the config parameter archive_command, but copying
> the "base backup" once a day is impossible if I cannot access the data
> files.
> Will the described backup&restore work as well with a dump + WAL files?
> Then I could forget about copying the file-system-files

No, it won't: archived WAL can only be replayed on top of a
filesystem-level base backup, not on top of a pg_dump restore. So I'm not
sure what Josh means.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [GENERAL] Using the GPU

2007-06-11 Thread Alban Hertroys
Alexander Staubo wrote:
> On 6/8/07, Billings, John <[EMAIL PROTECTED]> wrote:
>> If so, which part of the database, and what kind of parallel
>> algorithms would be used?
> 
> GPUs are parallel vector processing pipelines, which as far as I can
> tell do not lend themselves right away to the data structures that
> PostgreSQL uses; they're optimized for processing high volumes of
homogeneously typed values in sequence.

But wouldn't vector calculations on database data be sped up? I'm
thinking of GIS data, joins across ranges like matching one (start, end)
range with another, etc.
I realize these are rather specific calculations, but if they're
important to your application...

OTOH modern PC GPUs are optimized for pushing textures; basically
transferring a lot of data in as short a time as possible. Maybe it'd be
possible to move result sets around that way? Maybe even do joins?

And then there are the vertex and pixel shaders...

It'd be kind of odd though, to order a big-time database server with a
high-end gaming card in it :P

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Suppress checking of chmod 700 on data-dir?

2007-06-11 Thread Johannes Konert

Joshua D. Drake wrote:
> Johannes Konert wrote:
> > But that is not my point. The question is where I can change the
> > enforced chmod 700 postgresql always wants me to set.
>
> You can't.
>
> You can however change the postgresql.conf to look for files
> somewhere besides $PGDATA and thus you would be able to back them up.

With postgresql.conf I can change the path to the data-dir, but
postgresql checks for chmod 700 on that directory as well. So only the
logfiles can be written and accessed somewhere else. The data itself is
still accessible only by the postgres user.

> Anything else in there you should be grabbing via pg_dump anyway.

So you suggest not backing up the filesystem files, but doing a dump
instead? Does this work together with PITR as described in 23.3
(http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#BACKUP-BASE-BACKUP)?
I want to make a full backup of a heavily used DB every night while it is
running, combined with short-term WAL archiving. Getting the WAL files is
easy using the config parameter archive_command, but copying the "base
backup" once a day is impossible if I cannot access the data files.
Will the described backup&restore work as well with a dump + WAL files?

Then I could forget about copying the file-system-files
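
For reference, a sketch of what the filesystem-level base backup in 23.3
looks like (the archive path and backup label are illustrative):

# postgresql.conf
archive_command = 'cp %p /mnt/backup/wal_archive/%f'

-- then, once a night:
SELECT pg_start_backup('nightly');
-- copy/tar the data directory at the OS level; this is the step that
-- needs read access to $PGDATA
SELECT pg_stop_backup();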

