Re: [GENERAL] PostgreSQL Cache

2008-09-29 Thread Greg Smith

On Mon, 29 Sep 2008, Matthew Pulis wrote:


I need to perform some timed testing, thus need to make sure that disk cache
does not affect me. Is clearing the OS (Ubuntu) disk cache, ( by running:
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches ) enough to do this?


What you should do is:

1) Shutdown the database server (pg_ctl, sudo service postgresql stop, 
etc.)

2) sync
3) sudo echo 3 > /proc/sys/vm/drop_caches
4) Start the database server

That will clear both the database and OS cache with a minimum of junk left 
behind in the process; clearing the cache without a sync is a bad idea.


Note that all of this will still leave behind whatever cache is in your 
disk controller card or on the disks themselves.  There are some 
other techniques you could consider.  Add a step 2.5 that generates a 
bunch of data unused by the test, then sync again, and you've turned most 
of that into useless caching.


Ideally, your test should be running against a large enough data set that 
the dozens or couple of hundred megabytes that might be in those will only 
add a bit of noise to whatever you're testing.  If you're not running a 
larger test or going through tasks to make the caches clear, the only easy 
way to make things more clear is to reboot the whole server.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Cache

2008-09-29 Thread Joris Dobbelsteen

Matthew Pulis wrote:

Hi,

I need to perform some timed testing, thus need to make sure that disk 
cache does not affect me. Is clearing the OS (Ubuntu) disk cache ( by 
running:  sudo echo 3 | sudo tee /proc/sys/vm/drop_caches ) enough to 
do this? If not, can you please point me to some site, since all 
I am finding is this command.


Look for methodologies for doing performance tests. A problem is that 
the disk cache is an essential part of PostgreSQL's 
performance. Also do not forget about overhead and inaccuracies that 
will affect your results.


In general, performance tests are a rather large simulation of how your 
application would use the database. The test should be large enough for many 
effects (such as the initial cache state) to be neglected. It only provides 
an average for the performance on your system configuration.
If you run it a few times more, you can compute the variation. That 
provides some insight into how stable your system is in handling the workload.


- Joris


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication using slony-I

2008-09-29 Thread Richard Huxton
Abdul Rahman wrote:
 Hi all,
 
 I am following the steps present in this link:
 
 http://people.planetpostgresql.org/dpage/index.php?/archives/51-Setting-up-Slony-I-with-pgAdmin.html
 
 
 But failed to complete step # 13. Here I am unable to get CREATE NEW
 SUBSCRIPTION. Can anybody guide me please?

What does happen - do you get an error? Is there anything in the
postgresql logs?

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication using slony-I

2008-09-29 Thread Abdul Rahman
I am not getting the New Subscription option when I right-click on 
subscription (0) on the master node.



  

Re: [GENERAL] PostgreSQL Cache

2008-09-29 Thread Oleg Bartunov

A while ago I wrote a script based on Dave Plonka's work:
http://net.doit.wisc.edu/~plonka/fincore/

My script monitors system buffers and shared buffers 
(if pg_buffercache is installed), and I found it's almost useless to 
check system buffers, since I got rather ridiculous numbers.



I use it to investigate OS caching of PostgreSQL files, and was
surprised that on a 24 GB server the total cache was about 30 GB. How is
this possible?


I can send the script and Perl module if you want to play with them.



Oleg

On Mon, 29 Sep 2008, Greg Smith wrote:


On Mon, 29 Sep 2008, Matthew Pulis wrote:

I need to perform some timed testing, thus need to make sure that disk 
cache

does not affect me. Is clearing the OS (Ubuntu) disk cache, ( by running:
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches ) enough to do this?


What you should do is:

1) Shutdown the database server (pg_ctl, sudo service postgresql stop, etc.)
2) sync
3) sudo echo 3 > /proc/sys/vm/drop_caches
4) Start the database server

That will clear both the database and OS cache with a minimum of junk left 
behind in the process; clearing the cache without a sync is a bad idea.


Note that all of this will still leave behind whatever cache is in your disk 
controller card or on the disks themselves.  There are some other 
techniques you could consider.  Add a step 2.5 that generates a bunch of data 
unused by the test, then sync again, and you've turned most of that into 
useless caching.


Ideally, your test should be running against a large enough data set that the 
dozens or couple of hundred megabytes that might be in those will only add a 
bit of noise to whatever you're testing.  If you're not running a larger test 
or going through tasks to make the caches clear, the only easy way to make 
things more clear is to reboot the whole server.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_start_backup() takes too long

2008-09-29 Thread Ivan Zolotukhin
Guys,

This is all not about checkpoints. As I mentioned in the first
message, even right after a manual run of the CHECKPOINT command in psql,
pg_start_backup() takes the same time (~10 minutes).

Regards,
 Ivan

On Sun, Sep 28, 2008 at 8:18 PM, Simon Riggs [EMAIL PROTECTED] wrote:

 On Sun, 2008-09-28 at 08:35 -0700, Joshua D. Drake wrote:
 Ivan Zolotukhin wrote:
  Hello,
 
  Nothing bad both in system and postgres logs :( No serious activity
  during backup. I've had to change statement_timeout for backup user to
  make it work. But I cannot reproduce this case unfortunately.

 This is actually not uncommon and PostgreSQL shows exactly nothing in
 terms of why it is taking so long. The only assumption I have come up
 with is that start_backup does cause a checkpoint.

 Yes, it does a normal checkpoint and writes a file. No reason for it to
 take longer than any other checkpoint.

 At 8.2 and below checkpoints were frequently delayed on busy systems.
 This was because of lwlock starvation during commit phase of
 transactions. That was fixed in 8.3.

 --
  Simon Riggs   www.2ndQuadrant.com
  PostgreSQL Training, Services and Support



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] NULL values seem to short-circuit my unique index

2008-09-29 Thread Tom Allison
You can always add a constraint that these columns cannot be NULL  
themselves. That removes this problem.


On Sep 28, 2008, at 11:17 PM, Klint Gore [EMAIL PROTECTED] wrote:


Matthew Wilson wrote:
I'm trying to comprehend how NULL values interact with unique  
indexes.
It seems like I can insert two rows with NULL values in a column  
with a

unique constraint just fine.

Is there something special about NULL?  Can anyone post some links to
explain what is going on?



http://www.postgresql.org/docs/8.3/interactive/ddl-constraints.html#AEN2058
Last paragraph just above 5.3.4.


What's the deal with NULL?



NULL = NULL is not true, it's null
NULL <> NULL is not false, it's null

It's the normal SQL 3-state logic (true/false/null), with only a  
true value from the comparison causing a constraint violation.   
Think of the unique constraint check as "does this value equal any  
other value already recorded?"
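
A quick demonstration (throwaway table name):

CREATE TABLE t (x integer UNIQUE);
INSERT INTO t VALUES (NULL);
INSERT INTO t VALUES (NULL);  -- no violation: NULL = NULL is not true
SELECT NULL = NULL;           -- yields null, not true
SELECT count(*) FROM t;       -- 2 rows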


klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  Fax: 02 6773 3266
EMail: [EMAIL PROTECTED]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_start_backup() takes too long

2008-09-29 Thread Simon Riggs

On Mon, 2008-09-29 at 13:39 +0400, Ivan Zolotukhin wrote:

 This is all not about checkpoints. As I mentioned in the first
 message, even right after a manual run of the CHECKPOINT command in psql,
 pg_start_backup() takes the same time (~10 minutes).

As explained, there's not very much going on apart from the checkpoint
and that can be problematic.

What version are you running?
What are your checkpoint_timeout and checkpoint_completion_target
settings?

My guesses are 8.3, 20 minutes, and default.

pg_start_backup() doesn't do an immediate checkpoint, it does a smooth
one, so doing a CHECKPOINT beforehand should make no difference in 8.3.
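
That is, even the sequence the OP described still pays the full smoothing
time (a sketch of what was tried):

CHECKPOINT;                        -- immediate checkpoint, returns quickly
SELECT pg_start_backup('label');   -- still runs a fresh, smoothed checkpoint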

(Looks at code...)

I'm surprised that checkpoint smoothing moves slowly even when it has so
little to do. ISTM checkpoint completion target should set its write
rate according to the thought that if shared_buffers were all dirty it
would write them out in checkpoint_timeout *
checkpoint_completion_target seconds. However, what it does is write
them *all* out in that time, no matter how many dirty blocks there are.
If there are just a few blocks to write, we take the *same* time to write
them as if everything were dirty. Which looks fairly ludicrous to me, but
the only time that is possible in current code is pg_start_backup(),
since no other code requests a checkpoint exactly when you ask, but
slowly. It makes more sense to have a constant write rate during
checkpoint, or at the very least a minimum rate during checkpoint.
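
A worked example with made-up numbers: with checkpoint_timeout = 1200 s and
checkpoint_completion_target = 0.5, the write phase is spread over ~600 s.

-- 100,000 dirty blocks: ~167 writes/s over ~600 s
--     100 dirty blocks: ~1 write every 6 s, still ~600 s total
-- elapsed time is constant; only the write rate changes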

IMHO the checkpoint smoothing code is wrong, but since it only shows
itself for pg_start_backup() I think people will say we should change
that instead. If we do, then we'll get people saying "how come
pg_start_backup() causes such a performance drop?" because we start
doing an immediate checkpoint. The idea of this is that online backup
should have as little effect as possible on normal running. So I suggest
we change the checkpoint code instead.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication using slony-I

2008-09-29 Thread Raymond O'Donnell
On 29/09/2008 08:49, Abdul Rahman wrote:
 I am not getting the New Subscription option when I right-click on
 subscription (0) on the master node.

It sounds like you're using pgAdmin - IIRC you need to have some parts
at least of Slony installed on the machine on which pgAdmin is running.

There are separate lists for both Slony and pgAdmin - you might get more
useful information there, but you'll need to post a good bit more
information about your setup, what you did and what you're getting back.

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Cache

2008-09-29 Thread Sam Mason
On Mon, Sep 29, 2008 at 02:55:52AM -0400, Greg Smith wrote:
 On Mon, 29 Sep 2008, Matthew Pulis wrote:
 I need to perform some timed testing, thus need to make sure that
 disk cache does not affect me. Is clearing the OS (Ubuntu) disk
 cache, (by running: sudo echo 3 | sudo tee /proc/sys/vm/drop_caches)
 enough to do this?

 3) sudo echo 3 > /proc/sys/vm/drop_caches

I'm not sure about the rest, but shouldn't this be:

  echo 3 | sudo tee /proc/sys/vm/drop_caches

the OP's "sudo echo 3" seems redundant; echo doesn't need to be run
with higher privileges, only the writing process needs that.  As an
aside, it would be nicer if there were a more appropriate program than
tee, but I've yet to find one.


  Sam

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] access public relation from a private database

2008-09-29 Thread Joey K.
Hello,

I've created a table in public schema,
CREATE TABLE public.foo (fooid SERIAL);


When I try to create a table in database nowhere that references public.foo
table,

CREATE DATABASE nowhere;
\c nowhere;
CREATE TABLE bar (bar integer REFERENCES public.foo(fooid));

I get: ERROR:  relation "public.foo" does not exist.

Can I reference public relations from a private database?

Thanks,
Steve


Re: [GENERAL] access public relation from a private database

2008-09-29 Thread Tino Wildenhain

Joey K. wrote:


Hello,

I've created a table in public schema,
CREATE TABLE public.foo (fooid SERIAL);


When I try to create a table in database nowhere that references 
public.foo table,


CREATE DATABASE nowhere;
\c nowhere;
CREATE TABLE bar (bar integer REFERENCES public.foo(fooid));

I get: ERROR:  relation "public.foo" does not exist.

Can I reference public relations from a private database?


You cannot. But you can create a private schema alongside
the public one in the same database.
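
i.e. something like (schema name is just an example):

CREATE SCHEMA myschema;
CREATE TABLE myschema.bar (bar integer REFERENCES public.foo(fooid));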

Regards
Tino




Re: [GENERAL] access public relation from a private database

2008-09-29 Thread A. Kretschmer
On Mon, 29.09.2008 at 17:43:21 +0530, Joey K. wrote the following:
 
 When I try to create a table in database nowhere that references public.foo
 table,
 
 CREATE DATABASE nowhere;
 \c nowhere;
 CREATE TABLE bar (bar integer REFERENCES public.foo(fooid));
 
 I get: ERROR:  relation "public.foo" does not exist.
 
 Can I reference public relations from a private database?

You can't access across databases.


Andreas
-- 
Andreas Kretschmer
Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: see header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_start_backup() takes too long

2008-09-29 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 I'm surprised that checkpoint smoothing moves slowly even when it has so
 little to do.

AFAIK that's operating as designed.  The point being that we shouldn't
create any more I/O load than we absolutely have to.

It's not clear to me that it's a bug for pg_start_backup to take awhile.
If it is a bug then I'd vote for just making it do an immediate
checkpoint --- that might cause big I/O load but it's hardly likely to
be worse than what will happen when you start taking the subsequent
filesystem backup.  The checkpoint code is too complicated already;
I don't want to make it support a third behavior.  And I'd vote against
taking out the current default behavior.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication using slony-I

2008-09-29 Thread Glyn Astill
  I am not getting the New Subscription
 option when do right click on
  subscription (0) on the master node.
 
 It sounds like you're using pgAdmin - IIRC you need to
 have some parts
 at least of Slony installed on the machine on which pgAdmin
 is running.
 

Personally I prefer to use slonik, pgAdmin for slony based tasks is too 
unpolished to be used in my opinion, it needs a bit more work.

The whole problem with having clicky menus is that it allows you to do things 
without full knowledge of what's going on, and that can never be the case with 
slony.

For example here's a scenario using pgadmin to add a table into a set:

clicky: new replication set
clicky: add table to set
clicky: subscribe to set
clicky: merge set

Did pgAdmin force you to wait until the subscription had completed before 
merging? No, and now you have a mess.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication using slony-I

2008-09-29 Thread Raymond O'Donnell
On 29/09/2008 14:23, Glyn Astill wrote:

 The whole problem with having clicky menus is that it allows you to
 do things without full knowledge of what's going on, and that can
 never be the case with slony.

I agree, and I prefer to use Slonik as well. The example given in the
docs was good enough to get me started.

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] error:

2008-09-29 Thread chetan N
Hi,
I am trying to pass an array of elements to a stored procedure, as follows:

CREATE TYPE emptype as(
name text,
id integer,
dob date);

The function header looks like this:

CREATE OR REPLACE FUNCTION passingEmployeeList(employeeList emptype[])
RETURNS SETOF employee AS $$

To execute it I am using the command:

SELECT * from passingEmployeeList(ARRAY[('chethan', 1,
'10-05-1983'),('ashok', 2, '10-05-1982')]::emptype[] );

I am getting the error:
ERROR:  could not find array type for data type record


Regards,
chethan


Re: [GENERAL] error:

2008-09-29 Thread Tom Lane
chetan N [EMAIL PROTECTED] writes:
 CREATE TYPE emptype as(
 name text,
 id integer,
 dob date);

 SELECT * from passingEmployeeList(ARRAY[('chethan', 1,
 '10-05-1983'),('ashok', 2, '10-05-1982')]::emptype[] );
 ERROR:  could not find array type for data type record

That does work in CVS HEAD, but in existing releases you'll need to
spell the array constructor like this:

ARRAY[('chethan', 1,'10-05-1983')::emptype, ('ashok', 2, '10-05-1982')::emptype]

8.3 and before try to determine the array type too soon --- casting the
result of the constructor doesn't help them.
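
Put together, a self-contained sketch for 8.3 (the function body here is
reduced to a stub, and the dates are written in ISO form to sidestep any
DateStyle ambiguity):

CREATE TYPE emptype AS (name text, id integer, dob date);

CREATE OR REPLACE FUNCTION passingEmployeeList(employeeList emptype[])
RETURNS integer AS $$
    SELECT array_upper($1, 1)  -- stub: just report how many elements arrived
$$ LANGUAGE sql;

SELECT passingEmployeeList(
    ARRAY[('chethan', 1, '1983-05-10')::emptype,
          ('ashok',   2, '1982-05-10')::emptype]);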

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_start_backup() takes too long

2008-09-29 Thread Simon Riggs

On Mon, 2008-09-29 at 08:35 -0400, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  I'm surprised that checkpoint smoothing moves slowly even when it has so
  little to do.
 
 AFAIK that's operating as designed.  The point being that we shouldn't
 create any more I/O load than we absolutely have to.
 
 It's not clear to me that it's a bug for pg_start_backup to take awhile.

 If it is a bug then I'd vote for just making it do an immediate
 checkpoint --- that might cause big I/O load but it's hardly likely to
 be worse than what will happen when you start taking the subsequent
 filesystem backup. 

It was a clear intention for it to *not* cause a spike if we could avoid
it. The idea was if you wanted it to happen quickly then you could do a
checkpoint command first... oh well.

People might want to I/O limit the backup also, which they can do
without needing to let us know.

I'm happy to put an option in for this, so we have another function:
pg_start_backup(label text, immediate_chkpt boolean). I'll not be
rushing to do this though given my current TODO.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pl-pgsql, recursion and cursor contexting

2008-09-29 Thread Gauthier, Dave
Hi:

 

I'm in the business of writing recursive PL/pgSQL functions.  I need to
know what happens to the data stream from a select cursor inside of
which the recursive call is made.  For example:

 

 

 

create table int_stream (num integer);

insert into int_stream (num) values (1);

insert into int_stream (num) values (2);

insert into int_stream (num) values (3);

insert into int_stream (num) values (4);

insert into int_stream (num) values (5);

 

create or replace function my_factorial(integer) returns integer as $$
declare
  in_int alias for $1;
  x integer;
  rec record;
begin
  if (in_int = 1) then
    return(1);
  end if;

  for rec in select num from int_stream where num = in_int
  loop
    x := in_int * my_factorial(in_int - 1);
  end loop;

  return(x);
end;
$$ language plpgsql;

 

This comes up with the right answer.  IOW, making the recursive call
from within the "for rec in ..." loop doesn't seem to destroy the data
streams from earlier calls.  I just need to make sure that this will
always be the case and that getting the correct result in this example
is not just an artifact of its simplicity.  I know, for example, this
was a no-no in Oracle.  You had to stuff arrays with the results from
looping in cursors, and then make the recursive call in a subsequent
loop on the arrays.

 

Thanks

-dave

 

 



Re: [GENERAL] pl-pgsql, recursion and cursor contexting

2008-09-29 Thread Tom Lane
Gauthier, Dave [EMAIL PROTECTED] writes:
 I'm in the business of writting recursive PL-Pgsql functions.  I need to
 know what happens to the data stream from a select cursor inside of
 which the recursive call is made.  For example

Nothing, unless you use explicitly-named cursors and force a cursor name
conflict.  A for-loop's internal cursor always gets a name chosen to be
distinct from every other existing cursor, so there's no conflict.

 This comes up with the right answer.  IOW, making the recursive call
 from within the "for rec in ..." loop doesn't seem to destroy the data
 streams from earlier calls.  I just need to make sure that this will
 always be the case and that getting the correct result in this example
 is not just an artifact of its simplicity.  I know, for example, this
 was a no-no in Oracle.

Wow, are they really that broken?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pl-pgsql, recursion and cursor contexting

2008-09-29 Thread Pavel Stehule
Hello

every call of plpgsql function has own result, there are not any
shared result, so you need forward result from deeper call to up.

http://www.pgsql.cz/index.php/PL/pgSQL_(en)#Recursive_call_of_SRF_functions

regards
Pavel Stehule

2008/9/29 Gauthier, Dave [EMAIL PROTECTED]:
 Hi:



 I'm in the business of writing recursive PL/pgSQL functions.  I need to
 know what happens to the data stream from a select cursor inside of which
 the recursive call is made.  For example:







 create table int_stream (num integer);

 insert into int_stream (num) values (1);

 insert into int_stream (num) values (2);

 insert into int_stream (num) values (3);

 insert into int_stream (num) values (4);

 insert into int_stream (num) values (5);



 create or replace function my_factorial(integer) returns integer as $$



   in_int alias for $1;

   x integer;

   rec record;



 begin



   if(in_int = 1) then

 return(1);

   end if;



   for rec in select num from int_stream where num = in_int

   loop

 x := in_int * my_factorial(in_int - 1);

   end loop;



   return(x);

 end;

 $$ language plpgsql;



 This comes up with the right answer.  IOW, making the recursive call from
 within the "for rec in ..." loop doesn't seem to destroy the data streams
 from earlier calls.  I just need to make sure that this will always be the
 case and that getting the correct result in this example is not just an
 artifact of its simplicity.  I know, for example, this was a no-no in
 Oracle.  You had to stuff arrays with the results from looping in cursors,
 and then make the recursive call in a subsequent loop on the arrays.



 Thanks

 -dave





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pl-pgsql, recursion and cursor contexting

2008-09-29 Thread Gauthier, Dave
In all fairness, I believe in Oracle I was declaring explicit cursors
(by name) and recursive calls would fail outright with complaints that
the cursor was already open.  There was (to the best of my knowledge)
nothing like the "for ... in select ... loop" construct in Oracle's
PL/SQL language.

-dave

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 29, 2008 10:28 AM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pl-pgsql, recursion and cursor contexting 

Gauthier, Dave [EMAIL PROTECTED] writes:
 I'm in the business of writing recursive PL/pgSQL functions.  I need
to
 know what happens to the data stream from a select cursor inside of
 which the recursive call is made.  For example

Nothing, unless you use explicitly-named cursors and force a cursor name
conflict.  A for-loop's internal cursor always gets a name chosen to be
distinct from every other existing cursor, so there's no conflict.

 This comes up with the right answer.  IOW, making the recursive call
 from within the "for rec in ..." loop doesn't seem to destroy the data
 streams from earlier calls.  I just need to make sure that this will
 always be the case and that getting the correct result in this example
 is not just an artifact of its simplicity.  I know, for example, this
 was a no-no in Oracle.

Wow, are they really that broken?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] West: Second call for lightning talks

2008-09-29 Thread Joshua Drake
The PostgreSQL Conference: West is being held at Portland State
University on October 10th - 12th. The West Coast PostgreSQL
conference is currently seeking more Lightning Talks.

Lightning talks are an exciting way to get involved in the conference
with very little commitment on the speakers end. Assuming you can stand
in front of an audience for 5 minutes; you can speak about anything
PostgreSQL or Open Source related.

To submit your lightning talk please visit: 

http://www.pgcon.us/west08/talk_submission/

If you have not yet registered for the event please visit:

http://www.postgresqlconference.org/west08/register

Lastly, thank you to this year's sponsors:

Command Prompt: http://www.commandprompt.com/
EnterpriseDB: http://www.enterprisedb.com/

Afilias : http://www.afilias.info/
HP: http://www.hp.com/

Emma : http://www.myemma.com/

Continuent : http://www.continuent.com/
Endpoint : http://www.endpoint.com/
OTG : http://www.otg-nc.com/

EFF: http://www.eff.org/
Google: http://www.google.com/



-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Multiple querys

2008-09-29 Thread x asasaxax
Hi everyone,

I have multiple queries, and I'm trying to optimize them by
creating a temporary table x (containing the field id_product).

The queries that use the temp table x must follow the order of the rows in
x.
My problem: I have a query that does a LIMIT and OFFSET on table x. I would
like this query to obey the order of x.

I thought of this solution: create a field order_id_product, and use ORDER BY
order_id_product in the other queries.
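
A sketch of that idea (table and column names invented):

CREATE TEMPORARY TABLE x (
    order_id_product serial,
    id_product       integer
);

INSERT INTO x (id_product)
    SELECT id_product FROM product ORDER BY price;  -- whatever order I need

SELECT id_product FROM x
ORDER BY order_id_product
LIMIT 10 OFFSET 20;  -- pages through x in its original order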

Can anyone help me?

Thanks a lot.


Re: [GENERAL] inserting to a multi-table view

2008-09-29 Thread Richard Broersma
On Sun, Sep 28, 2008 at 9:57 PM, Seb [EMAIL PROTECTED] wrote:

 Well, I was able to get PostgreSQL Update-able views to work nearly as
 well as the update-able queries did in Access.
 Would you mind sharing a sample schema?

I'll see what I can do.  I did post some sample schemas a while back
but I can't seem to find them in the archive.  I do have some
production tables and views, but there are too many unrelated
attributes to make a useful example.

 As a side note, you'll notice that MS-Access will not allow
 update-able queries based on ODBC linked table like it does on its
 native tables for this reason.

 That's right, I did find that out once but didn't know whether it was
 due to ODBC limitations or something else.

MS-Access implements optimistic locking with all ODBC data sources.
The basic difference with an ODBC data source is that MS-Access's Jet
engine can't put a file lock on it like it can with other file-type DBs
such as Access, Excel, and flat files.

Optimistic locking means that every time Access issues an update to
the ODBC server, it includes all the old values of the record (that
MS-Access is aware of) in the update statement's where clause.  So if
your MS-Access client was only aware of a stale version of the record,
its update count will be zero, whereupon Access will throw an exception
saying that the update could not be completed as the underlying table
was changed by another user.
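
For example, the update Access sends looks roughly like this (hypothetical
table and values):

UPDATE customer
   SET phone = '555-0199'
 WHERE customer_id = 42
   AND name  = 'Old Name'   -- every old value the client saw
   AND phone = '555-0100';  -- goes into the where clause
-- if another user already changed the row, zero rows match, and Access
-- reports that the underlying table was changed by another user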


 Natural Primary key/Foreign key CASCADE UPDATEs don't work well with
 update-able views.  Choose one or the other.

 Not sure what you mean; can you please tell more about what doesn't work
 well with update-able views what the choice is?

This one is kind of hard to explain.  Basically it deals with the
order of operations between referential-integrity updates and client-side
or update-able view updates.

Let's say your client app or update-able view wants to update a single
row in a view.  However, in addition to changing the fields from each
table, you also want to update the natural primary key.  This sounds
simple, but it's not.

1) The record changes made in the client application are not instantly
committed and refreshed for each field change that the user makes.
Basically, the whole row is updated with one update statement when
the user commits the change.

2) The view redirects the update statement to its underlying rules
(usually one rule for each joined table's update).  First of all, the
primary table's fields are changed (let's call it tableA) by the rule
update, including its natural primary key.  Let's say the primary key
was changed from 'OLDVALUE' to 'NEWVALUE' on tableA.

3) Now, tableB has a foreign key referencing tableA, with the
foreign key set to ON UPDATE CASCADE.  Declarative Referential
Integrity (DRI) such as ON UPDATE CASCADE has a much higher priority
than the PostgreSQL rule system.  So before the rule system can
perform updates on tableB, tableB has its foreign key changed from
'OLDVALUE' to 'NEWVALUE'.

4) Here is where the problem occurs.  When the next update-able view
rule is executed to update tableB, its where clause still thinks that
tableB's foreign key is 'OLDVALUE'.  And because 'OLDVALUE' is in the
where clause of the rule's update statement instead of 'NEWVALUE', no
record is found to match, and so the remaining field updates fail.
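
A compact sketch of that sequence (names invented):

CREATE TABLE tableA (pk text PRIMARY KEY, a_col text);
CREATE TABLE tableB (fk text REFERENCES tableA (pk) ON UPDATE CASCADE,
                     b_col text);

-- rule 1: UPDATE tableA SET pk = 'NEWVALUE', a_col = ... WHERE pk = 'OLDVALUE';
--         (the cascade immediately rewrites tableB.fk to 'NEWVALUE')
-- rule 2: UPDATE tableB SET b_col = ... WHERE fk = 'OLDVALUE';
--         (matches zero rows, so tableB's other fields never change)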

So the end result is that all of tableA's updates are successful,
tableB's foreign key is updated by DRI, but the rest of the field
updates are not.  So you are left with an inconsistent update from the
perspective of the view.  By the way, this really confuses MS-Access.
It doesn't know what to do when this happens.

That's why I say that natural primary key/foreign key CASCADE
UPDATEs don't work well with update-able views.


Also, if you have concurrent users on the same updateable view, update
anomalies like this can still occur just from problems with concurrent
user updates.  That's the reason I decided to abandon joined-table
updateable views.


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Can't cast from char to integer...

2008-09-29 Thread Mike Diehl
Hi all.

I'm trying to cut data from one table and put it in another table.  The 
problem comes from the fact that the first table has a field that is a 
char(x) and the destination table needs that data as an integer.

For example, I'm trying to do this:

insert into data 
select cast('666' as integer) as block_number, phone as phone_number, 
name  
from demo_q;

The data table has a field called block_number that is an integer.  I'm trying 
to populate that field with the INTEGER 666.  (I use 666 for testing since I 
would never assign that number/id to an actual customer.)

When I run this query, I get:

ERROR:  column "block_number" is of type integer but expression is of type 
character varying

What am I doing wrong?

TIA,
-- 
Mike Diehl

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Counting unique rows as an aggregate.

2008-09-29 Thread r_musta
My current solution is to issue a bunch of queries:

 SELECT make, count(*) FROM table WHERE criteria GROUP BY make ORDER BY 
 count(*) DESC LIMIT 3;
make count
--+-
 audi   | 50
 bmw  | 40
 vw | 30

SELECT color, count(*) FROM table WHERE criteria GROUP BY color
ORDER BY count(*) DESC LIMIT 3;

color   count
---+--
 red  | 400
 blue| 200
 green  | 100

Which will give me the top 3 counts of each column I'm interested in
for the criteria specified.

However, this is starting to become too slow (as there are about 10 of
these queries), and therefore I need to write an aggregate function
which lets me do:

SELECT count_unique(make), count_unique(color) from table WHERE criteria;

After reading about aggregate functions, this should be possible, as
long as I can use a dictionary/hashmap type for the state (STYPE)
argument.

Is there such a type in postgresql that can be used in an aggregate
function, and if there isn't, how would it be possible to make one?
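
For reference, the rough shape I have in mind is an array-state aggregate
(untested; the final function needs unnest(), which only exists in 8.4 and
later, and this version only yields the number of distinct values rather
than per-value counts):

CREATE FUNCTION count_distinct_final(anyarray) RETURNS bigint AS $$
    SELECT count(DISTINCT u) FROM unnest($1) AS u
$$ LANGUAGE sql;

CREATE AGGREGATE count_unique (anyelement) (
    SFUNC     = array_append,  -- accumulate each input value into an array
    STYPE     = anyarray,
    INITCOND  = '{}',
    FINALFUNC = count_distinct_final
);

-- SELECT count_unique(make), count_unique(color) FROM table WHERE criteria;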

Also, do you think I'm going about this the wrong way, and there is a
much better solution that I've neglected?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] database question

2008-09-29 Thread john . crawford
Hi all I have been experiencing some strange behaviour on my postgres
DB.  I am VERY new to PG so bear with me as what I am going to ask is
all probably very basic to you guys.
First off over the last couple of weeks we have been seeing in the
dir /var/lib/pgsql/data/base/16450 some large file creations, so for
example

-rw--- 1 postgres postgres 1073741824 Sep 29 15:15 2683
-rw--- 1 postgres root 1073741824 Sep 29 15:15 2613.77
-rw--- 1 postgres root 1073741824 Sep 29 15:15 2613.83
-rw--- 1 postgres root   65347584 Sep 29 15:16 2613.88
-rw--- 1 postgres root 1073741824 Sep 29 15:16 2613.86
-rw--- 1 postgres root 1073741824 Sep 29 15:17 2613.82
-rw--- 1 postgres root 1073741824 Sep 29 15:17 2613.81
-rw--- 1 postgres postgres  380346368 Sep 29 15:17 16451.1
-rw--- 1 postgres postgres  217710592 Sep 29 15:18 33820
-rw--- 1 postgres root  119046144 Sep 29 15:18 2683.1
-rw--- 1 postgres root 1073741824 Sep 29 15:18 2613.84

What are these files and why have they suddenly started to be created
and why so large?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Is there a parameter to tell postgresql to not attempt to open an IPv6 socket?

2008-09-29 Thread yann . dubost
On 16 sep, 23:04, [EMAIL PROTECTED] (Andrew Sullivan) wrote:
 On Tue, Sep 16, 2008 at 04:46:37PM -0400, Reid.Thompson wrote:
  PostgreSQL appears to all intents to start up fine on the UNIX and IPv4
  sockets.  Is there a parameter to tell postgresql to not attempt to open
  an IPv6 socket?

 Specify the specific TCP/IP interfaces in the postmaster.conf file.
 Otherwise, Postgres will try to bind to all the sockets.  There's
 something hinkey about the IPv6 support in AIX, IIRC, so that you end
 up with this symptom.  

 A

 --
 Andrew Sullivan
 [EMAIL PROTECTED]
 +1 503 667 4564 x104  http://www.commandprompt.com/

 --
 Sent via pgsql-general mailing list ([EMAIL PROTECTED])
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

Hello,

I have the same problem. I have looked for a postmaster.conf file but there
is none on the server.
Apart from the doc and src files, the only files on my server
containing the word postmaster are :
/usr/local/pgsql/bin/postmaster
$PGDATA/postmaster.opts
$PGDATA/postmaster.pid

Do you have an idea why, and how to solve this problem?

Thanks,

Yann.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Slony Problem

2008-09-29 Thread furlani . francesco
Hi,
i have 1 master and one slave db.

When on the master I run this SQL: update wspet00 set datamf=20080925
where numcar = 7123123123
the slave slon process reports an error with this log:


update only public.wspet00 set datamf='20080925' where ;
 ERROR:  syntax error at or near ";"
LINE 1: update only public.wspet00 set datamf='20080925' where ;


It seems like the SQL was truncated... any idea?

Postgres version 8.2.5 and slony 1.2.13 on windows 2003 server.

Thanks,
Francesco

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem with custom_variable_classes

2008-09-29 Thread Malcolm Studd

Taras Kopets wrote:

now you have to initialize this variable once per session before usage:

SELECT set_config('olap.rownum_name', 'false', false);


Ok, thanks. That has fixed my problem.

Malcolm


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] need help of getting PK after insertRow in JDBC

2008-09-29 Thread Chen, Dongdong (GE Healthcare, consultant)
Hi:
I am a software engineer from GE. I am using JDBC to operate
PostgreSQL 8.3 on Ubuntu 8.04. The development environment is Eclipse 3.2.
My problem is: 
There is a PostgreSQL table XX containing 5 fields: AA, BB, CC, DD,
EE. AA is the primary key and auto-generated; BB, CC, DD and EE are
string type.
I want to get the value of AA immediately after inserting a row into
the table. The code is like this:
 
Statement st = db.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
ResultSet.CONCUR_UPDATABLE);
ResultSet rs = st.executeQuery("SELECT * FROM XX");
rs.moveToInsertRow();
rs.updateString("BB", "b");
rs.updateString("CC", "c");
rs.updateString("DD", "d");
rs.updateString("EE", "e");
rs.insertRow();
rs.moveToCurrentRow();
int index = rs.getInt("AA");
System.out.println(index);
 
   In the last sentence I always get 0, no matter how many records I
insert. I tried other ways of moving the cursor, including next(), last()
and so on, and still cannot get the correct value. I tried both the drivers
postgresql-8.3-603.jdbc3.jar and postgresql-8.3-603.jdbc4.jar.
 
But when I use pgAdmin III to check the table XX, the AA field is
already auto-generated with the correct value.
 
I found a way to solve this: close the resultset and statement after
moveToCurrentRow() and re-open them, then rs.last(), then run int
index = rs.getInt("AA"), and I can get the correct value. I think this
method is sort of awkward; does anyone know a better way and the correct
operations? 
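
For what it's worth, PostgreSQL 8.2 and later can also hand the generated
key back directly at the SQL level, e.g.:

INSERT INTO XX (BB, CC, DD, EE)
    VALUES ('b', 'c', 'd', 'e')
    RETURNING AA;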
 
I am not sure it is proper to send this mail to this mailing list. Sorry if
it brings you any inconvenience.
Thanks a lot!
 
Best Regards
 
Kevin Chen/ChenDongdong
+8613810644051
 
 
 

Re: [GENERAL] The planner hates me.

2008-09-29 Thread Hoover, Jeffrey
change t.date2 < dates.date to t.date2+0 < dates.date; this will prevent
the query from trying to use the index on date2, because the where clause
now references an expression and not the column itself:

explain analyze
select sum(amount),  dates.date as date  
from transaction t
join (select get_dates as date from get_dates('09/17/08','09/24/08'))
dates
  on (t.state='I' or t.date1 =  dates.date) and t.date2+0 < dates.date
group by dates.date


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Amiel
Sent: Thursday, September 25, 2008 10:24 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] The planner hates me.

PostgreSQL 8.2.4 on i386-pc-solaris2.10, compiled by GCC gcc (GCC)
3.4.3 (csl-sol210-3_4-branch+sol_rpath)
(test environment)

Picture a table called 'transaction' with 1 million rows.
most (> 99%) of the records have date1 and date2 values in the past
(spread over 4 years).
99.99% of the records have a state OTHER than 'I'.


CREATE TABLE transaction
(
  amount numeric(10,2) NOT NULL,
  date1 date NOT NULL,
  state character(1) NOT NULL,
  date2 date DEFAULT date(now())
)

CREATE INDEX t_date1_index
  ON transaction
  USING btree
  (date1);

CREATE INDEX t_date2_index
  ON transaction
  USING btree
  (date2);

CREATE INDEX t_state_index
  ON transaction
  USING btree
  (state);



explain analyze
 select sum(amount),  dates.date as date  
from transaction t
 join (select get_dates as date from
get_dates('09/17/08','09/24/08')) dates on  
(t.state='I' or   t.date1 =  dates.date)   
   group by dates.date

get_dates simply returns each date between (and including) the passed
dates ...and is a 'stable' function).  Yes...I know I probably could
have used a generate_series or something...but this was written before I
knew anything about that

HashAggregate  (cost=1290485.15..1290485.40 rows=200 width=17) (actual
time=277.804..277.809 rows=8 loops=1)
  -  Nested Loop  (cost=270.37..1123134.88 rows=334700533 width=17)
(actual time=3.182..153.741 rows=120536 loops=1)
-  Function Scan on get_dates  (cost=0.00..1.25 rows=1000
width=4) (actual time=0.057..0.065 rows=8 loops=1)
-  Bitmap Heap Scan on transaction t  (cost=270.37..618.60
rows=336357 width=22) (actual time=3.093..10.958 rows=15067 loops=8)
  Recheck Cond: ((t.state = 'I'::bpchar) OR (t.date1 =
get_dates.get_dates))
  -  BitmapOr  (cost=270.37..270.37 rows=336357 width=0)
(actual time=2.853..2.853 rows=0 loops=8)
-  Bitmap Index Scan on t_state_index
(cost=0.00..8.97 rows=4971 width=0) (actual time=2.842..2.842 rows=15067
loops=8)
  Index Cond: (state = 'I'::bpchar)
-  Bitmap Index Scan on t_date1_index
(cost=0.00..252.99 rows=331387 width=0) (actual time=0.009..0.009 rows=2
loops=8)
  Index Cond: (t.date1 = get_dates.get_dates)
Total runtime: 277.883 ms


Uses bitmap scans for optimization and performs admirably.

BUT...when I change the query thusly... (adding in an addition AND
clause)

explain analyze
 select sum(amount),  dates.date as date  
from transaction t
 join (select get_dates as date from
get_dates('09/17/08','09/24/08')) dates on  
(t.state='I' or   t.date1 =  dates.date) and t.date2 < dates.date
   group by dates.date

HashAggregate  (cost=1222618.09..1222618.34 rows=200 width=15) (actual
time=7538.193..7538.199 rows=8 loops=1)
  -  Nested Loop  (cost=0.00..1166174.15 rows=112887885 width=15)
(actual time=0.889..7411.997 rows=120522 loops=1)
Join Filter: ((t.state = 'I'::bpchar) OR (t.date1 =
get_dates.get_dates))
-  Function Scan on get_dates  (cost=0.00..1.25 rows=1000
width=4) (actual time=0.055..0.062 rows=8 loops=1)
-  Index Scan using t_date2_index on transaction t
(cost=0.00..590.77 rows=328800 width=24) (actual time=0.018..492.348
rows=986273 loops=8)
  Index Cond: (t.date2 < get_dates.get_dates)
Total runtime: 7538.259 ms

I am pulling my hair out.  Why does it insist on using the index on the
date2 field?   It was doing so well with the bitmap indexing on the
other fields...if the planner would simply do the same with date2, all
would be well.  I really need the date2 portion of the query to filter
AFTER the first two clauses, because those are quicker and narrow the
data down to a manageable size.


Any suggestions on query or index changes?  I still need the date2 index
for other queries, but the fact that it gets used exclusively for this
query drives me batty!

I've tried various combinations of multi-column indexes to no
avail...because I still need the date2 index for other reasons, this
query ALWAYS chooses it for some reason.


Any help would be appreciated.







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Re: Is there a parameter to tell postgresql to not attempt to open an IPv6 socket?

2008-09-29 Thread Scott Marlowe
On Mon, Sep 29, 2008 at 7:41 AM,  [EMAIL PROTECTED] wrote:
 On 16 sep, 23:04, [EMAIL PROTECTED] (Andrew Sullivan) wrote:
 On Tue, Sep 16, 2008 at 04:46:37PM -0400, Reid.Thompson wrote:
  PostgreSQL appears to all intents to startup fine on the UNIX and IPv4
  sockets.  Is there a parameter to tell postgresql tonotattempt to open
  anIPv6socket?

 Specify the specific TCP/IP interfaces in the postmaster.conf file.
 Otherwise, Postgres will try to bind to all the sockets.  There's
 something hinkey about the IPv6 support in AIX, IIRC, so that you end
 up with this symptom.


 Hello,

 I have the same problem. I have looked for a postmaster.conf file but there
 is none on the server.
 Apart from the doc and src files, the only files on my server
 containing the word postmaster are :
 /usr/local/pgsql/bin/postmaster
 $PGDATA/postmaster.opts
 $PGDATA/postmaster.pid

 Do you have an idea why, and how to solve this problem?

Look for postgresql.conf

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need help of getting PK after insertRow in JDBC

2008-09-29 Thread Martin Gainty

Good Afternoon

add the CLOSE_CURSORS_AT_COMMIT option to the Statement's capabilities, e.g.
Statement st = db.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, 
ResultSet.CONCUR_UPDATABLE, ResultSet.CLOSE_CURSORS_AT_COMMIT);

//then tell the connection handle to commit the DML to the DB
db.commit();

Martin 

Control your own destiny or someone else will -- Jack Welch
__ 
Disclaimer and confidentiality note 
Everything in this e-mail and any attachments relates to the official business 
of Sender. This transmission is of a confidential nature and Sender does not 
endorse distribution to any party other than intended recipient. Sender does 
not necessarily endorse content contained within this transmission. 


Subject: [GENERAL] need help of getting PK after insertRow in JDBC 
Date: Fri, 26 Sep 2008 10:14:01 +0800
From: [EMAIL PROTECTED]
To: pgsql-general@postgresql.org; [EMAIL PROTECTED]








Hi:
I am a software engineer from GE. I am using JDBC to operate
PostgreSQL 8.3 on Ubuntu 8.04. The development environment is
Eclipse 3.2. My problem is:
There is a PostgreSQL table XX containing 5 fields: AA, BB, CC, DD, EE.
AA is the primary key and auto-generated; BB, CC, DD and EE are string type.
I want to get the value of AA immediately after inserting a row into
the table. The code is like this:

Statement st = db.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
ResultSet.CONCUR_UPDATABLE);
ResultSet rs = st.executeQuery("SELECT * FROM XX");
rs.moveToInsertRow();
rs.updateString("BB", "b");
rs.updateString("CC", "c");
rs.updateString("DD", "d");
rs.updateString("EE", "e");
rs.insertRow();
rs.moveToCurrentRow();
int index = rs.getInt("AA");
System.out.println(index);

In the last sentence I always get 0, no matter how many records I insert.
I tried other ways of moving the cursor, including next(), last() and so
on, and still cannot get the correct value. I tried both the drivers
postgresql-8.3-603.jdbc3.jar and postgresql-8.3-603.jdbc4.jar.

But when I use pgAdmin III to check the table XX, the AA field is already
auto-generated with the correct value.

I found a way to solve this: close the resultset and statement after
moveToCurrentRow() and re-open them, then rs.last(), then run int
index = rs.getInt("AA"), and I can get the correct value. I think this
method is sort of awkward; does anyone know a better way and the correct
operations?

I am not sure it is proper to send this mail to this mailing list. Sorry
if it brings you any inconvenience.
Thanks a lot!

Best Regards

Kevin Chen/ChenDongdong
+8613810644051


 
 

_
See how Windows connects the people, information, and fun that are part of your 
life.
http://clk.atdmt.com/MRT/go/msnnkwxp1020093175mrt/direct/01/attachment: ge1.JPG

Re: [GENERAL] database question

2008-09-29 Thread Scott Marlowe
On Mon, Sep 29, 2008 at 8:21 AM,  [EMAIL PROTECTED] wrote:
 Hi all I have been experiencing some strange behaviour on my postgres
 DB.  I am VERY new to PG so bear with me as what I am going to ask is
 all probably very basic to you guys.
 First off over the last couple of weeks we have been seeing in the
 dir /var/lib/pgsql/data/base/16450 some large file creations, so for
 example

 -rw--- 1 postgres postgres 1073741824 Sep 29 15:15 2683
 -rw--- 1 postgres root 1073741824 Sep 29 15:15 2613.77
 -rw--- 1 postgres root 1073741824 Sep 29 15:15 2613.83

 What are these files and why have they suddenly started to be created
 and why so large?

PostgreSQL automatically splits table files into 1G chunks so it can
run on OSes with file size limits.  These are part of the table
identified by the oid 2613.  You can find it by looking in pg_class.
Run psql -E and do \d and you'll see the queries that psql uses to
create its output, and you can muck about with them to see which are
which.
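
For example, to see which relation the 2613 files belong to:

SELECT relname FROM pg_class WHERE relfilenode = 2613;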

Also, the contrib module oid2name will tell you these things from the
shell / CLI.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Counting unique rows as an aggregate.

2008-09-29 Thread Scott Marlowe
On Mon, Sep 29, 2008 at 12:12 PM, r_musta [EMAIL PROTECTED] wrote:
 However, this is starting to become too slow (as there are about 10 of
 these queries), and therefore I need to write an aggregate function
 which lets me do:

SELECT count_unique(make), count_unique(color) from table WHERE criteria;

 After reading about aggregate functions, this should be possible, as
 long as I can use a dictionary/hashmap type for the stateSTYPE
 argument.

This might be a nice fit for materialized views.  While they're not
built in, PostgreSQL's extensibility allows you to build them pretty
easily.

http://jonathangardner.net/tech/w/PostgreSQL/Materialized_Views

Jonathan Gardner's web page on it is fantastic.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [HACKERS] PostgreSQL future ideas

2008-09-29 Thread postgres Emanuel CALVO FRANCO
2008/9/27 Douglas McNaught [EMAIL PROTECTED]:
 On Sat, Sep 27, 2008 at 12:13 PM, Mark Mielke [EMAIL PROTECTED] wrote:
 If
 some parts of PostgreSQL are not performance bottlenecks, and they are
 extremely complicated to write in C, and very easy to write in something
 else common and simple (I've never used LUA myself?), I imagine it would be
 acceptable to the community.

 As long as they can expose their interfaces using the standard PG
 function call interface, and use the documented SPI mechanism to talk
 to the rest of the back end.  Stuff that hooks into undocumented or
 unstable parts of the code would be much less viable.

 -Doug

 --
 Sent via pgsql-hackers mailing list ([EMAIL PROTECTED])
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


I think that C is the best language for PostgreSQL. C++ has a few
things that make it not as good for performance.
Why do people want to make the code more understandable by changing the
language? Simplify the documentation for programmers; that was the idea in
the beginning.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_start_backup() takes too long

2008-09-29 Thread Greg Smith

On Mon, 29 Sep 2008, Simon Riggs wrote:

I'm surprised that checkpoint smoothing moves slowly even when it has so 
little to do. ISTM checkpoint completion target should set its write 
rate according to the thought that if shared_buffers were all dirty it 
would write them out in checkpoint_timeout * 
checkpoint_completion_target seconds. However, what it does is write 
them *all* out in that time, no matter how many dirty blocks there are. 
If there are just a few blocks to write, we take the *same* time to write 
them as if everything were dirty.


The checkpoint smoothing code that made it into 8.3 missed a couple of 
nice-to-have features that just didn't make the schedule cut-off. 
Enforcing a minimum rate was one; another was smoothing fsync calls.


Back when we were talking about the patch to sort writes at checkpoint 
time, someone (I think you, actually) commented that it might be worthwhile 
to create some sort of hook for making the behavior of checkpoint-time dirty 
buffer processing easy to change with a custom strategy.  The sorted 
behavior would then be the first such strategy available.  Another one I 
was thinking of was something that specified min and max write rates, which 
would make this problem go away--it might even auto-tune checkpoint_segments 
or replace it altogether with an implementation based on those inputs.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Cache

2008-09-29 Thread Greg Smith

On Mon, 29 Sep 2008, Sam Mason wrote:


On Mon, Sep 29, 2008 at 02:55:52AM -0400, Greg Smith wrote:

3) sudo echo 3 > /proc/sys/vm/drop_caches


I'm not sure about the rest, but shouldn't this be:
 echo 3 | sudo tee /proc/sys/vm/drop_caches


I couldn't think of any reason to actually include the tee in there and 
just optimized displaying the "3" out as script noise.


As an aside, it would be nicer if there were a more appropriate program 
than tee but I've yet to find one.


What are you trying to accomplish here that tee isn't quite right for?

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] subquery in FROM must have an alias

2008-09-29 Thread Ashutosh Chauhan
Thanks to Stephan and Hubert for their replies. Using your answers I
was able to solve the problem. It turned out that it's a natural join
that I wanted.

Thanks for the quick help,
Ashutosh

On Sun, Sep 28, 2008 at 10:18, Stephan Szabo
[EMAIL PROTECTED] wrote:
 On Sun, 28 Sep 2008, Ashutosh Chauhan wrote:

 Hi all,

 This has been asked before and answered as well.
 http://archives.postgresql.org/pgsql-sql/2007-12/msg2.php but I
 still cant figure out why postgres throws this error message even when
 I have provided the aliases. My query:

 select a,b
   from (billing.item JOIN (
   select *
   from ( billing.invoice JOIN billing.customer
  on (id_customer_shipped = 
 customer_uid and
 address = 'pgh' ))
 as temp2 ))
as temp;

 I have two from clauses so I have provided two corresponding alias
 names for those two from clauses.

 If you break the above down a bit, you have:

 select a,b
 from
  (
  billing.item join
  (select * from
   (
billing.invoice join
billing.customer
on (id_customer_shipped = customer_uid and address='pgh')
   )
   as temp2
  )
  )
 as temp;

 What the system is complaining about is the subselect (select * from ... )
 not having an alias. You've aliased the billing.invoice join
 billing.customer one and (billing.item join (...)) one, but not the
 subselect. In fact, I believe the two aliases you're using aren't strictly
 necessary. Also, the above appears to be missing the condition for the
 outermost join.

 Maybe something like the following will work with a filled in on
 condition:

 select a,b
 from
  (
  billing.item join
  (select * from
   (
billing.invoice join
billing.customer
on (id_customer_shipped = customer_uid and address='pgh')
   )
  )
  as temp
  on (...)
  )




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Sample databases

2008-09-29 Thread Tommy Gibbons
Hi,
I would like some pointers as to how to install the dbsamples so that I can
use them in Postgres.  These .tar.qz files seem to contain *.sql files.
These seem to be text files but I do not understand how to import to
postgres or if there is some command line command to run.

These samples are on
http://www.postgresql.org/ftp/projects/pgFoundry/dbsamples/

Tommy.


Re: [GENERAL] Sample databases

2008-09-29 Thread Raymond O'Donnell
On 29/09/2008 23:03, Tommy Gibbons wrote:

 I would like some pointers as to how to install the dbsamples so that I
 can use them in Postgres.  These .tar.qz files seem to contain *.sql
 files.  These seem to be text files but I do not understand how to
 import to postgres or if there is some command line command to run.

You use psql to load them:

psql -U <user> -f <file> <database>
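
For example, with one of the dbsamples archives unpacked (the file and 
database names here are only illustrative):

createdb -U postgres worlddb
psql -U postgres -f world.sql worlddb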


Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sample databases

2008-09-29 Thread David Rowley
Tommy Gibbons wrote:

I would like some pointers as to how to install the dbsamples so that I can
use them in Postgres.  These .tar.qz files seem to contain *.sql files.
These seem to be text files but I do not understand how to  import to
postgres or if there is some command line command to run.

 These samples are on
http://www.postgresql.org/ftp/projects/pgFoundry/dbsamples/

 

You can use psql to import .sql files. Once you're logged into psql you can
use \i <filename> to import the files. Have a look at the other options too
with \?

You may wish to create a database before importing the files, see
http://www.postgresql.org/docs/8.3/interactive/sql-createdatabase.html
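
A minimal interactive session along those lines (the file name is 
hypothetical):

CREATE DATABASE dbsample;
\c dbsample
\i /path/to/sample.sql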

 

David.



Re: [GENERAL] database question

2008-09-29 Thread Greg Smith

On Mon, 29 Sep 2008, [EMAIL PROTECTED] wrote:


What are these files and why have they suddenly started to be created
and why so large?


They're the contents of the database, and a new one gets created every time 
there is another 1GB worth of data in there.  Note that the database will 
use more space if data is being UPDATEd and you don't vacuum it regularly. 
Without vacuuming, every UPDATE effectively adds a new row rather than 
reusing the old one's space.


While it's possible to decode what those files are by using oid2name or 
pg_class, what you probably want to know instead is what the big tables 
and indexes in your database are to figure out what is gobbling space. 
The script at http://wiki.postgresql.org/wiki/Disk_Usage will give you 
that.
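
If you just want a quick look without the full script, a query along these 
lines (using the standard size functions available since 8.1) lists the 
top space consumers:

  SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
   WHERE relkind IN ('r', 'i')
   ORDER BY pg_total_relation_size(oid) DESC
   LIMIT 10;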


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_start_backup() takes too long

2008-09-29 Thread Bruce Momjian
Simon Riggs wrote:
  If it is a bug then I'd vote for just making it do an immediate
  checkpoint --- that might cause big I/O load but it's hardly likely to
  be worse than what will happen when you start taking the subsequent
  filesystem backup. 
 
 It was a clear intention for it to *not* cause a spike if we could avoid
 it. The idea was if you wanted it to happen quickly then you could do a
 checkpoint command first... oh well.
 
 People might want to I/O limit the backup also, which they can do
 without needing to let us know.
 
 I'm happy to put an option in for this, so we have another function:
 pg_start_backup(label text, immediate_chkpt boolean). I'll not be
 rushing to do this though given my current TODO.

I agree with Tom;  either we make the pg_start_backup() checkpoint
immediate or leave the behavior unchanged.

Personally I think immediate makes more sense because issuing
pg_start_backup() seems like it should behave like a manual CHECKPOINT
command.
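
In the meantime, the workaround Simon alludes to is available from any 
session; a sketch (the label is arbitrary):

CHECKPOINT;                         -- pay the I/O cost up front
SELECT pg_start_backup('nightly');  -- now returns quickly
-- ... take the filesystem-level copy ...
SELECT pg_stop_backup();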

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_start_backup() takes too long

2008-09-29 Thread Joshua Drake
On Mon, 29 Sep 2008 19:06:46 -0400 (EDT)
Bruce Momjian [EMAIL PROTECTED] wrote:
 
 I agree with Tom;  either we make the pg_start_backup() checkpoint
 immediate or leave the behavior unchanged.
 
 Personally I think immediate makes more sense because issuing
 pg_start_backup() seems like it should behave like a manual CHECKPOINT
 command.
 

I am not actually that concerned that it takes a long time now that I
know why. I am more concerned that it isn't obvious why it takes so
long. Something like this would be more than sufficient:

elog(NOTICE, "pg_start_backup called, issuing CHECKPOINT");
RequestCheckpoint(CHECKPOINT_FORCE | CHECKPOINT_WAIT);

Sincerely,

Joshua D. Drake


-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't cast from char to integer...

2008-09-29 Thread Tom Lane
Mike Diehl [EMAIL PROTECTED] writes:
 For example, I'm trying to do this:

 insert into data 
   select cast('666' as integer) as block_number, phone as phone_number, 
 name  
 from demo_q;

 When I run this query, I get:

 ERROR:  column "block_number" is of type integer but expression is of type 
 character varying

I don't get that ...

I wonder whether the problem is that you are expecting the "as" labels
in the SELECT to do something, like make the data go into the
correspondingly named columns of the target table.  But they don't do
anything.  This query is going to try to assign 666, phone, and name to
the first three columns of data, in order.  I'll bet it is actually
complaining about the second or third column, not the 666 at all.

If that's it, the syntax you are looking for is

insert into data (block_number, phone_number, name)
select cast('666' as integer), phone_number, name
from demo_q;

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database question

2008-09-29 Thread Tom Lane
Scott Marlowe [EMAIL PROTECTED] writes:
 On Mon, Sep 29, 2008 at 8:21 AM,  [EMAIL PROTECTED] wrote:
 -rw--- 1 postgres root 1073741824 Sep 29 15:15 2613.77
 -rw--- 1 postgres root 1073741824 Sep 29 15:15 2613.83
 
 What are these files and why have they suddenly started to be created
 and why so large?

 PostgreSQL automatically splits table files into 1G chunks so it can
 run on OSes with file size limits.  These are part of the table
 identified by the oid 2613.  You can find it by looking in pg_class.

Actually relfilenode, not oid, is the thing to look at.  But a table
with such a small relfilenode number must be a system catalog, and a
quick look shows that in any recent PG version it's pg_largeobject.

So the answer is you've got something that's gone hog-wild on creating
large objects and not deleting them; or maybe the application *is*
deleting them but pg_largeobject isn't getting vacuumed.
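
A sketch of both checks, run while connected to the affected database 
(vacuuming a system catalog requires superuser):

-- which relation do the 2613.* segment files belong to?
SELECT relname FROM pg_class WHERE relfilenode = 2613;

-- reclaim space from deleted large objects
VACUUM VERBOSE pg_largeobject;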

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Counting unique rows as an aggregate.

2008-09-29 Thread Tom Lane
r_musta [EMAIL PROTECTED] writes:
 However, this is starting to become too slow (as there are about 10 of
 these queries), and therefore I need to write an aggregate function
 which lets me do:

 SELECT count_unique(make), count_unique(color) from table WHERE criteria;

I must be missing something, because I don't see why you couldn't do

SELECT count(distinct make), count(distinct color) from table WHERE criteria;

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't cast from char to integer...

2008-09-29 Thread Mike Diehl
That fixed it.  If you are ever in Albuquerque, NM., let me know.  I'll be 
happy to buy you a beer.

Mike.


On Monday 29 September 2008 05:26:43 pm Tom Lane wrote:
 Mike Diehl [EMAIL PROTECTED] writes:
  For example, I'm trying to do this:
 
  insert into data
  select cast('666' as integer) as block_number, phone as phone_number,
  name from demo_q;
 
  When I run this query, I get:
 
  ERROR:  column "block_number" is of type integer but expression is of
  type character varying

 I don't get that ...

 I wonder whether the problem is that you are expecting the "as" labels
 in the SELECT to do something, like make the data go into the
 correspondingly named columns of the target table.  But they don't do
 anything.  This query is going to try to assign 666, phone, and name to
 the first three columns of data, in order.  I'll bet it is actually
 complaining about the second or third column, not the 666 at all.

 If that's it, the syntax you are looking for is

 insert into data (block_number, phone_number, name)
   select cast('666' as integer), phone_number, name
 from demo_q;

   regards, tom lane



-- 
Mike Diehl

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Announcing PostgreSQL Certification Project Job Task Analysis Survey

2008-09-29 Thread Gregory S. Youngblood
Hello!

 

The PostgreSQL Community is working to create a community driven and
endorsed PostgreSQL Certification. This effort is spearheaded by the
PostgreSQL Certification Project (http://www.postgresqlcertification.org).
The primary focus of the project is the development and support of a series
of tiered certifications that provide a standardized method of identifying
the knowledge and skills of database professionals working with PostgreSQL.

 

At this time, the PostgreSQL Certification Project is pleased to announce
the availability of a Job Task Analysis survey that will help shape the
content and form the initial certifications will take. This survey is
publicly available and all are encouraged to participate.

 

To take the survey, please register at:

http://www.postgresqlcertification.org/jta

 

The registration process takes just a couple of minutes. Once registration
is complete and you are logged in, click "JTA" in the menu followed by
"participate in the survey"
(http://www.postgresqlcertification.org/job_task_analysis). The survey takes
approximately 30 minutes to complete.

 

We ask everyone to please take the time to complete the survey and to refer
an employer, manager, colleague, or anyone else that may rely on a
PostgreSQL database in some manner to take it as well. With the help of the
community we will be able to reach as wide an audience as possible. 

 

If you are interested in joining the PostgreSQL Certification Project,
please visit http://lists.postgresqlcertification.org/mailman/listinfo/cert/
and subscribe to the mailing list after taking the survey.

 

 



Re: [GENERAL] Replication using slony-I

2008-09-29 Thread Abdul Rahman
Thanks a lot for replying!

Here is complete information for consideration:
In order to perform replication, I am following the steps present in 

http://people.planetpostgresql.org/dpage/index.php?/archives/51-Setting-up-Slony-I-with-pgAdmin.html

and have successfully reached step #13. I have the Slony-I and pgAdmin III 
versions included with postgresql-8.2, and the platform is Windows XP. But 
here I am not getting the New Subscription option when I right-click on the 
Subscription Set.

I also found that the command in step #4,

psql -U postgres slave1 < schema.sql

is not working, because I am not getting the tables in either slave. So I 
suppose I have to run the command from step #2,

 pgbench -i -U postgres master

for slave1 and slave2 instead of making an sql script and running it. I am 
going to restart my work in this way. Glyn Astill, you may explore slonik 
in detail so that I may use it now. So many thanks again.
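
For reference, the usual shape of those two preparation steps (database 
names as in the tutorial; adjust to taste) is to initialize the master 
with pgbench and then copy only the schema to each slave:

 pgbench -i -U postgres master                # create and populate the test tables
 pg_dump -s -U postgres master > schema.sql   # dump schema only, no data
 psql -U postgres slave1 < schema.sql
 psql -U postgres slave2 < schema.sql

As I understand the tutorial, Slony needs the table definitions to exist 
on the slaves, but the data itself is copied by the initial subscription, 
so pgbench -i should not be run against the slaves.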



  

Re: [GENERAL] Counting unique rows as an aggregate.

2008-09-29 Thread Richard Broersma
On Mon, Sep 29, 2008 at 4:36 PM, Tom Lane [EMAIL PROTECTED] wrote:

 SELECT count(distinct make), count(distinct color) from table WHERE 
 criteria;

Is this in the SQL spec?  I didn't know Agg functions could do this?


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Counting unique rows as an aggregate.

2008-09-29 Thread Klint Gore

Richard Broersma wrote:

On Mon, Sep 29, 2008 at 4:36 PM, Tom Lane [EMAIL PROTECTED] wrote:

 SELECT count(distinct make), count(distinct color) from table WHERE 
criteria;

Is this in the SQL spec?  I didn't know Agg functions could do this?
  


Yes. SQL92 6.5

 <set function specification> ::=
     COUNT <left paren> <asterisk> <right paren>
   | <general set function>

 <general set function> ::=
     <set function type>
         <left paren> [ <set quantifier> ] <value expression> <right paren>

 <set function type> ::=
     AVG | MAX | MIN | SUM | COUNT

 <set quantifier> ::= DISTINCT | ALL


I never realised that you could use it for more than count though.  In 
8.3.3, it worked for sum/avg/stddev/variance. 
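
For instance, given a hypothetical table t with a numeric column x:

  SELECT count(DISTINCT x), sum(DISTINCT x), avg(DISTINCT x) FROM t;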


klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ODBC driver crash

2008-09-29 Thread Craig Ringer
Hi

I'm currently encountering an interesting issue with the PostgreSQL ODBC
driver on Windows. I'm unlucky enough to have to use MS Access 2007 for
a project at the moment, with PostgreSQL providing the storage through
ODBC. It all works as well as can be expected from Access, except that
the application crashes when it quits.

The crash only occurs when the PostgreSQL ODBC driver has been used. If
the Access database file is opened but only Access-internal tables are
operated on, there's no crash when Access quits.

The crash occurs whether a file, system, or user DSN is being used.
I can reproduce this on two different machines. It happens with or
without SSL in use. It affects any Access 2007 database with a
PostgreSQL ODBC connection in use, including a newly created blank
database with a freshly linked table.

Because Windows is full of stripped binaries and limited debugging tools
I'm not able to get anything as useful as a backtrace at the moment.
I've had a quick look and strangely the crash address doesn't seem to be
part of the address space mapped to any DLLs. Windows Crash Reporter
seems to agree, in that it indicates that the module name is "unknown".

I'm at a bit of a loss. I'm used to debugging problems on platforms with
 luxuries like symbol names in binaries, or at least my own code on
win32 where I can build it for debugging.

Anybody else seeing, or seen, similar issues?

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general