Re: [GENERAL] Query_time SQL as a function w/o creating a new type

2007-10-25 Thread Ow Mun Heng

On Fri, 2007-10-26 at 08:35 +0200, Reg Me Please wrote:
> You could try this:
> 
> 
> CREATE OR REPLACE FUNCTION foo( out procpid integer, out client_addr inet, 
> out 
> query_time interval, out current_query text )
> RETURNS SETOF RECORD AS $BODY$
> ...
> $BODY$ LANGUAGE PLPGSQL VOLATILE;


Somehow it doesn't work..

CREATE OR REPLACE FUNCTION query_time2( out procpid integer, out
client_addr inet, out 
query_time interval, out current_query text ) AS
  --RETURNS SETOF RECORD AS
$BODY$

BEGIN
SELECT procpid, client_addr, (now() - query_start),
current_query
FROM pg_stat_activity
ORDER BY (now() - query_start) DESC;
RETURN;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;


ERROR:  query has no destination for result data
HINT:  If you want to discard the results of a SELECT, use PERFORM
instead.
CONTEXT:  PL/pgSQL function "query_time2" line 3 at SQL statement
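
A version that does work on 8.2 would be something along these lines (a
sketch; drop the earlier attempt first, since the return type changes.
RETURN QUERY only arrives in 8.3, so each row is pushed out with RETURN
NEXT, and the table alias keeps the OUT parameters from shadowing the
column names):

CREATE OR REPLACE FUNCTION query_time2( out procpid integer, out client_addr inet,
out query_time interval, out current_query text )
  RETURNS SETOF RECORD AS
$BODY$
BEGIN
FOR procpid, client_addr, query_time, current_query IN
    SELECT s.procpid, s.client_addr, now() - s.query_start, s.current_query
    FROM pg_stat_activity s
    ORDER BY now() - s.query_start DESC
LOOP
    RETURN NEXT;
END LOOP;
RETURN;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE;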


> 
> On Friday 26 October 2007 08:24:46 Ow Mun Heng wrote:
> > Hi,
> >
> > After Erik Jones gave me the idea for this, I started to become lazy to
> > have to type this into the sql every time I want to see how long a query
> > is taking.. so, I thought that I'll create a function to do just that..
> > I ended up with..
> >
> > CREATE OR REPLACE FUNCTION query_time()
> >   RETURNS SETOF query_time AS
> > $BODY$
> > DECLARE
> > rec RECORD;
> >
> > BEGIN
> > FOR rec IN
> > SELECT procpid, client_addr, now() - query_start as query_time,
> > current_query
> > FROM pg_stat_activity
> > ORDER BY query_time DESC
> > LOOP
> > RETURN NEXT rec;
> > END LOOP;
> > RETURN;
> > END;
> >
> > $BODY$
> >   LANGUAGE 'plpgsql' VOLATILE;
> >
> >
> > But the issue with the above is that I need to create a type.
> >
> > CREATE TYPE query_time AS
> >(procpid integer,
> > client_addr inet,
> > query_time interval,
> > current_query text);
> >
> > Is there a method which I'm able to return a result set w/o needing to
> > declare/create a new type.
> >
> > I tried to use language 'sql' but it only returned me 1 column, with all
> > the fields concatenated together with comma separating the fields.
> >
> >
> >
> >
> >



Re: [GENERAL] Query_time SQL as a function w/o creating a new type

2007-10-25 Thread Reg Me Please
You could try this:


CREATE OR REPLACE FUNCTION foo( out procpid integer, out client_addr inet, out 
query_time interval, out current_query text )
RETURNS SETOF RECORD AS $BODY$
...
$BODY$ LANGUAGE PLPGSQL VOLATILE;

(Thanks to Joe Conway for showing this in tablefunc!)
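
Note that the function then has to be called in the FROM clause; calling it
in the select list is exactly what produces the single column with all the
fields comma-separated that you saw. A sketch:

SELECT * FROM foo();   -- four separate columns
SELECT foo();          -- one composite column per row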


On Friday 26 October 2007 08:24:46 Ow Mun Heng wrote:
> Hi,
>
> After Erik Jones gave me the idea for this, I started to become lazy to
> have to type this into the sql every time I want to see how long a query
> is taking.. so, I thought that I'll create a function to do just that..
> I ended up with..
>
> CREATE OR REPLACE FUNCTION query_time()
>   RETURNS SETOF query_time AS
> $BODY$
> DECLARE
> rec RECORD;
>
> BEGIN
>   FOR rec IN
>   SELECT procpid, client_addr, now() - query_start as query_time,
>   current_query
>   FROM pg_stat_activity
>   ORDER BY query_time DESC
>   LOOP
>   RETURN NEXT rec;
>   END LOOP;
>   RETURN;
> END;
>
> $BODY$
>   LANGUAGE 'plpgsql' VOLATILE;
>
>
> But the issue with the above is that I need to create a type.
>
> CREATE TYPE query_time AS
>(procpid integer,
> client_addr inet,
> query_time interval,
> current_query text);
>
> Is there a method which I'm able to return a result set w/o needing to
> declare/create a new type.
>
> I tried to use language 'sql' but it only returned me 1 column, with all
> the fields concatenated together with comma separating the fields.
>
>
>
>
>



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Thomas Finneid


Jorge Godoy wrote:

Em Thursday 25 October 2007 13:36:09 [EMAIL PROTECTED] escreveu:

Regarding dumps and restore; the system will always be offline during
those operations and it will be so for several days, because a new project
might start at another location in the world, so the travelling there
takes time. In the mean time, all admin tasks can be performed without
problems, even backup operations that take 3 days.


This sounds a lot like oil exploration...  Data gathered from sensors is 
usually a few TBs, explosions have definite intervals, interference between 
sensors, etc.


Sorry, I can't talk about what the work actually is; a colleague of mine 
just got reprimanded merely for mentioning he was working on a compression 
library. The manager thought he was revealing *too much* :)


Putting the data inside the DB fast is part of the solution, getting it out 
fast to be processed / analyzed is another part.


But you are right about that part, things need to be fast.

regards

thomas




Re: [GENERAL] select count() out of memory

2007-10-25 Thread Thomas Finneid



Scott Marlowe wrote:

It may well be that one big table and partial indexes would do what
you want.  Did you explore partial indexes against one big table?
That can be quite handy.


Hmm, interesting, I suppose it could work. Thanks for the suggestion, 
I'll keep it in mind.


regards

thomas




Re: [GENERAL] select count() out of memory

2007-10-25 Thread Thomas Finneid



Gregory Stark wrote:

Tom's point is that if you have 55k tables then just *finding* the newest
child table is fairly expensive. You're accessing a not insignificant-sized
index and table of tables. And the situation is worse when you consider the
number of columns all those tables have, all the indexes those tables have,
all the column keys those indexes have, etc.


Yes, I got that. But I name the child tables so that when my server 
receives read requests, I retrieve details from the request to be able 
to figure out the exact child table name, without the system needing to 
do any internal searches to find the newest table.



Nonetheless you've more or less convinced me that you're not completely nuts.


thank you for only regarding me as somewhat nuts :)


I would suggest not bothering with inheritance though. Inheritance imposes
additional costs to track the inheritance relationships. For your purposes you
may as well just create separate tables and not bother trying to use
inheritance.


As addressed in a previous reply, I find inheritance better for a couple 
of practical reasons.



If it's practical to use partitions, granularity does not come into the
equation.


Uhm, yes it does. This is engineering, it's all about trade-offs. Having 55k
tables will have costs and benefits. I think it's a bit early to dismiss the
costs. Keep in mind that profiling them may be a bit tricky since they occur
during planning and DDL that you haven't finished experimenting with yet. The
problem you just ran into is just an example of the kind of costs it imposes.


See answer on why granularity is not relevant for my case.


You should also consider some form of compromise with separate tables but at a
lower level of granularity. Perhaps one partition per day instead of one per
30s. You could drop a partition when all the keys in it are marked as dead.


The structure of the data is divided along a discrete timeline, so every 
predefined x seconds a whole new bunch of data arrives, and all of it 
belongs in a single partition.



regards

thomas



[GENERAL] Query_time SQL as a function w/o creating a new type

2007-10-25 Thread Ow Mun Heng
Hi,

After Erik Jones gave me the idea for this, I started to become lazy to
have to type this into the sql every time I want to see how long a query
is taking.. so, I thought that I'll create a function to do just that..
I ended up with..

CREATE OR REPLACE FUNCTION query_time()
  RETURNS SETOF query_time AS
$BODY$
DECLARE
rec RECORD;

BEGIN
FOR rec IN
SELECT procpid, client_addr, now() - query_start as query_time,
current_query
FROM pg_stat_activity
ORDER BY query_time DESC
LOOP
RETURN NEXT rec;
END LOOP;
RETURN;
END;

$BODY$
  LANGUAGE 'plpgsql' VOLATILE;


But the issue with the above is that I need to create a type.

CREATE TYPE query_time AS
   (procpid integer,
client_addr inet,
query_time interval,
current_query text);

Is there a method which I'm able to return a result set w/o needing to 
declare/create a new type.

I tried to use language 'sql' but it only returned me 1 column, with all the 
fields concatenated together with
comma separating the fields.







Re: [GENERAL] Selecting K random rows - efficiently!

2007-10-25 Thread John D. Burger
As far as I can tell, all of the proposed solutions lack sample  
independence.  Take the OP's suggested approach of doing something  
like this:


  SELECT * FROM mydata
  WHERE mydata.random_number >= (SELECT RANDOM() OFFSET 0)
  ORDER BY mydata.random_number ASC LIMIT 100

All you're doing is picking random =subsequences= from the same  
permutation of the original data.  This is not the same as a random  
sample.  That is, if rows A and B are adjacent in the permutation,  
then if A is in the sample, B will also be in it with very high  
probability, depending on the size of the sample.  Another way of  
saying this is that the first element of the sample is selected  
randomly, the rest are completely deterministic.  In a true random  
sample, different elements are selected independently.


On the other hand, ORDER BY RANDOM() does indeed construct true  
random samples, because it makes a new permutation every time.  If  
you want to use the random_number column approach, then you need to  
do the same.  You can accomplish this by sampling from the original  
permutation repeatedly, doing the above with LIMIT 1 as many times as  
you need.  Yes this is more costly, but TANSTAAFL.
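
A sketch of that repeated-draw approach, reusing the OP's mydata table and
its random_number column (each statement picks one row independently; run it
K times and discard duplicates if you need sampling without replacement):

  SELECT * FROM mydata
  WHERE mydata.random_number >= (SELECT RANDOM() OFFSET 0)
  ORDER BY mydata.random_number ASC LIMIT 1;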


As is often observed, it's easy to create the appearance of  
randomness, harder to accomplish in reality.


- John Burger
  MITRE



Re: [GENERAL] Selecting tree data

2007-10-25 Thread D. Dante Lorenso

Pat Maddox wrote:

I'd like to store some tree data in my database.  I want to be able to
sort the data but maintain a tree structure. [...]
Is it possible to pull all the data like that with one query?  How do
I need to structure the table, and what query do I have to run in
order to make it happen?


You need to look at the connectby function which is part of contrib.
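
For example (a sketch, assuming tablefunc is installed and a table
mytree(id integer, parent_id integer); the '1' is the key of the row to
start from, and 0 means no depth limit):

SELECT * FROM connectby('mytree', 'id', 'parent_id', '1', 0)
  AS t(id integer, parent_id integer, level integer);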

-- Dante



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Jorge Godoy
Em Thursday 25 October 2007 13:36:09 [EMAIL PROTECTED] escreveu:
>
> Regarding dumps and restore; the system will always be offline during
> those operations and it will be so for several days, because a new project
> might start at another location in the world, so the travelling there
> takes time. In the mean time, all admin tasks can be performed without
> problems, even backup operations that take 3 days.

This sounds a lot like oil exploration...  Data gathered from sensors is 
usually a few TBs, explosions have definite intervals, interference between 
sensors, etc.

Putting the data inside the DB fast is part of the solution, getting it out 
fast to be processed / analyzed is another part.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




[GENERAL] [ANN]VTD-XML 2.2

2007-10-25 Thread jimmy Zhang
XimpleWare is proud to announce the release of version 2.2 of VTD-XML, the 
next generation XML parsers/indexer/slicer/editor. This release significantly 
expands VTD-XML's ability to slice, split, edit and incrementally update the 
XML documents. To this end, we introduce the concept of namespace-compensated 
element fragment. This release also adds VTD+XML index writing capability to 
the VTD Navigator class. Other enhancements in this release include index size 
pre-computation, support for HTTP get, and some bug fixes.



To download the latest release, please go to 
http://sourceforge.net/project/showfiles.php?group_id=110612.


[GENERAL] Selecting tree data

2007-10-25 Thread Pat Maddox
I'd like to store some tree data in my database.  I want to be able to
sort the data but maintain a tree structure.  So for example, if I
order by a timestamp, I should get

- parent1
   * child1
   * child2
   * child3
- parent2
  * child4
  * child5

and if I reverse the sort order, I get

- parent2
  * child5
  * child4
- parent1
  * child3
  * child2
  * child1

Is it possible to pull all the data like that with one query?  How do
I need to structure the table, and what query do I have to run in
order to make it happen?

Pat



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Ralph Smith


On Oct 25, 2007, at 3:45 PM, Devrim GÜNDÜZ wrote:


./psql template1 -p 5433


=
[EMAIL PROTECTED]:/usr/lib/postgresql/7.4/bin$ ./psql -p 5433 template1

Welcome to psql 7.4.13, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help on internal slash commands
   \g or terminate with semicolon to execute query
   \q to quit

template1=# \l
        List of databases
   Name    |  Owner   | Encoding
-----------+----------+----------
 template0 | postgres | UNICODE
 template1 | postgres | UNICODE
(2 rows)

template1=# select * from pg_database ;
  datname  | datdba | encoding | datistemplate | datallowconn | datlastsysoid | datvacuumxid | datfrozenxid | datpath | datconfig |         datacl
-----------+--------+----------+---------------+--------------+---------------+--------------+--------------+---------+-----------+--------------------------
 template1 |      1 |        6 | t             | t            |         17140 |         7251 |   3221232724 |         |           | {postgres=C*T*/postgres}
 template0 |      1 |        6 | t             | f            |         17140 |          464 |          464 |         |           | {postgres=C*T*/postgres}
(2 rows)

Whoo hoo!
Now we're back to square one.

I can re-make postgres on v7.4  (already done)
go back to my old dumpall and use that 7.4 dumpall to load PG 7.4

Remove the test DB from 8.2 as the dumpall that loaded it was via 7.4  
not 8.2

use 8.2's dumpall to dump the 7.4
use 8.2's psql to load in that dump


Ralph Smith
[EMAIL PROTECTED]
=



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Devrim GÜNDÜZ
Hi,

On Thu, 2007-10-25 at 15:36 -0700, Ralph Smith wrote:
> [EMAIL PROTECTED]:/usr/lib/postgresql/7.4/bin$ ./psql -p 5433
> psql: FATAL:  database "postgres" does not exist 

7.4 does not have a postgres database. Use ./psql template1 -p 5433.

-- 
Devrim GÜNDÜZ
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/






Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Ralph Smith


On Oct 25, 2007, at 1:57 PM, Tom Lane wrote:


Ralph Smith <[EMAIL PROTECTED]> writes:

On Oct 25, 2007, at 1:09 PM, Tom Lane wrote:

Ummm ... those are the column headings for 8.2 pg_database, not 7.4.
You're looking at the wrong postmaster.



[EMAIL PROTECTED]:/usr/lib/postgresql/7.4/bin$ ./psql
Welcome to psql 7.4.13, the PostgreSQL interactive terminal.



Type:  \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit



postgres=# select * from pg_database ;
   datname  | datdba | encoding | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datconfig |   datacl


Still the wrong column headings :-(.  What you have above is a 7.4 psql
connecting to an 8.2 postmaster, no doubt because the default port
number wired into it is the 8.2 installation's not the 7.4 one's.
You'll need to explicitly give a -p switch to psql to connect to the
correct postmaster.

regards, tom lane

===
[EMAIL PROTECTED]:/usr/lib/postgresql/7.4/bin$ ./psql -p 5433
psql: FATAL:  database "postgres" does not exist

Looks like I have to re-install PG 7.4.  There's only 41 MB there.

On Ubuntu, what are my options?
Use Synaptic to uninstall?

Once that's done, any caveats WRT the subsequent install?

All this is so I can practice and make mistakes here on test boxes 
before we move the real one from 7.4 to 8.2.


Thanks a bunch Tom.

Ralph Smith
[EMAIL PROTECTED]
=




Re: [GENERAL] select count() out of memory

2007-10-25 Thread Gregory Stark

"Thomas Finneid" <[EMAIL PROTECTED]> writes:

>> What you're
>> effectively doing is replacing the upper levels of a big table's indexes
>> with lookups in the system catalogs, which in point of fact is a
>> terrible tradeoff from a performance standpoint.
>
> Only if you assume I use all data in all tables all the time. But as I have
> explained in other replies recently, most of the times only data from the
> newest child table is used.

Tom's point is that if you have 55k tables then just *finding* the newest
child table is fairly expensive. You're accessing a not insignificant-sized
index and table of tables. And the situation is worse when you consider the
number of columns all those tables have, all the indexes those tables have,
all the column keys those indexes have, etc.

Nonetheless you've more or less convinced me that you're not completely nuts.

I would suggest not bothering with inheritance though. Inheritance imposes
additional costs to track the inheritance relationships. For your purposes you
may as well just create separate tables and not bother trying to use
inheritance.

> If it's practical to use partitions, granularity does not come into the
> equation.

Uhm, yes it does. This is engineering, it's all about trade-offs. Having 55k
tables will have costs and benefits. I think it's a bit early to dismiss the
costs. Keep in mind that profiling them may be a bit tricky since they occur
during planning and DDL that you haven't finished experimenting with yet. The
problem you just ran into is just an example of the kind of costs it imposes.

You should also consider some form of compromise with separate tables but at a
lower level of granularity. Perhaps one partition per day instead of one per
30s. You could drop a partition when all the keys in it are marked as dead.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Scott Marlowe
On 10/25/07, Thomas Finneid <[EMAIL PROTECTED]> wrote:
>
> Tom Lane wrote:
>
> > You are making a common beginner error, which is to suppose that N
> > little tables are better than one big one.  They are not.
>
> Well that depends on how you define better. For my purposes, it is better.
>
> > What you're
> > effectively doing is replacing the upper levels of a big table's indexes
> > with lookups in the system catalogs, which in point of fact is a
> > terrible tradeoff from a performance standpoint.
>
> Only if you assume I use all data in all tables all the time. But as I
> have explained in other replies recently, most of the times only data
> from the newest child table is used.
>
> I did the performance tests before deciding on the design and having it
> all in one large table would not perform at all within requirements. The
> reason was that the indexes for the ever growing table would take longer
> and longer to update at each insert.
>
> When I use partitions, or child tables, I can use COPY to insert the
> data into the new child table and then add the indexes to the single
> table only. That was, by far, the fastest solution.
>
> > From a database-theory standpoint, if all this data is alike then you
> > should have it all in one big table.
>
> Then, what is the point with partitions, if you can not use it to
> somehow separate logically similar data into different partitions because
> one has a need to do so? Of course I could have put it in a single
> table, had it not been for the performance. I could have used a discrete
> timestamp to separate the data, but why? Partitions are more practical.
>
> > There are certain practical cases
> > where it's worth partitioning, but not at the level of granularity that
> > you are proposing.
>
> If it's practical to use partitions, granularity does not come into the
> equation.

It may well be that one big table and partial indexes would do what
you want.  Did you explore partial indexes against one big table?
That can be quite handy.

i.e

create table mybigtable (ts timestamp, id int primary key, row1
numeric, ..., rown numeric);
populate with a year's worth of data, i.e. 100M rows or something like that
create index mybigtable_20071025 on mybigtable (id) where ts between
'2007-10-25 00:00:00' and '2007-10-25 23:59:59.9';

Repeat as needed.  Now, when you want something from the table, you
can just ask for it with a timestamp range and it will hit the index,
and the table, all pretty fast.

Worth a look I guess.
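
i.e. something like this hypothetical lookup, which repeats the index's
timestamp predicate so the planner can match the partial index:

select * from mybigtable
where ts between '2007-10-25 00:00:00' and '2007-10-25 23:59:59.9'
  and id = 12345;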



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Thomas Finneid


Tom Lane wrote:


You are making a common beginner error, which is to suppose that N
little tables are better than one big one.  They are not.  


Well that depends on how you define better. For my purposes, it is better.


What you're
effectively doing is replacing the upper levels of a big table's indexes
with lookups in the system catalogs, which in point of fact is a
terrible tradeoff from a performance standpoint.


Only if you assume I use all data in all tables all the time. But as I 
have explained in other replies recently, most of the times only data 
from the newest child table is used.


I did the performance tests before deciding on the design and having it 
all in one large table would not perform at all within requirements. The 
reason was that the indexes for the ever growing table would take longer 
and longer to update at each insert.


When I use partitions, or child tables, I can use COPY to insert the 
data into the new child table and then add the indexes to the single 
table only. That was, by far, the fastest solution.
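
The load pattern looks roughly like this (a sketch with made-up names):

CREATE TABLE readings_20071025_1200 () INHERITS (readings);
COPY readings_20071025_1200 FROM '/data/chunk_20071025_1200.dat';
CREATE INDEX readings_20071025_1200_idx ON readings_20071025_1200 (sensor_id);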



From a database-theory standpoint, if all this data is alike then you
should have it all in one big table.  


Then, what is the point with partitions, if you can not use it to 
somehow separate logically similar data into different partitions because 
one has a need to do so? Of course I could have put it in a single 
table, had it not been for the performance. I could have used a discrete 
timestamp to separate the data, but why? Partitions are more practical.



There are certain practical cases
where it's worth partitioning, but not at the level of granularity that
you are proposing.  


If it's practical to use partitions, granularity does not come into the 
equation.


regards

thomas





Re: [GENERAL] execute pg_dump via python

2007-10-25 Thread Trevor Talbot
On 10/25/07, Garry Saddington <[EMAIL PROTECTED]> wrote:
> I am using zope on windows with an external python method to backup my
> database. I am struggling to run the following command:
>
> pg_dump.exe database > file

> subprocess.Popen(['c:/dir/dir/pg_dump.exe','database','>','c:/dir/dir/output
> file'])
>
> The command string works perfectly in a terminal. Does anyone know how I
> should be doing this? I get no errors or traceback when I try the method
> through Zope.

This is probably a Python question more than anything else.  I don't
know Python, but two things come to mind:

* It probably does not open a command shell, so file redirection ('>')
does not work.  Use pg_dump's -f option instead.

* Things named "popen" usually open a pair of pipes for programmatic
input and output, meaning your app is expected to read the output of
pg_dump directly, as if you were going to display it on the screen or
write to a file yourself.
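
Putting those two points together, something like this (an untested sketch,
reusing the paths from the original post):

import subprocess

# -f makes pg_dump write the output file itself, so no shell redirection
# is needed; capture stderr so failures are visible.
proc = subprocess.Popen(
    ['c:/dir/dir/pg_dump.exe', '-f', 'c:/dir/dir/output_file', 'database'],
    stderr=subprocess.PIPE)
_, err = proc.communicate()
if proc.returncode != 0:
    print err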

I'd suggest asking in a Python group about executing external
processes and checking for errors.



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Tom Lane
Ralph Smith <[EMAIL PROTECTED]> writes:
> On Oct 25, 2007, at 1:09 PM, Tom Lane wrote:
>> Ummm ... those are the column headings for 8.2 pg_database, not 7.4.
>> You're looking at the wrong postmaster.

> [EMAIL PROTECTED]:/usr/lib/postgresql/7.4/bin$ ./psql
> Welcome to psql 7.4.13, the PostgreSQL interactive terminal.

> Type:  \copyright for distribution terms
> \h for help with SQL commands
> \? for help on internal slash commands
> \g or terminate with semicolon to execute query
> \q to quit

> postgres=# select * from pg_database ;
>    datname  | datdba | encoding | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datconfig |   datacl

Still the wrong column headings :-(.  What you have above is a 7.4 psql
connecting to an 8.2 postmaster, no doubt because the default port
number wired into it is the 8.2 installation's not the 7.4 one's.
You'll need to explicitly give a -p switch to psql to connect to the
correct postmaster.

regards, tom lane



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Ralph Smith

On Oct 25, 2007, at 1:09 PM, Tom Lane wrote:


Ralph Smith <[EMAIL PROTECTED]> writes:

On Oct 25, 2007, at 12:24 PM, Tom Lane wrote:

Hmph.  Nothing obviously wrong there, except that it's not finding
anything except template1.  What does "select * from pg_database"
show?



postgres=# select * from pg_database ;
   datname  | datdba | encoding | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datconfig |   datacl
-----------+--------+----------+---------------+--------------+--------------+---------------+--------------+---------------+-----------+----------


Ummm ... those are the column headings for 8.2 pg_database, not 7.4.
You're looking at the wrong postmaster.

regards, tom lane

===
Oops. And I thought I got over NOT using the full path to the  
commands...

Sorry about that.

[EMAIL PROTECTED]:/usr/lib/postgresql/7.4/bin$ ./psql
Welcome to psql 7.4.13, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help on internal slash commands
   \g or terminate with semicolon to execute query
   \q to quit

postgres=# select * from pg_database ;
  datname  | datdba | encoding | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datconfig |               datacl
-----------+--------+----------+---------------+--------------+--------------+---------------+--------------+---------------+-----------+-------------------------------------
 postgres  |     10 |        6 | f             | t            |           -1 |         10818 |          524 |          1663 |           |
 template1 |     10 |        6 | t             | t            |           -1 |         10818 |          524 |          1663 |           | {=c/postgres,postgres=CTc/postgres}
 template0 |     10 |        6 | t             | f            |           -1 |         10818 |          524 |          1663 |           | {=c/postgres,postgres=CTc/postgres}
 airburst  |  17032 |        0 | f             | t            |           -1 |         10818 |          524 |          1663 |           |
(4 rows)

Ralph Smith
[EMAIL PROTECTED]
=



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Tom Lane
Ralph Smith <[EMAIL PROTECTED]> writes:
> On Oct 25, 2007, at 12:24 PM, Tom Lane wrote:
>> Hmph.  Nothing obviously wrong there, except that it's not finding
>> anything except template1.  What does "select * from pg_database"
>> show?

> postgres=# select * from pg_database ;
>   datname  | datdba | encoding | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datconfig |   datacl
> -----------+--------+----------+---------------+--------------+--------------+---------------+--------------+---------------+-----------+----------

Ummm ... those are the column headings for 8.2 pg_database, not 7.4.
You're looking at the wrong postmaster.

regards, tom lane



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Ralph Smith

On Oct 25, 2007, at 12:24 PM, Tom Lane wrote:


Ralph Smith <[EMAIL PROTECTED]> writes:

On Oct 25, 2007, at 10:13 AM, Tom Lane wrote:

Works for me.  What does the -v give you on stderr?  Also,
7.4.what-exactly and 8.2.what-exactly?



Sorry for the bulk here...


Hmph.  Nothing obviously wrong there, except that it's not finding
anything except template1.  What does "select * from pg_database"
show?

regards, tom lane

==


postgres=# \l
        List of databases
   Name    |  Owner   | Encoding
-----------+----------+-----------
 airburst  | root     | SQL_ASCII
 postgres  | postgres | UTF8
 template0 | postgres | UTF8
 template1 | postgres | UTF8
(4 rows)


postgres=# select * from pg_database ;
  datname  | datdba | encoding | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datconfig |               datacl
-----------+--------+----------+---------------+--------------+--------------+---------------+--------------+---------------+-----------+-------------------------------------
 postgres  |     10 |        6 | f             | t            |           -1 |         10818 |          524 |          1663 |           |
 template1 |     10 |        6 | t             | t            |           -1 |         10818 |          524 |          1663 |           | {=c/postgres,postgres=CTc/postgres}
 template0 |     10 |        6 | t             | f            |           -1 |         10818 |          524 |          1663 |           | {=c/postgres,postgres=CTc/postgres}
 airburst  |  17032 |        0 | f             | t            |           -1 |         10818 |          524 |          1663 |           |
(4 rows)

From 7.4's postgresql.conf in /etc/postgresql/7.4/main:

#---------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#---------------------------------------------------------------------------

# - Connection Settings -

tcpip_socket = true
max_connections = 100
        # note: increasing max_connections costs about 500 bytes of shared
        # memory per connection slot, in addition to costs from shared_buffers
        # and max_locks_per_transaction.
#superuser_reserved_connections = 2
port = 5433
unix_socket_directory = '/var/run/postgresql'
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal
#virtual_host = ''              # what interface to listen on; defaults to any


From 8.2's postgresql.conf in /etc/postgresql/8.2/main:

#---------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#---------------------------------------------------------------------------

# - Connection Settings -

#listen_addresses = 'localhost' # what IP address(es) to listen on;
                                # comma-separated list of addresses;
                                # defaults to 'localhost', '*' = all
                                # (change requires restart)
port = 5432                     # (change requires restart)




I certainly can use some help!

Ralph Smith
[EMAIL PROTECTED]
=



Re: [GENERAL] subversion support?

2007-10-25 Thread Roberts, Jon
That is awesome.  Can it be added to pga3?


Jon

-Original Message-
From: Dave Page [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 25, 2007 2:11 PM
To: Gregory Stark
Cc: Roberts, Jon; pgsql-general@postgresql.org
Subject: Re: subversion support?



> --- Original Message ---
> From: Gregory Stark <[EMAIL PROTECTED]>
> To: "Dave Page" <[EMAIL PROTECTED]>
> Sent: 25/10/07, 19:06:12
> Subject: Re: subversion support?
> 
> The situation is complicated somewhat by the SQL "ALTER TABLE" and so on
> commands which you need to use instead of just reissuing the CREATE TABLE
> command.

From memory, pga2 used to log the reverse engineered DDL after a change, as
well as the SQL used to make that change. It then allowed you to generate
scripts to update your production schemas from one version to another. It
also allowed you to view the history for an object, and rollback to a
previous version. Dropped objects were logged in the 'graveyard' from where
they could be resurrected.

/D



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Tom Lane
Ralph Smith <[EMAIL PROTECTED]> writes:
> On Oct 25, 2007, at 10:13 AM, Tom Lane wrote:
>> Works for me.  What does the -v give you on stderr?  Also,
>> 7.4.what-exactly and 8.2.what-exactly?

> Sorry for the bulk here...

Hmph.  Nothing obviously wrong there, except that it's not finding
anything except template1.  What does "select * from pg_database"
show?

regards, tom lane



Re: [GENERAL] subversion support?

2007-10-25 Thread Dave Page


> --- Original Message ---
> From: Gregory Stark <[EMAIL PROTECTED]>
> To: "Dave Page" <[EMAIL PROTECTED]>
> Sent: 25/10/07, 19:06:12
> Subject: Re: subversion support?
> 
> The situation is complicated somewhat by the SQL "ALTER TABLE" and so on
> commands which you need to use instead of just reissuing the CREATE TABLE
> command.

From memory, pga2 used to log the reverse engineered DDL after a change, as
well as the SQL used to make that change. It then allowed you to generate
scripts to update your production schemas from one version to another. It
also allowed you to view the history for an object, and rollback to a
previous version. Dropped objects were logged in the 'graveyard' from where
they could be resurrected.

/D



Re: [GENERAL] subversion support?

2007-10-25 Thread Scott Marlowe
On 10/25/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Dave Page" <[EMAIL PROTECTED]> writes:
>
> >> Complaint?  Who is complaining?
> >>
> >> I am simply asking if this feature that is rather common in other database
> >> development tools will ever be added to pgAdmin.
> >
> > pgAdmin II had change control. No-one ever really used it though so we never
> > bothered to implement it in pgAdmin III.
>
> Note that most database admins I've seen use change control by making a series
> of sql files with all the definitions they need to recreate the tables. They
> then use those sql files to drive the database, rather than the other way
> around. So you just need to track those sql files in your revision control
> system, and they're just plain text.
>
> The situation is complicated somewhat by the SQL "ALTER TABLE" and so on
> commands which you need to use instead of just reissuing the CREATE TABLE
> command.

that's what I do.  The fact that I can wrap my entire change control
script in begin / commit pairs means I don't have to worry about
ruining production if one step goes wrong.

Unlike some other large, expensive, commercial databases.

I like the idea of easy, SVN controlled DDL.  I'm just not sure it's
likely to be as easy as we wish when moving from dev to test to
production.



Re: [GENERAL] subversion support?

2007-10-25 Thread Gregory Stark
"Dave Page" <[EMAIL PROTECTED]> writes:

>> Complaint?  Who is complaining?  
>> 
>> I am simply asking if this feature that is rather common in other database
>> development tools will ever be added to pgAdmin.
>
> pgAdmin II had change control. No-one ever really used it though so we never
> bothered to implement it in pgAdmin III.

Note that most database admins I've seen use change control by making a series
of sql files with all the definitions they need to recreate the tables. They
then use those sql files to drive the database, rather than the other way
around. So you just need to track those sql files in your revision control
system, and they're just plain text.

The situation is complicated somewhat by the SQL "ALTER TABLE" and so on
commands which you need to use instead of just reissuing the CREATE TABLE
command.
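
For example, a table change then lands in revision control as a small
migration script rather than a re-issued CREATE TABLE (hypothetical):

-- 002_add_status_column.sql
BEGIN;
ALTER TABLE accounts ADD COLUMN status text DEFAULT 'active';
COMMIT;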

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] subversion support?

2007-10-25 Thread Magnus Hagander
Roberts, Jon wrote:
> So the long story short I'm getting is, "no it is not on the radar".  This
> is terribly ironic given the fact that pgAdmin is developed using source
> control but the code you write with the tool doesn't have any hooks into
> source control.

Actually, it is on my personal radar. But beware that I have a very very
longrange radar, and I have a lot of things that sit much higher up on
my priority list.

//Magnus



Re: [GENERAL] subversion support?

2007-10-25 Thread Dave Page


> --- Original Message ---
> From: "Roberts, Jon" <[EMAIL PROTECTED]>
> To: pgsql-general@postgresql.org
> Sent: 25/10/07, 17:35:32
> Subject: Re: [GENERAL] subversion support?
> 
> Complaint?  Who is complaining?  
> 
> I am simply asking if this feature that is rather common in other database
> development tools will ever be added to pgAdmin.

pgAdmin II had change control. No-one ever really used it though so we never 
bothered to implement it in pgAdmin III.

Regards, Dave



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Ralph Smith

==
On Oct 25, 2007, at 10:13 AM, Tom Lane wrote:


Ralph Smith <[EMAIL PROTECTED]> writes:
I want to use v8.2's pg_dumpall to export v7.4's data into a text  
file.



prompt:/usr/lib/postgresql/8.2/bin$ ./pg_dumpall -c -v -p 5433 >
myfile.txt


Works for me.  What does the -v give you on stderr?  Also,
7.4.what-exactly and 8.2.what-exactly?

regards, tom lane

==
Sorry for the bulk here...
---
[EMAIL PROTECTED]:/usr/lib/postgresql/8.2/bin$ ./pg_dumpall -c -v -p 5433
pg_dumpall: executing SET search_path = pg_catalog

--
-- PostgreSQL database cluster dump
--

-- Started on 2007-10-25 10:40:28 PDT

\connect postgres

SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET escape_string_warning = 'off';

pg_dumpall: executing SELECT usename as rolname, usesuper as  
rolsuper, true as rolinherit, usesuper as rolcreaterole, usecreatedb  
as rolcreatedb, usecatupd as rolcatupdate, true as rolcanlogin, -1 as  
rolconnlimit, passwd as rolpassword, valuntil as rolvaliduntil, null  
as rolcomment FROM pg_shadow UNION ALL SELECT groname as rolname,  
false as rolsuper, true as rolinherit, false as rolcreaterole, false  
as rolcreatedb, false as rolcatupdate, false as rolcanlogin, -1 as  
rolconnlimit, null::text as rolpassword, null::abstime as  
rolvaliduntil, null as rolcomment FROM pg_group WHERE NOT EXISTS  
(SELECT 1 FROM pg_shadow  WHERE usename = groname) ORDER BY 1

--
-- Roles
--

DROP ROLE postgres;
CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN;
pg_dumpall: executing SELECT useconfig[1] FROM pg_shadow WHERE  
usename = 'postgres'



pg_dumpall: executing SELECT groname, grolist FROM pg_group ORDER BY 1


--
-- Database creation
--

pg_dumpall: executing SELECT datname, coalesce(usename, (select  
usename from pg_shadow where usesysid=(select datdba from pg_database  
where datname='template0'))), pg_encoding_to_char(d.encoding),  
datistemplate, datacl, -1 as datconnlimit, 'pg_default' AS  
dattablespace FROM pg_database d LEFT JOIN pg_shadow u ON (datdba =  
usesysid) WHERE datallowconn ORDER BY 1

REVOKE ALL ON DATABASE template1 FROM PUBLIC;
GRANT CONNECT ON DATABASE template1 TO PUBLIC;
REVOKE ALL ON DATABASE template1 FROM postgres;
GRANT CREATE,TEMPORARY ON DATABASE template1 TO postgres WITH GRANT  
OPTION;
pg_dumpall: executing SELECT datconfig[1] FROM pg_database WHERE  
datname = 'template1';



pg_dumpall: executing SELECT datname FROM pg_database WHERE  
datallowconn ORDER BY 1

pg_dumpall: dumping database "template1"...
\connect template1

pg_dumpall: running ""/usr/lib/postgresql/8.2/bin/pg_dump"  -v -p  
'5433' -Fp 'template1'"

pg_dump: reading schemas
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined conversions
pg_dump: reading user-defined tables
pg_dump: reading table inheritance information
pg_dump: reading rewrite rules
pg_dump: reading type casts
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: reading constraints
pg_dump: reading triggers
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = off
pg_dump: saving database definition
--
-- PostgreSQL database dump
--

-- Started on 2007-10-25 10:40:28 PDT

SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;

pg_dump: creating COMMENT DATABASE template1
--
-- TOC entry 1352 (class 0 OID 0)
-- Dependencies: 1351
-- Name: DATABASE template1; Type: COMMENT; Schema: -; Owner:
--

COMMENT ON DATABASE template1 IS 'Default template database';


pg_dump: creating SCHEMA public
pg_dump: creating COMMENT SCHEMA public
--
-- TOC entry 1353 (class 0 OID 0)
-- Dependencies: 4
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
--

COMMENT ON SCHEMA public IS 'Standard public schema';


pg_dump: setting owner and privileges for COMMENT DATABASE template1
pg_dump: setting owner and privileges for SCHEMA public
pg_dump: setting owner and privileges for COMMENT SCHEMA public
pg_dump: setting owner and privileges for ACL public
--
-- TOC entry 1354 (class 0 OID 0)
-- Dependencies: 4
-- Name: public; Type: ACL; Schema: -; Owner: postgres
--

REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;


-- Completed on 2007-10-25 10:40:28 PDT

--
-- PostgreSQL database dump complete
--

-- Completed on 2007-10-25 10:40:28 PDT

--
-- PostgreSQL database cluster dump complete
--



Ralph Smith
[EMAIL PROTECTED]

Re: [GENERAL] select count() out of memory

2007-10-25 Thread Scott Marlowe
On 10/25/07, Steve Crawford <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera wrote:
> ...
> >
> > You can use CREATE TABLE LIKE, which copies the definition but does not
> > set the inheritance.
> >
>
> Well, sort of.
>
> Unless I'm using it incorrectly it only copies the basic column
> definitions and, as optionally specified, defaults and some of the
> constraints.
>
> Primary key constraints are lost as CREATE TABLE newtable (LIKE
> oldtable) does not create any indexes including those necessary for the
> primary key.
>
> I don't know how foreign-key constraints are handled as I haven't used
> this form of CREATE TABLE where foreign keys are involved.

Neither inheritance nor creating LIKE will inherit primary keys.

Foreign keys will not be inherited by either method either.



Re: [GENERAL] subversion support?

2007-10-25 Thread brian

Roberts, Jon wrote:


When you edit a function, package, procedure, trigger, etc, it will notify
you via a pop-up window if there is a difference in the committed version
and the database version.  You can then click "show differences" and then it
pops up another window with your typical code differences window.


Wouldn't the fact that the thing has been edited suggest that it has, 
indeed, been changed?


Of course, having a diff pop up in your GUI package of choice would be 
nice if that's your favoured way to work.




When you are done revising the code, you then commit it to the repository in
the tool with a click of a button.

So the long story short I'm getting is, "no it is not on the radar".  This
is terribly ironic given the fact that pgAdmin is developed using source
control but the code you write with the tool doesn't have any hooks into
source control.


About as ironic as any other random software package/project that also 
uses version control in development not having those hooks.


brian



Re: [GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Tom Lane
Ralph Smith <[EMAIL PROTECTED]> writes:
> I want to use v8.2's pg_dumpall to export v7.4's data into a text file.

> prompt:/usr/lib/postgresql/8.2/bin$ ./pg_dumpall -c -v -p 5433 >  
> myfile.txt

Works for me.  What does the -v give you on stderr?  Also,
7.4.what-exactly and 8.2.what-exactly?

regards, tom lane



Re: [GENERAL] subversion support?

2007-10-25 Thread Karsten Hilbert
On Thu, Oct 25, 2007 at 11:35:32AM -0500, Roberts, Jon wrote:

> Complaint?  Who is complaining?  
> 
> I am simply asking if this feature that is rather common in other database
> development tools will ever be added to pgAdmin.
Why, then, are you asking on a *PostgreSQL* list?

> And no, I will not sponsor such development.
Which means you'll have to be content with a "No, not in the
foreseeable future lest unforeseeable things happen."

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Steve Crawford
Alvaro Herrera wrote:
...
> 
> You can use CREATE TABLE LIKE, which copies the definition but does not
> set the inheritance.
> 

Well, sort of.

Unless I'm using it incorrectly it only copies the basic column
definitions and, as optionally specified, defaults and some of the
constraints.

Primary key constraints are lost as CREATE TABLE newtable (LIKE
oldtable) does not create any indexes including those necessary for the
primary key.

I don't know how foreign-key constraints are handled as I haven't used
this form of CREATE TABLE where foreign keys are involved.
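
For what it's worth, a sketch of the workaround on 8.2 (table and key
names are made up; the primary key has to be re-added by hand):

CREATE TABLE newtable (LIKE oldtable INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
ALTER TABLE newtable ADD PRIMARY KEY (id);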

Cheers,
Steve



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Tom Lane
[EMAIL PROTECTED] writes:
>> In other words, you really should have only one table; they aren't
>> independent.  What you need to do is dial down your ideas of how many
>> partitions are reasonable to have.

> Yes, but no. Each partition represents a chunk of information on a
> discrete timeline. So there is no point in grouping it all into a single
> table, because the access pattern is to only access data from a specific
> point in time, i.e. a single partition, usually the latest. Since the
> amount of data is so big, approx 3MB per second, and each partition needs
> to be indexed before the clients start reading the data (in the same
> second). I find it's better to use partitions, even though I am not
> actually using it.

You are making a common beginner error, which is to suppose that N
little tables are better than one big one.  They are not.  What you're
effectively doing is replacing the upper levels of a big table's indexes
with lookups in the system catalogs, which in point of fact is a
terrible tradeoff from a performance standpoint.

>From a database-theory standpoint, if all this data is alike then you
should have it all in one big table.  There are certain practical cases
where it's worth partitioning, but not at the level of granularity that
you are proposing.  This is why nobody, not even Oracle, tries to
support tens of thousands of partitions.

regards, tom lane



[GENERAL] 2 versions running & using pg_dumpall

2007-10-25 Thread Ralph Smith

I have versions 7.4 (port=5433) & 8.2 (port=5432) on this Ubuntu box.

I want to use v8.2's pg_dumpall to export v7.4's data into a text file.
(IDEALLY I'd like to pipe it directly to psql and avoid the file, but 
I don't know if I can do that.)
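
Presumably that would be something like the following, though I haven't
tried it:

/usr/lib/postgresql/8.2/bin/pg_dumpall -p 5433 | /usr/lib/postgresql/8.2/bin/psql -p 5432 -d postgres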


Anyway, when I:
prompt:/usr/lib/postgresql/8.2/bin$ ./pg_dumpall -c -v -p 5433 >  
myfile.txt


It's NOT dumping everything.  Only...  Well, I'm not sure.
I think only DB postgres.  It's only 87 lines long.

Anybody have any suggestions?

Thank you,
Ralph Smith
[EMAIL PROTECTED]
=




Re: [GENERAL] 8.3b1 in production?

2007-10-25 Thread Gregory Stark
"Scott Marlowe" <[EMAIL PROTECTED]> writes:

> On 10/25/07, Peter Childs <[EMAIL PROTECTED]> wrote:
>> I was wondering why my PITR base backup was taking 2 hours on my 8.3 test
>> database whereas it takes 50 minutes on 8.1, and the database files are
>> meant to be smaller on a freshly installed 8.3 server rather than an 8.1.1
>> server that ain't been rebuilt since 8.1.1 was newly out.
>> I was planning to upgrade to 8.3 once its out...
>>
>> Down time for upgrades is somewhat lacking in a 24x7 business.
>>
>> Oh my 8.1 server has been up for well over a year without being down at
>> all, the database for longer, which really shows how good postgres is:
>> 377 days uptime on the computer, and I think that was to move a plug.
>
> You should really look at scheduling the 5 minute window to update
> your 8.1 install.  8.1.1 is quite old and has a few known data eating
> bugs, if I remember correctly.  Updating to 8.1.10 should only take
> literally about 1 or 2 minutes.

Actually 8.1.2 fixed two locale problems which could require reindexing. If
you're using plperl functions which play with the locale or a locale like
hungarian which compares some different strings as equal then you might have
to reindex.

Otherwise it's just a Postgres server restart's worth of downtime. There are
both data eating bug fixes and security fixes in 8.1.10 for you.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] subversion support?

2007-10-25 Thread Roberts, Jon
No I haven't.  Thanks for the tip.


Jon

-Original Message-
From: Reg Me Please [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 25, 2007 11:25 AM
To: pgsql-general@postgresql.org
Cc: Tino Wildenhain; Roberts, Jon
Subject: Re: [GENERAL] subversion support?

Ever tried Druid?

http://druid.sourceforge.net/


On Thursday 25 October 2007 18:02:51 Tino Wildenhain wrote:
> Hi,
>
> Roberts, Jon schrieb:
> > I could use psql instead of pgAdmin then which isn't what I want.
> >
> > Having used Quest software SQL Navigator since 97 for Oracle and then
> > migrated to Toad for Oracle which both products have integration to
> > source control, it is hard to revert back to a command line or text file
> > solution.
>
> Well you can still use gui tools and just let them work against a
> development database. With a little scripting you can just dump
> the schema of that database periodically and check it in to SVN.
>
> Hook scripts can then take over the deployment (ideally based
> on tag creation)
>
> > pgAdmin should graphically show differences between the committed
version
> > and the database.
>
> Does SQL Nav do this? At least the SQL Navigator/Toad support seems
> to heavily depend on server side code to help. This looks very unclean
> to say the least.
>
> > It should allow me to click a button in the tool and commit it to the
> > repository.
> >
> > It should allow me to revert back to a previous version and the tool
take
> > care of restoring the function automatically.
>
> You can test before you commit in the database - unlike Oracle, Postgres
> supports transactions even for DDL :-) (ok, I've yet to find the button
> in pgadmin to disable auto commit :-)
>
>
> Regards
> Tino
>



Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> this is my config
>
> checkpoint_segments = 96
> effective_cache_size = 128000
> shared_buffers = 43
> max_fsm_pages = 208000
> max_fsm_relations = 1
>
> max_connections = 1000
>
autovacuum = off                # enable autovacuum subprocess?
>
fsync = on                      # turns forced synchronization on or off
> #full_page_writes = on  # recover from partial page writes
> wal_sync_method = fdatasync
> wal_buffers = 256
>
> commit_delay = 5
> #commit_siblings = 5# range 1-1000

Now that you hopefully understand more about the server I am building, 
does anybody have any suggestions for improving the config? I could 
certainly reduce max_connections to a tenth of that, but are there other 
settings that should be reduced, increased, or set to a non-default value?

regards

thomas




Re: [GENERAL] subversion support?

2007-10-25 Thread Roberts, Jon
I'm not sure I follow the question about SQL Navigator and Toad.

When you edit a function, package, procedure, trigger, etc, it will notify
you via a pop-up window if there is a difference in the committed version
and the database version.  You can then click "show differences" and then it
pops up another window with your typical code differences window.

When you are done revising the code, you then commit it to the repository in
the tool with a click of a button.

So the long story short I'm getting is, "no it is not on the radar".  This
is terribly ironic given the fact that pgAdmin is developed using source
control but the code you write with the tool doesn't have any hooks into
source control.


Jon

-Original Message-
From: Tino Wildenhain [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 25, 2007 11:03 AM
To: Roberts, Jon
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] subversion support?

Hi,

Roberts, Jon schrieb:
> I could use psql instead of pgAdmin then which isn't what I want.
> 
> Having used Quest software SQL Navigator since 97 for Oracle and then
> migrated to Toad for Oracle which both products have integration to source
> control, it is hard to revert back to a command line or text file
> solution.

Well you can still use gui tools and just let them work against a
development database. With a little scripting you can just dump
the schema of that database periodically and check it in to SVN.

Hook scripts can then take over the deployment (ideally based
on tag creation)
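
For instance, the nightly job can be as small as this sketch (paths and
repository layout are made up, and the file must already be under version
control):

pg_dump --schema-only -f /work/schema/devdb.sql devdb
cd /work/schema && svn commit -m "nightly schema snapshot" devdb.sql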

> 
> pgAdmin should graphically show differences between the committed version
> and the database.

Does SQL Nav do this? At least the SQL Navigator/Toad support seems
to heavily depend on server side code to help. This looks very unclean
to say the least.

> It should allow me to click a button in the tool and commit it to the
> repository.  
> 
> It should allow me to revert back to a previous version and the tool take
> care of restoring the function automatically.

You can test before you commit in the database - unlike Oracle, Postgres
supports transactions even for DDL :-) (ok, I've yet to find the button
in pgadmin to disable auto commit :-)


Regards
Tino



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Erik Jones

On Oct 25, 2007, at 11:16 AM, Alvaro Herrera wrote:


[EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:

That's good enough for me, that's exactly what I want.


In that case, why use partitions at all?  They are simple independent
tables.


For two reasons,
- the data logically belongs together.
- because it's more practical to create tables as children of a parent
table than as independent tables.
   - changes to the table are applied to all partitions, and this
prohibits partitions with different ddl.
   - performing the create operation does not require the source code
to contain the ddl of the parent table.


You can use CREATE TABLE LIKE, which copies the definition but does not
set the inheritance.


That won't propogate changes made later.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] subversion support?

2007-10-25 Thread Roberts, Jon
Complaint?  Who is complaining?  

I am simply asking if this feature that is rather common in other database
development tools will ever be added to pgAdmin.

And no, I will not sponsor such development.  


Jon
-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 25, 2007 11:16 AM
To: Roberts, Jon
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] subversion support?

Roberts, Jon wrote:
> I could use psql instead of pgAdmin then, which isn't what I want.
> 
> Having used Quest Software's SQL Navigator since '97 for Oracle and then
> migrated to Toad for Oracle, both of which have integration with source
> control, it is hard to revert back to a command line or text file
> solution.
> 
> 
> pgAdmin should graphically show differences between the committed version
> and the database.

1. Complaints about pgAdmin should go to the pgAdmin list. This is a 
postgresql list.

> 
> It should allow me to click a button in the tool and commit it to the
> repository.  
> 
> It should allow me to revert back to a previous version and the tool take
> care of restoring the function automatically.
> 
> It should show history and let me see the differences.
> 
> In other words, take Tortoise and merge that product into pgAdmin so I
have
> one product instead of two.

2. Are you willing to sponsor such development?


Sincerely,

Joshua D. Drake

> 
> 
> Jon
> 
> -Original Message-
> From: Brad Lhotsky [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, October 25, 2007 9:13 AM
> To: Roberts, Jon
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] subversion support?
> 
> You could set up a subversion commit hook to export the functions to the 
> database.
> 
> Then you adjust your development mentality to:
> 
> 1) Edit the files on the disk
> 2) Commit to Subversion
> 
> Then the hook takes over and runs the drop/create automatically; you 
> could even have it email the developer if the create failed.
> 
> 
> Roberts, Jon wrote:
>> Robert, that does sound better.  It keeps the names of the files in svn
>> consistent with the database object names which is essential.  It also
> makes
>> it automatic.  Unfortunately, it doesn't tell you who did the changes.
>>
>> Do you want to share that code?
>>
>>
>> Thanks!
>>
>>
>> Jon
>>
>> -Original Message-
>> From: Robert Treat [mailto:[EMAIL PROTECTED] 
>> Sent: Wednesday, October 24, 2007 10:24 PM
>> To: pgsql-general@postgresql.org
>> Cc: Roberts, Jon
>> Subject: Re: [GENERAL] subversion support?
>>
>> On Wednesday 24 October 2007 15:11, Roberts, Jon wrote:
>>> Yeah.  I think having to save the function to disk and then leave
pgAdmin
>>> to execute subversion commands is going through hoops.
>>>
>>> Also, pgAdmin should be integrated so that you are notified if the
>> function
>>> in the database is different from the last committed version.  A visual
>>> diff should be there so you can see what the differences are.
>>>
>> We have a script that runs nightly that dumps tables / functions to file,
>> and 
>> then checks it in automagically to svn, which sends an email of the
diffs.
> 
>> Perhaps that would work for you? 
>>
> 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> [EMAIL PROTECTED] writes:
>>> In that case, why use partitions at all?  They are simple independent
>>> tables.
>
>> For two reasons,
>> - the data logically belongs together.
>> - because it's more practical to create tables as children of a parent
>>   table than as independent tables.
>>    - changes to the table are applied to all partitions, and it prevents
>>      partitions with differing ddl.
>>    - performing the create operation does not require the source code to
>>      contain the ddl of the parent table.
>
> In other words, you really should have only one table; they aren't
> independent.  What you need to do is dial down your ideas of how many
> partitions are reasonable to have.

Yes, but no. Each partition represents a chunk of information on a
discrete timeline, so there is no point in grouping it all into a single
table: the access pattern is to access only data from a specific point
in time, i.e. a single partition, usually the latest. Since the amount
of data is so big, approx 3MB per second, and each partition needs to be
indexed before the clients start reading the data (in the same second),
I find it better to use partitions, even though I am not actually using
the partitioning mechanism itself.

regards

thomas


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] subversion support?

2007-10-25 Thread Reg Me Please
Ever tried Druid?

http://druid.sourceforge.net/


Il Thursday 25 October 2007 18:02:51 Tino Wildenhain ha scritto:
> Hi,
>
> Roberts, Jon schrieb:
> > I could use psql instead of pgAdmin then, which isn't what I want.
> >
> > Having used Quest Software's SQL Navigator since '97 for Oracle and then
> > migrated to Toad for Oracle, both of which have integration with source
> > control, it is hard to revert back to a command line or text file
> > solution.
>
> Well, you can still use GUI tools and just let them work against a
> development database. With a little scripting you can dump the schema
> of that database periodically and check it in to SVN.
>
> Hook scripts can then take over the deployment (ideally based
> on tag creation).
>
> > pgAdmin should graphically show differences between the committed version
> > and the database.
>
> Does SQL Nav do this? At least the SQL Navigator/Toad support seems
> to depend heavily on server-side code to help. This looks very unclean,
> to say the least.
>
> > It should allow me to click a button in the tool and commit it to the
> > repository.
> >
> > It should allow me to revert back to a previous version and the tool take
> > care of restoring the function automatically.
>
> You can test before you commit in the database - unlike Oracle, Postgres
> supports transactions even for DDL :-) (ok, I've yet to find the button
> in pgAdmin to disable auto-commit :-)
>
>
> Regards
> Tino
>
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] subversion support?

2007-10-25 Thread Joshua D. Drake

Roberts, Jon wrote:
> I could use psql instead of pgAdmin then, which isn't what I want.
>
> Having used Quest Software's SQL Navigator since '97 for Oracle and then
> migrated to Toad for Oracle, both of which have integration with source
> control, it is hard to revert back to a command line or text file
> solution.
>
> pgAdmin should graphically show differences between the committed version
> and the database.

1. Complaints about pgAdmin should go to the pgAdmin list. This is a
postgresql list.

> It should allow me to click a button in the tool and commit it to the
> repository.
>
> It should allow me to revert back to a previous version and the tool take
> care of restoring the function automatically.
>
> It should show history and let me see the differences.
>
> In other words, take Tortoise and merge that product into pgAdmin so I
> have one product instead of two.

2. Are you willing to sponsor such development?


Sincerely,

Joshua D. Drake




Jon

-Original Message-
From: Brad Lhotsky [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 25, 2007 9:13 AM

To: Roberts, Jon
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] subversion support?

You could set up a subversion commit hook to export the functions to the 
database.


Then you adjust your development mentality to:

1) Edit the files on the disk
2) Commit to Subversion

Then the hook takes over and runs the drop/create automatically; you 
could even have it email the developer if the create failed.



Roberts, Jon wrote:

Robert, that does sound better.  It keeps the names of the files in svn
consistent with the database object names which is essential.  It also

makes

it automatic.  Unfortunately, it doesn't tell you who did the changes.

Do you want to share that code?


Thanks!


Jon

-Original Message-
From: Robert Treat [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 24, 2007 10:24 PM

To: pgsql-general@postgresql.org
Cc: Roberts, Jon
Subject: Re: [GENERAL] subversion support?

On Wednesday 24 October 2007 15:11, Roberts, Jon wrote:

Yeah.  I think having to save the function to disk and then leave pgAdmin
to execute subversion commands is going through hoops.

Also, pgAdmin should be integrated so that you are notified if the

function

in the database is different from the last committed version.  A visual
diff should be there so you can see what the differences are.


We have a script that runs nightly that dumps tables / functions to file,
and 
then checks it in automagically to svn, which sends an email of the diffs.


Perhaps that would work for you? 






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:
> > [EMAIL PROTECTED] wrote:
> >> That's good enough for me, that's exactly what I want.
> >
> > In that case, why use partitions at all?  They are simple independent
> > tables.
> 
> For two reasons,
> - the data logically belongs together.
> - because it's more practical to create tables as children of a parent
>   table than as independent tables.
>    - changes to the table are applied to all partitions, and it prevents
>      partitions with differing ddl.
>    - performing the create operation does not require the source code to
>      contain the ddl of the parent table.

You can use CREATE TABLE LIKE, which copies the definition but does not
set the inheritance.
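For example (a sketch with hypothetical names - LIKE copies the column
definitions only, while INHERITS ties the child to the parent):

CREATE TABLE parent (id int, payload text);

-- same columns, but a fully independent table:
CREATE TABLE standalone (LIKE parent);

-- same columns, and attached to the parent, so it is scanned by
-- queries against "parent":
CREATE TABLE child () INHERITS (parent);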

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"Saca el libro que tu religión considere como el indicado para encontrar la
oración que traiga paz a tu alma. Luego rebootea el computador
y ve si funciona" (Carlos Duclós)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> [EMAIL PROTECTED] wrote:
>> > [EMAIL PROTECTED] wrote:
>> >> That's good enough for me, that's exactly what I want.
>> >
>> > In that case, why use partitions at all?  They are simple independent
>> > tables.
>>
>> For two reasons,
>> - the data logically belongs together.
>> - because it's more practical to create tables as children of a parent
>>   table than as independent tables.
>>    - changes to the table are applied to all partitions, and it prevents
>>      partitions with differing ddl.
>>    - performing the create operation does not require the source code to
>>      contain the ddl of the parent table.
>
> You can use CREATE TABLE LIKE, which copies the definition but does not
> set the inheritance.

I know, but point 1 of reason 2 was why I decided against it. I was
discussing the different options with myself when I was trying to decide,
and I decided on the partitions. So unless there will be a problem with
the real operations, I can't see any reason to change.

regards

thomas




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] subversion support?

2007-10-25 Thread Tino Wildenhain

Hi,

Roberts, Jon schrieb:
> I could use psql instead of pgAdmin then, which isn't what I want.
>
> Having used Quest Software's SQL Navigator since '97 for Oracle and then
> migrated to Toad for Oracle, both of which have integration with source
> control, it is hard to revert back to a command line or text file
> solution.

Well, you can still use GUI tools and just let them work against a
development database. With a little scripting you can dump the schema
of that database periodically and check it in to SVN.

Hook scripts can then take over the deployment (ideally based
on tag creation).

> pgAdmin should graphically show differences between the committed version
> and the database.

Does SQL Nav do this? At least the SQL Navigator/Toad support seems
to depend heavily on server-side code to help. This looks very unclean,
to say the least.

> It should allow me to click a button in the tool and commit it to the
> repository.
>
> It should allow me to revert back to a previous version and the tool take
> care of restoring the function automatically.

You can test before you commit in the database - unlike Oracle, Postgres
supports transactions even for DDL :-) (ok, I've yet to find the button
in pgAdmin to disable auto-commit :-)


Regards
Tino

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Tom Lane
[EMAIL PROTECTED] writes:
>> In that case, why use partitions at all?  They are simple independent
>> tables.

> For two reasons,
> - the data logically belongs together.
> - because it's more practical to create tables as children of a parent
>   table than as independent tables.
>    - changes to the table are applied to all partitions, and it prevents
>      partitions with differing ddl.
>    - performing the create operation does not require the source code to
>      contain the ddl of the parent table.

In other words, you really should have only one table; they aren't
independent.  What you need to do is dial down your ideas of how many
partitions are reasonable to have.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> Excellent, it sounds like you should be fine then.  One thing to
> note:  if you want to get an "idea" of how many rows you have in your
> partitions, you can run a SUM aggregate on reltuples in pg_class for
> all of your partitions.  The more recent the last ANALYZE for each
> table, the more accurate those values will be.

cool, thanks.

regards

thomas


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> [EMAIL PROTECTED] wrote:
>> That's good enough for me, that's exactly what I want.
>
> In that case, why use partitions at all?  They are simple independent
> tables.

For two reasons,
- the data logically belongs together.
- because it's more practical to create tables as children of a parent
  table than as independent tables.
   - changes to the table are applied to all partitions (see the sketch
     below), and it prevents partitions with differing ddl.
   - performing the create operation does not require the source code to
     contain the ddl of the parent table.
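A sketch of that propagation, with hypothetical names:

CREATE TABLE parent (id int);
CREATE TABLE part_1 () INHERITS (parent);
CREATE TABLE part_2 () INHERITS (parent);

-- the new column appears in part_1 and part_2 automatically:
ALTER TABLE parent ADD COLUMN created_at timestamptz;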


regards

thomas


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Alvaro Herrera
Scott Marlowe escribió:

> So, out of curiosity, I asked my Oracle DBA friend if she'd ever heard
> of anyone having 60,000 or so partitions in a table, and she looked at
> me like I had a third eye in my forehead and said in her sweet voice
> "Well, that would certainly be an edge case".  She sounded like she
> was worried about me.

Did you get rid of that third eye already?  I would be equally worried.

-- 
Alvaro Herrerahttp://www.advogato.org/person/alvherre
"Someone said that it is at least an order of magnitude more work to do
production software than a prototype. I think he is wrong by at least
an order of magnitude."  (Brian Kernighan)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> So, out of curiosity, I asked my Oracle DBA friend if she'd ever heard
> of anyone having 60,000 or so partitions in a table, and she looked at
> me like I had a third eye in my forehead and said in her sweet voice
> "Well, that would certainly be an edge case".  She sounded like she
> was worried about me.

That means I am exploring new territory, which is good. Of course, there
is a possibility the design could be done in a different way so as not to
need that many partitions. But I asked on this list some months ago
for tips on how to design this data model, and what I came up with is
what I have today.

The problem is that the work is proprietary and confidential, so it's
difficult for me to explain in detail what I want to do. But you could
look at my previous post asking the question for a description of the
problem I want to solve.

regards

thomas


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> Are you selecting directly from the child table, or from the parent
> table with constraint_exclusion turned on?

The problem was when selecting from the parent table; selecting from
child tables is no problem. As stated in other replies, I only wanted to
know how many rows were in the table in total, it is not a part of the
actual operations of the server.

> But hitting the parent table with no constraining where clause is a
> recipe for disaster.  The very reason to use partitioning is so that
> you never have to scan through a single giant table.

So I have found out...

> Anyway, you're heading off into new territory with 55,000 partitions.

Perhaps, but I am only using the child tables for actual operations. I
also have a couple of indexes on each child table, so there are now about
15 indexes as well.
The intended operations of the server works fine, its the select on the
parent table that fails.

> What is the average size, in MB of one of your partitions?  I found
> with my test, there was a point of diminishing returns after 400 or so
> partitions at which point indexes were no longer needed, because the
> average query just seq scanned the partitions it needed, and they were
> all ~ 16 or 32 Megs.

I have no idea, but I suspect about a couple of megabytes each; at least
that's the size of the raw data. Then maybe add a couple of megabytes
more for internal stuff.
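One way to check instead of guessing - a sketch, assuming the partitions
share a common name prefix (the prefix here is hypothetical):

SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relkind = 'r'
  AND relname LIKE 'attributes_g1_seq%'
ORDER BY pg_relation_size(oid) DESC
LIMIT 10;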

regards

thomas


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Erik Jones


On Oct 25, 2007, at 10:36 AM, [EMAIL PROTECTED] wrote:

>>> The db worked fine until it reached perhaps 30-40 thousand partitions.
>>
>> It depends on how you have the partitions set up and how you're
>> accessing them.  Are all of these partitions under the same parent
>> table?  If so, then trying to run a SELECT COUNT(*) against the parent
>> table is simply insane.  Think about it, you're asking one query to
>> scan 55000 tables.  What you need to do is partition based on your
>> access patterns, not what you *think* will help with performance down
>> the road.  Look into constraint exclusion, whether or not you can
>> just access child tables directly, and whether you really need all of
>> these under one logical table.  Also, no matter how you do the
>> partitioning, once you get up to that many and more relations in your
>> system, dumps and restores take a lot longer.
>
> The design is based on access patterns, i.e. one partition represents a
> group of data along a discrete axis, so partitions are a perfect fit for
> modeling that. Only the last partition will be used in normal cases. The
> previous partitions only need to exist until the operator deletes them,
> which will be sometime between 1-6 weeks.
>
> Regarding dumps and restores: the system will always be offline during
> those operations, and it will be so for several days, because a new
> project might start at another location in the world, so the travelling
> there takes time. In the meantime, all admin tasks can be performed
> without problems, even backup operations that take 3 days.

Excellent, it sounds like you should be fine then.  One thing to note:
if you want to get an "idea" of how many rows you have in your
partitions, you can run a SUM aggregate on reltuples in pg_class for
all of your partitions.  The more recent the last ANALYZE for each
table, the more accurate those values will be.
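For example - a sketch, assuming all the partitions inherit from a parent
table named "attributes" (a hypothetical name):

SELECT sum(c.reltuples)::bigint AS estimated_rows
FROM pg_class c
JOIN pg_inherits i ON i.inhrelid = c.oid
WHERE i.inhparent = 'attributes'::regclass;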


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:
> > [EMAIL PROTECTED] wrote:
> > It will work on a million partitions and more, provided you do
> > operations on single partitions.
> 
> That's good enough for me, that's exactly what I want.

In that case, why use partitions at all?  They are simple independent
tables.

-- 
Alvaro Herrera   http://www.PlanetPostgreSQL.org/
"El destino baraja y nosotros jugamos" (A. Schopenhauer)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] 8.3b1 in production?

2007-10-25 Thread Scott Marlowe
On 10/25/07, Peter Childs <[EMAIL PROTECTED]> wrote:
> I was wondering why my PITR base backup was taking 2 hours on my 8.3 test
> database whereas it takes 50 minutes on 8.1, and the database files are
> meant to be smaller on a freshly installed 8.3 server than on an 8.1.1
> server that hasn't been rebuilt since 8.1.1 was newly out.
> I was planning to upgrade to 8.3 once it's out...
>
> Downtime for upgrades is somewhat lacking in a 24x7 business.
>
> Oh, my 8.1 server has been up for well over a year without being down at
> all, the database for longer, which really shows how good postgres is -
> 377 days uptime on the computer, and I think that was to move a plug.

You should really look at scheduling the 5 minute window to update
your 8.1 install.  8.1.1 is quite old and has a few known data-eating
bugs, if I remember correctly.  Updating to 8.1.10 should literally take
only about 1 or 2 minutes.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
>> The db worked fine until it reached perhaps 30-40 thousand partitions.
>
> It depends on how you have the partitions set up and how you're
> accessing them.  Are all of these partitions under the same parent
> table?  If so, then trying to run a SELECT COUNT(*) against the parent
> table is simply insane.  Think about it, you're asking one query to
> scan 55000 tables.  What you need to do is partition based on your
> access patterns, not what you *think* will help with performance down
> the road.  Look into constraint exclusion, whether or not you can
> just access child tables directly, and whether you really need all of
> these under one logical table.  Also, no matter how you do the
> partitioning, once you get up to that many and more relations in your
> system, dumps and restores take a lot longer.

The design is based on access patterns, i.e. one partition represents a
group of data along a discrete axis, so partitions are a perfect fit for
modeling that. Only the last partition will be used in normal cases. The
previous partitions only need to exist until the operator deletes them,
which will be sometime between 1-6 weeks.

Regarding dumps and restores: the system will always be offline during
those operations, and it will be so for several days, because a new project
might start at another location in the world, so the travelling there
takes time. In the meantime, all admin tasks can be performed without
problems, even backup operations that take 3 days.

regards

thomas



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> [EMAIL PROTECTED] wrote:
> It will work on a million partitions and more, provided you do
> operations on single partitions.

That's good enough for me, that's exactly what I want. I just used the
select count() on the root to get a feeling for how many rows there were
in total, and then I thought that the error message was just a
configuration issue. But since doing operations of that magnitude on the
root table is not a good idea, I won't.

> What you want to do is not possible, period.  Maybe when we redesign
> partitioning, but that's far into the future.  Kindly do not waste our
> time (nor yours).

Thank you for that prompt reply.

In all fairness, that's why I asked the question here: to find out the
facts, not to be abused for being ignorant about pg.

thomas



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Scott Marlowe
On 10/25/07, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> Hi
>
> I am volume testing a db model that consists of partitioned tables. The
> db has been running for a week and a half now and has built up to contain
> approx 55000 partition tables of 18000 rows each. The root table therefore
> contains about 1 billion rows. When I try to do a "select count(*)" on the
> root table, it does some work for a while, perhaps 5-10 minutes, and then
> aborts with
>
> ERROR:  out of memory
> DETAIL:  Failed on request of size 130.

So, out of curiosity, I asked my Oracle DBA friend if she'd ever heard
of anyone having 60,000 or so partitions in a table, and she looked at
me like I had a third eye in my forehead and said in her sweet voice
"Well, that would certainly be an edge case".  She sounded like she
was worried about me.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Erik Jones


On Oct 25, 2007, at 9:36 AM, [EMAIL PROTECTED] wrote:


> [EMAIL PROTECTED] wrote:
>> [EMAIL PROTECTED] wrote:
>>>>>> are a dump of Postgres's current memory allocations and could be
>>>>>> useful in showing if there's a memory leak causing this.
>>>>>
>>>>> The file is 20M, these are the last lines: (the first line continues
>>>>> until ff_26000)
>>>>>
>>>>> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free
>>>>> (0 chunks); 632 used
>>>>
>>>> You have 26000 partitions???
>>>
>>> At the moment the db has 55000 partitions, and that's only a fifth of
>>> the complete volume the system will have in production. The reason I
>>> chose this solution is that a partition will be loaded with new data
>>> every 3-30 seconds, and all of it will be read by up to 15 readers
>>> every time new data is available. The data will be approx 2-4TB in
>>> production in total. So it will be too slow if I put it in a single
>>> table with permanent indexes.
>>>
>>> I did a test previously, where I created 1 million partitions (without
>>> data) and I checked the limits of pg, so I think it should be ok.
>>
>> Clearly it's not.
>
> It does not mean my problem has anything to do with the number of
> partitions. It might have, or it might not, and that's the problem: the
> cause has not been located yet.
>
> According to the documented limits of pg,
>
>> The difference could be the memory usage and wastage for all those
>> relcache entries and other stuff.  I would reduce the number of
>> partitions to a more reasonable value (within the tens, most likely)
>
> The db worked fine until it reached perhaps 30-40 thousand partitions.

It depends on how you have the partitions set up and how you're
accessing them.  Are all of these partitions under the same parent
table?  If so, then trying to run a SELECT COUNT(*) against the parent
table is simply insane.  Think about it, you're asking one query to
scan 55000 tables.  What you need to do is partition based on your
access patterns, not what you *think* will help with performance down
the road.  Look into constraint exclusion, whether or not you can
just access child tables directly, and whether you really need all of
these under one logical table.  Also, no matter how you do the
partitioning, once you get up to that many and more relations in your
system, dumps and restores take a lot longer.


Erik Jones
Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Scott Marlowe
On 10/25/07, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:
> >> [EMAIL PROTECTED] wrote:
> >>> I did a test previously, where I created 1 million partitions (without
> >>> data) and I checked the limits of pg, so I think it should be ok.
> >
> >> Clearly it's not.
> >
> > You couldn't have tested it too much --- even planning a query over so
> > many tables would take forever, and actually executing it would surely
> > have run the system out of locktable space before it even started
> > scanning.
>
> And this is the testing, so you're right.
>
> It's only the select on the root table that fails. Operations on a
> single partition are no problem.

Not sure I understand exactly what you're saying.

Are you selecting directly from the child table, or from the parent
table with constraint_exclusion turned on?

If you're hitting the child table directly, you aren't actually using
partitioning.  It's a wholly independent table at that point.

If you're hitting a single child table through the parent table via
constraint_exclusion, then you are using partitioning, but only
hitting one physical table.

But hitting the parent table with no constraining where clause is a
recipe for disaster.  The very reason to use partitioning is so that
you never have to scan through a single giant table.
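To illustrate constraint exclusion - a minimal sketch with hypothetical
names; the CHECK constraint is what lets the planner skip all the other
partitions:

CREATE TABLE attributes (seq int, value text);
CREATE TABLE attributes_seq_42 (
    CHECK (seq = 42)
) INHERITS (attributes);

SET constraint_exclusion = on;

-- scans only attributes_seq_42, not every child of attributes:
SELECT count(*) FROM attributes WHERE seq = 42;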

Anyway, you're heading off into new territory with 55,000 partitions.
What is the average size, in MB of one of your partitions?  I found
with my test, there was a point of diminishing returns after 400 or so
partitions at which point indexes were no longer needed, because the
average query just seq scanned the partitions it needed, and they were
all ~ 16 or 32 Megs.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL and AutoCad

2007-10-25 Thread Josh Tolley
On 10/24/07, Bob Pawley <[EMAIL PROTECTED]> wrote:
> Is there any way of converting text from an AutoCad (.dwg ot .dxf) file into
> a PostgreSQL  Database??
>
> Bob Pawley

I know nothing of AutoCad, but your message has been sitting for a
while without response, so I'll throw out the suggestion that you
probably want AutoCad to export the text to some more common format
(like an ASCII or UTF8 file or some such) and import that.
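For instance, once the text is exported as a tab-separated file,
something along these lines would load it - a sketch only; the table,
columns, and path are hypothetical (and COPY FROM a server-side file
requires superuser):

CREATE TABLE dwg_text (layer text, entity text, contents text);

COPY dwg_text FROM '/tmp/drawing_export.txt' WITH DELIMITER E'\t';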

-Josh/eggyknap

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:

> > The partitioning facility is designed for partition counts in the tens,
> > or maybe hundreds at the most.
> 
> Maybe, but it works even on 55000 partitions as long as the operations are
> done against a partition and not the root table.

It will work on a million partitions and more, provided you do
operations on single partitions.

What you want to do is not possible, period.  Maybe when we redesign
partitioning, but that's far into the future.  Kindly do not waste our
time (nor yours).

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Crosstab Problems

2007-10-25 Thread Reg Me Please
Il Thursday 25 October 2007 16:29:33 Scott Marlowe ha scritto:
> On 10/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > Joe Conway <[EMAIL PROTECTED]> writes:
> > > Tom Lane wrote:
> > >> 1. Treat NULL rowid as a category in its own right.  This would
> > >> conform with the behavior of GROUP BY and DISTINCT, for instance.
> > >
> > > In any case, the attached changes the behavior to #1 for both flavors
> > > of crosstab (the original crosstab(text, int) and the usually more
> > > useful crosstab(text, text)).
> > >
> > > It is appropriate for 8.3 but not back-patching as it changes behavior
> > > in a non-backward compatible way and is probably too invasive anyway.
> >
> > Um, if the previous code crashed in this case, why would you worry about
> > being backward-compatible with it?  You're effectively changing the
> > behavior anyway, so you might as well make it do what you've decided is
> > the right thing.
>
> As a crosstab user, I agree with Tom.

If I can throw in my EUR 0.01 contribution, I would agree with Joe (thanks
for your wonderful crosstab).
If crosstab in 8.3 will have a different behaviour *and* it's not part of
the core features, then I'd prefer to correct it.
In any case developers will have to cope with discrepancies when going to
8.3, and you can bet they won't remain on 8.2 once 8.3 is rolled out.

And, by the way, why not include crosstab as a standard feature?
I think it deserves it!
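For anyone who hasn't used it, this is roughly what the contrib/tablefunc
crosstab looks like in practice - a sketch with made-up data:

CREATE TABLE sales (region text, quarter text, amount int);
INSERT INTO sales VALUES
  ('north', 'q1', 10), ('north', 'q2', 20),
  ('south', 'q1', 15), ('south', 'q2', 25);

SELECT *
FROM crosstab(
  'SELECT region, quarter, amount FROM sales ORDER BY 1, 2',
  'SELECT DISTINCT quarter FROM sales ORDER BY 1'
) AS ct(region text, q1 int, q2 int);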


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Scott Marlowe
On 10/25/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > [EMAIL PROTECTED] wrote:
> >> I did a test previously, where I created 1 million partitions (without
> >> data) and I checked the limits of pg, so I think it should be ok.
>
> > Clearly it's not.
>
> You couldn't have tested it too much --- even planning a query over so
> many tables would take forever, and actually executing it would surely
> have run the system out of locktable space before it even started
> scanning.
>
> The partitioning facility is designed for partition counts in the tens,
> or maybe hundreds at the most.

I've had good results well into the hundreds, but after about 400 or
so, things start to get a bit wonky.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> [EMAIL PROTECTED] wrote:
>>> I did a test previously, where I created 1 million partitions (without
>>> data) and I checked the limits of pg, so I think it should be ok.
>
>> Clearly it's not.
>
> You couldn't have tested it too much --- even planning a query over so
> many tables would take forever, and actually executing it would surely
> have run the system out of locktable space before it even started
> scanning.

And this is the testing, so you're right.

It's only the select on the root table that fails. Operations on a single
partition are no problem.

> The partitioning facility is designed for partition counts in the tens,
> or maybe hundreds at the most.

Maybe, but it works even on 55000 partitions as long as the operations are
done against a partition and not the root table.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] 8.3b1 in production?

2007-10-25 Thread Peter Childs
On 24/10/2007, Gregory Stark <[EMAIL PROTECTED]> wrote:
>
> "rihad" <[EMAIL PROTECTED]> writes:
>
> > Hi,
> >
> > Does anyone have an idea how risky it is to start using 8.3b1 in
> production,
> > with the intention of upgrading to release (or newer beta) as soon as it
> > becomes available? Risky compared to running a release, that is. Beta ->
> > release upgrades might be less tricky than 8.2 -> 8.3.
>
> Well nobody's going to be able to guess at what problems haven't been
> found
> yet. All we can say decisively is what bugs have already been found:
>
> . On Windows UTF8 encoding isn't allowed
>
> . VACUUM does an unnecessarily large amount of I/O
>
> . Toaster could cause failures on machines with strict alignment
>
> . Resources limits in Windows limit the number of clients
>
> . pg_tablespace_size() on pg_global fails even for superuser
>
> . ABI break with old libpq for applications which depend on encoding IDs
>   (such as initdb -- you can't run initdb with an 8.2 libpq against an
>   8.3 server)
>
> . invalid tsvector input could cause crashes
>
> . ALTER COLUMN TYPE would reset the index's options, possibly moving it to
> the
>   default tablespace or worse
>
> Also:
>
> . A new data type, txid, was added
>
> . Several new contrib modules were added to aid tsearch migration
>
> . Some tsearch functions were removed or modified
>
> . tsearch word categories were redefined and renamed
>
> . Make plan invalidation work for dropped sequences (etc)
>
> . Be careful to get share lock on each page before computing its free
> space.
>
> . This avoids useless checkpoint activity if XLogWrite is executed when we
>   have a very stale local copy of RedoRecPtr.
>
> . Teach planagg.c that partial indexes specifying WHERE foo IS NOT NULL
> can be
>   used to perform MIN(foo) or MAX(foo)
>
> . Remove an Assert that's been obsoleted by recent changes in the
> parsetree
>   representation of DECLARE CURSOR. Report and fix by Heikki.
>
> . Ensure that the result of evaluating a function during
> constant-expression
>   simplification gets detoasted before it is incorporated into a Const
> node.
>
> . Make dumpcolors() have tolerable performance when using 32-bit chr, as
> we do
>
> . Make "role is not permitted to log in" errors not be hidden
>
> . Remove quotes around locale names in some places for consistency.
>
> . Add missing entry for PG_WIN1250 encoding, per gripe from Pavel Stehule.
>   Also enable translation of PG_WIN874


Hmm, looks like a December release might be a dream then.

I was wondering why my PITR base backup was taking 2 hours on my 8.3 test
database whereas it takes 50 minutes on 8.1, and the database files are
meant to be smaller on a freshly installed 8.3 server than on an 8.1.1
server that hasn't been rebuilt since 8.1.1 was newly out.
I was planning to upgrade to 8.3 once it's out...

Downtime for upgrades is somewhat lacking in a 24x7 business.

Oh, my 8.1 server has been up for well over a year without being down at
all, the database for longer, which really shows how good postgres is -
377 days uptime on the computer, and I think that was to move a plug.


Peter Childs


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> [EMAIL PROTECTED] wrote:
>> > [EMAIL PROTECTED] wrote:
>> >
>> >> > are a dump of Postgres's current memory allocations and could be
>> >> > useful in showing if there's a memory leak causing this.
>> >>
>> >> The file is 20M, these are the last lines: (the first line continues
>> >> until ff_26000)
>> >>
>> >> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free
>> >> (0 chunks); 632 used
>> >
>> > You have 26000 partitions???
>>
>> At the moment the db has 55000 partitions, and that's only a fifth of the
>> complete volume the system will have in production. The reason I chose
>> this solution is that a partition will be loaded with new data every
>> 3-30 seconds, and all of it will be read by up to 15 readers every time
>> new data is available. The data will be approx 2-4TB in production in
>> total. So it will be too slow if I put it in a single table with
>> permanent indexes.
>>
>> I did a test previously, where I created 1 million partitions (without
>> data) and I checked the limits of pg, so I think it should be ok.
>
> Clearly it's not.

It does not mean my problem has anything to do with the number of
partitions. It might have, or it might not, and that's the problem: the
cause has not been located yet.

According to the documented limits of pg,

> The difference could be the memory usage and wastage
> for all those relcache entries and other stuff.  I would reduce the
> number of partitions to a more reasonable value (within the tens, most
> likely)

The db worked fine until it reached perhaps 30-40 thousand partitions.




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] [PGSQL v8.2.5] Similar queries behave differently

2007-10-25 Thread Scott Marlowe
On 10/25/07, Reg Me Please <[EMAIL PROTECTED]> wrote:
> Il Thursday 25 October 2007 13:20:40 Gregory Stark ha scritto:
> > "Gregory Stark" <[EMAIL PROTECTED]> writes:
> > > "Reg Me Please" <[EMAIL PROTECTED]> writes:
> > >>->  Seq Scan on tt_elem  (cost=0.00..29.40 rows=1940
> > >> width=8) (actual time=0.012..0.013 rows=1 loops=1)
> > >
> > > The discrepancy between the estimated rows and actual rows makes me think
> > > you've not analyzed this table in a long time. It's probably best to
> > > analyze the whole database to have a consistent set of statistics and to
> > > catch any other old table stats.
> > >
> > > There could be other misestimations based due to Postgres limitations but
> > > first fix the out of date stats and re-post both plans.
> >
> > Actually it's pretty clear there are some other bad estimations as well.
> > You should send along the view definition too.
> >
> > And I would recommend you try it with a normal JOIN ON/USING instead of the
> > NATURAL JOIN. It's possible it's joining on some unexpected columns --
> > though that doesn't really look like it's the case here.
>
> I'm not sure whether my previous message has reached the list.
>
> In any case, the tables have been created with a pg_restore and, thus,
> not much stats should be available, nor out-of-date ones.
>
> I'd actually like to better understand how to compose queries (and indexes)
> in order to make them appealing to the query planner.

I'm not sure you understand stats in pgsql.  The planner makes
decisions based on those stats, expecting them to be up to date.
With default or out-of-date stats, the planner cannot make a good
decision.

Run ANALYZE on your db, rerun the EXPLAIN ANALYZE, and post the output.
Without good stats, you can't make a query that's guaranteed to work
well, because pgsql is simply guessing about your data distribution.
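That is, roughly (using tt_elem from the earlier post as the example
table):

ANALYZE;                      -- refresh statistics for the whole database
ANALYZE VERBOSE tt_elem;      -- or just one table, with progress output

EXPLAIN ANALYZE
SELECT * FROM tt_elem;        -- then re-run the real query and post this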

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] wrote:
>> I did a test previously, where I created 1 million partitions (without
>> data) and I checked the limits of pg, so I think it should be ok.

> Clearly it's not.

You couldn't have tested it too much --- even planning a query over so
many tables would take forever, and actually executing it would surely
have run the system out of locktable space before it even started
scanning.

The partitioning facility is designed for partition counts in the tens,
or maybe hundreds at the most.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Crosstab Problems

2007-10-25 Thread Scott Marlowe
On 10/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Joe Conway <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> 1. Treat NULL rowid as a category in its own right.  This would conform
> >> with the behavior of GROUP BY and DISTINCT, for instance.
>
> > In any case, the attached changes the behavior to #1 for both flavors of
> > crosstab (the original crosstab(text, int) and the usually more useful
> > crosstab(text, text)).
>
> > It is appropriate for 8.3 but not back-patching as it changes behavior
> > in a non-backward compatible way and is probably too invasive anyway.
>
> Um, if the previous code crashed in this case, why would you worry about
> being backward-compatible with it?  You're effectively changing the
> behavior anyway, so you might as well make it do what you've decided is
> the right thing.

As a crosstab user, I agree with Tom.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] subversion support?

2007-10-25 Thread Roberts, Jon
I could use psql instead of pgAdmin then, which isn't what I want.

Having used Quest Software's SQL Navigator since '97 for Oracle and then
migrated to Toad for Oracle, both of which have integration with source
control, it is hard to revert back to a command line or text file solution.


pgAdmin should graphically show differences between the committed version
and the database.

It should allow me to click a button in the tool and commit it to the
repository.  

It should allow me to revert back to a previous version and the tool take
care of restoring the function automatically.

It should show history and let me see the differences.

In other words, take Tortoise and merge that product into pgAdmin so I have
one product instead of two.


Jon

-Original Message-
From: Brad Lhotsky [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 25, 2007 9:13 AM
To: Roberts, Jon
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] subversion support?

You could set up a subversion commit hook to export the functions to the 
database.

Then you adjust your development mentality to:

1) Edit the files on the disk
2) Commit to Subversion

Then the hook takes over and runs the drop/create automatically; you 
could even have it email the developer if the create failed.


Roberts, Jon wrote:
> Robert, that does sound better.  It keeps the names of the files in svn
> consistent with the database object names which is essential.  It also
makes
> it automatic.  Unfortunately, it doesn't tell you who did the changes.
> 
> Do you want to share that code?
> 
> 
> Thanks!
> 
> 
> Jon
> 
> -Original Message-
> From: Robert Treat [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, October 24, 2007 10:24 PM
> To: pgsql-general@postgresql.org
> Cc: Roberts, Jon
> Subject: Re: [GENERAL] subversion support?
> 
> On Wednesday 24 October 2007 15:11, Roberts, Jon wrote:
>> Yeah.  I think having to save the function to disk and then leave pgAdmin
>> to execute subversion commands is going through hoops.
>>
>> Also, pgAdmin should be integrated so that you are notified if the
> function
>> in the database is different from the last committed version.  A visual
>> diff should be there so you can see what the differences are.
>>
> 
> We have a script that runs nightly that dumps tables / functions to file,
> and 
> then checks it in automagically to svn, which sends an email of the diffs.

> Perhaps that would work for you? 
> 

-- 
Brad Lhotsky<[EMAIL PROTECTED]>
NCTS Computer SpecialistPhone: 410.558.8006
"Darkness is a state of mind, I can go where you would stumble."
  -Wolfsheim, 'Blind'

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] subversion support?

2007-10-25 Thread Brad Lhotsky
You could set up a subversion commit hook to export the functions to the 
database.


Then you adjust your development mentality to:

1) Edit the files on the disk
2) Commit to Subversion

Then the hook takes over and runs the drop/create automatically; you 
could even have it email the developer if the create failed.



Roberts, Jon wrote:

Robert, that does sound better.  It keeps the names of the files in svn
consistent with the database object names, which is essential.  It also
makes it automatic.  Unfortunately, it doesn't tell you who made the changes.

Do you want to share that code?


Thanks!


Jon

-Original Message-
From: Robert Treat [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 24, 2007 10:24 PM

To: pgsql-general@postgresql.org
Cc: Roberts, Jon
Subject: Re: [GENERAL] subversion support?

On Wednesday 24 October 2007 15:11, Roberts, Jon wrote:

Yeah.  I think having to save the function to disk and then leave pgAdmin
to execute subversion commands is going through hoops.

Also, pgAdmin should be integrated so that you are notified if the

function

in the database is different from the last committed version.  A visual
diff should be there so you can see what the differences are.



We have a script that runs nightly that dumps tables / functions to file,
and 
then checks it in automagically to svn, which sends an email of the diffs.  
Perhaps that would work for you? 



--
Brad Lhotsky<[EMAIL PROTECTED]>
NCTS Computer SpecialistPhone: 410.558.8006
"Darkness is a state of mind, I can go where you would stumble."
 -Wolfsheim, 'Blind'

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [PGSQL v8.2.5] Similar queries behave differently

2007-10-25 Thread Reg Me Please
Il Thursday 25 October 2007 13:20:40 Gregory Stark ha scritto:
> "Gregory Stark" <[EMAIL PROTECTED]> writes:
> > "Reg Me Please" <[EMAIL PROTECTED]> writes:
> >>->  Seq Scan on tt_elem  (cost=0.00..29.40 rows=1940
> >> width=8) (actual time=0.012..0.013 rows=1 loops=1)
> >
> > The discrepancy between the estimated rows and actual rows makes me think
> > you've not analyzed this table in a long time. It's probably best to
> > analyze the whole database to have a consistent set of statistics and to
> > catch any other old table stats.
> >
> > There could be other misestimations based due to Postgres limitations but
> > first fix the out of date stats and re-post both plans.
>
> Actually it's pretty clear there are some other bad estimations as well.
> You should send along the view definition too.
>
> And I would recommend you try it with a normal JOIN ON/USING instead of the
> NATURAL JOIN. It's possible it's joining on some unexpected columns --
> though that doesn't really look like it's the case here.

I'm not sure whether my previous message has reached the list.

In any case, the tables have been created with a pg_restore and, thus,
not much stats should be available, nor out-of-date ones.

I'd actually like to better understand how to compose queries (and indexes)
in order to make them appealing to the query planner.


Subject: Re: [PGSQL v8.2.5] Similar queries behave differently
Date: Thursday 25 October 2007
From: Reg Me Please <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org

Hi all again.

Maybe I've solved the problem, but would like to have some hint on "why".

In the second query I've substituted the last join (natural join tt_rice)
with an additional "where condition". I can do this as I am sure that
the tt_rice table will always contain just one row with one field.

The main difference with the first query is that in the first case the
single row with a single field is a "bigint", while in the second one it
is "text".

Otherwise the two queries are almost identical, apart from the number of
result rows and the size of the joined tables.

Is there any deeper tutorial on how to read (and understand) the explain
analyze output?

Many thanks again.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> I have shown the entire configuration. If it's not in the configuration
> shown, I have changed its value.

I meant to say "I haven't changed its value"

thomas




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Delete/Update with ORDER BY

2007-10-25 Thread Roberts, Jon
I have never seen ORDER BY in a DELETE statement in Oracle, so I tested it.


SQL*Plus: Release 10.2.0.1.0 - Production on Thu Oct 25 07:45:50 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> create table a (a1 number, a2 number);

Table created.

SQL> delete from a where a1 = 10 order by a2;
delete from a where a1 = 10 order by a2
*
ERROR at line 1:
ORA-00933: SQL command not properly ended

Sure enough, it doesn't work.  What are you trying to do with an order by on
a delete?  
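If the intent is to delete only the first N matching rows in some order
(the usual reason for combining ORDER BY with ROWNUM in an Oracle
DELETE), one workaround in PostgreSQL is a subquery on the system column
ctid - a sketch, reusing the table above:

DELETE FROM a
WHERE ctid IN (
    SELECT ctid
    FROM a
    WHERE a1 = 10
    ORDER BY a2
    LIMIT 5          -- delete only the first 5 matches, ordered by a2
);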


Jon

-Original Message-
From: Evandro Andersen [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 25, 2007 7:25 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Delete/Update with ORDER BY

In Oracle you can use this:



DELETE FROM A WHERE A1 = 10 ORDER BY A2



Is there something like this in PostgreSQL?



Evandro Andersen

Brazil

Postgresql 8.2






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Delete/Update with ORDER BY

2007-10-25 Thread A. Kretschmer
am  Thu, dem 25.10.2007, um  5:25:14 -0700 mailte Evandro Andersen folgendes:
> In Oracle you can use this:
> 
> 
> 
> DELETE FROM A WHERE A1 = 10 ORDER BY A2
> 
> 
> 
> Is there something like this in PostgreSQL?

Can you explain this a little bit more? I can't see the point. Either I
delete the rows with A1=10 or not, but I don't need an order for this.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] select count() out of memory

2007-10-25 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:
> > [EMAIL PROTECTED] wrote:
> >
> >> > are a dump of Postgres's current memory allocations and could be
> >> useful in
> >> > showing if there's a memory leak causing this.
> >>
> >> The file is 20M, these are the last lines: (the first line continues
> >> unttill ff_26000)
> >>
> >>
> >> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
> >> chunks); 632 used
> >
> > You have 26000 partitions???
> 
> At the moment the db has 55000 partitions, and thats only a fifth of the
> complete volume the system will have in production. The reason I chose
> this solution is that a partition will be loaded with new data every 3-30
> seconds, and all that will be read by up to 15 readers every time new data
> is available. The data will be approx 2-4TB in production in total. So it
> will be too slow if I put it in a single table with permanent indexes.
> 
> I did a test previously, where I created 1 million partitions (without
> data) and I checked the limits of pg, so I think it should be ok.

Clearly it's not.  The difference could be the memory usage and wastage
for all those relcache entries and other stuff.  I would reduce the
number of partitions to a more reasonable value (within the tens, most
likely)

Maybe your particular problem can be solved by raising
max_locks_per_transaction (?) but I wouldn't count on it.
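For reference, the lock table holds roughly max_locks_per_transaction *
(max_connections + max_prepared_transactions) locks, so a scan touching
tens of thousands of relations would need something like the following
in postgresql.conf (a sketch only; the value is a guess):

max_locks_per_transaction = 512        # default is 64; requires a restart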

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Delete/Update with ORDER BY

2007-10-25 Thread Evandro Andersen
In Oracle you can use this:



DELETE FROM A WHERE A1 = 10 ORDER BY A2



There is something in the Postgresql ?



Evandro Andersen

Brazil

Postgresql 8.2








Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> [EMAIL PROTECTED] wrote:
>
>> > are a dump of Postgres's current memory allocations and could be
>> useful in
>> > showing if there's a memory leak causing this.
>>
>> The file is 20M, these are the last lines: (the first line continues
>> until ff_26000)
>>
>>
>> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
>> chunks); 632 used
>
> You have 26000 partitions???

At the moment the db has 55000 partitions, and that's only a fifth of the
complete volume the system will have in production. The reason I chose
this solution is that a partition will be loaded with new data every 3-30
seconds, and all that will be read by up to 15 readers every time new data
is available. The data will be approx 2-4TB in production in total. So it
will be too slow if I put it in a single table with permanent indexes.

I did a test previously, where I created 1 million partitions (without
data) and I checked the limits of pg, so I think it should be ok.

thomas




Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> Good morning
>
> Please display these memory settings from your postgresql.conf file
> sort_mem
> shared_buffers

I have shown the entire configuration I changed; if a setting is not in the
configuration shown, it is still at its default value.

I have used the configuration example provided by Sun for running
Postgres on Solaris.

http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp

regards

thomas






Re: [GENERAL] select count() out of memory

2007-10-25 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:

> > are a dump of Postgres's current memory allocations and could be useful in
> > showing if there's a memory leak causing this.
> 
> The file is 20M, these are the last lines: (the first line continues
> until ff_26000)
> 
> 
> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
> chunks); 632 used

You have 26000 partitions???

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
> <[EMAIL PROTECTED]> writes:
>
>> max_connections = 1000
>
> Do you actually have anywhere near this number of processes? What is your
> setting for work_mem? Keep in mind every process could use as much as
> work_mem
> and actually it's possible to use that much several times over.
>
> Also, what is your maintenance_work_mem and do you have many vacuums or
> other
> such commands running at the time?
>
> 1,000 processes is a large number of processes. You may be better off
> re-architecting to run fewer processes simultaneously. But if that's not
> possible you'll have to keep it in mind to tune other things properly.

The application only needs about 20 connections under normal circumstances,
but might need up to 100 in some situations, for example if there is a lot
of latency and new connections arrive before others have finished.

I could certainly reduce the number to 100 or 50, but do you think that
would help with this problem?

regards

thomas




Re: [GENERAL] subversion support?

2007-10-25 Thread Roberts, Jon
Robert, that does sound better.  It keeps the names of the files in svn
consistent with the database object names, which is essential.  It also makes
the process automatic.  Unfortunately, it doesn't tell you who made the changes.

Do you want to share that code?


Thanks!


Jon

-Original Message-
From: Robert Treat [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 24, 2007 10:24 PM
To: pgsql-general@postgresql.org
Cc: Roberts, Jon
Subject: Re: [GENERAL] subversion support?

On Wednesday 24 October 2007 15:11, Roberts, Jon wrote:
> Yeah.  I think having to save the function to disk and then leave pgAdmin
> to execute subversion commands is going through hoops.
>
> Also, pgAdmin should be integrated so that you are notified if the
function
> in the database is different from the last committed version.  A visual
> diff should be there so you can see what the differences are.
>

We have a script that runs nightly that dumps tables / functions to file,
and 
then checks it in automagically to svn, which sends an email of the diffs.  
Perhaps that would work for you? 
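The job itself can be tiny; a rough sketch of the idea in Python (paths,
database name and commit message are hypothetical, and a post-commit hook
does the mailing):

import subprocess

# dump the schema into a file inside an svn working copy
out = open('/srv/schema/mydb.sql', 'w')
subprocess.call(['pg_dump', '--schema-only', 'mydb'], stdout=out)
out.close()

# commit it; svn records nothing if the file is unchanged
subprocess.call(['svn', 'commit', '-m', 'nightly schema snapshot',
                 '/srv/schema/mydb.sql'])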

-- 
Robert Treat
Database Architect
http://www.omniti.com



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Martin Gainty
Good morning

Please display these memory settings from your postgresql.conf file
sort_mem
shared_buffers

Thanks
Martin
--
- Original Message -
From: <[EMAIL PROTECTED]>
To: "Gregory Stark" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; 
Sent: Thursday, October 25, 2007 7:07 AM
Subject: Re: [GENERAL] select count() out of memory


> Hi
>
> I have tried to answer to the best of my knowledge, but it's running on
> Solaris 10, and I am not that familiar with Solaris ( Go Linux!!! :)
>
> > any more memory. Either you have a very low memory ulimit (look at
ulimit
> > -a
> > in the same session as Postgres) or your machine is really low on
memory.
> > Perhaps you have shared_buffers set very high or some other program is
> > using
> > all your available memory (and swap)?
> >
>
> the machine has 32GB RAM, I don't know how much swap it has, but I do know
> the disk system is a disk cluster with 16x450GB disks, it probably has a
> local disk as well but I don't know how big it is.
>
> -bash-3.00$ ulimit -a
> core file size(blocks, -c) unlimited
> data seg size (kbytes, -d) unlimited
> file size (blocks, -f) unlimited
> open files(-n) 256
> pipe size  (512 bytes, -p) 10
> stack size(kbytes, -s) 10240
> cpu time (seconds, -t) unlimited
> max user processes(-u) 16357
> virtual memory(kbytes, -v) unlimited
>
>
> this is my config
>
> checkpoint_segments = 96
> effective_cache_size = 128000
> shared_buffers = 43
> max_fsm_pages = 208000
> max_fsm_relations = 1
>
> max_connections = 1000
>
> autovacuum = off            # enable autovacuum subprocess?
>
> fsync = on                  # turns forced synchronization on or off
> #full_page_writes = on      # recover from partial page writes
> wal_sync_method = fdatasync
> wal_buffers = 256
>
> commit_delay = 5
> #commit_siblings = 5# range 1-1000
>
>
>
> > Also, what version of Postgres is this?
>
> Apparently its 8.1.8, I thought it was 8.2
>
> > are a dump of Postgres's current memory allocations and could be useful
in
> > showing if there's a memory leak causing this.
>
> The file is 20M, these are the last lines: (the first line continues
> until ff_26000)
>
>
> [memory context dump snipped; it appears in full in the quoted message
> below]

Re: [GENERAL] select count() out of memory

2007-10-25 Thread Gregory Stark
<[EMAIL PROTECTED]> writes:

> max_connections = 1000

Do you actually have anywhere near this number of processes? What is your
setting for work_mem? Keep in mind every process could use as much as work_mem
and actually it's possible to use that much several times over.

Also, what is your maintenance_work_mem and do you have many vacuums or other
such commands running at the time?

1,000 processes is a large number of processes. You may be better off
re-architecting to run fewer processes simultaneously. But if that's not
possible you'll have to keep it in mind to tune other things properly.
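To put rough numbers on it (assuming a hypothetical work_mem of 32MB, which
is not in the posted config):

    1000 connections x 32MB work_mem = 32GB

That is the machine's entire RAM before counting shared_buffers, and a
single sort- or hash-heavy query can take several work_mem allocations at
once.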

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] [PGSQL v8.2.5] Similar queries behave differently

2007-10-25 Thread Gregory Stark
"Gregory Stark" <[EMAIL PROTECTED]> writes:

> "Reg Me Please" <[EMAIL PROTECTED]> writes:
>
>>->  Seq Scan on tt_elem  (cost=0.00..29.40 rows=1940 width=8) 
>> (actual time=0.012..0.013 rows=1 
>> loops=1)
>
> The discrepancy between the estimated rows and actual rows makes me think
> you've not analyzed this table in a long time. It's probably best to analyze
> the whole database to have a consistent set of statistics and to catch any
> other old table stats.
>
> There could be other misestimations due to Postgres limitations, but
> first fix the out-of-date stats and re-post both plans.

Actually it's pretty clear there are some other bad estimations as well. You
should send along the view definition too.

And I would recommend you try it with a normal JOIN ON/USING instead of the
NATURAL JOIN. It's possible it's joining on some unexpected columns -- though
that doesn't really look like it's the case here.
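For example, the second query spelled with explicit join columns might look
like this (a sketch: the join columns camp_id and dato_t are inferred from
the posted plans):

SELECT d.*
FROM t_campi_ricerche c
JOIN v_dati_attuali d USING (camp_id)
JOIN tt_rice r USING (dato_t)
WHERE c.rice_id = 'CODICE';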

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] conditional alter table add ?

2007-10-25 Thread Peter Childs
On 17/10/2007, Lothar Behrens <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I do convert an UML XMI model to a database script to create the
> database schema.
> To enable multiple iterations I need conditional alter table add
> column like syntax.
>
> Is there any way to do this ?



Not easily in a straightforward SQL script.  You're going to need to write
your script in a scripting language (like Perl, Python or Ruby) and then run
the ALTER TABLE query depending on the results of other queries against the
database.
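That said, a small PL/pgSQL helper can keep the whole thing inside a plain
SQL script; a sketch (function, table and column names are made up):

CREATE OR REPLACE FUNCTION add_column_if_missing(p_table text, p_column text,
                                                 p_type text)
RETURNS void AS $BODY$
BEGIN
    -- information_schema tells us whether the column already exists
    IF NOT EXISTS (SELECT 1
                     FROM information_schema.columns
                    WHERE table_name = p_table
                      AND column_name = p_column) THEN
        EXECUTE 'ALTER TABLE ' || quote_ident(p_table)
             || ' ADD COLUMN ' || quote_ident(p_column) || ' ' || p_type;
    END IF;
END;
$BODY$ LANGUAGE plpgsql;

-- safe to run on every iteration of the model conversion:
SELECT add_column_if_missing('my_table', 'new_col', 'integer');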

Peter.


[GENERAL] execute pg_dump via python

2007-10-25 Thread Garry Saddington
I am using Zope on Windows with an external Python method to back up my
database. I am struggling to run the following command:

pg_dump.exe database > file

I have tried using os.popen (no luck) and also subprocess.Popen, e.g.:

import subprocess

subprocess.Popen(['c:/dir/dir/pg_dump.exe', 'database', '>', 'c:/dir/dir/output file'])

The command string works perfectly in a terminal. Does anyone know how I 
should be doing this? I get no errors or traceback when I try the method 
through Zope.
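I suspect the '>' is the problem: it is shell redirection, and without a
shell Popen passes it to pg_dump as a literal argument. A minimal sketch of
the alternative, opening the output file in Python (paths are hypothetical):

import subprocess

# hand pg_dump an open file as its stdout; no shell, so no '>' involved
out = open(r'c:\dir\dir\output_file.sql', 'wb')
p = subprocess.Popen([r'c:\dir\dir\pg_dump.exe', 'database'], stdout=out)
rc = p.wait()   # 0 means pg_dump finished successfully
out.close()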

regards
garry



Re: [GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
Hi

I have tried to answer to the best of my knowledge, but it's running on
Solaris 10, and I am not that familiar with Solaris ( Go Linux!!! :)

> any more memory. Either you have a very low memory ulimit (look at ulimit
> -a
> in the same session as Postgres) or your machine is really low on memory.
> Perhaps you have shared_buffers set very high or some other program is
> using
> all your available memory (and swap)?
>

the machine has 32GB RAM, I don't know how much swap it has, but I do know
the disk system is a disk cluster with 16x450GB disks, it probably has a
local disk as well but I don't know how big it is.

-bash-3.00$ ulimit -a
core file size(blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files(-n) 256
pipe size  (512 bytes, -p) 10
stack size(kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes(-u) 16357
virtual memory(kbytes, -v) unlimited


this is my config

checkpoint_segments = 96
effective_cache_size = 128000
shared_buffers = 43
max_fsm_pages = 208000
max_fsm_relations = 1

max_connections = 1000

autovacuum = off            # enable autovacuum subprocess?

fsync = on                  # turns forced synchronization on or off
#full_page_writes = on      # recover from partial page writes
wal_sync_method = fdatasync
wal_buffers = 256

commit_delay = 5
#commit_siblings = 5# range 1-1000



> Also, what version of Postgres is this?

Apparently its 8.1.8, I thought it was 8.2

> are a dump of Postgres's current memory allocations and could be useful in
> showing if there's a memory leak causing this.

The file is 20M, these are the last lines: (the first line continues
until ff_26000)


idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_4_value2: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_4_value1: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_4_trace_id: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_3_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_3_value2: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_3_value1: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_3_trace_id: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_2_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_2_value2: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_2_value1: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_2_trace_id: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_1_value7: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_1_value2: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_1_value1: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
idx_attributes_g1_seq_1_ff_1_trace_id: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
pg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
pg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0
chunks); 768 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); 768
used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768
used
MdSmgr: 4186112 total in 9 blocks; 911096 free (4 chunks); 3275016 used
LockTable (locallock hash): 2088960 total in 8 blocks; 418784 free (25
chunks); 1670176 used
Timezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used

Re: [GENERAL] Install plJava

2007-10-25 Thread João Paulo Zavanela
>João Paulo Zavanela wrote:
>>
>> The file pljava.dll exist in directory, why this error?
>> Someone can help me?
>
>PL/Java has its own mailing list here:
>http://gborg.postgresql.org/mailman/listinfo/pljava-dev
>
>I think it is still active, but I'm not sure.  Sorry, I'm short on time.
>   Search the archives there; I think this problem has come up before.
>
>--
>Guy Rouillier


Thanks!
I will search that mailing list's archives.





Re: [GENERAL] [PGSQL v8.2.5] Similar queries behave differently

2007-10-25 Thread Gregory Stark

"Reg Me Please" <[EMAIL PROTECTED]> writes:

>->  Seq Scan on tt_elem  (cost=0.00..29.40 rows=1940 width=8) 
> (actual time=0.012..0.013 rows=1 
> loops=1)

The discrepancy between the estimated rows and actual rows makes me think
you've not analyzed this table in a long time. It's probably best to analyze
the whole database to have a consistent set of statistics and to catch any
other old table stats.

There could be other misestimations due to Postgres limitations, but
first fix the out-of-date stats and re-post both plans.
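For example, in psql (the table name is taken from the plan above):

ANALYZE;                   -- refresh statistics for the whole database
ANALYZE VERBOSE tt_elem;   -- or just the suspect table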

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] select count() out of memory

2007-10-25 Thread Gregory Stark

<[EMAIL PROTECTED]> writes:

>
> ERROR:  out of memory
> DETAIL:  Failed on request of size 130.
>
> Does anybody have any suggestion as to which parameter I should tune to
> give it more memory to be able to perform queries on the root table?

This indicates that malloc() failed which means the system couldn't provide
any more memory. Either you have a very low memory ulimit (look at ulimit -a
in the same session as Postgres) or your machine is really low on memory.
Perhaps you have shared_buffers set very high or some other program is using
all your available memory (and swap)?

> The last parts of the db log is the following, I dont think anything other
> than the last 2 lines are relevant.

You're wrong. All the lines like:

> pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0
> chunks); 696 used

are a dump of Postgres's current memory allocations and could be useful in
showing if there's a memory leak causing this.

Also, what version of Postgres is this?

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] Indexes & Primary Keys (based on the same columns)

2007-10-25 Thread Ow Mun Heng

On Mon, 2007-10-22 at 08:20 -0400, Bill Moran wrote:
> In response to "Joshua D. Drake" <[EMAIL PROTECTED]>:
> 
> > Ow Mun Heng wrote:
> > > I'm wondering if what I'm doing is redundant.
> > > 
> > > I have a primary key on columns (A,B,C,D)
> > > and I've also defined an index based on the same columns (A,B,C,D)
> > > 
> > > and sometimes in the query explain, I see the pkey being used for the
> > > scan instead of the index.
> > > 
> > > So.. That made me think perhaps the additional index on the _same_
> > > parameter is redundant.
> > 
> > A primary key creates an index so having a second index with the same 
> > definition is redundant.
> 
> Note the "same definition."
> 
> Since this is a multi-column index, there may be some advantage gained
> by having indexes defined slightly differently.  I.e., your PK is
> (ABCD) but you have an additional index on (DCBA)
> 
> Whether or not this is actually helpful depends on the nature of the
> queries you run.
> 

I found that the column order might not matter as much, since bitmap index
scans seem to be able to handle these cases.
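To make that concrete (a sketch; the index names are made up): the primary
key on (A,B,C,D) already created a unique index, so an exact duplicate can
simply be dropped, and a differently ordered one added only if the queries
call for it:

DROP INDEX my_table_abcd_idx;   -- hypothetical duplicate of the PK's index
CREATE INDEX my_table_dcba_idx ON my_table (d, c, b, a);  -- only if needed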




Re: [GENERAL] Determine query run-time from pg_* tables

2007-10-25 Thread Ow Mun Heng

On Tue, 2007-10-23 at 09:28 -0500, Erik Jones wrote:
> > Ow Mun Heng wrote:
> >> Hi,
> >>
> >> Is there a method to obtain the query's runtime from any of the pg_*
> >> tables?
> query_start does, however, give you the time that the query started.   
> I use something like
> 
> SELECT procpid, client_addr, to_char(now() - query_start, 'DD  
> HH24:MI:SS') as query_time, current_query
> FROM pg_stat_activity
> ORDER BY query_time DESC;

Thanks for this, but I found that this query doesn't really do much for
the query_time: it always shows 00 for a long-running query (>1 min).

I've rewritten it as

SELECT procpid, client_addr, now() - query_start as query_time,
current_query
FROM pg_stat_activity
ORDER BY query_time DESC;

The to_char doesn't really do much for me.

Thanks for the pointer, though; it pointed me in the right direction.



[GENERAL] [PGSQL v8.2.5] Similar queries behave differently

2007-10-25 Thread Reg Me Please
Hi all.
On the very same database and session I have two different (but similar)
queries that behave very differently as far as timing goes.

This is the first one:

prove=# explain analyze select d.* from t_vcol natural join v_dati_attuali d 
natural join tt_elem where vtab_id='TEST';
 QUERY PLAN
-
 Nested Loop  (cost=5.65..8562012.60 rows=88104022 width=73) (actual 
time=36.579..36.772 rows=7 loops=1)
   ->  Hash Join  (cost=1.19..442967.06 rows=408730 width=73) (actual 
time=36.547..36.660 rows=7 loops=1)
 Hash Cond: (d.camp_id = t_vcol.camp_id)
 ->  Nested Loop  (cost=0.00..430860.08 rows=1603700 width=73) (actual 
time=36.480..36.558 rows=24 loops=1)
   ->  Seq Scan on tt_elem  (cost=0.00..29.40 rows=1940 width=8) 
(actual time=0.012..0.013 rows=1 loops=1)
   ->  Index Scan using i_dati_0 on t_dati d  (cost=0.00..211.74 
rows=827 width=73) (actual time=36.461..36.498 rows=24 loops
 Index Cond: (d.elem_id = tt_elem.elem_id)
 Filter: dato_flag
 ->  Hash  (cost=1.12..1.12 rows=5 width=15) (actual time=0.039..0.039 
rows=5 loops=1)
   ->  Seq Scan on t_vcol  (cost=0.00..1.12 rows=5 width=15) 
(actual time=0.015..0.026 rows=5 loops=1)
 Filter: (vtab_id = 'TEST'::text)
   ->  Bitmap Heap Scan on tt_data  (cost=4.46..16.62 rows=216 width=8) 
(actual time=0.009..0.009 rows=1 loops=7)
 Recheck Cond: ((d.dato_validita <= tt_data.data_corr) AND 
(d.dato_scadenza > tt_data.data_corr))
 ->  Bitmap Index Scan on tt_data_pkey  (cost=0.00..4.41 rows=216 
width=0) (actual time=0.006..0.006 rows=1 loops=7)
   Index Cond: ((d.dato_validita <= tt_data.data_corr) AND 
(d.dato_scadenza > tt_data.data_corr))
 Total runtime: 36.922 ms
(16 rows)

And this is the second one:
prove=# explain analyze SELECT d.* from t_campi_ricerche natural join 
v_dati_attuali d natural join tt_rice where rice_id='CODICE';
 QUERY PLAN
-
 Nested Loop  (cost=43.29..38167065.82 rows=409498649 width=73) (actual 
time=2927.890..56922.415 rows=1 loops=1)
   ->  Hash Join  (cost=38.83..430557.39 rows=1899736 width=73) (actual 
time=2915.990..56910.510 rows=1 loops=1)
 Hash Cond: (d.dato_t = tt_rice.dato_t)
 ->  Hash Join  (cost=1.15..402765.04 rows=2335285 width=73) (actual 
time=191.261..55238.816 rows=2394966 loops=1)
   Hash Cond: (d.camp_id = t_campi_ricerche.camp_id)
   ->  Seq Scan on t_dati d  (cost=0.00..326867.12 rows=14011712 
width=73) (actual time=16.612..42797.766 rows=14011712 loops
 Filter: dato_flag
   ->  Hash  (cost=1.09..1.09 rows=5 width=15) (actual 
time=0.053..0.053 rows=5 loops=1)
 ->  Seq Scan on t_campi_ricerche  (cost=0.00..1.09 rows=5 
width=15) (actual time=0.031..0.041 rows=5 loops=1)
   Filter: (rice_id = 'CODICE'::text)
 ->  Hash  (cost=22.30..22.30 rows=1230 width=32) (actual 
time=0.009..0.009 rows=1 loops=1)
   ->  Seq Scan on tt_rice  (cost=0.00..22.30 rows=1230 width=32) 
(actual time=0.003..0.004 rows=1 loops=1)
   ->  Bitmap Heap Scan on tt_data  (cost=4.46..16.62 rows=216 width=8) 
(actual time=11.885..11.886 rows=1 loops=1)
 Recheck Cond: ((d.dato_validita <= tt_data.data_corr) AND 
(d.dato_scadenza > tt_data.data_corr))
 ->  Bitmap Index Scan on tt_data_pkey  (cost=0.00..4.41 rows=216 
width=0) (actual time=0.033..0.033 rows=1 loops=1)
   Index Cond: ((d.dato_validita <= tt_data.data_corr) AND 
(d.dato_scadenza > tt_data.data_corr))
 Total runtime: 56922.563 ms
(17 rows)


The v_dati_attuali is a view and is common to both queries.
The index structure on t_vcol and t_campi_ricerche is very similar, and
both tt_rice and tt_elem have just one row, with one field being the primary key.

Of course I'd like the second query to behave the same as the first one, but
I have no clue how to achieve it.

Is there any hint?




[GENERAL] select count() out of memory

2007-10-25 Thread tfinneid
Hi

I am volume testing a db model that consists of partitioned tables. The
db has been running for a week and a half now and has built up to contain
approx 55000 partition tables of 18000 rows each. The root table therefore
contains about 1 billion rows. When I try to do a "select count(*)" on the
root table, it does some work for a while, perhaps 5-10 minutes, and then
aborts with

ERROR:  out of memory
DETAIL:  Failed on request of size 130.

Does anybody have any suggestion as to which parameter I should tune to
give it more memory to be able to perform queries on the root table?
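For what it's worth, a rough count can be had without touching the root
table at all by summing the planner's per-partition estimates (a sketch: the
numbers are only as fresh as the last ANALYZE/VACUUM, and the relname
pattern is guessed from the index names in the log):

SELECT sum(reltuples)::bigint AS approx_rows
FROM pg_class
WHERE relkind = 'r'
  AND relname LIKE 'attributes_g1_seq%';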

regards

thomas


The last part of the db log is the following; I don't think anything other
than the last 2 lines is relevant.

pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0
chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); 768
used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768
used
MdSmgr: 4186112 total in 9 blocks; 911096 free (4 chunks); 3275016 used
LockTable (locallock hash): 2088960 total in 8 blocks; 418784 free (25
chunks); 1670176 used
Timezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR:  out of memory
DETAIL:  Failed on request of size 130.


