Re: [GENERAL] keeping 3 tables in sync w/ each other

2007-09-19 Thread Ow Mun Heng
On Wed, 2007-09-19 at 19:31 +0800, Ow Mun Heng wrote: 
> On Wed, 2007-09-19 at 11:05 +0100, Filip Rembiałkowski wrote:
> > 2007/9/19, Ow Mun Heng <[EMAIL PROTECTED]>:
> > 
> > (...)
> > 
> > > simulate a delete
> > > => delete from parent where id in (select id from child);
> > > DELETE 6
> > >
> > > => select * from parent;
> > >  id |  data1
> > > +-
> > >   2 | parent2
> > >   3 | parent3
> > >   4 | parent4
> > >
> > > => select * from child;
> > >  id | data1
> > > +---
> > > (0 rows)
> > >
> > Yes. You can however try
> > 
> > SELECT FROM  ... ONLY parent   ...
> > (that's what I used in example)
> > 
> > and
> > DELETE FROM  ... ONLY parent ...
> 
> Let me re-try this and see how it goes.
> 

I tested this last night and it works, with one caveat.

Just FYI, the process I'm doing is:

pull from mssql
\copy into PG temp table
begin
delete unique_id from master if exists in child
insert into master from child
truncate child
update sync_log
commit;
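In SQL, that transaction might look something like this (table and column names here are illustrative, not the actual schema):

```sql
-- data has already been pulled from MSSQL and \copy'd into the child (staging) table
BEGIN;
DELETE FROM ONLY master m
  WHERE EXISTS (SELECT 1 FROM child c WHERE c.unique_id = m.unique_id);
INSERT INTO master SELECT * FROM child;
TRUNCATE child;
UPDATE sync_log SET last_synced = now() WHERE table_name = 'master';
COMMIT;
```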

I tested the above last night, and the issue I'm seeing here is locking:
I had to rewrite the queries so that they will only read from the
parent table.

=> select * from ONLY parent where x = Y etc..

and I can't do a:

=> select * from  parent where x = Y etc..

as the table truncation step locks the entire table (I see an
ExclusiveLock held in one of the transactions).

This is good to know anyway, so it's still usable, but it will likely need
user training etc., which may be bad.

Are there any other suggestions? Otherwise, I think a plpgsql function to add
new columns automatically to the 3 different tables would be another
good option as well.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Stuck on Foreign Keys

2007-09-19 Thread A. Kretschmer
On Thu, 20.09.2007 at 14:13:40 +1000, Chester wrote:
> Hi
> 
> I have a question regarding foreign keys, I just cannot get it to create 
> them for meI must be doing something wrong but I have no idea what 
> that might be :)
> 
> I have a table "clients"
> 
> clientID (primary)
> ticode
> Firstname
> SecondName
> 
> I have a second table "titles"
> 
> ticode (primary)
> Title
> 
> I am trying to create a foreign key on TIcode "clients" table as in below,
> 
> ALTER TABLE clients ADD CONSTRAINT the_title FOREIGN KEY (ticode) 
> REFERENCES titles (ticode) ;
> 
> I keep getting this error
> 
> ERROR:  insert or update on table "clients" violates foreign key 
> constraint "the_title"
> DETAIL:  Key (ticode)=( ) is not present in table "titles".
> 
> Sorry, I have no idea where I  am going wrong...Any help would be great

my guess: table clients, column ticode isn't an INT. It is a TEXT type
and contains an entry ' '.


test=> create table clients (clientid serial primary key, ticode text);
NOTICE:  CREATE TABLE will create implicit sequence "clients_clientid_seq" for 
serial column "clients.clientid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "clients_pkey" 
for table "clients"
CREATE TABLE
test=*> create table titles(ticode text unique, title text);
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "titles_ticode_key" 
for table "titles"
CREATE TABLE
test=*> insert into clients values (1, ' ');
INSERT 0 1
test=*> alter table clients add constraint the_title FOREIGN KEY (ticode) 
REFERENCES titles (ticode);
ERROR:  insert or update on table "clients" violates foreign key constraint 
"the_title"
DETAIL:  Key (ticode)=( ) is not present in table "titles".
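To find the rows that block the constraint before adding it, a query along these lines helps (NULL ticodes are ignored by the FK check, so only non-NULL mismatches matter):

```sql
SELECT c.clientid, c.ticode
  FROM clients c
 WHERE c.ticode IS NOT NULL
   AND NOT EXISTS (SELECT 1 FROM titles t WHERE t.ticode = c.ticode);
```

Then either insert the missing codes into titles or NULL out / correct the offending clients rows.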



Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] Stuck on Foreign Keys

2007-09-19 Thread Chester

Michael Glaesemann wrote:


On Sep 19, 2007, at 23:13 , C wrote:

ERROR:  insert or update on table "clients" violates foreign key 
constraint "the_title"

DETAIL:  Key (ticode)=( ) is not present in table "titles".

Sorry, I have no idea where I  am going wrong...Any help would be great


You've shown us the errors but not the commands you used that threw 
the errors. Show us the exact commands you used along with the errors 
so we might see what's going wrong. It looks like you're not providing 
a value for ticode when inserting or updating clients, but that's just 
a guess.


Michael Glaesemann
grzm seespotcode net




Thanks Michael

The tables are existing, populated tables; the error was happening 
while altering the clients table to add the FK, sorry if that was not 
clear. Maurice put me right: there were a couple of rows in the clients 
table with blank values in the ticode column.


Thanks again guys

Cheers

C



Re: [GENERAL] Stuck on Foreign Keys

2007-09-19 Thread Michael Glaesemann


On Sep 19, 2007, at 23:13 , Chester wrote:

ERROR:  insert or update on table "clients" violates foreign key  
constraint "the_title"

DETAIL:  Key (ticode)=( ) is not present in table "titles".

Sorry, I have no idea where I  am going wrong...Any help would be  
great


You've shown us the errors but not the commands you used that threw  
the errors. Show us the exact commands you used along with the errors  
so we might see what's going wrong. It looks like you're not  
providing a value for ticode when inserting or updating clients, but  
that's just a guess.


Michael Glaesemann
grzm seespotcode net





[GENERAL] Stuck on Foreign Keys

2007-09-19 Thread Chester

Hi

I have a question regarding foreign keys; I just cannot get it to create 
them for me... I must be doing something wrong, but I have no idea what 
that might be :)


I have a table "clients"

clientID (primary)
ticode
Firstname
SecondName

I have a second table "titles"

ticode (primary)
Title

I am trying to create a foreign key on ticode in the "clients" table, as below:

ALTER TABLE clients ADD CONSTRAINT the_title FOREIGN KEY (ticode) 
REFERENCES titles (ticode) ;


I keep getting this error

ERROR:  insert or update on table "clients" violates foreign key 
constraint "the_title"

DETAIL:  Key (ticode)=( ) is not present in table "titles".

Sorry, I have no idea where I  am going wrong...Any help would be great

TIA

C








[GENERAL] Postgresql and SSL

2007-09-19 Thread Benjamin Smith
I'm using 8.1 RPMs for CentOS and so far, it's been great. 

Now, I'm going to enable SSL. I had no trouble with the instructions in the 
documentation for server-only certificates, and verified that psql (Linux) 
acknowledges the SSL connection. 

But I am stumped as to how to create a client certificate that's enforced!  

I tried the instructions found at
http://marc.info/?l=tomcat-user&m=106293430225790&w=2

and used the "ca.pem" created there as the postgres root.crt. Although the
PG daemon no longer complains that it couldn't find root.crt, it still doesn't
require a client certificate in order to connect with psql.
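For reference, a client certificate signed by the CA can be generated roughly like this (file names are assumptions; the first command creates a throwaway CA standing in for the ca.pem/ca.key from the tomcat recipe above, which you would reuse instead):

```shell
# throwaway CA -- stand-in for the existing ca.pem/ca.key from the recipe above
openssl req -new -x509 -nodes -keyout ca.key -out ca.pem -subj "/CN=test-ca" -days 365
# client key + certificate signing request
openssl req -new -nodes -keyout postgresql.key -out client.csr -subj "/CN=ben"
# sign the client CSR with the CA
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
        -out postgresql.crt -days 365
```

libpq looks for the client files as ~/.postgresql/postgresql.crt and ~/.postgresql/postgresql.key (the key must not be world-readable), so copy them there for psql to present them.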

Any pointers for somebody who is NOT an ssl guru? (like myself!) 

-Ben 





[GENERAL] Planning a Postgresql based Database

2007-09-19 Thread Gokulakannan Somsundaram
Hi,
Can anyone please advise on the steps to follow to start a
Postgresql based database firm? I'd appreciate guidelines on the use of Postgresql
resources and on ways to maintain a relationship with the Postgresql
community.

Thanks,
Gokul.


Re: [GENERAL] Building Windows fat clients

2007-09-19 Thread Bill Bartlett


> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Martin Gainty
> Sent: Wednesday, September 19, 2007 5:58 PM
> To: johnf; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Building Windows fat clients
> 
> 
> Hello Guys
> 
> Using C# means .NET framework will need to be installed and 
> your webapp will 
> only work with Microsoft OS


Not entirely true.  The Mono project ( www.mono-project.com ) has
implemented a decent amount of the .NET Framework in a cross-platform
environment, including much of ASP.NET. 


> Be aware scripting languages such as PHP and Python will 
> necessitate that 
> you acquire all of the libraries for your web app..
> As long as you stay mainstream you should be ok
> But if you have specific requirements for XYZ Db that nobody 
> supports or 
> protocols or device drivers that nobody has written you'll 
> have to write the 
> libraries yourself
> 
> Martin--
> - Original Message - 
> From: "johnf" <[EMAIL PROTECTED]>
> To: 
> Sent: Wednesday, September 19, 2007 5:20 PM
> Subject: Re: [GENERAL] Building Windows fat clients
> 
> 
> > On Wednesday 19 September 2007 10:19, Scott Ribe wrote:
> >> I'm asking this group because we tend to think alike wrt to data 
> >> modeling and separation of concerns ;-)
> >>
> >> Any recommendations on ORM libraries for new Windows 
> development? The
> >> last
> >> time I started anything from scratch was over 10 years 
> ago, and the 
> >> "state
> >> of the art" seemed to be to smash everything together into 
> event handlers
> >> on GUI objects. Ugh. I pulled the M of the MVC out into 
> separate coherent
> >> classes and implemented a *very* simple ORM, leaving the VC mostly
> >> conflated in the event handlers--which is not too bad 
> since this app will
> >> never need to be cross-platform.
> >>
> >> So the dev tool was discontinued, some closed-source libraries are
> >> getting
> >> less and less compatible by the year, and we're going to 
> rewrite. Where 
> >> to
> >> start? It's a custom Windows-only app, only installed at 
> one site. Using
> >> .NET would be fine. C# or C++ would be most-preferred 
> language choices,
> >> although we could suck it up and use Java. I don't want to 
> put VB on the
> >> table.
> >>
> >> Leaning toward Visual Studio .NET because I know it will be around 
> >> (in whatever morphed form) for a while; but also considering 
> >> Borland's supposedly revitalized C++ tools because I used 
> C++ Builder 
> >> with success back when MS C++ compilers were still awful. I should 
> >> probably mention that the Windows apps, with the exception of one 
> >> complicated "explore customer's
> >> entire history here" screen, are pretty simple; the 
> complexity is in
> >> reports and stored procedures.
> >>
> >> Suggestions where to start?
> > If you like python you might want to check www.dabodev.com. 
>  Dabo was
> > designed
> > to access data.
> > -- 
> > John Fabiani
> >
> > ---(end of 
> > broadcast)---
> > TIP 6: explain analyze is your friend
> > 
> 
> 
> ---(end of 
> broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
> 





Re: [GENERAL] Restore 8.1.4 backup in 8.2.5

2007-09-19 Thread Hiroshi Saito


Hi.

pgAdmin III uses pg_restore for restoration, and pg_restore does not recognize the plain-text 
format.

Please use psql from a command line...
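For example, something like this should work (file name taken from the pg_dump command below; since the dump was made with -C, connect to a maintenance database and let the script create "owl" itself; adjust -p to the 8.2 server's port):

```shell
psql -p 5432 -U postgres -f 2007-09-19owl.sql postgres
```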

Regards,
Hiroshi Saito

- Original Message - 
From: [EMAIL PROTECTED]


I have a backup file from Postgresql 8.1.4. When I try to use 8.2.5 pg_restore the restore 
doesn't recognize the archive format. The archive file was made with the command below. Is 
there a way to get 8.2.5 to recognize the file, or a way to run the file as a script from a 
different program like PG_ADMIN III? As far as I can see the archive is text. My postgresql 
8.1.4 installation is no longer working.


su --command="/usr/local/pgsql/bin/pg_dump -p 55432 -U postgres -C -D -f /tmp/$(date 
+%F)owl.sql owl"


*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Margaret Gillon, IS Dept., Chromalloy Los Angeles, ext. 297

This e-mail message and any attachment(s) are for the sole use of the intended recipient(s) 
and may contain proprietary and/or confidential information which may be privileged or 
otherwise protected from disclosure.  Any unauthorized review, use, disclosure or 
distribution is prohibited.  If you are not the intended recipient(s), please contact the 
sender by reply email and destroy the original message and any copies of the message as well 
as any attachment(s) to the original message. 





[GENERAL] Configuration starting point...

2007-09-19 Thread Nathan Wilhelmi
Hello - Several people kindly responded directly to me with some 
specific suggestions for this, however the message was mistakenly 
deleted and not in the archives. If you remember who you are and 
wouldn't mind resending that would be fantastic!


Thanks!

-Nate

Hello - Just installed 8.2.4 on a Solaris 9 box. It's an 8-way (15000 
MHz sparc) with 32GB of ram. We don't know the exact table structure yet 
or access patterns, although the first thing that will be looked at is a 
Sesame triple store DB. I would expect that this DB will be more skewed 
to reads than writes. Based on this, are the out of the box configs 
pretty good or are there any recommended changes I should be making to 
start with?


Thanks!

-Nate








[GENERAL] Restore 8.1.4 backup in 8.2.5

2007-09-19 Thread MargaretGillon
I have a backup file from Postgresql 8.1.4. When I try to use 8.2.5 
pg_restore the restore doesn't recognize the archive format. The archive 
file was made with the command below. Is there a way to get 8.2.5 to 
recognize the file, or a way to run the file as a script from a different 
program like PG_ADMIN III? As far as I can see the archive is text. My 
postgresql 8.1.4 installation is no longer working. 

su --command="/usr/local/pgsql/bin/pg_dump -p 55432 -U postgres -C -D -f 
/tmp/$(date +%F)owl.sql owl" 

*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 
*** ***
Margaret Gillon, IS Dept., Chromalloy Los Angeles, ext. 297


Re: [GENERAL] Configuration starting point...

2007-09-19 Thread Filip Rembiałkowski
On 19/09/2007, Filip Rembiałkowski <[EMAIL PROTECTED]> wrote:
> here you are, all posts from this thread are below.
>
> BTW, what happened to the archives?
sorry nothing happened... I mixed it up :|

Nathan, your post is there...

http://archives.postgresql.org/pgsql-general/2007-08/msg00534.php



Re: [GENERAL] Configuration starting point...

2007-09-19 Thread Filip Rembiałkowski
2007/9/19, Nathan Wilhelmi <[EMAIL PROTECTED]>:
> Hello - Several people kindly responded directly to me with some
> specific suggestions for this, however the message was mistakenly
> deleted and not in the archives. If you remember who you are and
> wouldn't mind resending that would be fantastic!

here you are, all posts from this thread are below.

BTW, what happened to the archives?



Nathan Wilhelmi <[EMAIL PROTECTED]>   10 August 2007 17:25
To: pgsql-general@postgresql.org
Hello - Just installed 8.2.4 on a Solaris 9 box. It's an 8-way (15000
MHz sparc) with 32GB of ram. We don't know the exact table structure yet
or access patterns, although the first thing that will be looked at is a
Sesame triple store DB. I would expect that this DB will be more skewed
to reads than writes. Based on this, are the out of the box configs
pretty good or are there any recommended changes I should be making to
start with?

Thanks!

-Nate




Ben <[EMAIL PROTECTED]> 10 August 2007 18:00
To: Nathan Wilhelmi <[EMAIL PROTECTED]>
Cc: pgsql-general@postgresql.org
The out-of-the-box configs are pretty awful for you. Read some
list archives (from this list and pgsql-performance) and also take a look
at http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
[Quoted text hidden]




Greg Smith <[EMAIL PROTECTED]>  10 August 2007 18:31
To: pgsql-general@postgresql.org
On Fri, 10 Aug 2007, Nathan Wilhelmi wrote:

> are the out of the box configs pretty good or are there any recommended
> changes I should be making to start with?

The out of the box configuration is wildly inappropriate for your system,
and there are few examples of something appropriate to point you at--much
of the information floating around is out of date for your class of
hardware.

See http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm for
something current to get you started.  That will lead you to additional
resources you can drill into from there, and includes some disclaimers
about what you should ignore in the guides that haven't been updated
recently.
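As a very rough illustration of the kind of changes those guides discuss for an 8.2 box with 32GB of RAM and a read-heavy load (the numbers are starting points to benchmark against, not recommendations):

```
# postgresql.conf sketch -- illustrative values only
shared_buffers = 2GB            # default is a few tens of MB, far too small here
effective_cache_size = 24GB     # planner hint: most of RAM will cache the DB
work_mem = 32MB                 # per sort/hash; multiply by concurrent operations
maintenance_work_mem = 512MB    # speeds up VACUUM and index builds
checkpoint_segments = 32        # spread out checkpoint I/O
```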

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
[Quoted text hidden]



Re: [GENERAL] ON INSERT => execute AWK/SH/EXE?

2007-09-19 Thread Chris Browne
[EMAIL PROTECTED] ("Bima Djaloeis") writes:
> Hi there,
> I am new to PostgreSQL, is it possible to create something so that
> 1) If I insert / update / delete an item from my DB...
> 2) ... an awk / shell / external program is executed in my UNIX System?
> If yes, how do I do this and if no, thanks for telling.
> Thanks for reading, any help is appreciated.

I Would Not try to do that directly, as that could lead to arbitrary
numbers of processes getting scheduled, which could cause Plenty O
Heartburn.

I would instead suggest having a trigger in place that would, upon
doing this:

 a) Insert an ID, if needed, into a work queue table.
(This may be optional.)

 b) Use NOTIFY to tell a process that uses LISTEN to wake up and
do whatever work is necessary, possibly processing *multiple*
items.

The LISTENING process needs to be prepared to process all the
queued-up work; that should lead to *vastly* more efficient processing
than spawning a worker for each item.
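A sketch of (a) and (b) on the database side (table, trigger, and notification names are made up for illustration):

```sql
CREATE TABLE work_queue (
    item_id   integer,
    queued_at timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION enqueue_item() RETURNS trigger AS $$
BEGIN
    INSERT INTO work_queue (item_id) VALUES (NEW.id);
    NOTIFY work_available;   -- wakes any session that has run LISTEN work_available
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_enqueue
    AFTER INSERT OR UPDATE ON items
    FOR EACH ROW EXECUTE PROCEDURE enqueue_item();
```

(For DELETE you would queue OLD.id instead.) The external awk/shell program then sits in a loop around LISTEN work_available and drains work_queue in batches.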
-- 
"cbbrowne","@","acm.org"
http://www3.sympatico.ca/cbbrowne/rdbms.html
Rules of the  Evil Overlord #60. "My five-year-old  child advisor will
also  be asked to  decipher any  code I  am thinking  of using.  If he
breaks the code  in under 30 seconds, it will not  be used. Note: this
also applies to passwords." 



Re: [GENERAL] Building Windows fat clients

2007-09-19 Thread Martin Gainty

Hello Guys

Using C# means .NET framework will need to be installed and your webapp will 
only work with Microsoft OS
Be aware scripting languages such as PHP and Python will necessitate that 
you acquire all of the libraries for your web app..

As long as you stay mainstream you should be ok
But if you have specific requirements for XYZ Db that nobody supports or 
protocols or device drivers that nobody has written you'll have to write the 
libraries yourself


Martin--
- Original Message - 
From: "johnf" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, September 19, 2007 5:20 PM
Subject: Re: [GENERAL] Building Windows fat clients



On Wednesday 19 September 2007 10:19, Scott Ribe wrote:

I'm asking this group because we tend to think alike wrt to data modeling
and separation of concerns ;-)

Any recommendations on ORM libraries for new Windows development? The 
last
time I started anything from scratch was over 10 years ago, and the 
"state

of the art" seemed to be to smash everything together into event handlers
on GUI objects. Ugh. I pulled the M of the MVC out into separate coherent
classes and implemented a *very* simple ORM, leaving the VC mostly
conflated in the event handlers--which is not too bad since this app will
never need to be cross-platform.

So the dev tool was discontinued, some closed-source libraries are 
getting
less and less compatible by the year, and we're going to rewrite. Where 
to

start? It's a custom Windows-only app, only installed at one site. Using
.NET would be fine. C# or C++ would be most-preferred language choices,
although we could suck it up and use Java. I don't want to put VB on the
table.

Leaning toward Visual Studio .NET because I know it will be around (in
whatever morphed form) for a while; but also considering Borland's
supposedly revitalized C++ tools because I used C++ Builder with success
back when MS C++ compilers were still awful. I should probably mention 
that
the Windows apps, with the exception of one complicated "explore 
customer's

entire history here" screen, are pretty simple; the complexity is in
reports and stored procedures.

Suggestions where to start?
If you like python you might want to check www.dabodev.com.  Dabo was 
designed

to access data.
--
John Fabiani







Re: [GENERAL] Building Windows fat clients

2007-09-19 Thread johnf
On Wednesday 19 September 2007 10:19, Scott Ribe wrote:
> I'm asking this group because we tend to think alike wrt to data modeling
> and separation of concerns ;-)
>
> Any recommendations on ORM libraries for new Windows development? The last
> time I started anything from scratch was over 10 years ago, and the "state
> of the art" seemed to be to smash everything together into event handlers
> on GUI objects. Ugh. I pulled the M of the MVC out into separate coherent
> classes and implemented a *very* simple ORM, leaving the VC mostly
> conflated in the event handlers--which is not too bad since this app will
> never need to be cross-platform.
>
> So the dev tool was discontinued, some closed-source libraries are getting
> less and less compatible by the year, and we're going to rewrite. Where to
> start? It's a custom Windows-only app, only installed at one site. Using
> .NET would be fine. C# or C++ would be most-preferred language choices,
> although we could suck it up and use Java. I don't want to put VB on the
> table.
>
> Leaning toward Visual Studio .NET because I know it will be around (in
> whatever morphed form) for a while; but also considering Borland's
> supposedly revitalized C++ tools because I used C++ Builder with success
> back when MS C++ compilers were still awful. I should probably mention that
> the Windows apps, with the exception of one complicated "explore customer's
> entire history here" screen, are pretty simple; the complexity is in
> reports and stored procedures.
>
> Suggestions where to start?
If you like python you might want to check www.dabodev.com.  Dabo was designed 
to access data.
-- 
John Fabiani



Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Bjørn T Johansen
Ok, thx for the advice :)

BTJ

On Wed, 19 Sep 2007 10:51:57 -0700
Jeff Davis <[EMAIL PROTECTED]> wrote:

> On Wed, 2007-09-19 at 15:32 +0200, Bjørn T Johansen wrote:
> > > > Well, it isn't really the largest database or the database that
> > > > need the most performance... At the moment, the database isn't
> > > > larger than 15MB and is growing slowly... It is a webapp that is
> > > 
> > > That'll fit in shared memory.  Very fast.
> > > 
> > > Where will it be in a year?
> > 
> > Well, twice as much I guess...
> > 
> > > 
> > > > using the database and at the most (at the moment) there is about
> > > > 12-14 concurrent users and not much data volume...
> > > 
> > > How many users in a year?
> > 
> > It's an internal webapp for a company, so I guess not that much more...
> 
> I think, by far, your biggest concern is going to be reliability and
> availability. It doesn't sound like you're really worried about
> performance.
> 
> In that case, you might want to do RAID-1 or RAID-10 (requires at least
> 4 drives, of course).
> 
> Make sure you disable write caching on the individual drives, I think
> it's actually enabled by default (weird setting for a RAID controller). 
> 
> It's safe to enable writeback caching on the battery backed controller,
> but I'd advise leaving it off. There's no reason to worry about the
> battery if you don't need the performance anyway (however, it will help
> your write latency, so you still might consider it).
> 
> Get dual power supplies to mitigate the chance of a power supply
> failure, even if you don't have two independent circuits. 
> 
> Oh, and if you're running linux make sure to use a safe setting for
> these settings:
>   vm.oom-kill
>   vm.overcommit_ratio
>   vm.overcommit_memory
> 
> The default is not very safe for postgresql*. If a java process gets out
> of control and eats memory, there's a good chance that it will kill
> postgresql before it kills the out-of-control java process :(
> 
> Regards,
>   Jeff Davis
> 
> *: I consider this a linux bug: http://lkml.org/lkml/2007/2/9/275 
> 



Re: [GENERAL] Uninstall mess Postgresql 7.3.4, 8.1.4 on Redhat 9

2007-09-19 Thread MargaretGillon
> This suggests your init script is broken.  You ought to be able to
> test whether postgres will run properly by setting PGDATA correctly,
> and then running /path/to/pg8.x/bin/pg_ctl start.  Whether postgres
> will run is not exactly the same question as whether your init script
> is correct.

The /data directory was owned by the postgres user. I ran the command you sent, 
and the system didn't like it because root owned the /bin directory. I 
changed the owner to postgres, and now the message that I get is:
pg_ctl: no database directory specified and environment variable PGDATA 
unset
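For example (the data directory path here is a guess; substitute the real one):

```shell
export PGDATA=/usr/local/pgsql/data
/usr/local/pgsql/bin/pg_ctl start
```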

> It wouldn't, but your old data still need to be dumped and restored;
> and without a running 8.1, that won't help you.  Unless you mean that
> you'd install 8.2.x and load from a backup. 

Yes, that's what I meant.

>Even there, you might
> have problems, because when doing an upgrade, you need to pg_dump
> with the new pg_dump, not the old one.

The database is very simple, probably doesn't have any version specific 
code. There are no blobs. It was backed up in plain text. This is my 
backup command.

su --command="/usr/local/pgsql/bin/pg_dump -p 55432 -U postgres -C -D -f 
/tmp/$(date +%F)owl.sql owl" 

Margaret Gillon

Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Jeff Davis
On Wed, 2007-09-19 at 15:32 +0200, Bjørn T Johansen wrote:
> > > Well, it isn't really the largest database or the database that
> > > need the most performance... At the moment, the database isn't
> > > larger than 15MB and is growing slowly... It is a webapp that is
> > 
> > That'll fit in shared memory.  Very fast.
> > 
> > Where will it be in a year?
> 
> Well, twice as much I guess...
> 
> > 
> > > using the database and at the most (at the moment) there is about
> > > 12-14 concurrent users and not much data volume...
> > 
> > How many users in a year?
> 
> It's an internal webapp for a company, so I guess not that much more...

I think, by far, your biggest concern is going to be reliability and
availability. It doesn't sound like you're really worried about
performance.

In that case, you might want to do RAID-1 or RAID-10 (requires at least
4 drives, of course).

Make sure you disable write caching on the individual drives, I think
it's actually enabled by default (weird setting for a RAID controller). 

It's safe to enable writeback caching on the battery backed controller,
but I'd advise leaving it off. There's no reason to worry about the
battery if you don't need the performance anyway (however, it will help
your write latency, so you still might consider it).

Get dual power supplies to mitigate the chance of a power supply
failure, even if you don't have two independent circuits. 

Oh, and if you're running linux make sure to use a safe setting for
these settings:
  vm.oom-kill
  vm.overcommit_ratio
  vm.overcommit_memory

The default is not very safe for postgresql*. If a java process gets out
of control and eats memory, there's a good chance that it will kill
postgresql before it kills the out-of-control java process :(
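For example, on Linux the usual advice for a dedicated PostgreSQL box is to turn on strict overcommit accounting (a sketch; the ratio is workload-dependent):

```
# /etc/sysctl.conf -- illustrative values
vm.overcommit_memory = 2    # strict accounting: malloc fails instead of the OOM killer firing
vm.overcommit_ratio = 80    # percent of RAM (plus swap) the kernel will hand out
```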

Regards,
Jeff Davis

*: I consider this a linux bug: http://lkml.org/lkml/2007/2/9/275 




[GENERAL] Building Windows fat clients

2007-09-19 Thread Scott Ribe
I'm asking this group because we tend to think alike wrt to data modeling
and separation of concerns ;-)

Any recommendations on ORM libraries for new Windows development? The last
time I started anything from scratch was over 10 years ago, and the "state
of the art" seemed to be to smash everything together into event handlers on
GUI objects. Ugh. I pulled the M of the MVC out into separate coherent
classes and implemented a *very* simple ORM, leaving the VC mostly conflated
in the event handlers--which is not too bad since this app will never need
to be cross-platform.

So the dev tool was discontinued, some closed-source libraries are getting
less and less compatible by the year, and we're going to rewrite. Where to
start? It's a custom Windows-only app, only installed at one site. Using
.NET would be fine. C# or C++ would be most-preferred language choices,
although we could suck it up and use Java. I don't want to put VB on the
table.

Leaning toward Visual Studio .NET because I know it will be around (in
whatever morphed form) for a while; but also considering Borland's
supposedly revitalized C++ tools because I used C++ Builder with success
back when MS C++ compilers were still awful. I should probably mention that
the Windows apps, with the exception of one complicated "explore customer's
entire history here" screen, are pretty simple; the complexity is in reports
and stored procedures.

Suggestions where to start?

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice





Re: [GENERAL] Uninstall mess Postgresql 7.3.4, 8.1.4 on Redhat 9

2007-09-19 Thread Andrew Sullivan
On Wed, Sep 19, 2007 at 08:52:29AM -0700, [EMAIL PROTECTED] wrote:
> 
> When the system boots there is a FAILED error when the server tries to 
> load postgresql8.
> 
> If I manually run 
> /etc/rc.d/init.d/postgresql8 start
> the message is 
> standard in must be tty
> [FAILED]

This suggests your init script is broken.  You ought to be able to
test whether postgres will run properly by setting PGDATA correctly,
and then running /path/to/pg8.x/bin/pg_ctl start.  Whether postgres
will run is not exactly the same question as whether your init script
is correct.

> > How did you remake the account?  Same UID?
> I don't know what the UID was. I used adduser to remake the postgres 
> Redhat account.

But adduser likely changed the UID of the postgres account, and if
the old binaries or data directory were owned by a deleted user, then
the ownership is probably still held by that UID.  (I can't be sure
how RH does this, but on every other UNIX I've ever used, useradd
foo; userdel foo; useradd foo causes the second "foo" user to have a
different UID than the first one.)

This would mean that the "new" postgres user still doesn't own the
data areas of the "old" postgres installation.  You could discover
this by finding out the ownership of the data files (e.g., by using
ls -l).
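The check can be done purely on numeric IDs, which sidesteps the stale-name
problem entirely.  A sketch (the data-directory path is an example, and
`stat -c` assumes GNU coreutils):

```shell
# Create a file as the current user and read back its numeric owner.
# Against a real installation you would run stat on the data directory
# instead, e.g.: stat -c '%u %g' /var/lib/pgsql/data   (example path)
tmpfile=$(mktemp)
owner_uid=$(stat -c '%u' "$tmpfile")
my_uid=$(id -u)
echo "file owner: $owner_uid, current uid: $my_uid"
# If the postgres account was deleted and re-created, the UID stat
# reports for the old data directory will not match `id -u postgres`.
[ "$owner_uid" = "$my_uid" ] && echo "owner matches"
rm -f "$tmpfile"
```

If the UIDs differ, a `chown -R` of the data directory to the new postgres
user (as root) is the usual remedy.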

> >Adding 8.2.x to your headaches won't make it better, for sure -- it's
> >just another layer of incompatibility.
> 
> If versions can run in parallel why would installing the new version 
> affect the 8.1.4 version?

It wouldn't, but your old data still need to be dumped and restored;
and without a running 8.1, that won't help you.  Unless you mean that
you'd install 8.2.x and load from a backup.  Even there, you might
have problems, because when doing an upgrade, you need to pg_dump
with the new pg_dump, not the old one.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
However important originality may be in some fields, restraint and 
adherence to procedure emerge as the more significant virtues in a 
great many others.   --Alain de Botton



Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Erik Jones

On Sep 19, 2007, at 11:00 AM, Richard Broersma Jr wrote:


--- Erik Jones <[EMAIL PROTECTED]> wrote:


Also, note that once we have HOT...


I am not sure what the acronym "HOT" stands for.  Does it have  
something to do with MVCC?




Heap Only Tuple.  Here's a link to the (latest?) readme for it:  
http://archives.postgresql.org/pgsql-patches/2007-09/msg00261.php


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] how to know the current size of a database

2007-09-19 Thread D. Dante Lorenso

[EMAIL PROTECTED] wrote:

I want to know about the size of my database. For example, I want to know
how many Mb of data for current myDatabase database in a postgres server.


SELECT pg_size_pretty(pg_database_size(current_database()));
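If you want this for every database in the cluster at once, a variant of the
same function works (a sketch; assumes 8.1 or later, where these functions
are built in):

```sql
-- Per-database sizes for the whole cluster, biggest first:
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```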

-- Dante



Re: [GENERAL] Tsearch2 - spanish

2007-09-19 Thread Teodor Sigaev

prueba1=# select to_tsvector('espanol','melón  perro mordelón');
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> 



Hmm, can you provide a backtrace?

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [GENERAL] NOT NULL Issue

2007-09-19 Thread Scott Ribe
Another way is to remember that NULL is a distinguished thing that is
absence of a value, not any value of any type, and this applies to all
types:

- the integer 0 is a value, not null
- the date 1/1/1900 (or 1904 or ) is a value, not null
- the time 00:00:00 is a value, not null
- and the string '' is a value, not null
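A related gotcha is that NULL never compares equal to anything, not even
another NULL; the test must be IS NULL.  A quick illustration (behavior per
standard SQL, as PostgreSQL implements it):

```sql
SELECT 0 = NULL;         -- NULL, not false: NULL compares to nothing
SELECT '' IS NULL;       -- false: the empty string is a value
SELECT NULL = NULL;      -- NULL: even two NULLs are not "equal"
SELECT NULL IS NULL;     -- true: IS NULL is the correct test
```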


-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice





Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Richard Broersma Jr
--- Erik Jones <[EMAIL PROTECTED]> wrote:

> Also, note that once we have HOT...

I am not sure what the acronym "HOT" stands for.  Does it have something to do 
with MVCC?

Regards,
Richard Broersma Jr.





Re: [GENERAL] Uninstall mess Postgresql 7.3.4, 8.1.4 on Redhat 9

2007-09-19 Thread MargaretGillon
>>On Tue, Sep 18, 2007 at 03:06:32PM -0700, [EMAIL PROTECTED] 
wrote:
>> account and the system's knowledge of the  posgresql 8.1.4 software. 
The 
>> system will no longer run 8.1.4 even when I go to the /bin/ to run the 
>> commands. The data is still present and so is the 8.1.4 software. I 
remade 

>What happens then?  Error messages?  It's hard to tell what the
>problem might be.

When the system boots there is a FAILED error when the server tries to 
load postgresql8.

If I manually run 
/etc/rc.d/init.d/postgresql8 start
the message is 
standard in must be tty
[FAILED]


>> the postgres account on the server and gave it permission's to 8.1.4 
>> directories but still no dice. I have multiple backups of the database 
>> that are okay. 

> How did you remake the account?  Same UID?
I don't know what the UID was. I used adduser to remake the postgres 
Redhat account.
 
>> Would my best bet be to download and install 8.2.5 and import the 
backup? 
>> Or do I need to get 8.1.4 running, and how do I do that? 

>Adding 8.2.x to your headaches won't make it better, for sure -- it's
>just another layer of incompatibility.

If versions can run in parallel why would installing the new version 
affect the 8.1.4 version?

Margaret Gillon

Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Erik Jones


On Sep 19, 2007, at 10:30 AM, Richard Broersma Jr wrote:


Not quite.  Once a page has reached its fill factor percentage full,
no more inserts will happen on that page, only updates.  Also, I
think you have large/small backwards wrt fill factor.  If you have a
fill factor of, say, 40% then once a page has reached 40% full no
more inserts will happen (unless some space is reclaimed by vacuum).
So, smaller fill factors == bigger holes.  The bigger the fill
factor, the smaller the hole:  if you have a fill factor of 90%,
only 10% is reserved for updates of rows on that page.


So (just to reiterate), fill factor can be applied to both a table  
and/or an index(es).  But the
"holes" built into the page of a table or index can only be filled  
by UPDATE Statements.


Thanks for the clarification!


Yep.  Although, to be strictly honest, I guess the term UPDATE isn't  
the best term to use for indexes.  My description works best for  
tables, see the section on FILLFACTOR in http://www.postgresql.org/ 
docs/8.2/interactive/sql-createindex.html for a better description of  
what happens for indexes -- slightly different semantics, but the  
same general effect.


Also, note that once we have HOT, figuring out fill factor for  
indexes will be a whole different ball game.  Currently, an update to  
any tuple in a table, results in a new index entry.  With hot, index  
entries will only happen if the indexed column is changed in the update.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Gregory Stark
Bjørn T Johansen <[EMAIL PROTECTED]> writes:

> It's a Dell server with the following spec:
>
> PE2950 Quad-Core Xeon E5335 2.0GHz, dual 
> 4GB 667MHz memory
> 3 x 73GB SAS 15000 rpm disk
> PERC 5/I Integrated controller card (8 ports, 256MB cache, battery backup) x 
> 6 backplane
>
> Is this ok to run PostgreSQL 8.2.x and Tomcat on? And does anyone know if 
> this PERC controller is supported under
> Linux (not heard of it before...)

PERC is Dell's name for whatever RAID OEM flavour of the week they're buying.

I think the PERC 5 is going to want the megaraid driver which is in the stock
kernel tree but may or may not be compiled in your binary kernel distribution
packages.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [GENERAL] how to know the current size of a database

2007-09-19 Thread Decibel!

On Sep 19, 2007, at 5:36 AM, [EMAIL PROTECTED] wrote:
I want to know about the size of my database. For example, I want  
to know
how many Mb of data for current myDatabase database in a postgres  
server.


If you don't need an exact size, this query will be a lot faster than  
the size functions:


SELECT pg_size_pretty(sum(relpages)*8192) FROM pg_class;
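The 8192 above assumes the default 8K block size; if the server was built
with a different block size, the same estimate can read the setting instead
of hard-coding it (a sketch; the block_size GUC exists in 8.1 and later):

```sql
SELECT pg_size_pretty(sum(relpages) *
       current_setting('block_size')::bigint)
FROM pg_class;
```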
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Bjørn T Johansen
Well, it isn't really the largest database or the one that needs the most
performance... At the moment, the database isn't larger than 15MB and is
growing slowly... It is a webapp that is using the database, and at the most
(at the moment) there are about 12-14 concurrent users and not much data
volume...

We are thinking about this spec because the web app is a Java app, and we
need something that can run Java fast as well as PostgreSQL...


BTJ

On Wed, 19 Sep 2007 14:11:01 +0200
[EMAIL PROTECTED] wrote:

> Hi, you have forgotten to note some very important information - what load do you
> expect and what is the size of the database? Is this an upgrade (is the
> database already running somewhere - this would give you some performance
> requirements) or is it a completely new database? How many users / transactions
> do you expect?
> 
> Anyway the machine seems quite powerful to me - maybe I'd use more RAM but
> that's easy to do in the future and depends on the size of the database. The
> disks seem quite fast, just think about partitioning (raid scheme, where to put
> xlog, etc.)
> 
> I guess we have PERC in some of our Dell servers, and it works fine - but I'm
> not sure about the exact type / version as I'm not responsible for the 
> servers.
> 
> Tomas
> 
> > It's a Dell server with the following spec:
> >
> > PE2950 Quad-Core Xeon E5335 2.0GHz, dual
> > 4GB 667MHz memory
> > 3 x 73GB SAS 15000 rpm disk
> > PERC 5/I Integrated controller card (8 ports, 256MB cache, battery backup) x
> > 6 backplane
> >
> >
> > Is this ok to run PostgreSQL 8.2.x and Tomcat on? And does anyone know if
> > this PERC controller is supported under
> > Linux (not heard of it before...)
> >
> >
> > Regards,
> >
> > BTJ
> >
> > --
> >
> ---
> > Bjørn T Johansen
> >
> > [EMAIL PROTECTED]
> >
> ---
> > Someone wrote:
> > "I understand that if you play a Windows CD backwards you hear strange
> > Satanic messages"
> > To which someone replied:
> > "It's even worse than that; play it forwards and it installs Windows"
> >
> ---
> >
> >
> 
> 



Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Richard Broersma Jr
--- Phoenix Kiula <[EMAIL PROTECTED]> wrote:

> 2. Is this fill factor enough to have on the table, or should I also
> do a fill factor for specific indexes? Or both the table and the
> index? (I have four btree indexes on the table)

I don't think that fill factor can be applied to the table itself.  The CREATE
TABLE reference docs show that fill factor can appear in a CREATE TABLE
statement, but only attached to syntax that creates an implied index.

i.e.  CREATE TABLE test (
          test_id    INTEGER   PRIMARY KEY WITH ( FILLFACTOR = 70 ),
          test_val   TEXT );

The primary key creates an implied index, and the fill factor is applied to
that implied index.

Regards,
Richard Broersma Jr.



Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Decibel!

On Sep 19, 2007, at 6:30 AM, Bjørn T Johansen wrote:

It's a Dell server with the following spec:

PE2950 Quad-Core Xeon E5335 2.0GHz, dual
4GB 667MHz memory
3 x 73GB SAS 15000 rpm disk
PERC 5/I Integrated controller card (8 ports, 256MB cache, battery  
backup) x 6 backplane


RAID5 is not a recipe for performance on a database, if that's what  
you were thinking.


Of course, without having any idea of database size or transaction  
rate, it's impossible to tell you if that's a good server for your  
needs or not. Maybe all you need is a 486. :)

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Richard Broersma Jr
> Not quite.  Once a page has reached its fill factor percentage full,  
> no more inserts will happen on that page, only updates.  Also, I  
> think you have large/small backwards wrt fill factor.  If you have a  
> fill factor of, say, 40% then once a page has reached 40% full no  
> more inserts will happen (unless some space is reclaimed by vacuum).   
> So, smaller fill factors == bigger holes.  The bigger the fill  
> factor, the smaller the hole:  if you have a fill factor of 90%,  
> only 10% is reserved for updates of rows on that page.

So (just to reiterate), fill factor can be applied to both a table and/or an 
index(es).  But the
"holes" built into the page of a table or index can only be filled by UPDATE 
Statements.

Thanks for the clarification!

Regards,

Richard Broersma Jr.




Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread tv
Hi, you have forgotten to note some very important information - what load do you
expect and what is the size of the database? Is this an upgrade (is the
database already running somewhere - this would give you some performance
requirements) or is it a completely new database? How many users / transactions
do you expect?

Anyway the machine seems quite powerful to me - maybe I'd use more RAM but
that's easy to do in the future and depends on the size of the database. The
disks seem quite fast, just think about partitioning (raid scheme, where to put
xlog, etc.)

I guess we have PERC in some of our Dell servers, and it works fine - but I'm
not sure about the exact type / version as I'm not responsible for the servers.

Tomas

> It's a Dell server with the following spec:
>
> PE2950 Quad-Core Xeon E5335 2.0GHz, dual
> 4GB 667MHz memory
> 3 x 73GB SAS 15000 rpm disk
> PERC 5/I Integrated controller card (8 ports, 256MB cache, battery backup) x
> 6 backplane
>
>
> Is this ok to run PostgreSQL 8.2.x and Tomcat on? And does anyone know if
> this PERC controller is supported under
> Linux (not heard of it before...)
>
>
> Regards,
>
> BTJ
>
> --
>
---
> Bjørn T Johansen
>
> [EMAIL PROTECTED]
>
---
> Someone wrote:
> "I understand that if you play a Windows CD backwards you hear strange
> Satanic messages"
> To which someone replied:
> "It's even worse than that; play it forwards and it installs Windows"
>
---
>
>






Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Erik Jones

On Sep 19, 2007, at 9:29 AM, Richard Broersma Jr wrote:


--- Phoenix Kiula <[EMAIL PROTECTED]> wrote:


Then I am confused again about how the fill factor stuff works. Let's
say I have a table with four BTREE indexes. Should all of them have a
fill factor of about 60 (lower than the default 90, that is) to be
effective? Or will it help if I lower the fill factor on only a  
couple

of the most often used ones? The primary key index is very, very
rarely updated so I don't need it to have a fill factor.

I could try and see these one by one, but that's no better than
touching/feeling the database blind-folded. I would love to see some
writeup about this whole thing, but it seems hard to come by!


I will try to explain everything that I understand about indexes  
and tables.  I am sure that if
some part of my understanding is incorrect, someone will chime in  
to correct me.


In PostgreSQL, tables are physically distinct from indexes.  This
means that any given table is written to disk as a file(s), and
indexes are also written to disk as separate file(s).  A table and
index are both saved to disk in segmented blocks referred to as
pages (I believe the default size is 8K).


Yes, that can be changed at compile time, although I  don't think  
I've ever heard of any advantages to doing that.




The advantage of the index file is that it is significantly smaller  
in size, so it takes less time

to sequentially scan and less time to read from disk.

Now when you want to find a record, PostgreSQL will/(may choose to)
sequentially scan the index until it finds the record entry that
corresponds with your criteria.  This entry has a table
cross-reference to the actual page that contains the record that is
"pointed" at by the index.
Lastly, the entire table page containing your record is read from  
disk to memory for further query

processing.


When you insert a record into a table that generates an entry into  
the b-tree index file,
PostgreSQL will scan the pages of the index file to find the  
correct place and index page to add
this entry.  If the page is already full, PostgreSQL "probably"  
replaces the old full pages with
two new pages with a distribution of that chunk of the B-tree  
index, and then adds the new entry
to one of those pages. This operation can become very expensive if  
many new pages need to be

created from a single INSERT/UPDATE statement.



Yes, the point of B-trees is that they have fast lookup times, but  
updates can be expensive when you have to re-balance your leaf nodes.


By using fill factor, you are telling PostgreSQL to automatically  
leave a portion of any newly
created index page partially blank for future use.  When a newly  
created index entry needs to be

saved, it can be stored in one of the "holes" left in the index page.


That future use is only for updates.



A large fill factor creates both advantages and disadvantages.  For
writing operations, it is a big advantage because a large fill
factor will leave a lot of holes for INSERT and UPDATE operations to
use.  This can help increase the number of UPDATEs/INSERTs per
second that your server can handle, since the index pages do not
have to be rebuilt very often.


Not quite.  Once a page has reached its fill factor percentage full,  
no more inserts will happen on that page, only updates.  Also, I  
think you have large/small backwards wrt fill factor.  If you have a  
fill factor of, say, 40% then once a page has reached 40% full no  
more inserts will happen (unless some space is reclaimed by vacuum).   
So, smaller fill factors == bigger holes.  The bigger the fill  
factor, the smaller the hole:  if you have a fill factor of 90%,  
only 10% is reserved for updates of rows on that page.




However, the disadvantage is that, a newly created index with a  
large fill factor has "index
bloat" designed into it.  This means that the index pages have a  
large portion of holes.  So
PostgreSQL will create more index pages than it would normally in  
order to hold both your index
and the pre-defined holes that you specified by your fill-factor.   
Larger indexes require more
time to be read from disk to memory and will require more time to  
sequentially scan to find to
find the cross-reference table page location of records of  
interest.  So the net effect is that

larger indexes will make SELECT statement slower.

This is my understanding for tables indexes and fill factor. I hope  
it helps.


Again, with the large v. small fill factor point.  Using fill factor  
seems to be a trade-off between space and update efficiency.  Let's  
say that after so many (potential) updates you know that each row  
will become static, i.e. no more updates will happen.  Let's use some  
numbers and make them easy to work with.  Say each row will be  
updated exactly once and you use a fill factor of 50%.  Now, say 5K  
of fresh index data is written.  The first 4K will go into one page  
at which point it has 

Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Albe Laurenz
Phoenix Kiula wrote:
> Then I am confused again about how the fill factor stuff works. Let's
> say I have a table with four BTREE indexes. Should all of them have a
> fill factor of about 60 (lower than the default 90, that is) to be
> effective? Or will it help if I lower the fill factor on only a couple
> of the most often used ones? The primary key index is very, very
> rarely updated so I don't need it to have a fill factor.
> 
> I could try and see these one by one, but that's no better than
> touching/feeling the database blind-folded. I would love to see some
> writeup about this whole thing, but it seems hard to come by!

Let me present my understanding; I hope I won't confuse issues further.

Indexes and tables are both organized in pages; each page contains
several entries or rows.

When an INSERT or UPDATE on the table occurs, a new row (version) is
created (and the old version of the row will be freed upon VACUUM).

Any index entry that points to this row will have to be changed
because the location of the row has changed.
That means that there will also have to be a new entry in the index,
even if the key has not changed (simply modifying the existing
index entry to point to the new row location won't do, because there
may be transactions that still need the old version of the row).

Reducing fillfactor on tables (default 100) will reduce the number
of table pages that need to be touched during an UPDATE.

Reducing fillfactor on a B-tree index (default 90) will reduce the
frequency of page splits that can happen upon INSERT or UPDATE.

Both at the cost of wasting some disk (and memory) space.

So I *guess* that when you decide that a table will be heavily updated
and you want to reduce disk I/O at the cost of wasting some space,
it will be a good idea to reduce fillfactor on the table and all its
indexes.
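As a concrete sketch of that guess (table and index names are invented, and
the ALTER forms assume 8.2 syntax):

```sql
-- New objects: reserve room on each page at creation time.
CREATE TABLE hits (
    url_id    integer,
    hit_count integer
) WITH (fillfactor = 70);

CREATE INDEX hits_url_idx ON hits (url_id) WITH (fillfactor = 60);

-- Existing objects can be altered; the new setting applies only to
-- newly written pages (REINDEX / CLUSTER to rewrite old ones).
ALTER TABLE hits SET (fillfactor = 70);
ALTER INDEX hits_url_idx SET (fillfactor = 60);
```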

I emphasize the "guess" because
a) I may have made a mistake in my deductions :^) and
b) I cannot tell you good numbers to choose.

As in most performance tuning questions, the best thing you can probably
do is to test and compare various settings and see which performs best
for you.
Yours,
Laurenz Albe



Re: [GENERAL] Q:Aggregrating Weekly Production Data. How do you do it?

2007-09-19 Thread Scott Marlowe
On 9/18/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> On Tue, 2007-09-18 at 09:55 -0500, Scott Marlowe wrote:
> > On 9/17/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> >
> > > Just wondering how everyone is doing aggregration of production data.
> >
> > Where I work, we aggregate by the minute in the application, then dump
> > those stats entries into the database en masse from each machine.  So,
> [snip]
> > The fail1/2/3 are types of failures, the totalresptime
> > time is all the successful requests added together, so that
> > totalresptime/success = average for that minute, and max is the
> > longest running request ending in that minute.
>
> So, it's basicaly a summation rather than using avg() all the way.
> That would mean using something like bigint or something to deal with
> the large numbers

Yeah.  We use numeric(18) (bonus points if anyone knows what other
database I'm pumping data out of to get a numeric(18) as a default.)

The advantage is that I don't have to re-weight my measurements.  When
the system was first designed, it stored avg response and number of
responses, which I had to then multiply to get the original total time
and process to get a weighted average.  This method removes the
averaging step in the client software AND the multiplying step to get
the raw numbers back on my end, win-win.
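The re-weighting step being avoided can be sketched like this (all numbers
invented for illustration):

```python
# Scheme A: each minute stores (avg_resp, success_count); combining
# minutes requires multiplying back to totals before averaging.
scheme_a = [(120.0, 4), (80.0, 2)]  # (average ms, successes) per minute
total_ms = sum(avg * n for avg, n in scheme_a)
total_n = sum(n for _, n in scheme_a)
weighted_avg = total_ms / total_n

# Scheme B: each minute stores (total_resp, success_count); combining
# is a plain sum -- no multiply step on either end.
scheme_b = [(480.0, 4), (160.0, 2)]  # (total ms, successes) per minute
simple_avg = sum(t for t, _ in scheme_b) / sum(n for _, n in scheme_b)

assert abs(weighted_avg - simple_avg) < 1e-9  # same answer, less work
```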

> > > (getting min/max/count isn't much of an issue. Stdev is the main issue I
> > > believe)
> > >
> > > One such instance I've read about is..
> >
> > Isn't stddev() working for you?
>
> Stdev is based on a population of data. If I do slices and if I want to
> stdev across 2 months (8 weeks, e.g.) then it would be wrong.

But if you ask for the data set grouped by
date_trunc('minute',timestmp) in an inner select, then in the outer
select you can do a stddev on that set and get it.  Or is that not
accurate?
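The nested grouping described here might look like the following (the table
and column names are assumptions based on this thread, not a real schema):

```sql
-- Inner query: collapse raw rows into per-minute totals;
-- outer query: stddev across those per-minute slices.
SELECT date_trunc('hour', minute_slice) AS hour_slice,
       stddev(minute_total)             AS minute_total_stddev
FROM (
    SELECT date_trunc('minute', timestmp) AS minute_slice,
           sum(totalresptime)             AS minute_total
    FROM   stats
    GROUP  BY date_trunc('minute', timestmp)
) AS per_minute
GROUP BY date_trunc('hour', minute_slice);
```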

> > What I do is aggregate the minute time slices by grouping by
> > date_trunc('xxx',timestamp) and then use that as a subselect to a
> > query that does the stddev() outside of that.  works pretty well for
> > us, and makes it easy to identify trends.
>
> Trending analysis is very critical and the only reference I've found on
> how to get "stdev" is based on what I posted.

Well, what we're doing seems to be getting us what look like proper numbers.

> > One of the things we started doing is to aggregate the last six weeks
> > data by the day / hour and then comparing the last 24 hours worth of
> > data to those six weeks worth to see trends for each hour of each day.
> >  The queries are huge and ugly, but they work, and run in about 4
> > minutes on a fairly good sized chunk of data.  We have about 150k to
> > 300k entries a day put into this db.
>
> I'm not sure how many K entries in a day(yet to count it) but I'm
> getting into trouble w/ one-2-many relationships and PG is choosing to
> do nested loops etc. (lots of left joins on same tables)

Yeah, for this kind of thing, you want to pre-process your tables into
one big flat table if you can.  We do have one to many relationships
within the same table (parent / child system requests are one to many)
and when we join the table to itself several times it can get hairy.
We had some nested loop problems until I upped the stats target on a
few of the columns and scheduled regular analyzes for the main table.
Now the system picks the right plan 99.9% or more of the time.  And
when it doesn't, it's usually something fresh and new that's the
issue.

> So, I've to resort to using SRF and function scans. a >2 hour Complex
> query (on my laptopn,PG, runs in 20min on high end mssql) is reduced to
> ~60secs.

That's an excellent improvement!

> Thanks for the response.
>
> BTW, are user queries an issue?   Does it interfere with the loading.

Not usually. I've got them trained pretty well, and the system has
lots more I/O bandwidth than CPU horsepower, so generally a long
running query just waits on I/O.  RAID-10 has made this system much
faster than the old single drive workstation (mine) it was running on.

The data loading on this system runs once a minute and it pumps the
data out of the production db, and puts it into the reporting db.
Average run time to insert those rows is < 1 second.

> (That's one of my concerns over here). I noticed  that you do the calc
> at the app before sending it to the DB, which is a good thing and every
> minute too. (is it adviseable to do it in chunks of 1min? less data per
> minute vs per 30min/1 hour )

exactly.  Each minute is easy to keep track of, and if the app goes
crazy for a bit and loses the data / gets restarted, you only lose a
minutes or so worth of data.

Also, we use this system for real time alerting and performance
monitoring.  If some request or other starts taking too long or the
failure rate shoots up, it generates alerts.  If it was collected
every 30 minutes or an hour that would be far too late.


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Scott Marlowe
On 9/19/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
> On 09/19/07 08:32, Bjørn T Johansen wrote:
> > On Wed, 19 Sep 2007 07:59:36 -0500
> >>
> >> Am I so old that (even accepting Tomcat and Java) that seems
> >> "excessive"?
> >
> > Yes, I think that it's a bit excessive but the company can afford it so why 
> > not... :)
>
> Lucky SOB.
>
> I can't get my company to spring for a dual-core 2GB system with
> SATA drives.

Hehe.  I wanted a new reporting server so I wound up donating a 4 port
SATA card for expanding an old workstation.  Now I just need to stuff
two more drives into it, bringing it up to a 6 drive sw RAID 10.
Built it in a day.  Meanwhile, the project to build a RAC cluster has
been ongoing for about 2 months.  But it's close!  :)



Re: [GENERAL] Version 8.2.5 for Windows doesn't startup normallyafter upgrading from 8.2.4

2007-09-19 Thread Magnus Hagander
On Wed, Sep 19, 2007 at 09:10:43AM -0500, Walter Roeland wrote:
> Dave,
> 
> For the reasons you mention for using pg_ctl -w, it seemed to me the wisest 
> option to recreate the postgres database. After the normal startup messages, 
> the following error appears once in the log:
> 
> 127.0.0.1 postgres postgres FATAL:  the database system is starting up
> 
> I assume that this is due to the -w option. The service is now working 
> normally.

Yeah, this is normal. pg_ctl polls the server before it has finished the
startup sequence, causing this message to be logged.

It is kind of "scary" that it's logged as FATAL, but it's actually not a
problem at all.

//Magnus



Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Richard Broersma Jr
--- Phoenix Kiula <[EMAIL PROTECTED]> wrote:

> Then I am confused again about how the fill factor stuff works. Let's
> say I have a table with four BTREE indexes. Should all of them have a
> fill factor of about 60 (lower than the default 90, that is) to be
> effective? Or will it help if I lower the fill factor on only a couple
> of the most often used ones? The primary key index is very, very
> rarely updated so I don't need it to have a fill factor.
> 
> I could try and see these one by one, but that's no better than
> touching/feeling the database blind-folded. I would love to see some
> writeup about this whole thing, but it seems hard to come by!

I will try to explain everything that I understand about indexes and tables.  I 
am sure that if
some part of my understanding is incorrect, someone will chime in to correct me.

In PostgreSQL, tables are physically distinct from indexes.  This means that
any given table is written to disk as a file(s), and indexes are also written
to disk as separate file(s).  A table and index are both saved to disk in
segmented blocks referred to as pages (I believe the default size is 8K).

The advantage of the index file is that it is significantly smaller in size, so 
it takes less time
to sequentially scan and less time to read from disk.

Now when you want to find a record, PostgreSQL will (or may choose to) scan the index 
until it finds the entry that corresponds to your criteria.  This entry carries a 
cross-reference to the table page that contains the record "pointed" at by the index. 
Lastly, the entire table page containing your record is read from disk into memory for further 
query processing.


When you insert a record into a table that generates an entry in a b-tree index, 
PostgreSQL scans the pages of the index file to find the correct index page for the new 
entry.  If that page is already full, PostgreSQL "probably" splits the full page into 
two new pages, distributing that chunk of the b-tree index between them, and then adds the new 
entry to one of those pages. This operation can become very expensive if many new pages need to 
be created by a single INSERT/UPDATE statement.


By using a fill factor, you are telling PostgreSQL to automatically leave a portion of any newly 
created index page blank for future use.  When a newly created index entry needs to be 
saved, it can be stored in one of the "holes" left in the index page.

Lowering the fill factor (for example to 60, rather than the default 90) creates both advantages 
and dis-advantages.  For write operations it is a big advantage, because a lower fill factor 
leaves a lot of holes for INSERT and UPDATE operations to use.  This can help increase the number 
of UPDATEs/INSERTs per second that your server can handle, since the index pages do not have to 
be split very often.

However, the disadvantage is that a newly created index with a low fill factor has "index 
bloat" designed into it.  This means that the index pages contain a large proportion of holes, so 
PostgreSQL will create more index pages than it normally would in order to hold both your index 
entries and the pre-defined holes specified by your fill factor.  Larger indexes require more 
time to be read from disk into memory and more time to sequentially scan to find the 
cross-referenced table page locations of the records of interest.  So the net effect is that 
larger indexes will make SELECT statements slower.
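To make this concrete, here is a sketch of how a fill factor is applied (the table and index
names are made up; note that changing the fill factor of an existing index only takes effect
when the index is rebuilt):

```sql
-- create an index leaving 30% of each leaf page free for future entries
CREATE INDEX foo_val_idx ON foo (val) WITH (fillfactor = 70);

-- change the setting later; it applies on the next rebuild
ALTER INDEX foo_val_idx SET (fillfactor = 60);
REINDEX INDEX foo_val_idx;
```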

This is my understanding of tables, indexes, and fill factors. I hope it helps.

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] a problem with pg_hba.conf

2007-09-19 Thread Martijn van Oosterhout
On Wed, Sep 19, 2007 at 03:02:26PM +0200, Patricia Rodriguez Tome wrote:
> Hello
> I have installed without problem 8.2.4, but I must have missed something 
> in my pg_hba.conf file.
> I want all users, from anywhere even the local host, to connect using a 
> password.

Did you reload the postmaster after making the changes?

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Version 8.2.5 for Windows doesn't startup normally after upgrading from 8.2.4

2007-09-19 Thread Walter Roeland
Dave,

For the reasons you mention for using pg_ctl -w, it seemed to me the wisest 
option to recreate the postgres database. After the normal startup messages, 
the following error appears once in the log:

127.0.0.1 postgres postgres FATAL:  the database system is starting up

I assume that this is due to the -w option. The service is now working normally.

Thanks,

Walter Roeland


-----Original Message-----
From: Dave Page [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 19 September 2007 03:01
To: Walter Roeland
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Version 8.2.5 for Windows doesn't startup normally after 
upgrading from 8.2.4

On Tue, 2007-09-18 at 14:58 -0500, Walter Roeland wrote:
> 2007-09-18 14:28:36 127.0.0.1 postgres postgres FATAL:  database
> "postgres" does not exist
> 
> And I have to abort the startup.
> 
> Maybe the next is a hint:
> When I had blocked the access to localhost with SSL=ON (using hostnossl
> pg_hba.conf) there was a constant complaint (2 times per second) with:
> 127.0.0.1 postgres postgres FATAL:  no pg_hba.conf entry for host
> "127.0.0.1", user "postgres", database "postgres", SSL on
> 
> -
> Have I something wrong with the configuration of the service?

Prior to 8.2.5, the -w option for pg_ctl was broken which meant that the
server would report itself running to the service control manager when
in actual fact it was still starting up. This would mean that dependent
services such as Slony or pgAgent would fail to start because the server
wasn't necessarily accepting connections at that time.

The -w option tells pg_ctl to attempt to connect to the server every few
seconds until successful - and only then report running status to the
service control manager, which will then attempt to start the dependent
services.

In your case, it seems like the 'postgres' database has been removed
which prevents pg_ctl connecting. When you blocked access through
pg_hba.conf, the fact that the database didn't exist was masked by the
lack of access to even attempt the connection.

To fix this, either:

- Modify the registry key (having taken a backup first of course) with
the pg_ctl command line, removing the -w option. 

- Recreate the postgres database, and ensure it's accessible.

Regards, Dave.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/19/07 08:32, Bjørn T Johansen wrote:
> On Wed, 19 Sep 2007 07:59:36 -0500
> Ron Johnson <[EMAIL PROTECTED]> wrote:
> 
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> On 09/19/07 07:33, Bjørn T Johansen wrote:
>>> Well, it isn't really the largest database or the database that
>>> need the most performance... At the moment, the database isn't
>>> larger than 15MB and is growing slowly... It is a webapp that is
>> That'll fit in shared memory.  Very fast.
>>
>> Where will it be in a year?
> 
> Well, twice as much I guess...
> 
>>> using the database and at the most (at the moment) there is about
>>> 12-14 concurrent users and not much data volume...
>> How many users in a year?
> 
> It's an internal webapp for a company, so I guess not that much more...
> 
>>> We are thinking about this spec. because the web app is a java
>>> app, and we need something that can run java fast as well as
>>> postgresql...
>> 12-14 users on a Quad-core system with 4GB RAM?
>>
>> Am I so old that (even accepting Tomcat and Java) that seems
>> "excessive"?
> 
> Yes, I think that it's a bit excessive but the company can afford it so why 
> not... :)

Lucky SOB.

I can't get my company to spring for a dual-core 2GB system with
SATA drives.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG8Sa4S9HxQb37XmcRAiOzAKDh3TGGuYLoJvK5bAJzGfouYDqVeQCgzcp4
lUjG26gFkQwccLuG9WuT+Do=
=oFhQ
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] a problem with pg_hba.conf

2007-09-19 Thread Patricia Rodriguez Tome

Hello
I have installed without problem 8.2.4, but I must have missed something 
in my pg_hba.conf file.
I want all users, from anywhere even the local host, to connect using a 
password.

So I did:
# my pg_hba file
local   all all   md5
# IPv4 local connections:
hostall all 127.0.0.1/32  md5
# IPv6 local connections:
hostall all ::1/128   md5
# my machine different of db server
hostall all 156.148.120.63/32  md5
# internet
hostall all0.0.0.0/0 reject

Now if I do:
psql -U myself -h dbmachine mydb
---> no password required

psql -U myself -h dbmachine.crs4.it mydb
---> password required

Why ? I want the password required even if I am not giving the full name 
of the machine. What am I doing wrong ?

I have been googling around, but could not find an answer.
Thanks for any pointers
P.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Bjørn T Johansen
On Wed, 19 Sep 2007 07:59:36 -0500
Ron Johnson <[EMAIL PROTECTED]> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 09/19/07 07:33, Bjørn T Johansen wrote:
> > Well, it isn't really the largest database or the database that
> > need the most performance... At the moment, the database isn't
> > larger than 15MB and is growing slowly... It is a webapp that is
> 
> That'll fit in shared memory.  Very fast.
> 
> Where will it be in a year?

Well, twice as much I guess...

> 
> > using the database and at the most (at the moment) there is about
> > 12-14 concurrent users and not much data volume...
> 
> How many users in a year?

It's an internal webapp for a company, so I guess not that much more...

> 
> > We are thinking about this spec. because the web app is a java
> > app, and we need something that can run java fast as well as
> > postgresql...
> 
> 12-14 users on a Quad-core system with 4GB RAM?
> 
> Am I so old that (even accepting Tomcat and Java) that seems
> "excessive"?

Yes, I think that it's a bit excessive but the company can afford it so why 
not... :)


BTJ

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] problemes pga_hba.conf

2007-09-19 Thread Patricia Rodriguez Tome

Sorry, forget my previous question.
holidays have slowed my brain ... I have a .pgpass file under that user 
...


oups ;)
p.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Robert Gravsjö

Bjørn T Johansen wrote:

It's a Dell server with the following spec:

PE2950 Quad-Core Xeon E5335 2.0GHz, dual 
4GB 667MHz memory

3 x 73GB SAS 15000 rpm disk
PERC 5/I Integrated controller card (8 ports, 256MB cache, battery backup) x 6 
backplane


Is this ok to run PostgreSQL 8.2.x and Tomcat on? And does anyone know if this 
PERC controller is supported under
Linux (not heard of it before...)



I've been running Gentoo Linux on a PE2950 with a PERC 5 controller, so 
yes, Linux runs on it. (Not sure about the "I"; I'm not sure what flavors 
the PERC 5 comes in.)


Regards,
Roppert





Regards,

BTJ





---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/19/07 06:30, Bjørn T Johansen wrote:
> It's a Dell server with the following spec:
> 
> PE2950 Quad-Core Xeon E5335 2.0GHz, dual 
> 4GB 667MHz memory
> 3 x 73GB SAS 15000 rpm disk
> PERC 5/I Integrated controller card (8 ports, 256MB cache, battery 
> backup) x 6 backplane

You *know* we're going to say something obvious like "it depends on
the size of the database and the workload".

> Is this ok to run PostgreSQL 8.2.x and Tomcat on? And does anyone
> know if this PERC controller is supported under Linux (not heard
> of it before...)

Google says "yes".

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG8Rn3S9HxQb37XmcRAmEXAKDuh3tm+8am5Baopiwzinxh009xdgCdGgxS
5RhuTNIo88h227syqIIzfdA=
=/YEE
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/19/07 07:33, Bjørn T Johansen wrote:
> Well, it isn't really the largest database or the database that
> need the most performance... At the moment, the database isn't
> larger than 15MB and is growing slowly... It is a webapp that is

That'll fit in shared memory.  Very fast.

Where will it be in a year?

> using the database and at the most (at the moment) there is about
> 12-14 concurrent users and not much data volume...

How many users in a year?

> We are thinking about this spec. because the web app is a java
> app, and we need something that can run java fast as well as
> postgresql...

12-14 users on a Quad-core system with 4GB RAM?

Am I so old that (even accepting Tomcat and Java) that seems
"excessive"?

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG8R04S9HxQb37XmcRAhblAJ9AIS90c+xjOs4KOLqkYOg7gf2PwgCgleFw
gZ82nICVs6tEKVY7IxGD1Fs=
=xrCi
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Phoenix Kiula
On 19/09/2007, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
> --- Phoenix Kiula <[EMAIL PROTECTED]> wrote:
>
> > 2. Is this fill factor enough to have on the table, or should I also
> > do a fill factor for specific indexes? Or both the table and the
> > index? (I have four btree indexes on the table)
>
> I don't think that fill factor can be applied to the table itself.  The CREATE TABLE 
> reference doc shows
> that fill factor can be used in the CREATE TABLE statement, but it is only 
> applied to syntax that
> creates an implied index.
>
> i.e.  CREATE TABLE test (
> test_id    INTEGER   PRIMARY KEY WITH ( FILLFACTOR = 70 ),
> test_val   TEXT );
>
> Primary key will create an implied index.  Fill factor is applied to that 
> implied index.
>
> Regards,
> Richard Broersma Jr.
>



Then I am confused again about how the fill factor stuff works. Let's
say I have a table with four BTREE indexes. Should all of them have a
fill factor of about 60 (lower than the default 90, that is) to be
effective? Or will it help if I lower the fill factor on only a couple
of the most often used ones? The primary key index is very, very
rarely updated so I don't need it to have a fill factor.

I could try and see these one by one, but that's no better than
touching/feeling the database blind-folded. I would love to see some
writeup about this whole thing, but it seems hard to come by!

Many thanks

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] how to know the current size of a database

2007-09-19 Thread Farhan Mughal

>>I want to know the size of my database. For example, I want to know
>>how many MB of data the current myDatabase database holds on a Postgres server.

>>How can I do that?

Try the following query:
select pg_size_pretty(pg_database_size('myDatabase'));

--Farhan 



- Original Message 
From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Sent: Wednesday, 19 September, 2007 3:36:02 PM
Subject: [GENERAL] how to know the current size of a database

Hello

I want to know the size of my database. For example, I want to know
how many MB of data the current myDatabase database holds on a Postgres server.

How can I do that?
Thanks



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq








Re: [GENERAL] how to know the current size of a database

2007-09-19 Thread A. Kretschmer
On Wed, 19 Sep 2007 at 22:36:02 +1200, [EMAIL PROTECTED] wrote:
> Hello
> 
> I want to know about the size of my database. For example, I want to know
> how many Mb of data for current myDatabase database in a postgres server.

http://www.postgresql.org/docs/8.1/interactive/functions-admin.html
Table 9-47. Database Object Size Functions
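For example, a quick sketch using two of the functions listed there (the table name is made up):

```sql
-- size of the database you are currently connected to
SELECT pg_size_pretty(pg_database_size(current_database()));

-- size of one table, including its indexes
SELECT pg_size_pretty(pg_total_relation_size('mytable'));
```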


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] keeping 3 tables in sync w/ each other

2007-09-19 Thread Ow Mun Heng
On Wed, 2007-09-19 at 11:05 +0100, Filip Rembiałkowski wrote:
> 2007/9/19, Ow Mun Heng <[EMAIL PROTECTED]>:
> 
> (...)
> 
> > simulate a delete
> > => delete from parent where id in (select id from child);
> > DELETE 6
> >
> > => select * from parent;
> >  id |  data1
> > ----+---------
> >   2 | parent2
> >   3 | parent3
> >   4 | parent4
> >
> > => select * from child;
> >  id | data1
> > ----+-------
> > (0 rows)
> >
> Yes. You can however try
> 
> SELECT FROM  ... ONLY parent   ...
> (that's what I used in example)
> 
> and
> DELETE FROM  ... ONLY parent ...

Here's the obligatory "Ooohhh", or the Simpsons' "D'oh!" 

Let me re-try this and see how it goes.

Many thanks for the Ooo... ( I feel like the "green aliens" in Toy
Story)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Is this good spec for a PostgreSQL server?

2007-09-19 Thread Bjørn T Johansen
It's a Dell server with the following spec:

PE2950 Quad-Core Xeon E5335 2.0GHz, dual 
4GB 667MHz memory
3 x 73GB SAS 15000 rpm disk
PERC 5/I Integrated controller card (8 ports, 256MB cache, battery backup) x 6 
backplane


Is this ok to run PostgreSQL 8.2.x and Tomcat on? And does anyone know if this 
PERC controller is supported under
Linux (not heard of it before...)


Regards,

BTJ

-- 
---
Bjørn T Johansen

[EMAIL PROTECTED]
---
Someone wrote:
"I understand that if you play a Windows CD backwards you hear strange Satanic 
messages"
To which someone replied:
"It's even worse than that; play it forwards and it installs Windows"
---

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Calculate Weighted Aggregation between Two Tables

2007-09-19 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/19/07 04:13, Stefan Schwarzer wrote:
> Hi,
> 
> I have global national statistics (GDP, Fish Catch etc.), which I would
> like to aggregate to and calculate per Capita data for given regions
> (Africa, Europe...) and subregions (Western Africa, ) on-the-fly.
> 
> From a statistical point of view it seems best to use something like this:
> 
> given that the variable is "v" and the Total Population table is "tp":
> 
>    (country_1_of_region_1_for_v / country_1_of_region_1_for_tp) +
>    (country_2_of_region_1_for_v / country_2_of_region_1_for_tp) + ...
>    -----------------------------------------------------------------
>    (sum_countries_all_of_region_1_for_v /
>     sum_countries_all_of_region_1_for_tp)
> 
> 
> and then same thing for the other regions (in my case a total of 6
> regions).
> 
> 
> I have a table of countries with a field country_id and region_id (which
> specifies to which region this country belongs).
> I have the table for the variable with a field country_id, which has
> kind of a "Excel design", i.e. columns for each year.

Well that's a Very Bad Thing.  (Wasn't there a thread about this
last week?)

Since the "years" are columns, are the "variables" in rows?

> And I have a table of regions with a field region_id.
> 
> It seems like a rather complicated thing to do, as one needs first to
> get all the countries of a specific region for both tables (variable &
> pop_total), have the SUM(of all countries) available and run the
> mathematical procedure... 
> 
> Could someone give me a hint how this could be achieved?

This would be bog-simple if your tables were structured like this:

CREATE TABLE T_REGION (
REGION_ID CHAR(8) PRIMARY KEY,
DESCRIP  TEXT);

CREATE TABLE T_COUNTRY (
COUNTRY_ID  CHAR(8) PRIMARY KEY,
REGION_ID   CHAR(8) REFERENCES T_REGION (REGION_ID),
DESCRIP TEXT);

CREATE TABLE T_NATIONAL_STATISTICS (
COUNTRY_ID CHAR(8),
STAT_YEAR  SMALLINT CHECK (STAT_YEAR BETWEEN 1900 AND 2020),
FORECAST   BOOL,
GOVT_TYPE  CHAR(4) REFERENCES T_GOVERNMENT (GOVT_TYPE),
POPULATION BIGINT,
GDP        BIGINT,
FISH_CATCH BIGINT,
FISH_CONSUMED BIGINT,
WHEAT_HARVEST BIGINT,
WHEAT_CONSUMED BIGINT,
CRUDE_PROD_BBL BIGINT, -- crude oil production
CRUDE_CONSUM_BBL BIGINT, -- crude oil consumption
etc,
etc,
etc,
PRIMARY KEY (COUNTRY_ID, STAT_YEAR));

(Syntax is a bit off, but you get the meaning...)
(I like human-readable keys.)

Whenever you find a new statistic you want to track, you add a new
column to T_NATIONAL_STATISTICS.

So, to (I think) get info on per-capita fish "usage" by year, by
country, for West Africa, you do:

SELECT NS.STAT_YEAR,
   NS.COUNTRY_ID,
   CAST(CAST(NS.FISH_CATCH AS DOUBLE PRECISION)/NS.POPULATION
AS NUMERIC(8,2)) AS CATCH_PERCAP,
   CAST(CAST(NS.FISH_CONSUMED AS DOUBLE PRECISION)/NS.POPULATION
AS NUMERIC(8,2)) AS EATEN_PERCAP,
   CAST((CAST(NS.FISH_CATCH AS DOUBLE PRECISION) -
 CAST(NS.FISH_CONSUMED AS DOUBLE PRECISION))/NS.POPULATION
 AS NUMERIC(8,2)) AS FISH_SELF_SUFFICIENCY
FROM T_REGION R,
 T_COUNTRY C,
 T_NATIONAL_STATISTICS NS
WHERE R.REGION_ID = 'WESTAFRI'
  AND C.REGION_ID = R.REGION_ID
  AND NS.COUNTRY_ID = C.COUNTRY_ID;

You might want to left outer-join NS to C to indicate any west
African countries with missing statistics.
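To collapse that to a single weighted per-capita figure per region and year (the
SUM-of-all-countries step the original question asked about), a sketch against the same
hypothetical schema:

```sql
SELECT R.REGION_ID,
       NS.STAT_YEAR,
       -- weighted per-capita value: total catch over total population
       CAST(SUM(NS.FISH_CATCH) AS NUMERIC(14,4))
         / SUM(NS.POPULATION) AS CATCH_PERCAP
FROM T_REGION R
  JOIN T_COUNTRY C              ON C.REGION_ID = R.REGION_ID
  JOIN T_NATIONAL_STATISTICS NS ON NS.COUNTRY_ID = C.COUNTRY_ID
GROUP BY R.REGION_ID, NS.STAT_YEAR;
```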

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFG8QGvS9HxQb37XmcRAprKAKCiDphSaLe1nTIf8tfaIwNCx/zZngCgpfeo
QF+ojV0/jYlLbLP+BDV86p4=
=7o+G
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] how to know the current size of a database

2007-09-19 Thread son
Hello

I want to know the size of my database. For example, I want to know
how many MB of data the current myDatabase database holds on a Postgres server.

How can I do that?
Thanks



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] keeping 3 tables in sync w/ each other

2007-09-19 Thread Filip Rembiałkowski
2007/9/19, Ow Mun Heng <[EMAIL PROTECTED]>:

(...)

> simulate a delete
> => delete from parent where id in (select id from child);
> DELETE 6
>
> => select * from parent;
>  id |  data1
> ----+---------
>   2 | parent2
>   3 | parent3
>   4 | parent4
>
> => select * from child;
>  id | data1
> ----+-------
> (0 rows)
>
Yes. You can however try

SELECT FROM  ... ONLY parent   ...
(that's what I used in example)

and
DELETE FROM  ... ONLY parent ...
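Spelled out against the earlier parent/child example, that is (sketch):

```sql
-- ONLY restricts the statement to the named table,
-- excluding rows that live in inheriting (child) tables
SELECT * FROM ONLY parent;

DELETE FROM ONLY parent WHERE id IN (SELECT id FROM child);
```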



-- 
Filip Rembiałkowski

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] keeping 3 tables in sync w/ each other

2007-09-19 Thread Ow Mun Heng
On Tue, 2007-09-18 at 08:37 -0400, [EMAIL PROTECTED] wrote:
> Ow Mun Heng wrote:
> > Hi,
> >
> > create table foo (a int, b int, c int)
> > create table foo_loading_source1 (a int, b int, c int)
> > create table foo_loading_source2 (a int, b int, c int)
> >
> > Is there a way which can be made easier to keep these 3 tables DDL in
> > sync?

> Since these are temporary tables, why don't you just create them on the 
> fly as temporary tables?
> 
> CREATE TEMPORARY TABLE foo_loading_source1 (LIKE foo);
> 
> CREATE TEMPORARY TABLE foo_loading_source2 (LIKE foo);
> 
> Then do your loading process. Then you don't really have to worry about 
> maintaining the loading tables at all.
> 

Yes, I've thought of this, but wouldn't it cause additional overhead,
as the loading process may be initiated anywhere between every 60 seconds
and every 30 minutes for a handful of tables each time? (Which was why I
asked the list.)
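For reference, one load cycle with an on-the-fly temporary table would look roughly like
this in psql ('batch.csv' is a made-up file name):

```sql
BEGIN;
-- structure copied from foo, so there is no separate DDL to keep in sync;
-- ON COMMIT DROP removes the table automatically at COMMIT
CREATE TEMPORARY TABLE foo_loading_source1 (LIKE foo) ON COMMIT DROP;
\copy foo_loading_source1 FROM 'batch.csv'
INSERT INTO foo SELECT * FROM foo_loading_source1;
COMMIT;
```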




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] keeping 3 tables in sync w/ each other

2007-09-19 Thread Ow Mun Heng
On Tue, 2007-09-18 at 09:56 +0100, Filip Rembiałkowski wrote:
> 2007/9/18, Ow Mun Heng <[EMAIL PROTECTED]>:
> > Hi,
> >
> > I have 3 tables
> >
> > foo
> > foo_loading_source1
> > foo_loading_source2
> >
> > which is something like
> >
> > create table foo (a int, b int, c int)
> > create table foo_loading_source1 (a int, b int, c int)
> > create table foo_loading_source2 (a int, b int, c int)
> >
> > Is there a way which can be made easier to keep these 3 tables DDL in
> > sync?
> >
> > the loading_sourceX tables are just a temporary-in-transit table for
> > data \copy'ied into the DB before being inserted into the main foo
> > table.
> >
> > Currently, each time I add a new column to foo, I have to "remember" to
> > add the same to the other 2 table.
> >
> > Can I use inheritance? References?
> 
> Inheritance might work in this case. But it will be a bit weird,
> because you will see non-constraint data in parent unless you will
> SELECT ... FROM ONLY parent
> 
> 
> Try this example:
> 
> create table parent ( id serial, data1 text );
> create table child () inherits( parent );
> \d child
> alter table only parent add check ( data1 like '%fits parent' );
> insert into parent(data1) select 'this data fits parent';
> insert into child(data1) select 'this data was inserted to child';
> select * from parent;
> select * from only parent;
> select * from child;
> alter table parent add column data2 text default 'new column default';
> \d child

Nope. It doesn't work as it should.

Note : I've removed the check as well as it's not needed for my purpose.

create table parent ( id int primary key, data1 text );
create table child () inherits( parent );
insert into parent(id,data1) values (1,'parent1');
insert into parent(id,data1) values (2,'parent2');
insert into parent(id,data1) values (3,'parent3');
insert into parent(id,data1) values (4,'parent4');

insert into child(id,data1) values (6,'child1-bastard');
insert into child(id,data1) values (7,'child2-bastard');
insert into child(id,data1) values (8,'child3-bastard');
insert into child(id,data1) values (9,'child4-bastard');

=> select * from parent;
 id | data1
----+----------------
  1 | parent1
  2 | parent2
  3 | parent3
  4 | parent4
  6 | child1-bastard
  7 | child2-bastard
  8 | child3-bastard
  9 | child4-bastard
  1 | parent1-new

simulate a delete
=> delete from parent where id in (select id from child);
DELETE 6

=> select * from parent;
 id |  data1
----+---------
  2 | parent2
  3 | parent3
  4 | parent4

=> select * from child;
 id | data1
----+-------
(0 rows)

Doesn't do what I want, which is to use the child table as a temporary
holding ground prior to inserting the data into the parent table.




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Calculate Weighted Aggregation between Two Tables

2007-09-19 Thread Stefan Schwarzer

Hi,

I have global national statistics (GDP, fish catch, etc.), which I  
would like to aggregate into per-capita data for given  
regions (Africa, Europe...) and subregions (Western Africa, ...) on  
the fly.


From a statistical point of view it seems best to use something like  
this:


given that the variable is "v" and the Total Population table is  
"tp":


   (country_1_of_region_1_for_v / country_1_of_region_1_for_tp) +
   (country_2_of_region_1_for_v / country_2_of_region_1_for_tp) + ...
   -----------------------------------------------------------------
   (sum_countries_all_of_region_1_for_v /
    sum_countries_all_of_region_1_for_tp)



and then same thing for the other regions (in my case a total of  
6 regions).



I have a table of countries with a field country_id and region_id  
(which specifies to which region this country belongs).
I have the table for the variable with a field country_id, which has  
kind of a "Excel design", i.e. columns for each year.

And I have a table of regions with a field region_id.

It seems like a rather complicated thing to do, as one needs first to  
get all the countries of a specific region for both tables (variable  
& pop_total), have the SUM(of all countries) available and run the  
mathematical procedure...


Could someone give me a hint how this could be achieved?

Thanks for any help!

Stef

  ___

  Stefan Schwarzer

  Nature Photography: http://photoblog.la-famille-schwarzer.de
  UNEP GEO Data Portal: http://geodata.grid.unep.ch
  ___





Re: [GENERAL] help w/ SRF function

2007-09-19 Thread Ow Mun Heng
On Wed, 2007-09-19 at 07:57 +0200, A. Kretschmer wrote:
> On Mon, 17 Sep 2007 at 9:21:22 +0800, Ow Mun Heng wrote:
> > CREATE OR REPLACE FUNCTION foo_func(fromdate timestamp, todate
> > timestamp, code text)
> >   RETURNS SETOF foo AS
> > $BODY$
> > SELECT
> > TRH.ID,
> > TRH.data1,
> > TRH.data2
> > FROM D 
> > INNER JOIN  TS 
> >  ON TS.id = D.id
> > inner join TRH
> >  on ts.id = trh.id
> > WHERE D.start_timestamp BETWEEN fromdate AND todate
> > And D.code IN (code)
> > $BODY$
> > LANGUAGE 'sql' IMMUTABLE STRICT;
> > 
> > How can I go about this this? The above will fail due to missing columns
> > fromdate/todate/code.
> 
> Use $1, 2 and $3 within the function-body instead fromdate, todate and
> code.


Yep.. that works as advertised. 
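For the archives, the working version then looks roughly like this (a sketch; D, TS and TRH
are the tables from the original post, and STABLE is a more accurate volatility marking than
IMMUTABLE for a function that reads tables):

```sql
CREATE OR REPLACE FUNCTION foo_func(fromdate timestamp,
                                    todate   timestamp,
                                    code     text)
  RETURNS SETOF foo AS
$BODY$
SELECT TRH.ID, TRH.data1, TRH.data2
FROM D
  INNER JOIN TS  ON TS.id = D.id
  INNER JOIN TRH ON TS.id = TRH.id
WHERE D.start_timestamp BETWEEN $1 AND $2   -- fromdate, todate
  AND D.code = $3                           -- code
$BODY$
LANGUAGE sql STABLE STRICT;
```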

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Understanding pgstatindex

2007-09-19 Thread Shlomi Marco
Hey all.

 

When running pgstatindex(some_index) I'm getting the usual result:

Version, tree_level, index_size, root_block_no, internal_pages, leaf_pages,
empty_pages, deleted_pages, avg_leaf_density, leaf_fragmentation

 

with a bunch of numbers.

Now, my questions are:

What does each one represent (aside from the label)? What exactly is "average
leaf density" or "leaf fragmentation", or any of the other labels?

And even more importantly, how can I make sense of these numbers with regard
to my index?

 

I've been told that if "Leaf Frag" is above a certain number, then reindex.
But I'm looking for a more empirical approach.

Intuitively, I assume that an index's state depends on many parameters that are
somehow reflected in those numbers, and if I understand those, I can
understand my indexes' state better.

 

Thanks.

 

Shlomi.

 




Re: [GENERAL] Version 8.2.5 for Windows doesn't startup normally after upgrading from 8.2.4

2007-09-19 Thread Dave Page
On Tue, 2007-09-18 at 14:58 -0500, Walter Roeland wrote:
> 2007-09-18 14:28:36 127.0.0.1 postgres postgres FATAL:  database
> "postgres" does not exist
> 
> And I have to abort the startup.
> 
> Maybe the next is a hint:
> When I had blocked the access to localhost with SSL=ON (using hostnossl
> pg_hba.conf) there was a constant complaint (2 times per second) with:
> 127.0.0.1 postgres postgres FATAL:  no pg_hba.conf entry for host
> "127.0.0.1", user "postgres", database "postgres", SSL on
> 
> -
> Have I something wrong with the configuration of the service?

Prior to 8.2.5, the -w option for pg_ctl was broken, which meant that the
server would report itself as running to the service control manager while
in actual fact it was still starting up. As a result, dependent services
such as Slony or pgAgent would fail to start because the server wasn't
necessarily accepting connections at that time.

The -w option tells pg_ctl to attempt to connect to the server every few
seconds until successful - and only then report running status to the
service control manager, which will then attempt to start the dependent
services.

In your case, it seems like the 'postgres' database has been removed, which
prevents pg_ctl from connecting. When you blocked access through
pg_hba.conf, the fact that the database didn't exist was masked by the
inability to even attempt the connection.

To fix this, either:

- Modify the registry key (having taken a backup first of course) with
the pg_ctl command line, removing the -w option. 

- Recreate the postgres database, and ensure it's accessible.
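A minimal sketch of the second option (connection details are assumptions; adjust for your installation):

```sql
-- Connect to a database that still exists (template1 normally does), e.g.:
--   psql -U postgres -d template1
-- then recreate the maintenance database that pg_ctl -w tries to reach:
CREATE DATABASE postgres;
```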

Regards, Dave.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] For index bloat: VACUUM ANALYZE vs REINDEX/CLUSTER

2007-09-19 Thread Phoenix Kiula
On 19/09/2007, Gregory Williamson <[EMAIL PROTECTED]> wrote:

...

>  Can't speak directly to PostgreSQL but in Informix the fill factor is
> useful for tweaking indexes. A very high fill factor is useful for tables
> that are static -- any inserts or changes to the index trigger a *lot* of
> moving of b-tree branches. But the high fill factor means that each page has
> more useful data references in it. A very low fill factor means that pages
> are "sparse" and so inserts and updates are less likely to trigger massive
> b-tree rebalancings.



Assuming pgsql's fill factor is similar to Informix's (yes, a highly
suspect assumption), could we say:

1. A small fill factor such as 10 or 20 would avoid triggering massive
btree rebalancings, at the cost of a larger index? (I'm playing with a
value of 60 for now and seeing how it works out... seems OK at the moment!)

2. Is a fill factor on the table enough, or should I also set a fill
factor on specific indexes? Or on both the table and the indexes? (I have
four btree indexes on the table.)
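For what it's worth, in PostgreSQL (8.2 and later) fillfactor is a per-object storage parameter, set separately on tables and on indexes. A sketch, with hypothetical object names:

```sql
-- On a table: controls how full heap pages are packed,
-- leaving room for HOT-style updates on the same page.
ALTER TABLE mytable SET (fillfactor = 60);

-- On an index, at creation time: controls btree leaf-page fill.
CREATE INDEX mytable_col_idx ON mytable (col) WITH (fillfactor = 60);

-- Changing it on an existing index only affects newly written pages;
-- REINDEX rebuilds the whole index with the current setting.
ALTER INDEX mytable_col_idx SET (fillfactor = 60);
REINDEX INDEX mytable_col_idx;
```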

Thanks
