Re: [GENERAL] How can I avoid PGPool as a single point of failure?

2008-01-31 Thread Aaron Glenn
On Jan 29, 2008 6:30 PM, Chander Ganesan [EMAIL PROTECTED] wrote:

  Is it possible to point two servers running PGPool at the same two
  database servers? If so, I seem to recall reading about being able to
  use some kind of virtual IP address to split the traffic to both.
 CARP...I've never tried to use it for such a thing (I've used it for
 firewall redundancy in the past..which is what I think it was designed
 for), but I'm fairly certain you could do so without too much trouble.

CARP *and* pfsync.
this late at night, off the top of my head, I can't see any blatantly
obvious reason this wouldn't work (with pgpool at least, that is; dunno
about your data)
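
For illustration, a CARP virtual IP fronting two pgpool hosts could be set
up roughly as below on OpenBSD -- a sketch only, where the interface, vhid,
password, and addresses are all made up:

    # master pgpool host: lowest advskew wins the virtual IP
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass examplepass carpdev em0 advskew 0 \
        10.0.0.10 netmask 255.255.255.0
    # backup pgpool host: same vhid and password, higher advskew
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass examplepass carpdev em0 advskew 100 \
        10.0.0.10 netmask 255.255.255.0

Clients would then connect to pgpool via the shared 10.0.0.10 address
rather than to either host directly.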

aaron.glenn

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] expression index on date_trunc

2008-01-31 Thread a . redhead
[EMAIL PROTECTED] writes:

   CREATE INDEX request_day_idx ON moksha_sm_request (date_trunc('day', 
request_received));
...
 I'd be grateful if someone could point out what part of the statement is not
 IMMUTABLE or how I could mark my create index statement as being immutable.

date_trunc(timestamp with time zone) is not immutable because it depends on
what your current time zone is. That is, if you change what time zone you're
in, a timestamp with time zone could appear to change from one day to another.

However date_trunc(timestamp without time zone) is immutable. So I think what
you have to do is build your index on:

date_trunc('day', request_received AT TIME ZONE 'GMT')

or whatever time zone you're interested in. That will get you the day part of
that timestamp at that time zone (because it first casts it to a timestamp
without time zone for the time zone you specify).
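
Putting that together, the rebuilt index statement would look something like
this (using the table and column names from the original post):

    CREATE INDEX request_day_idx ON moksha_sm_request
        (date_trunc('day', request_received AT TIME ZONE 'GMT'));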

That worked perfectly, many thanks.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] PostgreSQL on a CompactFlash

2008-01-31 Thread Andre Nieder
Hello,

we would like to use PostgreSQL in an embedded system that runs - don't
laugh - on a CompactFlash card.

Therefore we need to know how often PostgreSQL will write to the disk.

Once our system is configured there are very few changes to the
database.

We have some tables in memory and some stored on the CF.

If we change a table in memory, will PostgreSQL access the CF?

As you know, a CF card supports only a limited number of write cycles...

So is there any reason that should prevent us from using PostgreSQL in
combination with a CompactFlash card?

Andre



Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?

2008-01-31 Thread Mayuresh Nirhali

Bob,

First, which exact version of S10 are you using?

In general, I have seen that having logs and data on different pools helps.
Also, let us know your WAL tuning parameters such as commit_delay and fsync.
If you are using S10u4, setting commit_delay to a non-zero number should
help you get better performance.
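
For reference, those settings live in postgresql.conf; a sketch of the
parameters in question (the values shown are placeholders, not
recommendations):

    fsync = on           # durability vs. speed trade-off; leave on for safety
    commit_delay = 10    # microseconds to wait for group commit; 0 disables
    commit_siblings = 5  # min. concurrent transactions before delay applies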


Rgds
Mayuresh


[EMAIL PROTECTED] wrote:

In the hopes that someone has already blazed this trail ...


I'm running Postgresql (v8.1.10) on Solaris 10 (Sparc) from within a non-global zone. I originally had the database storage in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of X (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS filesystem) over to a ZFS-based filesystem (where I could do things like set quotas, etc.). When I do this, I get roughly half the performance (X/2) I did on the UFS system. I did try to run some low level I/O tests (i.e. http://www.iozone.org/) to see if there was a filesystem performance difference, but there doesn't seem to be any between the UFS and ZFS numbers I'm seeing. 

So, I was hoping someone might have already tried this type of setup and can provide some suggestions for helping boost the ZFS performance numbers (and save me a truss debugging session). 


Are there any known issues with using Postgresql and ZFS?

Things I have already tried:

- setting the record size of the ZFS filesystem to 8K (vs the 128K default;
see the sketch after this list) -- no noticeable performance difference

- adding the ZFS filesystem as a loopback (i.e. lofs) filesystem in the 
non-global zone -- no noticeable difference

- adding the ZFS filesystem as a dataset filesystem in the non-global zone -- 
no noticeable difference
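
(For reference, the record-size change above is typically made with the zfs
command, roughly as follows -- the pool/filesystem names are made up, and
the setting only affects blocks written after the change:

    zfs set recordsize=8K tank/pgdata
    zfs get recordsize tank/pgdata)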

Bob


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match
  



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?

2008-01-31 Thread jiniusatwork-postgresql
Mayuresh,

 comments in-lined, below ...

- Original Message 
 From: Mayuresh Nirhali [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Cc: pgsql-general@postgresql.org
 Sent: Thursday, January 31, 2008 6:23:23 AM
 Subject: Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global 
 zones on Solaris?
 
 Bob,
 
 First, which exact version of S10 are you using?

 more /etc/release
   Solaris 10 8/07 s10s_u4wos_12b SPARC
   Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007

 pkginfo SUNWpostgr-server
system  SUNWpostgr-server The programs needed to create and run a 
PostgreSQL 8.1.10 server


 
 In general, I have seen that having logs and data on different pools helps.
 Also, let us know your WAL tuning parameters such as commit_delay and fsync.
 If you are using S10u4, setting commit_delay to a non-zero number should
 help you get better performance.
 
Thanks for the info. I'll give it a shot and let you know.
Bob
 




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] referencing to computed columns in where clause

2008-01-31 Thread Sam Mason
On Tue, Jan 29, 2008 at 06:49:50PM +0100, Ivan Sergio Borgonovo wrote:
 On Tue, 29 Jan 2008 17:17:39 + Sam Mason [EMAIL PROTECTED] wrote:
  I've tended to do:
  
  SELECT *
  FROM (
    SELECT complicated expression AS pippo) x
  WHERE pippo > 12;
 
 It risks being much harder to rewrite for a dumber DB than repeating the
 complicated expression.

If you're worrying about how the various implementations handle these
cases then I'd suggest testing the code in each one to see how they
handle it.

 Does it come with some extra cost/improvement in term of performance
 compared to:
 - repeating the code of complicated expression

Most reasonable databases will expand subselects where possible.
Optimisation is always a tradeoff between different factors--one of
these being maintainability.

 - put it in a function with the proper attributes (I'd say
 IMMUTABLE in the above case)

That's almost never going to be a win; rewriting sub-selects is almost
always going to be easier than rewriting stored procedures.

 I've some argument for all cases but it should depend on the
 implementation.

My recommendation is generally to use the easiest solution that works
now and only worry about things when they actually fail.


  Sam

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Chander Ganesan

Simon Riggs wrote:

On Fri, 2008-01-25 at 11:34 +1100, Phillip Smith wrote:
  

We have a center in Europe who has just started to use PostgreSQL and was
asking me if there are any Symantec product or other products that backup
this type of database.
  
It doesn't appear to. 



The design of the PITR system allows a product-agnostic backup. Anything
that can backup a file can backup PostgreSQL. There is no need for
special certifications of hologram logos.

You may need to write a few lines of script to do it, but that's not a
problem surely?

So you can use pg_dump or PITR, as you choose.
  
If you don't mind losing some transactions you can also use file 
system snapshotting, which would work just as well as pg_dump, and 
probably have less impact on the running database (assuming you had 
decent enough hardware).


So long as the product you are using can snapshot the file system prior 
to performing a backup, you can use just about any product (without 
having to do much, if any, work).  When you restore a snapshotted file 
system that contains your postgresql database, postgresql (on restart) 
will enter auto-recovery mode and recover (as it does in the case of a 
crash) to the last transaction that was completed successfully prior to 
the snapshot being created.  Note that this only works if all your 
tablespaces are on the same file system, and you would be unable to roll 
forward using WAL files generated after the backup.
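
As a concrete illustration, with LVM such a snapshot backup might look
roughly like the following -- a sketch, with made-up volume names and
sizes; other snapshot-capable filesystems follow the same pattern:

    lvcreate --snapshot --size 5G --name pgsnap /dev/vg0/pgdata
    mount /dev/vg0/pgsnap /mnt/pgsnap
    tar czf /backups/pgdata-snapshot.tar.gz -C /mnt/pgsnap .
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap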


However, you should keep in mind that - like a pg_dump - you won't be 
able to perform PITR recovery from such a backup.  Also, the recovery 
time may be non-trivial depending on your WAL settings.


--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
Phone: 877-258-8987/919-463-0999
http://www.otg-nc.com
Ask me about Expert PostgreSQL & PostGIS Training delivered worldwide.



Re: [GENERAL] Getting the count(*) from two tables and two date ranges in same query

2008-01-31 Thread Sam Mason
On Wed, Jan 30, 2008 at 09:52:17AM +0100, Håkan Jacobsson wrote:
 I just realised that issuing the SQL on one table produces the correct count.
 
 SELECT sum(case when table2.date between '2007-07-13' and '2007-07-13' then 1 
 else 0
 end) as sumx FROM table2 WHERE id = n;
 
 This is working alright.
 So the problem should lie in the last part:
 
 from table2, table3
 where table2.id = table3.id
 and table2.id = n;
 
 I think I need the UNION statement instead?

Did you try something similar to the code I posted?


  Sam

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?

2008-01-31 Thread jiniusatwork-postgresql
[Following up on my own message.]

 
 Also, let us know your WAL tuning parameters such as commit_delay and fsync.

I haven't done any tuning as of yet. I'm running with the default settings 
produced by initdb.

Bob






---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Converting from MySQL...need book ideas

2008-01-31 Thread T.J. Adami
On 29 jan, 18:42, cbmeeks [EMAIL PROTECTED] wrote:
 I love MySQL but I'm concerned about its future.  And, I'm always
 looking to learn something new.  I love the licensing model of
 Postgres.

 Anyway, using online documentation, I've managed to get a server
 running but I find myself constantly having to google little things
 that I haven't learned yet.

 Does anyone out there have any book recommendations?

 I am looking for both administration and development books for medium
 to advanced levels.  My SQL skills are pretty strong as I am a full
 time C# / T-SQL developer during the day.

 My apps will be web/django/python based if that makes a difference.

 Thanks

 cbmeeks
 http://codershangout.com

There is a lot of content across the entire WWW. I suggest you do
specific searches on Google; that's more productive than relying on a
book when you need the basics.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgres startup method for ubuntu

2008-01-31 Thread T.J. Adami
On 30 jan, 04:14, Jon Hancock [EMAIL PROTECTED] wrote:
 I have compiled postgres 8.3rc2 on ubuntu 7.04 and seem to have a
 working install.
 To start postgres, I do the following:

  su postgres
  /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data 

 ok, that works for dev mode.  But what is the recommended approach to
 launch postgres at system boot?
 Sure, I can hack the above command into an init.d script and if it seems
 to work, that's nice.
 But since I'm a newbie here, I think I should ask for the refined
 approach for a startup script.  For example, in the mysql world, we
 have a safe_mysql script which takes care of startup best practices.

 thanks, Jon

I am also searching for a more 'professional' way to start up and
shut down postgresql on ubuntu and debian systems.

Every time I deploy a PostgreSQL server on a ubuntu or debian based
server, I create some scripts called pg_start, pg_stop and pg_reload and
save them in the default postgresql home dir. Then I create symbolic links
in /usr/bin, and I call pg_start from /etc/init.d/bootmisc.sh.
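
(A wrapper of that sort might look roughly like this sketch -- the paths
are assumptions based on a default source install, not the poster's actual
script:

    #!/bin/sh
    # pg_start -- start the server as the postgres user
    su - postgres -c '/usr/local/pgsql/bin/pg_ctl start \
        -D /usr/local/pgsql/data -l /usr/local/pgsql/server.log')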

However, I want to deploy it as a native linux service, like 'cron' or
'network', and I really don't know how to do it.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL on a CompactFlash

2008-01-31 Thread Peter Eisentraut
On Thursday, 31 January 2008, Andre Nieder wrote:
 So is there any reason that should prevent us from using PostgreSQL in
 combination with a compact flash?

I suggest you review the previous discussions on this topic.  Search 
for "compact flash" on archives.postgresql.org.  You will find that 
PostgreSQL is very write-happy and you might have some work to do.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Simon Riggs
On Thu, 2008-01-31 at 07:21 -0500, Chander Ganesan wrote:
 If you don't mind if you lose some transactions

That sentence has no place in any discussion about backup because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from which both old and new data would be inaccessible.

As far as I am concerned, if any Postgres user loses data then we're all
responsible.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Enrico Sirola

Hi Willem,

On 30 Jan 2008, at 22:15, Willem Buitendyk wrote:

I'm trying to replicate the use of Oracle's 'lag' and 'over  
partition by' analytical functions in my query.  I have a table  
(all_client_times) such as:


client_id, datetime
122, 2007-05-01 12:00:00
122, 2007-05-01 12:01:00
455, 2007-05-01 12:02:00
455, 2007-05-01 12:03:00
455, 2007-05-01 12:08:00
299, 2007-05-01 12:10:00
299, 2007-05-01 12:34:00

and I would like to create a new view that takes the first table and  
calculates the time difference in minutes between each row so that  
the result is something like:


client_id,datetime, previousTime, difftime
122,2007-05-01 12:01:00, 2007-05-01 12:00:00, 1
455,2007-05-01 12:03:00, 2007-05-01 12:02:00, 1
455,2007-05-01 12:08:00, 2007-05-01 12:03:00, 5
299,2007-05-01 12:34:00, 2007-05-01 12:10:00, 24


I'd create a previousTime column and manage it using a trigger.  
Anyway, it depends on the time-dependency of the table.

Then you can perform temporal queries in a much easier way.
You could be interested in taking a look at the following link

http://www.cs.arizona.edu/~rts/tdbbook.pdf
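
A rough sketch of that trigger idea, using the column names from the
original post (the previousTime column, function, and trigger names here
are made up, and the code is untested):

    ALTER TABLE all_client_times ADD COLUMN previoustime timestamp;

    CREATE OR REPLACE FUNCTION set_previoustime() RETURNS trigger AS $$
    BEGIN
        -- find the latest earlier row for the same client
        SELECT max(datetime) INTO NEW.previoustime
          FROM all_client_times
         WHERE client_id = NEW.client_id
           AND datetime < NEW.datetime;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER set_previoustime BEFORE INSERT ON all_client_times
        FOR EACH ROW EXECUTE PROCEDURE set_previoustime();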

Cheers,
e.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Log file permissions?

2008-01-31 Thread Glyn Astill
I've noticed that by default postgres writes its log files read/write
only by the postgres user.

I have a nagios user I want to be able to analyse the logs.

Is there a way to make postgres output them so they can be read by a
group? Or am I going to have to write a script?

Glyn





---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] postgres startup method for ubuntu

2008-01-31 Thread Karsten Hilbert
On Thu, Jan 31, 2008 at 02:35:01AM -0800, T.J. Adami wrote:

 Everytime I deploy a PostgreSQL server in a ubuntu or debian based
 server, I create some scripts called pg_start, pg_stop and pg_reload,
 save them on default postgresql home dir. Then I create symbolic links
 on /usr/bin, and so on I put pg_start on /etc/init.d/bootmisc.sh.
 
 However, I want to deploy it as a native linux service, as 'cron' or
 'network', and really I dunno how to do it.

Debian does it for me nicely on bootup. I wonder why you
think you need to start it manually?

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postgres startup method for ubuntu

2008-01-31 Thread Daniel Verite

T.J. Adami wrote:


I am also searching for a more 'professional' way to startup and
shutdown postgresql on ubuntu and debian systems.


The source comes with a startup script that can be installed by 
running:

# cp contrib/start-scripts/linux /etc/init.d/postgresql
# update-rc.d postgresql defaults
(update-rc.d is to debian-based systems what chkconfig is to 
Redhat-based systems).


--
Daniel
PostgreSQL-powered mail user agent and storage: 
http://www.manitou-mail.org


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Log file permissions?

2008-01-31 Thread Douglas McNaught
On 1/31/08, Glyn Astill [EMAIL PROTECTED] wrote:
 I've noticed that by default postgres writes its log files read/write
 only by the postgres user.

 I have a nagios user I want to be able to analyse the logs.

 Is there a way to make postgres output them so they can be read by a
 group? Or am I going to have to write a script?

PG itself only writes to stdout/stderr or uses syslog().  The way logs
are generated and stored is distro-specific, so you need to look at
how your distro does things (perhaps modifying the startup script).

-Doug

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Martijn van Oosterhout
On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
 That sentence has no place in any discussion about backup because the
 risk is not just a few transactions, it is a corrupt and inconsistent
 database from which both old and new data would be inaccessible.

Hmm? I thought the whole point of a filesystem snapshot was that it's
the same as if the system crashed. And I was fairly sure we could
recover from that...

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] [pgsql-advocacy] PostgreSQL Certification

2008-01-31 Thread Tatsuo Ishii
 Josh,
 
  Myself and a small team of PostgreSQL contributors have started a new
  community project for PostgreSQL Certification. It is just launching
  but we wanted to get it out there so that people can join in on the
  discussion now :).
 
 Who else is in this?  Have you talked to the Venezualan folks?  SRA?
 
 As you know, I'm strongly in favor of a good, generally respected 
 certification.  Let's get all of the interested folks on one project.

Up to now, SRA OSS, Inc. Japan's certification has had more than 1,000
examinees. I'm proud of this, but am not satisfied with it. From the
beginning of the certification, I have had a dream that someday the
certification would be managed by a public entity, not by a private
company like us. Yes, that's my goal. So if Josh and his folks are very
serious about making a good certification, I'm more than happy to help
them.

However, running a certification program (not just writing exams) is
not trivial work. Moreover, it costs a lot of money (over $40,000 per
year in our case). Josh, how do you overcome those problems?
--
Tatsuo Ishii
SRA OSS, Inc. Japan

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Chander Ganesan

Simon Riggs wrote:

On Thu, 2008-01-31 at 07:21 -0500, Chander Ganesan wrote:
  

If you don't mind if you lose some transactions



That sentence has no place in any discussion about backup because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from which both old and new data would be inaccessible.

As far as I am concerned, if any Postgres user loses data then we're all
responsible.
  
I understand your point, but indicating that you can't trust a 
point-in-time snapshot of the database is, IMHO, the same as saying you 
can't trust PostgreSQL's automatic crash recovery, since the two are 
essentially the same thing...


--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
919-463-0999/877-258-8987
http://www.otg-nc.com


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Log file permissions?

2008-01-31 Thread Alvaro Herrera
Douglas McNaught wrote:
 On 1/31/08, Glyn Astill [EMAIL PROTECTED] wrote:
  I've noticed that by default postgres writes its log files read/write
  only by the postgres user.
 
  I have a nagios user I want to be able to analyse the logs.
 
  Is there a way to make postgres output them so they can be read by a
  group? Or am I going to have to write a script?
 
 PG itself only writes to stdout/stderr or uses syslog().  The way logs
 are generated and stored is distro-specific, so you need to look at
 how your distro does things (perhaps modifying the startup script).

Actually, as of 8.0 there is a specialized process that captures stderr
and saves it to log files.
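
(That process is enabled through postgresql.conf; a sketch of the relevant
8.x settings -- the first parameter is named redirect_stderr up through 8.2
and logging_collector from 8.3 on:

    redirect_stderr = on          # start the log-capturing process
    log_directory = 'pg_log'      # relative to the data directory
    log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log')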

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Magnus Hagander
On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
 On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
  That sentence has no place in any discussion about backup because the
  risk is not just a few transactions, it is a corrupt and inconsistent
  database from which both old and new data would be inaccessible.
 
 Hmm? I thought the whole point of a filesystem snapshot was that it's
 the same as if the system crashed. And I was fairly sure we could
 recover from that...

That was my assumption as well. *Assuming* that the filesystem snapshot is
consistent. There are a bunch of solutions that don't do consistent
snapshots between different partitions, so if your WAL or one tablespace is
on a different partition, you'll get corruption anyway... (seen this in
Big Commercial Database, so that's not a pg problem)

//Magnus

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Log file permissions?

2008-01-31 Thread Glyn Astill

--- Alvaro Herrera [EMAIL PROTECTED] wrote:
  
  PG itself only writes to stdout/stderr or uses syslog().  The way
 logs
  are generated and stored is distro-specific, so you need to look
 at
  how your distro does things (perhaps modifying the startup
 script).
 
 Actually, as of 8.0 there is specialized process that captures
 stderr
 and saves it to log files.
 

Yes that's what I thought. I'm not piping it to a file, postgres is
managing the logs. Is there any way to manage the permissions, or do
I just need to create a script to change the permissions?






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Alvaro Herrera
 Simon Riggs wrote:

 As far as I am concerned, if any Postgres user loses data then we're all
 responsible.

Remember, our license says this software is given without any warranty
whatsoever, implicit or explicit, written or implied, given or sold,
alive or deceased.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] [pgsql-advocacy] PostgreSQL Certification

2008-01-31 Thread Joshua D. Drake

Tatsuo Ishii wrote:

Josh,


However, running a certification program (not just writing exams) is
not trivial work. Moreover, it costs a lot of money (over $40,000 per
year in our case). Josh, how do you overcome those problems?


As the resources become required, I am sure that I can make sure they are 
provided.


Sincerely,

Joshua D. Drake



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Adam Rich
  I'm trying to replicate the use of Oracle's 'lag' and 'over
  partition by' analytical functions in my query.  I have a table
  (all_client_times) such as:
  and I would like to create a new view that takes the first table and
  calculates the time difference in minutes between each row so that
  the result is something like:

I thought of another way of doing this.  In my tests, it's a little
faster, too.

DROP SEQUENCE if exists seq1;
DROP SEQUENCE if exists seq2;
CREATE TEMPORARY SEQUENCE seq1 CACHE 1000;
CREATE TEMPORARY SEQUENCE seq2 CACHE 1000;

select a.client_id, b.datetime, a.datetime as previousTime, (b.datetime -
a.datetime) as difftime from
(select nextval('seq1') as s, client_id, datetime from all_client_times
order by client_id, datetime OFFSET 0) as a
inner join
(select nextval('seq2') as s, client_id, datetime from all_client_times
order by client_id, datetime OFFSET 0) as b
on a.s=(b.s-1) where a.client_id=b.client_id


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] postgresql book - practical or something newer?

2008-01-31 Thread Erik Jones


On Jan 31, 2008, at 12:51 AM, Gregory Williamson wrote:


Bruce Momjian said:

 Tom Hart wrote:
  I definitely think that the lists are one of the shining stars for
  postgresql support. I've learned some good reference stuff from  
online
  docs/google but the really tricky questions were only answered  
here, and
  amazingly enough, quickly and with good humor. Perhaps what we  
really

  need is somebody to comb through the archives looking for common
  problems or exceptional solutions and compile them into a book.

 The good and bad news is that the best way to do things often  
changes

 from release to release, hence the need to get the most current
 information from the mailing lists.

Although I have solved almost every problem I have come up against  
in learning, partly with the archives, I've often had to resort to  
asking the list, because finding relevant missives in the archives  
can be hard if you don't know what month to look at, and even then  
the search results can produce a lot of incidental wanderings to get  
to the solutions.


It seems that some intermediate ground (TWIKI or a document in some  
format) might help with some of these questions, perhaps with  
sections based on release.




That's an interesting idea.  Is there a general audience/ 
participation wiki for Postgres?  I know the developers have one, but  
a user-oriented sister wiki would probably be a good way to get lots  
of different people involved.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Chander Ganesan

Magnus Hagander wrote:

On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
  

On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:


That sentence has no place in any discussion about backup because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from which both old and new data would be inaccessible.
  

Hmm? I thought the whole point of a filesystem snapshot was that it's
the same as if the system crashed. And I was fairly sure we could
recover from that...



That was my assumption as well. *Assuming* that the filesystem snapshot is
consistent. There are a bunch of solutions that don't do consistent
snapshots between different partitions, so if your WAL or one tablespace is
on a different partition, you'll get corruption anyway... (seen this in
Big Commercial Database, so that's not a pg problem)
  
Agreed.  That's why I made it a point to mention that all of your 
tablespaces should be on the same file system...  In hindsight, I should 
have also stated that your WAL logs should be on the same file system as 
well...


--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
919-463-0999/877-258-8987
http://www.otg-nc.com


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] How can I avoid PGPool as a single point of failure?

2008-01-31 Thread Vivek Khera


On Jan 31, 2008, at 4:28 AM, Aaron Glenn wrote:


CARP *and* pfsync.
this late at night off the top of my head I can't see any blatantly
obvious reason this wouldn't work (with at least pgpool that is, dunno
about your data)


we use CARP to balance and fail over some webserver pairs.  We also use  
it to balance our internal DNS caches for our internal clients.  The  
only drawback is that once you pass a router, all traffic from that  
router will go to a single CARP host -- i.e., you get failover but no  
balancing.


Other than that, it makes upgrading systems nearly invisible to the  
other servers.  For critical things like DNS, this is a big win.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Log file permissions?

2008-01-31 Thread Alvaro Herrera
Glyn Astill wrote:

 I'm not piping it to a file, postgres is managing the logs. Is there
 any way to manage the permissions, or do I just need to create a
 script to change the permissions?

I think you should be able to chmod the files after they have been
created.  The postmaster changes its umask to 0077, so no file is
group-readable.  I don't think it's configurable either.
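
(As a stopgap, something like the following could be run periodically,
e.g. from cron as the postgres user -- a sketch; the log path and group
name are assumptions:

    chgrp nagios /usr/local/pgsql/data/pg_log/*.log
    chmod g+r /usr/local/pgsql/data/pg_log/*.log)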

Perhaps we should add a log_file_group option, to which we would chgrp()
the log files.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Simon Riggs
On Thu, 2008-01-31 at 10:02 -0500, Chander Ganesan wrote:
 Magnus Hagander wrote:
  On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:

  On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
  
  That sentence has no place in any discussion about backup because the
  risk is not just a few transactions, it is a corrupt and inconsistent
  database from which both old and new data would be inaccessible.

  Hmm? I thought the whole point of a filesystem snapshot was that it's
  the same as if the system crashed. And I was fairly sure we could
  recover from that...
  
 
  That was my assumption as well. *Assuming* that the filesystem snapshot is
  consistent. There are a bunch of solutions that don't do consistent
  snapshots between different partitions, so if your WAL or one tablespace is
  on a different partition, you'll get corruption anyway... (seen this in
  Big Commercial Database, so that's not a pg problem)

 Agreed.  That's why I made it a point to mention that all of your 
 tablespaces should be on the same file system...  In hindsight, I should 
 have also stated that your WAL logs should be on the same file system as 
 well...

I think we all understand and agree, I just start twitching when anyone
talks about it being OK to lose transactions when backing up. You meant
the ones currently in progress, not the ones already committed and on
disk.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [ADMIN] Backup

2008-01-31 Thread Simon Riggs
On Thu, 2008-01-31 at 12:09 -0300, Alvaro Herrera wrote:
  Simon Riggs wrote:
 
  As far as I am concerned, if any Postgres user loses data then we're all
  responsible.
 
 Remember, our license says this software is given without any warranty
 whatsoever, implicit or explicit, written or implied, given or sold,
 alive or deceased.

Yes! ...I meant via the free press, not via the courts.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Recover by Log

2008-01-31 Thread T.J. Adami
How can I recover data (or replicate it to another server) based on
the postgresql logs?
Where are they, and how do I do it?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] PostgreSQL 8.3 RC2 Install problems on Windows Vista Home Premium PT_BR

2008-01-31 Thread T.J. Adami
I'm trying to install PostgreSQL 8.3 RC2 for Windows on my Windows
Vista Home Premium (Brazilian Portuguese) machine, and I'm getting an
error from 'initdb' after all files are copied into the destination path.

The error is this:

Failed to run initdb: 1!
Please see the logfile in 'C:\Program Files\PostgreSQL\8.3RC2\tmp
\initdb.log'
Note! You must read/copy this logfile before you click OK,
or it will be automatically removed.

And on this file, I get a portuguese message, even using English as
language for the installer:

Falha na inicialização do aplicativo devido a configuração lado a
lado incorreta. Consulte o log de eventos do aplicativo para obter
mais informações.

The translation is roughly:
The application failed to initialize due to an incorrect side-by-side
configuration. Check the application's event log for more
information.

Note that I didn't have PostgreSQL installed on this PC, and after
trying to install 8.3 RC2 many times I installed 8.2.6 without a single
error.

This appears to be an error in the installer. Where and how can I report
it to the development group efficiently?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] warm standby examples.

2008-01-31 Thread Steve Clark

Hello List,

I am going to be setting up a warm standby postgresql 8.2.5 high 
availability 2-server system. I was wondering if anybody who has done 
this can share some scripts, pertinent postgresql.conf entries, etc., 
so I don't have to reinvent the wheel. I have read the manual a couple 
of times and it is a lot to pull together.

Anything would be greatly appreciated.

Thanks,
Steve

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Is PostGreSql's Data storage mechanism inferior?

2008-01-31 Thread Tony Caduto

Swaminathan Saikumar wrote:

http://searchyourwebhost.com/web-hosting/articles/insight-database-hosting-using-sql

Check out the link. I am starting out on a new personal project and had 
zeroed in on PostgreSQL with Mono-ASP.NET as ideal for my needs, 
mainly owing to a PostgreSQL whitepaper.
Now, I chanced upon the article above. I've pasted the cons as 
mentioned in the article, and would like the community's feedback on it, 
especially with regard to the claimed inferior Data Storage mechanism.


The cons of PostgreSql Hosting
* Performance considerations: Inserts and Updates into the PostgreSql 
database is much slower compared to MySql. PostgreSql hosting thus 
might slow down the display of the web page online.
* BSD license issues: Since PostgreSql comes under the Berkeley 
license scheme, this is again considered to be too open.
* Availability of inferior Data Storage mechanism: PostgreSql uses 
Postgres storage system, which is not considered to be transaction safe 
during PostgreSql hosting.
* Its not far-flung: While MySql hosting and MSSql hosting have deeply 
penetrated into the market, PostgreSql hosting still remains to be 
passive in the database hosting market.
* Non-availability of required assistance for PostgreSql hosting: 
Assistance is being provided via mailing lists. However there is no 
guarantee that the issue faced during PostgreSql hosting would be 
resolved.



Those cons are seriously out of date.
They apply to very old versions of PostgreSQL and even that is 
stretching it.


The part about the BSD license is bogus. A BSD license is the most 
desirable of any Open Source license and gives you the right to use 
PostgreSQL in your commercial apps without worry.


The part about an inferior Data Storage mechanism is also flat out wrong.  
PostgreSQL uses an MVCC system, same as Oracle, and it also has transaction 
logs, PITR, etc.

Inserts and updates will be slightly slower than in a non-MVCC system, but 
the human eye would not detect any difference in a web page displaying; 
we are talking about milliseconds.  Who does massive amounts of inserts 
and updates from a web page anyway?  I have CMSes such as Drupal running 
on both PostgreSQL and MySQL and I can't tell the difference in the 
speed the pages render.


The claim about the availability of assistance is also bogus, as there are 
many ways to get support if you need it, including commercial support 
companies such as EnterpriseDB. The mailing list is also very active and 
effective.


The only part that has any truth to it is the far-flung part, and MySQL 
is king there; it even dwarfs M$ SQL Server.  M$ SQL Server is severely 
limited for hosting as well, since it ONLY runs on Windows, and most 
hosting providers run some form of Unix where M$ cannot play at all.

So in conclusion, I would not pay attention to this article; it was 
written by someone who really does not know what they are talking about.



Later,

Tony Caduto
AM Software Design
Home of Lightning Admin for PostgreSQL and MySQL


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] postgres startup method for ubuntu

2008-01-31 Thread T.J. Adami
On 31 jan, 12:24, [EMAIL PROTECTED] (Daniel Verite) wrote:
 T.J. Adami wrote:

  I am also searching for a more 'professional' way to startup and
  shutdown postgresql on ubuntu and debian systems.

 The source comes with a startup script that can be installed by
 running:
 # cp contrib/start-scripts/linux /etc/init.d/postgresql
 # update-rc.d postgresql defaults
 (update-rc.d is to debian-based systems what chkconfig is to
 Redhat-based systems).

 --
  Daniel
  PostgreSQL-powered mail user agent and storage:http://www.manitou-mail.org


 Debian does it for me nicely on bootup. I wonder why you
 think you need to start it manually ?

 Karsten

I always compile my own version from sources for Python and Perl
support, and also because the newest versions reach distribution
packages only after a lag (for reasonable testing reasons).


 The source comes with a startup script that can be installed by
 running:
 # cp contrib/start-scripts/linux /etc/init.d/postgresql
 # update-rc.d postgresql defaults
 (update-rc.d is to debian-based systems what chkconfig is to
 Redhat-based systems).

 --
  Daniel

Does this add shutdown scripts too? I need the server shut down by
'pg_ctl stop', not by linux killing the process (init 0 or shutdown
commands are very useful sometimes ;) ). I know this is a
distro-specific question, but since we've already started this thread...

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Enrico Sirola

Hi Adam,

On 31 Jan 2008, at 16:13, Adam Rich wrote:


I'm trying to replicate the use of Oracle's 'lag' and 'over
partition by' analytical functions in my query.  I have a table
(all_client_times) such as:
and I would like to create a new view that takes the first table and
calculates the time difference in minutes between each row so that
the result is something like:


I thought of another way of doing this.  In my tests, it's a little
faster, too.

DROP SEQUENCE if exists seq1;
DROP SEQUENCE if exists seq2;
CREATE TEMPORARY SEQUENCE seq1 CACHE 1000;
CREATE TEMPORARY SEQUENCE seq2 CACHE 1000;

select a.client_id, b.datetime, a.datetime as previousTime,  
(b.datetime -

a.datetime) as difftime from
(select nextval('seq1') as s, client_id, datetime from  
all_client_times

order by client_id, datetime OFFSET 0) as a
inner join
(select nextval('seq2') as s, client_id, datetime from  
all_client_times

order by client_id, datetime OFFSET 0) as b
on a.s=(b.s-1) where a.client_id=b.client_id


very interesting indeed. I guess this strategy is more interesting than
the trigger (or rule) based one when you perform many more inserts on  
the table than selects like the one you proposed above. It sounds  
strange that the proposed select is faster than a (single) select on  
the same table with an additional previousTime column populated via  
trigger/rule.
Bye,
e.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] postgres startup method for ubuntu

2008-01-31 Thread Daniel Verite

T.J. Adami wrote:


 The source comes with a startup script that can be installed by
 running:
 # cp contrib/start-scripts/linux /etc/init.d/postgresql
 # update-rc.d postgresql defaults
 (update-rc.d is to debian-based systems what chkconfig is to
 Redhat-based systems).

 --
  Daniel

Does this add shutdown scripts too? I need the server shut down by
'pg_ctl stop', not by linux killing the process (init 0 or shutdown
commands are very useful sometimes ;)



Yes. update-rc.d sets things up so that at boot time the system will 
execute: /etc/init.d/postgresql start, and at shutdown time: 
/etc/init.d/postgresql stop


Additionally, the root user can also execute when needed:
/etc/init.d/postgresql {stop|start|reload|restart}

That's the standard way of starting and stopping services on linux, 
both automatically and manually.


--
Daniel
PostgreSQL-powered mail user agent and storage: 
http://www.manitou-mail.org


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Log file permissions?

2008-01-31 Thread jr
hi, allow me to show off my ignorance..  I think that logging via 
'syslogd' and managing log files with 'logrotate' already meets the 
requirements.


Alvaro Herrera wrote:


Perhaps we should add a log_file_group option, to which we would chgrp()
the log files.



regards, jr.  ([EMAIL PROTECTED])

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Log file permissions?

2008-01-31 Thread Joshua D. Drake

On Thu, 31 Jan 2008 18:13:53 +
jr [EMAIL PROTECTED] wrote:

 hi, allow me to show off my ignorance..  I think that logging via
 'syslogd' and managing log files with 'logrotate' already meets the
 requirements.

Unless you don't have access to /var/log (on linux) but do have access
to postgresql logs.

Joshua D. Drake

-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL SPI Liaison | SPI Director |  PostgreSQL political pundit


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Log file permissions?

2008-01-31 Thread Vivek Khera


On Jan 31, 2008, at 10:21 AM, Alvaro Herrera wrote:


Glyn Astill wrote:


I'm not piping it to a file, postgres is managing the logs. Is there
any way to manage the permissions, or do I just need to create a
script to change the permissions?


I think you should be able to chmod the files after they have been
created.  The postmaster changes its umask to 0077, so no file is
group-readable.  I don't think is configurable either.


just move the logs into a subdir which has permissions applied to it,  
and then don't worry about the files inside, since nobody can break  
through the directory anyhow.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Adam Rich
Hi Willem,

 for some reason the order by's aren't working.

Could you provide more details?  Do you get a specific error message?

 only returning 658 rows instead of the 750K.

You should not expect the same row count in both the source table and the
result set.  Even in your example -- you provided 8 source rows, and got
4 result rows.  You can determine the correct number of results from the
number of records belonging to client_ids that have two or more records 
in all_client_times, minus one per client.  It may be true that you have
750k records but only 658 rows that satisfy this requirement.

What do you get for this query?

select count(*) from ( select client_id, count(*) as rows 
from all_client_times group by client_id having count(*) > 1 ) as x


Adam




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgresql book - practical or something newer?

2008-01-31 Thread Vivek Khera


On Jan 31, 2008, at 10:14 AM, Erik Jones wrote:

That's an interesting idea.  Is there a general audience/ 
participation wiki for Postgres?  I know the developers have one,  
but a user-oriented sister wiki would probably be a good way to get  
lots of different people involved.


I'm of the opinion that the documentation should provide guidance like  
best practices in addition to just being a reference.  To that end,  
the interactive online docs seem like a great place for people to  
make suggestions and recommendations like this, and these comments can  
be folded into the next release of the docs.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Willem Buitendyk

Hey Adam,

I tried your sequence method this morning on an unsorted table, and for 
some reason the ORDER BYs aren't working.  If I create a sorted view 
(client_id, datetime) on the 'all_client_times' table and then use that 
view with your sequence method, all works fine.  The strange thing is 
that my table, which has about 750K rows, only ends up returning 658 rows 
with your sequence method using the unsorted table.  In fact, when I 
tried the same thing with the lagfunc() method you wrote earlier on an 
unsorted table, the same thing occurred - only 658 rows returned instead 
of the 750K. Again, all works well with lagfunc() if I use it on a 
sorted view and I remove the ORDER BY in the function.  This is not too 
much of a problem, as I can use a sorted view first, but I don't 
understand why this is happening.  Perhaps this is a bug?


As well, I am finding that the lagfunc() is consistently faster than the 
sequence method.


cheers,

Willem

Adam Rich wrote:

I'm trying to replicate the use of Oracle's 'lag' and 'over
partition by' analytical functions in my query.  I have a table
(all_client_times) such as:
and I would like to create a new view that takes the first table and
calculates the time difference in minutes between each row so that
the result is something like:
  


I thought of a another way of doing this.  In my tests, it's a little
faster, too.

DROP SEQUENCE if exists seq1;
DROP SEQUENCE if exists seq2;
CREATE TEMPORARY SEQUENCE seq1 CACHE 1000;
CREATE TEMPORARY SEQUENCE seq2 CACHE 1000;

select a.client_id, b.datetime, a.datetime as previousTime, (b.datetime -
a.datetime) as difftime from
(select nextval('seq1') as s, client_id, datetime from all_client_times
order by client_id, datetime OFFSET 0) as a
inner join
(select nextval('seq2') as s, client_id, datetime from all_client_times
order by client_id, datetime OFFSET 0) as b
on a.s=(b.s-1) where a.client_id=b.client_id


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

  



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] variable table names in trigger functions

2008-01-31 Thread Hugo

Hi all,

I've written a simple trigger function to store the old row in another table to 
log the data:

CREATE FUNCTION logusers() RETURNS trigger AS $$
   BEGIN
      INSERT INTO log.users SELECT * FROM data.users WHERE id = OLD.id;
      RETURN NEW;
   END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log BEFORE INSERT OR UPDATE ON data.users
   FOR EACH ROW EXECUTE PROCEDURE logusers();

This works but I whould like to make a function without the table names 
hardcoded in it so it's usable for other tables.
I tried this:

CREATE FUNCTION log() RETURNS trigger AS $$
   DECLARE
      log_table name  := 'log.' || TG_TABLE_NAME;
      data_table name := TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME;
   BEGIN
      INSERT INTO log_table SELECT * FROM data_table WHERE id = OLD.id;
      RETURN NEW;
   END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log BEFORE INSERT OR UPDATE ON data.users
   FOR EACH ROW EXECUTE PROCEDURE log();

CREATE TRIGGER log BEFORE INSERT OR UPDATE ON data.posts
   FOR EACH ROW EXECUTE PROCEDURE log();

But this doesn't work.
Any ideas on how to do this?
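
(Note: plpgsql does not substitute variables used as table names in a
plain SQL statement; the usual workaround is dynamic SQL via EXECUTE.
A sketch, untested, assuming every logged table has an id column:

CREATE FUNCTION log() RETURNS trigger AS $$
   BEGIN
      -- build and run the statement as a string so the table names vary;
      -- as in the original, OLD is only defined for UPDATEs
      EXECUTE 'INSERT INTO log.' || quote_ident(TG_TABLE_NAME)
           || ' SELECT * FROM ' || quote_ident(TG_TABLE_SCHEMA)
           || '.' || quote_ident(TG_TABLE_NAME)
           || ' WHERE id = ' || quote_literal(OLD.id::text);
      RETURN NEW;
   END;
$$ LANGUAGE plpgsql;)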

Thanks,
Hugo



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] warm standby examples.

2008-01-31 Thread Erik Jones

On Jan 31, 2008, at 10:04 AM, Steve Clark wrote:


Hello List,

I am going to be setting up a warm standby postgresql 8.2.5 high  
availability 2 server system. I was
wondering if anybody that has done this can share some scripts,  
pertinent postgresql.conf entries,
etc so I don't have to reinvent the wheel. I have read the manual a  
couple of times and it is a lot to

pull together.

Anything would be greatly appreciated.


The complexity in the docs comes from explaining what everything is  
and how it all works.  There are a couple of options available to you:  
use the walmgr.py portion of Skype's SkyTools package, which will  
handle PITR backups from a primary to a single slave, or do it  
manually; I'll cover the manual process here.  Actually getting a warm  
standby up is a pretty simple process.


Pre-process recommendations:
a.) Use pg_standby for your restore_command in the recovery.conf file  
on the standby
b.) Set up your standby host's environment and directory structure  
exactly the same as your primary.  Otherwise you'll need to spend  
time changing any symlinks you've created on the primary for xlogs,  
tablespaces, or whatnot which is really just opportunity for error.
c.) Pre-configure both the postgresql.conf and recovery.conf files  
for your standby.  I usually keep all of my different config files  
for all of my different servers in a single, version-controlled  
directory that I can then check out and symlink to.  Again,  
consistent environment  directory setups make symlinks your best  
friend.
d.) Use ssh keys for simply, and safely, transferring files between  
hosts.

e.) Follow all of the advice in the manual wrt handling errors.

1.  Set archive_command in your postgresql.conf -- rsync is a popular  
choice, or you can just use one of the examples from the docs.  I use:  
rsync -a %p [EMAIL PROTECTED]:/path/to/wal_archive/%f
2.  Reload your config -- either: SELECT pg_reload_conf(); from psql  
or: pg_ctl reload -D data_dir/

3.  Verify that the WALs are being shipped to their destination.
4.  In psql, SELECT pg_start_backup('some_label');
5.  Run your base backup.  Again, rsync is good for this with  
something as simple as: rsync -a --progress /path/to/data_dir/*  
[EMAIL PROTECTED]:/path/to/data_dir/
I'd suggest running this in a screen term window, the --progress  
flag will let you watch to see how far along the rsync is. The -a  
flag will preserve symlinks as well as all file permissions  ownership.

6.  In psql, SELECT pg_stop_backup();
	-- this drops a file to be archived that will have the same name as  
the first WAL shipped after the call to pg_start_backup(), with  
a .backup suffix.  Inside are the start & stop WAL records  
defining the range of WAL files that must be replayed before you can  
consider bringing the standby out of recovery.
7.  Drop in, or symlink, your recovery.conf file in the standby's  
data_dir.
	-- The restore command should use pg_standby (its help/README are  
simple and to the point).  I'd recommend redirecting all output from  
pg_standby to a log file that you can then watch to verify that  
everything is working correctly once you've started things.

8.  Drop in, or symlink, your standby's postgresql.conf file.
8 a.) If you don't symlink your pg_xlog directory to write WALs to a  
separate drive, you can safely delete everything under data_dir/ 
pg_xlog on the standby host.
9. Start the standby db server with a normal: pg_ctl start -D /path/ 
to/data_dir/
10. run a: tail -f on your standby log and watch to make sure that  
it's replaying logs.  If everything's cool you'll see some info on  
each WAL file, in order, that the standby looks for along with  
'success' messages.  If it can't find the files for some reason,  
you'll see repeated messages like: 'WAL file not present yet.  
Checking for trigger file...' (assuming you set up pg_standby to look  
for a trigger file in your recovery_command).
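
For reference, the recovery.conf on the standby can be as small as one
line; the following is only a sketch (the paths, trigger-file location,
and log redirection are illustrative -- check the pg_standby README for
the exact flags in your version):

restore_command = 'pg_standby -t /tmp/pgsql.trigger /path/to/wal_archive %f %p 2>> /path/to/pg_standby.log'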


Execute this entire process at least a couple of times, bringing the  
standby up into normal operations mode once it has played through all  
of the necessary WAL files (as noted in the .backup file), so that you  
can connect to it and verify that everything looks good, before doing  
all of this and leaving it running indefinitely.  Once you've done it a  
couple of times, it becomes dirt simple.  If you have any questions  
about any of this, don't hesitate to ask.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Recover by Log

2008-01-31 Thread Erik Jones

On Jan 31, 2008, at 10:02 AM, T.J. Adami wrote:


How can I recover data (or replicate it to another server) based on
the PostgreSQL logs?
Where are they, and how do I do it?


See chapters 23, 24, and 27 (at least) of the manual.  For one
example of the process once you've read through all of that, see my
response just now to Steve Clark's question about warm standby servers.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Willem Buitendyk
The 'all_client_times' table has 753698 rows.  The lagfunc() on the
sorted view returns 753576 rows and appears to work exactly as needed.
Using the function on an unsorted table returns only 686 rows and is
missing a whole lot of data.  Running the count query returns 122,
which is correct, as that is the number of clients I have.  Each
client has between 5K and 7K records.


The way I see it, for each client there will be one row, namely the
first in the series, that will not be included in the final results,
as it has no previous time.  With that in mind, if I take my table row
count of 753698 and subtract the number of clients I have, 122, I
should get 753576 results, which is what I get when I use your method
on a sorted table, but not when I use it on an unsorted table.
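
That arithmetic is easy to sanity-check straight from SQL, by the way;
a quick query against the table above:

select count(*) - count(distinct client_id) as expected_rows
from all_client_times;

should come back with 753576.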


willem

Adam Rich wrote:

Hi Willem,

  

for some reason the order by's aren't working.



Could you provide more details?  Do you get a specific error message?

  

only returning 658 rows instead of the 750K.



You should not expect the same row count in the source table and the
result set.  Even in your example, you provided 8 source rows and got
4 result rows.  The correct number of results is the number of records
belonging to client_ids that have two or more records in
all_client_times, minus one per such client.  It may be that you have
750k records but only 658 rows that satisfy this requirement.

What do you get for this query?

select count(*) from ( select client_id, count(*) as cnt
from all_client_times group by client_id having count(*) > 1 ) as x



Adam




  



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] postgresql book - practical or something newer?

2008-01-31 Thread Robert Treat
On Wednesday 30 January 2008 02:54, Ow Mun Heng wrote:
 On Tue, 2008-01-29 at 19:16 +, Dave Page wrote:
  On Jan 29, 2008 6:16 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
   I try to be reasonable (no laughing people :)).
 
  Oh it's hard, so very, very hard!

 But seriously, I've ranted on this some time ago (and you can tell that
 I'm about to start again).

 <rant>
 One of the worst aspects of PG is the documentation, or rather the lack
 of it from traditional publishing houses. The manual is fine and all,
 but in most cases what I find it lacks is examples: examples to show
 what a particular field/query means, but also to show exactly how a
 particular problem can be solved.

 When I played with both MSSQL and MySQL, I had loads of books (and I
 bought a few too; I didn't bother subscribing to Safari, it just
 ain't a book!) to use as references and whatnot.

 In PG, all there is is the manual, a book by Robert Treat, the book
 from Joshua, and one or two other books by authors I can't remember,
 and that's about it.

 Then I have to go hunting (via Google) for any bits of blogs or
 presentation slides from meetups/talks to find out how to do a
 particular thing (thanks Bruce M; thanks Robert T, excellent
 partitioning talk!; thanks PgCon!) and pore over those.

 Other than that, it's more or less 'bang your head here', then send
 email to the list and hope someone answers.

 I hang on to my O'Reilly SQL Hacks book tightly, as it gives me
 examples of how to solve a problem and even of how other DBs solve it.

 I wish there was a book like MySQL Cookbook (which I have a copy of).
 </rant>

Just so you know, I approached OReally about writing a PostgreSQL Cookbook, 
and they turned it down. They did offer me some other titles, but those don't 
seem to have gone anywhere. 

I have thought of going the self-publishing route, but the argument
against it is the same reason you don't see a lot of book publishers
working on PG books: the sales just aren't that strong. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] warm standby examples.

2008-01-31 Thread Steve Clark

Erik Jones wrote:

On Jan 31, 2008, at 10:04 AM, Steve Clark wrote:



Hello List,

I am going to be setting up a warm standby postgresql 8.2.5 high  
availability 2 server system. I was
wondering if anybody that has done this can share some scripts,  
pertinent postgresql.conf entries,
etc so I don't have to reinvent the wheel. I have read the manual a  
couple of times and it is a lot to

pull together.

Anything would be greatly appreciated.



The complexity in the docs comes from explaining what everything is  
and how it all works.  There are a couple of options available to you:
use the walmgr.py portion of Skype's SkyTools package, which will
handle PITR backups from a primary to a single slave, or do it
manually; I'll cover the manual route here.  Actually getting a warm
standby up is a pretty simple process.


Pre-process recommendations:
a.) Use pg_standby for your restore_command in the recovery.conf file  
on the standby
b.) Set up your standby host's environment and directory structure
exactly the same as your primary.  Otherwise you'll need to spend
time changing any symlinks you've created on the primary for xlogs,
tablespaces, or whatnot, which is really just an opportunity for error.
c.) Pre-configure both the postgresql.conf and recovery.conf files  
for your standby.  I usually keep all of my different config files  
for all of my different servers in a single, version-controlled  
directory that I can then check out and symlink to.  Again,  
consistent environment & directory setups make symlinks your best
friend.
d.) Use ssh keys to simply and safely transfer files between hosts.

e.) Follow all of the advice in the manual wrt handling errors.

1.  Set archive_command in your postgresql.conf; rsync is a popular
choice, or you can just use one of the examples from the docs.  I use:
rsync -a %p [EMAIL PROTECTED]:/path/to/wal_archive/%f
2.  Reload your config -- either: SELECT pg_reload_conf(); from psql  
or: pg_ctl reload -D data_dir/

3.  Verify that the WALs are being shipped to their destination.
4.  In psql, SELECT pg_start_backup('some_label');
5.  Run your base backup.  Again, rsync is good for this with  
something as simple as: rsync -a --progress /path/to/data_dir/*  
[EMAIL PROTECTED]:/path/to/data_dir/
 I'd suggest running this in a screen term window; the --progress
flag will let you watch to see how far along the rsync is. The -a
flag will preserve symlinks as well as all file permissions & ownership.

6.  In psql, SELECT pg_stop_backup();
	-- this drops a file to be archived that will have the same name as
the first WAL shipped after the call to pg_start_backup(), with
a .backup suffix.  Inside will be the start & stop WAL records
defining the range of WAL files that need to be replayed before you
can consider bringing the standby out of recovery.
7.  Drop in, or symlink, your recovery.conf file in the standby's  
data_dir.
	-- The restore command should use pg_standby (its help/README is
simple and to the point).  I'd recommend redirecting all output from
pg_standby to a log file that you can then watch to verify that
everything is working correctly once you've started things.

8.  Drop in, or symlink, your standby's postgresql.conf file.
8 a.) If you don't symlink your pg_xlog directory to write WALs to a  
separate drive, you can safely delete everything under data_dir/ 
pg_xlog on the standby host.
9. Start the standby db server with a normal: pg_ctl start -D /path/ 
to/data_dir/
10. Run a: tail -f on your standby log and watch to make sure that
it's replaying logs.  If everything's cool you'll see some info on
each WAL file, in order, that the standby looks for, along with
'success' messages.  If it can't find the files for some reason,
you'll see repeated messages like: 'WAL file not present yet.
Checking for trigger file...' (assuming you set up pg_standby to look
for a trigger file in your restore_command).


Execute this entire process at least a couple of times, bringing the
standby up into normal operations mode once it has played through all
of the necessary WAL files (as noted in the .backup file) and
connecting to it to verify that everything looks good, before doing
this for real and leaving it running indefinitely.  Once you've done
it a couple of times, it becomes dirt simple.  If you have any
questions about any of this, don't hesitate to ask.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Thanks much Erik - this is exactly what I was looking for.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL][UMN_MAPSERVER-USERS] query postgres AND oracle

2008-01-31 Thread John Smith
guys,
i got geometry in postgres, some other data in oracle. is it possible
to filter postgres geometry based on oracle data (without using
services)?
cheers, jzs

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL][UMN_MAPSERVER-USERS] query postgres AND oracle

2008-01-31 Thread Scott Marlowe
On Jan 31, 2008 1:38 PM, John Smith [EMAIL PROTECTED] wrote:
 guys,
 i got geometry in postgres, some other data in oracle. is it possible
 to filter postgres geometry based on oracle data (without using
 services)?

You either import the data from Oracle to pgsql and let the db do it
(fastest), use some form of dbi-link to cross-query the Oracle
database (slower), or do it in the application (usually slowest).

My recommendation is to write some kind of simple script to import the
data from oracle and keep it up to date, especially if it's not real
high volume.

Explaining more about your particular problem will get better answers.
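
For example, a dead-simple nightly sync along those lines (purely a
sketch; the oracle_ids table, the export_ids.sql script, and the
connection details are all hypothetical) might be:

# pull the filter data out of Oracle, then reload it on the Postgres side
sqlplus -s scott/tiger@orcl @export_ids.sql > /tmp/ids.csv
psql -d gisdb -c 'TRUNCATE oracle_ids'
psql -d gisdb -c "\copy oracle_ids from '/tmp/ids.csv' csv"

Cron that at whatever freshness your filtering needs, and the join
stays entirely inside Postgres.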

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Adam Rich

Ah, ok.  I see what's happening.  The data is retrieved from the tables,
and the sequence values are added, PRIOR to the order by, so that after
the order by, they are no longer in sequence order.  (The same thing can
happen in Oracle with ROWNUM).

You can go the sorted view route, or just use an inline view like the
one below; the OFFSET 0 acts as an optimization fence that keeps the
planner from flattening the subqueries, so nextval() is only evaluated
after the inner sort:

select a.client_id, b.arbnum, a.arbnum as previousarbnum,
       (b.arbnum - a.arbnum) as diffarbnum
from (select nextval('seq1') as s, *
      from (select client_id, arbnum from arb_test
            order by client_id, arbnum offset 0) as y
      offset 0) as a
inner join
     (select nextval('seq2') as s, *
      from (select client_id, arbnum from arb_test
            order by client_id, arbnum offset 0) as z
      offset 0) as b
on a.s = (b.s - 1)
where a.client_id = b.client_id;



 -Original Message-
 From: Willem Buitendyk [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 31, 2008 2:48 PM
 To: Adam Rich
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Oracle Analytical Functions
 
 Here is a little test example.  It seems that the second order by
 condition is not working - in this case datetime.
 
 create table arb_test (
 client_id integer,
 arbnum integer);
 
 insert into arb_test values (2,1);
 insert into arb_test values (2,33);
 insert into arb_test values (2,6);
 insert into arb_test values (2,76);
 insert into arb_test values (2,111);
 insert into arb_test values (2,10);
 insert into arb_test values (2,55);
 insert into arb_test values (7,12);
 insert into arb_test values (7,6);
 insert into arb_test values (7,144);
 insert into arb_test values (7,63);
 insert into arb_test values (7,87);
 insert into arb_test values (7,24);
 insert into arb_test values (7,22);
 insert into arb_test values (1,14);
 insert into arb_test values (1,23);
 insert into arb_test values (1,67);
 insert into arb_test values (1,90);
 insert into arb_test values (1,2);
 insert into arb_test values (1,5);
 insert into arb_test values (5,8);
 insert into arb_test values (5,42);
 insert into arb_test values (5,77);
 insert into arb_test values (5,9);
 insert into arb_test values (5,89);
 insert into arb_test values (5,23);
 insert into arb_test values (5,11);
 
 DROP SEQUENCE if exists seq1;
 DROP SEQUENCE if exists seq2;
 CREATE TEMPORARY SEQUENCE seq1 CACHE 1000;
 CREATE TEMPORARY SEQUENCE seq2 CACHE 1000;
 
 select a.client_id, b.arbnum, a.arbnum as previousarbnum, (b.arbnum -
 a.arbnum) as diffarbnum from
 (select nextval('seq1') as s, client_id, arbnum from arb_test
 order by client_id, arbnum OFFSET 0) as a
 inner join
 (select nextval('seq2') as s, client_id, arbnum from arb_test
 order by client_id, arbnum OFFSET 0) as b
 on a.s=(b.s-1) where a.client_id=b.client_id;
 
 --create or replace view arb_view as select * from arb_test order by
 client_id, arbnum;
 
 Here are the results:
 
  client_id | arbnum | previousarbnum | diffarbnum
 -----------+--------+----------------+------------
  1 | 23 | 14 |  9
  1 | 67 | 23 | 44
  1 | 90 | 67 | 23
  1 |  2 | 90 |-88
  1 |  5 |  2 |  3
  2 | 33 |  1 | 32
  2 |  6 | 33 |-27
  2 | 76 |  6 | 70
  2 |111 | 76 | 35
  2 | 10 |111 |   -101
  2 | 55 | 10 | 45
  5 | 42 |  8 | 34
  5 | 77 | 42 | 35
  5 |  9 | 77 |-68
  5 | 89 |  9 | 80
  5 | 23 | 89 |-66
  5 | 11 | 23 |-12
  7 |  6 | 12 | -6
  7 |144 |  6 |138
  7 | 63 |144 |-81
  7 | 87 | 63 | 24
  7 | 24 | 87 |-63
 
 When I used a sorted view:
 
 create or replace view arb_view as select * from arb_test order by
 client_id, arbnum;
 
 and redid it the results are:
 
  client_id | arbnum | previousarbnum | diffarbnum
 -----------+--------+----------------+------------
  1 |  5 |  2 |  3
  1 | 14 |  5 |  9
  1 | 23 | 14 |  9
  1 | 67 | 23 | 44
  1 | 90 | 67 | 23
  2 |  6 |  1 |  5
  2 | 10 |  6 |  4
  2 | 33 | 10 | 23
  2 | 55 | 33 | 22
  2 | 76 | 55 | 21
  2 |111 | 76 | 35
  5 |  9 |  8 |  1
  5 | 11 |  9 |  2
  5 | 23 | 11 | 12
  5 | 42 | 23 | 19
  5 | 77 | 42 | 35
  5 | 89 | 77 | 12
  7 | 12 |  6 |  6
  7 | 22 | 12 | 10
  7 | 24 | 22 |  2
  7 | 63 | 24 | 39
  7 | 87 | 63 | 24
  7 |144 | 87 | 57
 (23 rows)

Re: [GENERAL] Is PostGreSql's Data storage mechanism inferior?

2008-01-31 Thread Andrej Ricnik-Bay
On 01/02/2008, Tony Caduto [EMAIL PROTECTED] wrote:
 The part about the BSD license is bogus. A BSD license is the most
 desirable of any Open Source license and gives you the right to use
 PostgreSQL in your commercial apps without worry.
While I'm a big fan of the BSD license (for varied reasons) I think that
OpenSource hardliners like RMS would argue that the BSD license is *NOT*
in the true spirit of OpenSource *BECAUSE* of what you list as a bonus
of it ... the locking down of benefits reaped from OpenSource so that
they never flow back into the stream.


Cheers,
Andrej


-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Log file permissions?

2008-01-31 Thread Alvaro Herrera
Vivek Khera wrote:

 On Jan 31, 2008, at 10:21 AM, Alvaro Herrera wrote:

 I think you should be able to chmod the files after they have been
 created.  The postmaster changes its umask to 0077, so no file is
 group-readable.  I don't think it is configurable either.

 just move the logs into a subdir which has permissions applied to it,  
 then not worry about the files inside, since nobody can break through  
 the directory anyhow.

That doesn't work because the files won't be readable by anyone but the
postgres user.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Is PostGreSql's Data storage mechanism inferior?

2008-01-31 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 1 Feb 2008 10:04:53 +1300
Andrej Ricnik-Bay [EMAIL PROTECTED] wrote:

 On 01/02/2008, Tony Caduto [EMAIL PROTECTED] wrote:
  The part about the BSD license is bogus. A BSD license is the most
  desirable of any Open Source license and gives you the right to use
  PostgreSQL in your commercial apps without worry.

 While I'm a big fan of the BSD license (for varied reasons) I think
 that OpenSource hardliners like RMS would argue that the BSD license
 is *NOT* in the true spirit of OpenSource *BECAUSE* of what you list
 as a bonus of it ... the locking down of benefits reaped from
 OpenSource so that they never flow back into the stream.

RMS wouldn't make any such argument at all. His argument would be
centered around FREE software, not OpenSource software.

Joshua D. Drake

 
 
 Cheers,
 Andrej
 
 


- -- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL SPI Liaison | SPI Director |  PostgreSQL political pundit

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHonjaATb/zqfZUUQRAjfrAKCZu2KZigaCrNT6c9nbuAFYImRhdQCeI4uT
2gdMn7CA9XExIynw5mFogBs=
=pqTR
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] PL/Tcl implementation

2008-01-31 Thread fschmidt

Why doesn't the PL/Tcl (and PL/Python) implementation use the SPI functions? 
For example pltcl_set_tuple_values() calls NameStr() instead of SPI_fname()
and heap_getattr() instead of SPI_getbinval().  Why?  This makes the code
impossible to follow for someone who is not familiar with the postgres
source and has only read the online documentation.

-- 
View this message in context: 
http://www.nabble.com/PL-Tcl-implementation-tp15215251p15215251.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Willem Buitendyk
Here is a little test example.  It seems that the second order by 
condition is not working - in this case datetime.


create table arb_test (
client_id integer,
arbnum integer);

insert into arb_test values (2,1);
insert into arb_test values (2,33);
insert into arb_test values (2,6);
insert into arb_test values (2,76);
insert into arb_test values (2,111);
insert into arb_test values (2,10);
insert into arb_test values (2,55);
insert into arb_test values (7,12);
insert into arb_test values (7,6);
insert into arb_test values (7,144);
insert into arb_test values (7,63);
insert into arb_test values (7,87);
insert into arb_test values (7,24);
insert into arb_test values (7,22);
insert into arb_test values (1,14);
insert into arb_test values (1,23);
insert into arb_test values (1,67);
insert into arb_test values (1,90);
insert into arb_test values (1,2);
insert into arb_test values (1,5);
insert into arb_test values (5,8);
insert into arb_test values (5,42);
insert into arb_test values (5,77);
insert into arb_test values (5,9);
insert into arb_test values (5,89);
insert into arb_test values (5,23);
insert into arb_test values (5,11);

DROP SEQUENCE if exists seq1;
DROP SEQUENCE if exists seq2;
CREATE TEMPORARY SEQUENCE seq1 CACHE 1000;
CREATE TEMPORARY SEQUENCE seq2 CACHE 1000;

select a.client_id, b.arbnum, a.arbnum as previousarbnum, (b.arbnum -
a.arbnum) as diffarbnum from
(select nextval('seq1') as s, client_id, arbnum from arb_test
order by client_id, arbnum OFFSET 0) as a
inner join
(select nextval('seq2') as s, client_id, arbnum from arb_test
order by client_id, arbnum OFFSET 0) as b
on a.s=(b.s-1) where a.client_id=b.client_id;

--create or replace view arb_view as select * from arb_test order by 
client_id, arbnum;


Here are the results:

client_id | arbnum | previousarbnum | diffarbnum
-----------+--------+----------------+------------
1 | 23 | 14 |  9
1 | 67 | 23 | 44
1 | 90 | 67 | 23
1 |  2 | 90 |-88
1 |  5 |  2 |  3
2 | 33 |  1 | 32
2 |  6 | 33 |-27
2 | 76 |  6 | 70
2 |111 | 76 | 35
2 | 10 |111 |   -101
2 | 55 | 10 | 45
5 | 42 |  8 | 34
5 | 77 | 42 | 35
5 |  9 | 77 |-68
5 | 89 |  9 | 80
5 | 23 | 89 |-66
5 | 11 | 23 |-12
7 |  6 | 12 | -6
7 |144 |  6 |138
7 | 63 |144 |-81
7 | 87 | 63 | 24
7 | 24 | 87 |-63

When I used a sorted view:

create or replace view arb_view as select * from arb_test order by 
client_id, arbnum;


and redid it the results are:

client_id | arbnum | previousarbnum | diffarbnum
-----------+--------+----------------+------------
1 |  5 |  2 |  3
1 | 14 |  5 |  9
1 | 23 | 14 |  9
1 | 67 | 23 | 44
1 | 90 | 67 | 23
2 |  6 |  1 |  5
2 | 10 |  6 |  4
2 | 33 | 10 | 23
2 | 55 | 33 | 22
2 | 76 | 55 | 21
2 |111 | 76 | 35
5 |  9 |  8 |  1
5 | 11 |  9 |  2
5 | 23 | 11 | 12
5 | 42 | 23 | 19
5 | 77 | 42 | 35
5 | 89 | 77 | 12
7 | 12 |  6 |  6
7 | 22 | 12 | 10
7 | 24 | 22 |  2
7 | 63 | 24 | 39
7 | 87 | 63 | 24
7 |144 | 87 | 57
(23 rows)


This works the way it should. 


--drop table arb_test;
--drop view arb_view;

willem
The 'all_client_times' table has 753698 rows.  The lagfunc() on the 
sorted view returns 753576 rows and appears to work exactly as 
needed.  Using the function on an unsorted table returns only 686 rows 
and is missing a whole lot of data.  Running the count query returns 
122, which is correct, as that is the number of clients I have.  Each 
client has between 5K and 7K records.


The way I see it, for each client there will be one row, namely the 
first in the series, that will not be included in the final results, as 
it has no previous time.  With 

Re: [GENERAL] postgresql book - practical or something newer?

2008-01-31 Thread Guy Rouillier

Robert Treat wrote:

Just so you know, I approached OReally about writing a PostgreSQL Cookbook, 
and they turned it down. They did offer me some other titles, but those don't 
seem to have gone anywhere. 


As someone else pointed out in this thread, very much of what you need 
to know has been previously discussed at one point; the hard part is 
finding it.


What we need is for some of the people with the big brains ;) to come up 
with some new kind of hyperbook.  That would be the documentation in 
some form similar to what it is today, but somehow connected to the 
discussions that happen in the mailing lists.  That way, when something 
really insightful or helpful gets said in the mailing lists, it can get 
connected to a particular place in the documentation.  Then over time, 
the doc maintainers can take the best of those and incorporate them 
directly into the docs at the appropriate place.


This would not only benefit those looking for information, but also 
those hearty and knowledgeable souls (like Tom) who patiently provide it 
repeatedly as the same questions pop up every couple weeks/months. 
Plus, the documentation would grow and become much more useful over 
time.  Then, instead of repeating answers to repeating questions, we can 
just point to the appropriate place in the docs.  The unattached 
discussions could identify sections lacking in the docs; i.e., if enough 
unattached discussions accumulate for a particular topic, then that 
probably indicates the need for a new section in the docs on that topic.


To be honest, I think a hyperbook would be easier to implement with 
forums than with mailing lists.  The former are permanently resident in 
a known place, while the latter are out there in the ether (or in some 
unorganized archive that is notoriously hard to link to).


--
Guy Rouillier

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PL/Tcl implementation

2008-01-31 Thread Alvaro Herrera
fschmidt wrote:
 
 Why doesn't the PL/Tcl (and PL/Python) implementation use the SPI functions? 
 For example pltcl_set_tuple_values() calls NameStr() instead of SPI_fname()
 and heap_getattr() instead of SPI_getbinval().  Why?  This makes the code
 impossible to follow for someone who is not familiar with the postgres
 source and has only read the online documentation.

If you are reading the PL/Tcl source (i.e. the Postgres source, really),
you should definitely be using a source code cross-referencing system;
be it doxygen, or cscope, glimpse, or whatever tool fits the bill.  If
you cannot find out at a keystroke where to find the definition of
NameStr() you are doomed [to take a lot longer to understand what's
going on].
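
For instance, one way to set that up (assuming a checked-out source
tree and cscope installed):

cd postgresql-8.2.5/
cscope -R -b            # build a cross-reference index over the whole tree
cscope -d -L -1 NameStr # line-mode query: print where NameStr is defined

Any of the other tools mentioned work just as well; the point is that
definition lookups should be a keystroke, not a grep expedition.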

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Log file permissions?

2008-01-31 Thread Chander Ganesan

Alvaro Herrera wrote:

Vivek Khera wrote:
  

On Jan 31, 2008, at 10:21 AM, Alvaro Herrera wrote:



I think you should be able to chmod the files after they have been
created.  The postmaster changes its umask to 0077, so no file is
group-readable.  I don't think it is configurable either.
  
just move the logs into a subdir which has permissions applied to it,  
then not worry about the files inside, since nobody can break through  
the directory anyhow.



That doesn't work because the files won't be readable by anyone but the
postgres user.

  
You could just write a cron job that periodically goes to the log 
directory and changes the permissions on the existing log files to allow 
reading by whatever group owns the log files, then make nagios a member 
of that group.  Even if the log file is currently in use, once you 
change the permissions, they should stick.  Of course, there would be a 
permission change lag between the time the log file switch occurs and 
the cron job runs...
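
A minimal sketch of such a job, with the path and group handling left
as placeholders for whatever your setup uses:

# crontab entry for the postgres user: every 5 minutes, make any
# new log files group-readable
*/5 * * * * find /path/to/data_dir/pg_log -type f -exec chmod g+r {} \;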


As to Alvaro's recommendation of having a setting to change the log 
group, I think another idea would be to have a 'log_rotate_script' 
setting, thus allowing a script to be called with the log file name 
after a log file is rotated.  In such a case one could archive off 
existing files and, since the switch to a new log file had already 
occurred, also change permissions, etc., if needed.


--
Chander Ganesan
The Open Technology Group
One Copley Parkway, Suite 210
Morrisville, NC  27560
Phone: 877-258-8987/919-463-0999
http://www.otg-nc.com



[GENERAL] Dump schema without the functions

2008-01-31 Thread Stefan Schwarzer

Hi there,

how can I dump a schema with all tables, but without the functions?  Is 
there a way to do it, or do I have to drop the functions manually 
after using pg_restore?


Thanks for any advice,

Stef



Re: [GENERAL] Is PostGreSql's Data storage mechanism inferior?

2008-01-31 Thread Tom Lane
Andrej Ricnik-Bay [EMAIL PROTECTED] writes:
 On 01/02/2008, Tony Caduto [EMAIL PROTECTED] wrote:
 The part about the BSD license is bogus. A BSD license is the most
 desirable of any Open Source license and gives you the right to use
 PostgreSQL in your commercial apps without worry.

 While I'm a big fan of the BSD license (for varied reasons) I think that
 OpenSource hardliners like RMS would argue that the BSD license is *NOT*
 in the true spirit of OpenSource *BECAUSE* of what you list as a bonus
 of it ... the locking down of benefits reaped from OpenSource so that
 they never flow back into the stream.

The quoted article knocked *both* GPL and BSD as being too open.  Too
open for whom, he didn't say.  The rest of the article is at about the
same quality level :-(  I have seldom seen such a sterling example of
cluelessness combined with FUD-spouting.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Dump schema without the functions

2008-01-31 Thread Adam Rich
 how can I dump a schema with all tables, but without the functions? Is
 there a way to do it, or do I have to drop the functions manually
 after using pg_restore?

Stef,
You can edit the data between dump and restore, to comment out the
function references. Or, you can use the -L argument with pg_restore
to provide a list of the specific items you want to restore.

For example:

pg_dump -Fc mydb > db.dump
pg_restore -l db.dump | grep -v FUNCTION > db.nofunc.list
pg_restore -L db.nofunc.list -d newdb db.dump

(assuming the word FUNCTION doesn't appear elsewhere in your schema 
object names; if it does, you might try appending the schema, as in 
grep -v 'FUNCTION public')

Adam







---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Dump schema without the functions

2008-01-31 Thread Tom Lane
Stefan Schwarzer [EMAIL PROTECTED] writes:
 how can I dump a schema with all tables, but without the functions?

There's no built-in single command for that.  You can accomplish it by
using pg_restore -l to make a list of objects, then edit the list,
then pg_restore -L to restore only the objects in the edited list.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Dump schema without the functions

2008-01-31 Thread Stefan Schwarzer

how can I dump a schema with all tables, but without the functions?


There's no built-in single command for that.  You can accomplish it by
using pg_restore -l to make a list of objects, then edit the list,
then pg_restore -L to restore only the objects in the edited list.


Hmmm... I probably should have mentioned that it's not a normal dump,
but one that includes imported shapefiles.  My dump comes from this:

pg_dump -Fc ...

and (sorry, I'm not an expert myself) it seems to me that this
file is not editable anymore.


When I try to dump the file in text form, it gets rather big, and when  
trying to import it, I get this:


server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


Stef



