Re: [GENERAL] Building Pg 8.2.4 on AIX 5.3 doesn't produce shared libs?

2007-05-08 Thread Albe Laurenz
> Sic Transit Gloria Mundi writes:
>> I would like to use Perl, DBI, and DBD::Pg on AIX.  As I 
>> understand it, I need a shared lib version of the client libs 
>> for this.
>> 
>> When building on AIX5.3 ML04 (powerpc_power5, 64 bit), it 
>> seems only the static libraries are built.  This seems true 
>> with either xlc (8.x) or gcc (3.3.2).  Everything works, 
>> there are just no ./lib/*.so at the end of the process.
> 
> IIRC, on AIX the .a files actually are shared libraries.  You don't need
> anything named .so.  Check the -hackers and -patches archives from back
> around mid-Sept 2006 for details.

Correct. libpq.a IS the shared library.
Run 'ar -t libpq.a' to see the shared object therein.

Try running 'ldd psql'; the output should contain 'libpq.a(libpq.so)'
if it is dynamically linked.

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Spliting a string in plpgsql

2007-05-08 Thread Albe Laurenz
> Jasbinder Singh Bali <[EMAIL PROTECTED]> schrieb:
>> I'm writing a function in plpgsql and i need to do the following:
>> 
>> I have a string in the following format.
>> 
>> mail.yahoo.com
>> 
>> In this string, i need to figure out the number of dots in it and split the
>> string into two on last but one dot.
> 
> Number of dots:
> 
> test=*# select 
> length(regexp_replace('mail.yahoo.com','[^\.]','','g'));
>  length
> --------
>   2
> (1 row)

I think that this is the desired split:

test=> SELECT regexp_replace('mail.yahoo.com',
E'^(.*)\\.([^.]*\\.[^.]*)$', E'\\1');
 regexp_replace 
----------------
 mail
(1 row)

test=> SELECT regexp_replace('mail.yahoo.com',
E'^(.*)\\.([^.]*\\.[^.]*)$', E'\\2');
 regexp_replace 
----------------
 yahoo.com
(1 row)
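
If you want that wrapped up for use inside PL/pgSQL, a minimal sketch
could look like this (function name made up, untested):

CREATE OR REPLACE FUNCTION split_host(h text, OUT head text, OUT tail text)
AS $$
BEGIN
    -- everything before the second-to-last dot
    head := regexp_replace(h, E'^(.*)\\.([^.]*\\.[^.]*)$', E'\\1');
    -- the last two dot-separated labels
    tail := regexp_replace(h, E'^(.*)\\.([^.]*\\.[^.]*)$', E'\\2');
END;
$$ LANGUAGE plpgsql;

SELECT * FROM split_host('mail.yahoo.com');  -- head = 'mail', tail = 'yahoo.com'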

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Views- Advantages and Disadvantages

2007-05-08 Thread Andrej Ricnik-Bay

On 5/9/07, Ashish Karalkar <[EMAIL PROTECTED]> wrote:

> Hello All,

Hi,


> Can anybody please point me to Advantages and Disadvantages
> of using view

The most obvious advantage (for me, anyway) is to restrict which
columns a user can see in a table.
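
For example (a quick sketch; the table and column names are made up):

CREATE TABLE staff (id int, name text, salary numeric);
CREATE VIEW staff_public AS
    SELECT id, name FROM staff;           -- salary deliberately left out
GRANT SELECT ON staff_public TO some_user;  -- but no grant on staff itself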

I'm sure there are others.

> With Regards
> Ashish...

Cheers,
Andrej


--
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] problem with a conditional statement

2007-05-08 Thread Albe Laurenz
Kirk Wythers wrote:
>>> I am struggling to get a CASE WHEN statement to work within another
>>> CASE WHEN. Here is my original code:
>>>
>>> SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id ELSE
>>> s.obs_id END AS obs_id,  site_near.station_id, site_near.longname,
>>> w.year, w.doy, w.precip, w.tmin, w.tmax,
>>>
>>> --replace missing solar values (-999) with the average of all solar
>>> --values from that month (s.month)
>>>
>>> --CASE  s.par WHEN -999 THEN AVG( s.par) ELSE s.par END
>>> --FROM solar s
>>> --GROUP BY s.month;
>>>
>>> FROM site_near INNER JOIN solar s ON
>>> site_near.ref_solar_station_id = s.station_id  AND
>>> site_near.obs_year = s.year
>>> INNER JOIN weather w ON site_near.ref_weather_station_id =
>>> w.station_id AND site_near.obs_year = w.year AND s.date = w.date
>>> WHERE w.station_id = 211630;
>>
> Thank you for the reply. I see what you are doing in the creating of  
> avgsol. That should work perfectly. However, I am unsure how you are  
> working it into the existing code.

I did not provide the complete statement because
a) I am lazy and
b) I didn't want to create the impression that it was bulletproof
   tested SQL :^)

> to look like this:
> 
> SELECT CASE WHEN w.station_id = site_near.station_id THEN 
> w.obs_id ELSE
> s.obs_id END AS obs_id,  site_near.station_id, site_near.longname,  
> w.year, w.doy, w.precip, w.tmin, w.tmax,
> --replace missing values (-999) with the monthly average
> CASE s.par WHEN -999 THEN avgsol.par ELSE s.par END
> 
> Correct?

Yes!

>>   FROM solar s INNER JOIN ...,
> 
> I can't quite figure out what you are suggesting here?
> 
>> (SELECT month, AVG(par) FROM solar GROUP BY month) AS avgsol
>>   WHERE s.month = avgsol.month
>> AND ...
> 
> Do you mean:
> 
> FROM site_near INNER JOIN solar s ON
> (SELECT month, AVG(par) FROM solar GROUP BY month) AS avgsol WHERE  
> s.month = avgsol.month
> AND site_near.ref_solar_station_id = s.station_id  AND  
> site_near.obs_year = s.year
> INNER JOIN weather w ON site_near.ref_weather_station_id =  
> w.station_id AND site_near.obs_year = w.year AND s.date = w.date
> WHERE w.station_id = 211630;
> 
> I think my trouble is figuring how to place the code snippet:
> 
> (SELECT month, AVG(par) FROM solar GROUP BY month) AS avgsol
>WHERE s.month = avgsol.month
>  AND ...
> 
> Sorry for being so dull

Sorry for being so lazy :^)

Here is a more elaborate version, I'm trying to add 'avgsol' to
your original FROM clause:

FROM site_near
  INNER JOIN solar s ON
(site_near.ref_solar_station_id = s.station_id
 AND site_near.obs_year = s.year)
  INNER JOIN weather w ON
(site_near.ref_weather_station_id = w.station_id
 AND site_near.obs_year = w.year
 AND s.date = w.date)
  INNER JOIN (SELECT month, AVG(par) AS par
              FROM solar GROUP BY month) AS avgsol ON
    (s.month = avgsol.month)
WHERE ...
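
Put together with your SELECT list, that would look something like this
(assembled from the snippets above, still untested; note the AS par
alias in the subquery so that avgsol.par resolves):

SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id
            ELSE s.obs_id END AS obs_id,
       site_near.station_id, site_near.longname,
       w.year, w.doy, w.precip, w.tmin, w.tmax,
       CASE s.par WHEN -999 THEN avgsol.par ELSE s.par END AS par
FROM site_near
  INNER JOIN solar s ON
    (site_near.ref_solar_station_id = s.station_id
     AND site_near.obs_year = s.year)
  INNER JOIN weather w ON
    (site_near.ref_weather_station_id = w.station_id
     AND site_near.obs_year = w.year
     AND s.date = w.date)
  INNER JOIN (SELECT month, AVG(par) AS par
              FROM solar GROUP BY month) AS avgsol ON
    (s.month = avgsol.month)
WHERE w.station_id = 211630;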

Still no claim for correctness.

Does it make more sense now?

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Views- Advantages and Disadvantages

2007-05-08 Thread Ashish Karalkar
Hello All,

Can anybody please point me to Advantages and Disadvantages of using view


With Regards
Ashish...

Re: [GENERAL] printing variable values in pl/pgsql functions

2007-05-08 Thread Martin Gainty
\echo will print contents of whatever follows
http://www.postgresql.org/docs/8.1/static/app-psql.html

M--
This email message and any files transmitted with it contain confidential
information intended only for the person(s) to whom this email message is
addressed.  If you have received this email message in error, please notify
the sender immediately by telephone or email and destroy the original
message without making a copy.  Thank you.

  ----- Original Message -----
  From: Harpreet Dhaliwal 
  To: pgsql-general@postgresql.org 
  Sent: Tuesday, May 08, 2007 8:40 PM
  Subject: [GENERAL] printing variable values in pl/pgsql functions


  Hi,

  I have a pl/pgsql in which i have to print various variable values during the 
execution of the function to see
  what values are getting populated inside those variables with due course of 
time.

  PRINT  
  doesn't work.

  Can anyone tell me how to print these values??


  Thanks,
  ~Harpreet


Re: [GENERAL] printing variable values in pl/pgsql functions

2007-05-08 Thread Jasbinder Singh Bali

you can use
RAISE NOTICE 'i want to print % and %', var1,var2;

then run your function and click the MESSAGE tab at the bottom of your query
analyzer screen and you'll see something like

NOTICE: i want to print <value of var1> and <value of var2>
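
A self-contained sketch (function and variable names made up):

CREATE OR REPLACE FUNCTION notice_demo() RETURNS void AS $$
DECLARE
    var1 integer := 5;
    var2 text := 'abc';
BEGIN
    -- each % is replaced by the next argument in order
    RAISE NOTICE 'i want to print % and %', var1, var2;
END;
$$ LANGUAGE plpgsql;

SELECT notice_demo();
-- NOTICE:  i want to print 5 and abc

On 5/8/07, Harpreet Dhaliwal <[EMAIL PROTECTED]> wrote: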




Hi,

I have a pl/pgsql in which i have to print various variable values during
the execution of the function to see
what values are getting populated inside those variables with due course
of time.

PRINT 
doesn't work.

Can anyone tell me how to print these values??


Thanks,
~Harpreet



[GENERAL] printing variable values in pl/pgsql functions

2007-05-08 Thread Harpreet Dhaliwal

Hi,

I have a pl/pgsql in which i have to print various variable values during
the execution of the function to see
what values are getting populated inside those variables with due course of
time.

PRINT 
doesn't work.

Can anyone tell me how to print these values??


Thanks,
~Harpreet


Re: [GENERAL] Idle session timeout?

2007-05-08 Thread Scott Marlowe
On Tue, 2007-05-08 at 17:35, Sean Murphy wrote:
> Scott Marlowe wrote:

> > Well, you could set it on individual workstations instead of on the
> > server.  I.e. if you set tcp_keepalive on your workstation to 500, but
> > leave Wally and Dilbert set at the default 7200 then they'd still
> > timeout and you wouldn't.
> > 
> 
> Unfortunately, my individual workstations are all running Windows...:(
> any idea if/where this could be set in MS-land?

Nope.  You're much more likely to get a useful answer from google than
from me regarding windows. :)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Idle session timeout?

2007-05-08 Thread Paul Lambert

Sean Murphy wrote:
> Scott Marlowe wrote:
>> On Tue, 2007-05-08 at 15:59, Sean Murphy wrote:
>>> Scott Marlowe wrote:
>>>> On Tue, 2007-05-08 at 12:19, Sean Murphy wrote:
>>>>> Tom Lane wrote:
>>>>>> Sean Murphy <[EMAIL PROTECTED]> writes:
>>>>>>> I'm WAY out of my depth here, but my impression, based on the
>>>>>>> circumstances, is that there is some sort of an idle session timeout
>>>>>>> kicking in (most likely on the client side) and dropping the connection.
>>>>>> There's no such timeout in the Postgres server, for sure.  I would
>>>>>> actually bet that your problem is in some router between the client and
>>>>>> the server.  In particular, routers that do NAT address mapping
>>>>>> typically have a timeout after which they will forget the mapping for an
>>>>>> idle connection.  If you've got one of those, see if it'll let you
>>>>>> change the timeout.
>>>>>>
>>>>>> If you can't do that, you might think about teaching your client-side
>>>>>> code to send dummy queries every so often.
>>>>>>
>>>>>> regards, tom lane
>>>>> I've already maxed out the connection timeout at the firewall... and
>>>>> I've been using dummy queries every five minutes, but it just feels like
>>>>> a crutch to do so.
>>>> Have you looked into tcp keepalive settings?
>>>>
>>>> net.ipv4.tcp_keepalive_intvl = 75
>>>> net.ipv4.tcp_keepalive_probes = 9
>>>> net.ipv4.tcp_keepalive_time = 500
>>>>
>>>> Not sure if those settings will help with an NAT router or not but it's
>>>> worth a try. Pgsql 8.2 can set those for you.
>>> I may be celebrating prematurely,
>> Never stopped me :)
>>> but resetting the tcp_keepalive
>>> parameters seems to have done the trick - I left a pgAdmin connection
>>> that *always* drops after inactivity up while I went to lunch and it was
>>> still alive when I got back.
>>>
>>> Is there a way to alter the tcp_keepalive settings on an app-by-app
>>> basis rather than for the whole system?
>> Well, you could set it on individual workstations instead of on the
>> server.  I.e. if you set tcp_keepalive on your workstation to 500, but
>> leave Wally and Dilbert set at the default 7200 then they'd still
>> timeout and you wouldn't.
>
> Unfortunately, my individual workstations are all running Windows...:(
> any idea if/where this could be set in MS-land?



You need to alter some settings in the registry.

See: http://msdn2.microsoft.com/en-us/library/aa302363.aspx

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Idle session timeout?

2007-05-08 Thread Sean Murphy
Scott Marlowe wrote:
> On Tue, 2007-05-08 at 15:59, Sean Murphy wrote:
>> Scott Marlowe wrote:
>>> On Tue, 2007-05-08 at 12:19, Sean Murphy wrote:
 Tom Lane wrote:
> Sean Murphy <[EMAIL PROTECTED]> writes:
>> I'm WAY out of my depth here, but my impression, based on the
>> circumstances, is that there is some sort of an idle session timeout
>> kicking in (most likely on the client side) and dropping the connection.
> There's no such timeout in the Postgres server, for sure.  I would
> actually bet that your problem is in some router between the client and
> the server.  In particular, routers that do NAT address mapping
> typically have a timeout after which they will forget the mapping for an
> idle connection.  If you've got one of those, see if it'll let you
> change the timeout.
>
> If you can't do that, you might think about teaching your client-side
> code to send dummy queries every so often.
>
>   regards, tom lane
>
 I've already maxed out the connection timeout at the firewall... and
 I've been using dummy queries every five minutes, but it just feels like
 a crutch to do so.
>>> Have you looked into tcp keepalive settings?
>>>
>>> net.ipv4.tcp_keepalive_intvl = 75
>>> net.ipv4.tcp_keepalive_probes = 9
>>> net.ipv4.tcp_keepalive_time = 500
>>>
>>> Not sure if those settings will help with an NAT router or not but it's
>>> worth a try. Pgsql 8.2 can set those for you.
>>>
>> I may be celebrating prematurely,
> 
> Never stopped me :)
> 
>>  but resetting the tcp_keepalive
>> parameters seems to have done the trick - I left a pgAdmin connection
>> that *always* drops after inactivity up while I went to lunch and it was
>> still alive when I got back.
>>
>> Is there a way to alter the tcp_keepalive settings on an app-by-app
>> basis rather than for the whole system?
> 
> Well, you could set it on individual workstations instead of on the
> server.  I.e. if you set tcp_keepalive on your workstation to 500, but
> leave Wally and Dilbert set at the default 7200 then they'd still
> timeout and you wouldn't.
> 

Unfortunately, my individual workstations are all running Windows...:(
any idea if/where this could be set in MS-land?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Idle session timeout?

2007-05-08 Thread Scott Marlowe
On Tue, 2007-05-08 at 15:59, Sean Murphy wrote:
> Scott Marlowe wrote:
> > On Tue, 2007-05-08 at 12:19, Sean Murphy wrote:
> >> Tom Lane wrote:
> >>> Sean Murphy <[EMAIL PROTECTED]> writes:
>  I'm WAY out of my depth here, but my impression, based on the
>  circumstances, is that there is some sort of an idle session timeout
>  kicking in (most likely on the client side) and dropping the connection.
> >>> There's no such timeout in the Postgres server, for sure.  I would
> >>> actually bet that your problem is in some router between the client and
> >>> the server.  In particular, routers that do NAT address mapping
> >>> typically have a timeout after which they will forget the mapping for an
> >>> idle connection.  If you've got one of those, see if it'll let you
> >>> change the timeout.
> >>>
> >>> If you can't do that, you might think about teaching your client-side
> >>> code to send dummy queries every so often.
> >>>
> >>>   regards, tom lane
> >>>
> >> I've already maxed out the connection timeout at the firewall... and
> >> I've been using dummy queries every five minutes, but it just feels like
> >> a crutch to do so.
> > 
> > Have you looked into tcp keepalive settings?
> > 
> > net.ipv4.tcp_keepalive_intvl = 75
> > net.ipv4.tcp_keepalive_probes = 9
> > net.ipv4.tcp_keepalive_time = 500
> > 
> > Not sure if those settings will help with an NAT router or not but it's
> > worth a try. Pgsql 8.2 can set those for you.
> > 
> 
> I may be celebrating prematurely,

Never stopped me :)

>  but resetting the tcp_keepalive
> parameters seems to have done the trick - I left a pgAdmin connection
> that *always* drops after inactivity up while I went to lunch and it was
> still alive when I got back.
> 
> Is there a way to alter the tcp_keepalive settings on an app-by-app
> basis rather than for the whole system?

Well, you could set it on individual workstations instead of on the
server.  I.e. if you set tcp_keepalive on your workstation to 500, but
leave Wally and Dilbert set at the default 7200 then they'd still
timeout and you wouldn't.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PG on Debian 4.0.x ?

2007-05-08 Thread Brent Wood

Andreas wrote:
> Hi,
>
> I compiled the latest and greatest PG server on SUSE systems until now.
>
> Now I want to stay with the "official" binaries of the Debian project
> because I plan to evaluate if it is really that good for a server as
> everybody is telling me.
> It should get a server with only the minimum of programs that don't
> belong to the dedicated task, like in this case "run a PG server".
> Especially I don't want any development stuff like gcc on this server
> for security reasons. So I can't build it myself - at least not with
> this box.
>
> BTW ... the Debian installer had an anonymous option "SQL server" and
> installed PostgreSQL 8.1.8 as default without further question. This
> is cool because I - being a known pessimist - expected to find
> MySQL.   ;-)
>
> Do you know where I find PG 8.2.4 and pgAdmin 1.6.3 binaries for
> Debian 4.0.x ?

Hi Andreas,

I have spent the last couple of weeks looking at Debian 4 as a GIS
workstation using PostGIS, GRASS, GEOS, GDAL/OGR, Proj.4, QGIS, etc. I
figured it was worth a look because I'd heard so many good things about
apt vs yast for package management & the Debian GIS group is really
trying to provide a good GIS capability in this distro.


I don't really wanna get into distro wars, so this is simply my opinion, 
but I had lots of issues with Debian, especially in that I needed some 
of the latest versions of several packages, which I built from source 
anyway, just as on Suse. I am currently using OpenSuse, but also 
use/have used Ubuntu, Mandriva, Kubuntu, Fedora, SLED & SimplyMepis so 
I'm not too fixated on any particular distro, just use what seems 
easiest for me at the time.


It was great having pretty recent versions of GRASS, PostGIS, etc
installed & working very easily, but they weren't the versions I needed.
I tried Ubuntu & SimplyMepis to see if the setup tools, etc, made
things easier, but these were not compatible with the Debian
repositories anyway (kernel panics on trying to start after auto updates).


I've gone back to OpenSuse 10.2, as I don't see YAST package management
as very different to apt; the number of supported packages is certainly
much less, but it just works better for me.


As far as a GIS workstation is concerned, I'd say that if you need to 
build from source to get current versions, Debian has no real 
advantages. If you want to get something working quickly & easily, but 
don't need the latest versions, Debian works well.



Cheers,

 Brent Wood

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] log shipping/DB recovery with PostgreSQL 7.4

2007-05-08 Thread Joshua D. Drake

Porell, Chris wrote:
> Thanks for the info.
>
> What, then, is the best way to achieve my goal of having a "standby" DB that
> can be kept within 15 minutes or so in sync with the primary?  periodic
> pg_dump/restore?  Unfortunately, I can't upgrade right now.

www.slony.info

> -----Original Message-----
> From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 08, 2007 4:41 PM
> To: Porell, Chris
> Cc: 'pgsql-general@postgresql.org'
> Subject: Re: [GENERAL] log shipping/DB recovery with PostgreSQL 7.4
>
> Porell, Chris wrote:
>> I am trying to bring up a disaster recovery database using a cold-copy of
>> the datafiles on a different machine.  I've got all WAL files since this
>> copy was made.  Is it possible to make a 7.4 DB apply the WALs to the DB
>> when I bring it up?
>
> No, you need to be running >8.x.
>
> Sincerely,
>
> Joshua D. Drake




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] log shipping/DB recovery with PostgreSQL 7.4

2007-05-08 Thread Porell, Chris
Thanks for the info.

What, then, is the best way to achieve my goal of having a "standby" DB that
can be kept within 15 minutes or so in sync with the primary?  periodic
pg_dump/restore?  Unfortunately, I can't upgrade right now.



-----Original Message-----
From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 08, 2007 4:41 PM
To: Porell, Chris
Cc: 'pgsql-general@postgresql.org'
Subject: Re: [GENERAL] log shipping/DB recovery with PostgreSQL 7.4


Porell, Chris wrote:
> I am trying to bring up a disaster recovery database using a cold-copy of
> the datafiles on a different machine.  I've got all WAL files since this
> copy was made.  Is it possible to make a 7.4 DB apply the WALs to the DB
> when I bring it up?

No, you need to be running >8.x.

Sincerely,

Joshua D. Drake

-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
  http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/
-
IMPORTANT: The sender intends that this electronic message is for
exclusive use by the person to whom it is addressed. This message
may contain information that is confidential or privileged and
exempt from disclosure under applicable law. If the reader of this
message is not an intended recipient, be aware that any disclosure,
dissemination, distribution or copying of this communication, or
the use of its contents, is prohibited. If you have received this
message in error, please immediately notify the sender of your
inadvertent receipt and delete this message from all data storage
systems. Thank you.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Idle session timeout?

2007-05-08 Thread Sean Murphy
Scott Marlowe wrote:
> On Tue, 2007-05-08 at 12:19, Sean Murphy wrote:
>> Tom Lane wrote:
>>> Sean Murphy <[EMAIL PROTECTED]> writes:
 I'm WAY out of my depth here, but my impression, based on the
 circumstances, is that there is some sort of an idle session timeout
 kicking in (most likely on the client side) and dropping the connection.
>>> There's no such timeout in the Postgres server, for sure.  I would
>>> actually bet that your problem is in some router between the client and
>>> the server.  In particular, routers that do NAT address mapping
>>> typically have a timeout after which they will forget the mapping for an
>>> idle connection.  If you've got one of those, see if it'll let you
>>> change the timeout.
>>>
>>> If you can't do that, you might think about teaching your client-side
>>> code to send dummy queries every so often.
>>>
>>> regards, tom lane
>>>
>> I've already maxed out the connection timeout at the firewall... and
>> I've been using dummy queries every five minutes, but it just feels like
>> a crutch to do so.
> 
> Have you looked into tcp keepalive settings?
> 
> net.ipv4.tcp_keepalive_intvl = 75
> net.ipv4.tcp_keepalive_probes = 9
> net.ipv4.tcp_keepalive_time = 500
> 
> Not sure if those settings will help with an NAT router or not but it's
> worth a try. Pgsql 8.2 can set those for you.
> 

I may be celebrating prematurely, but resetting the tcp_keepalive
parameters seems to have done the trick - I left a pgAdmin connection
that *always* drops after inactivity up while I went to lunch and it was
still alive when I got back.

Is there a way to alter the tcp_keepalive settings on an app-by-app
basis rather than for the whole system?


Thanks!

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Pulling data from a filesystem level backup

2007-05-08 Thread Steve Atkins

So, I have a customer who just had their "pg_dump?
what's that?" moment after some broken software
managed to delete everything in a table.

But they do have filesystem level backups. It's running
on 7.4.something, and the tables contain just varchar,
text, integer and timestamp data. I'm going to try just
spinning them up in a new instance and see what
happens, but I'm not expecting much joy.

Can anyone suggest any tools or approaches that
might help recover the data?

Cheers,
  Steve


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] log shipping/DB recovery with PostgreSQL 7.4

2007-05-08 Thread Joshua D. Drake

Porell, Chris wrote:
> I am trying to bring up a disaster recovery database using a cold-copy of
> the datafiles on a different machine.  I've got all WAL files since this
> copy was made.  Is it possible to make a 7.4 DB apply the WALs to the DB
> when I bring it up?

No, you need to be running >8.x.

Sincerely,

Joshua D. Drake

--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] log shipping/DB recovery with PostgreSQL 7.4

2007-05-08 Thread Porell, Chris

I am trying to bring up a disaster recovery database using a cold-copy of
the datafiles on a different machine.  I've got all WAL files since this
copy was made.  Is it possible to make a 7.4 DB apply the WALs to the DB
when I bring it up?

Thanks!
Chris

-
IMPORTANT: The sender intends that this electronic message is for
exclusive use by the person to whom it is addressed. This message
may contain information that is confidential or privileged and
exempt from disclosure under applicable law. If the reader of this
message is not an intended recipient, be aware that any disclosure,
dissemination, distribution or copying of this communication, or
the use of its contents, is prohibited. If you have received this
message in error, please immediately notify the sender of your
inadvertent receipt and delete this message from all data storage
systems. Thank you.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Server specs to run PostgreSQL

2007-05-08 Thread Scott Marlowe
On Tue, 2007-05-08 at 14:25, Scott Marlowe wrote:
> On Tue, 2007-05-08 at 14:04, MaRCeLO PeReiRA wrote:
> > Hi there,
> > 
> > Actually my PostgreSQL server is running on a Athlon
> > XP 1800+, 512Mb RAM, IDE Disks, 10/100 netcard.
> > 
> > I would like to buy a new server, could you please
> > give me some information about the specs??
> > 
> > Is it important to have a Dual (or even Quad)
> > processor??? Will PostgreSQL use them??
> > 
> > What about memory??? How much???
> > 
> > SCSI Disks and 10/100/1000 netcards, it's ok!! No
> > doubt!
> > 
> > It's a medium use server, about 3000 transactions/day.
> 
> Assuming that most of your transactions happen from 9-5, then that's
> about 6 transactions per second.  Assuming these are mostly lightweight
> transactions, you'll be needing a pentium 100 with 64 megs of ram. :)

TYPO ALERT!  I meant 6 transactions per minute.  hence the Pentium 100
comment.  anyway

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSql embedded available?

2007-05-08 Thread Björn Lundin


On 8 May 2007, at 21.07, Rich Shepard wrote:

> On Tue, 8 May 2007, Scott Marlowe wrote:
>
>>> Is there a version of PostgreSql that can be embedded with an
>>> application?
>>
>> Nope, and it's not real likely to happen.  Take a look at SQLite.
>
> Yes, SQLite (http://www.sqlite.org/) is the way to go for
> embedded RDBMS.

or perhaps Mimer

/Björn



Re: [GENERAL] Server specs to run PostgreSQL

2007-05-08 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 05/08/07 14:04, MaRCeLO PeReiRA wrote:
> Hi there,
> 
> Actually my PostgreSQL server is running on a Athlon
> XP 1800+, 512Mb RAM, IDE Disks, 10/100 netcard.
> 
> I would like to buy a new server, could you please
> give me some information about the specs??
> 
> Is it important to have a Dual (or even Quad)
> processor??? Will PostgreSQL use them??
> 
> What about memory??? How much???
> 
> SCSI Disks and 10/100/1000 netcards, it's ok!! No
> doubt!
> 
> It's a medium use server, about 3000 transactions/day.

Presumably in the 10 hours from 08:00 to 18:00?  If so, that's:
300 txn/hr
5 txn/minute
0.08333 txn/sec.

Your existing server can handle that.

Is there something you aren't telling us?  Is the system 99.99%
SELECTs?  That would be 30,000,000 SELECTS/day.  Obviously a burden
to any but the biggest systems.

Are your tables indexed properly for the given queries?  More hw
won't solve crappy/non-existent index support

How complicated are the queries?  Simple and low volume, or 15 table
OLAP mega-joins?

For that matter, how complicated are the transactions?  Does each
INSERT trigger 18 UPDATEs and 10 DELETEs?  None of which have proper
index support?

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFGQNHbS9HxQb37XmcRAicrAJ4740CpjG4jERQnW/ta0VkhjR++RQCg4r3g
BRAWEy4mdnyKJzjehluGbb4=
=HbGC
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Server specs to run PostgreSQL

2007-05-08 Thread Scott Marlowe
On Tue, 2007-05-08 at 14:04, MaRCeLO PeReiRA wrote:
> Hi there,
> 
> Actually my PostgreSQL server is running on a Athlon
> XP 1800+, 512Mb RAM, IDE Disks, 10/100 netcard.
> 
> I would like to buy a new server, could you please
> give me some information about the specs??
> 
> Is it important to have a Dual (or even Quad)
> processor??? Will PostgreSQL use them??
> 
> What about memory??? How much???
> 
> SCSI Disks and 10/100/1000 netcards, it's ok!! No
> doubt!
> 
> It's a medium use server, about 3000 transactions/day.

Assuming that most of your transactions happen from 9-5, then that's
about 6 transactions per second.  Assuming these are mostly lightweight
transactions, you'll be needing a pentium 100 with 64 megs of ram. :)

Seriously though, the hardware you need / want REALLY depends on your
usage patterns.  Are you doing a lot of CPU intensive stuff, where
having more CPUs or faster / more memory would help, or are you mostly
I/O bound, and in need of a faster disk subsystem.

Dual CPUs are almost always a good thing, as there's enough going on
other than pgsql (the OS and all its processes) that having a second CPU
can make a machine noticeably more responsive.  After that, it's hard to
say.  PostgreSQL itself can only use 1 cpu per connection max, plus one
for things like th background writer or periodic vacuuming.  So, if
you've only got one user at a time ever hitting your DB, then multiple
CPUs wouldn't help.  

Here's what I recommend for a db server:

Lots of memory
2 CPUs (more if needed, maybe dual core x 2)
battery backed caching RAID controller.  Look at Areca, 3Ware/Escalade,
and LSI
Lots of hard disks for said controller
fast, efficient NIC.  Better to spend a little more in a quality NIC
with a low CPU overhead.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Server specs to run PostgreSQL

2007-05-08 Thread MaRCeLO PeReiRA
Hi there,

Actually my PostgreSQL server is running on a Athlon
XP 1800+, 512Mb RAM, IDE Disks, 10/100 netcard.

I would like to buy a new server, could you please
give me some information about the specs??

Is it important to have a Dual (or even Quad)
processor??? Will PostgreSQL use them??

What about memory??? How much???

SCSI Disks and 10/100/1000 netcards, it's ok!! No
doubt!

It's a medium use server, about 3000 transactions/day.

Thanks in advance,
Marcelo Pereira


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSql embedded available?

2007-05-08 Thread Rich Shepard

On Tue, 8 May 2007, Scott Marlowe wrote:

>> Is there a version of PostgreSql that can be embedded with an
>> application?
>
> Nope, and it's not real likely to happen.  Take a look at SQLite.

  Yes, SQLite (http://www.sqlite.org/) is the way to go for embedded RDBMS.
Syntax is close to that of postgres, and it's available for use in any
application. Great product when you need to incorporate an invisible data
storage facility.

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.|  Accelerator(TM)
 Voice: 503-667-4517  Fax: 503-667-8863

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSql embedded available?

2007-05-08 Thread Scott Marlowe
On Tue, 2007-05-08 at 13:26, Austin Winstanley wrote:
> Is there a version of PostgreSql that can be embedded with an
> application?
> 
> I have a Mono application and when I deploy it, I would like to deploy
> a postgresql database with it, but embedded, so the end user doesn't
> have to install postgresql separately. Is this possible?

Nope, and it's not real likely to happen.  Take a look at SQLite.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSql embedded available?

2007-05-08 Thread Tino Wildenhain

Austin Winstanley wrote:
> Is there a version of PostgreSql that can be embedded with an application?
>
> I have a Mono application and when I deploy it, I would like to deploy a
> postgresql database with it, but embedded, so the end user doesn't have
> to install postgresql separately. Is this possible?


Last time I saw a box with Postgresql embedded it was next to a box
with Oracle embedded... ;)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Idle session timeout?

2007-05-08 Thread Scott Marlowe
On Tue, 2007-05-08 at 12:19, Sean Murphy wrote:
> Tom Lane wrote:
> > Sean Murphy <[EMAIL PROTECTED]> writes:
> >> I'm WAY out of my depth here, but my impression, based on the
> >> circumstances, is that there is some sort of an idle session timeout
> >> kicking in (most likely on the client side) and dropping the connection.
> > 
> > There's no such timeout in the Postgres server, for sure.  I would
> > actually bet that your problem is in some router between the client and
> > the server.  In particular, routers that do NAT address mapping
> > typically have a timeout after which they will forget the mapping for an
> > idle connection.  If you've got one of those, see if it'll let you
> > change the timeout.
> > 
> > If you can't do that, you might think about teaching your client-side
> > code to send dummy queries every so often.
> > 
> > regards, tom lane
> > 
> 
> I've already maxed out the connection timeout at the firewall... and
> I've been using dummy queries every five minutes, but it just feels like
> a crutch to do so.

Have you looked into tcp keepalive settings?

net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 500

Not sure if those settings will help with an NAT router or not but it's
worth a try. Pgsql 8.2 can set those for you.
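
For the archives, a sketch of the per-connection server GUCs in 8.2
(values here are just examples; 0 means "use the OS default"):

test=> SET tcp_keepalives_idle = 500;      -- idle seconds before the first probe
test=> SET tcp_keepalives_interval = 75;   -- seconds between probes
test=> SET tcp_keepalives_count = 9;       -- unanswered probes before the drop
test=> SHOW tcp_keepalives_idle;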

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] PostgreSql embedded available?

2007-05-08 Thread Austin Winstanley

Is there a version of PostgreSql that can be embedded with an application?

I have a Mono application and when I deploy it, I would like to deploy a
postgresql database with it, but embedded, so the end user doesn't have to
install postgresql separately. Is this possible?

--
Thanks,
Austin Winstanley
Tradelogic Software Group


Re: [GENERAL] Time of previous REINDEX execution..

2007-05-08 Thread Jeff Davis
On Tue, 2007-05-08 at 13:21 +0100, Anoo Pillai wrote:
> Hi, 
>  
> How to get the time at which the previous REINDEX command was run? 
>  

I don't think that information is explicitly stored. You can log
statements like that and that might help in the future.

If you really need to know, a REINDEX changes the relfilenode, so you
can see the creation time of the file in the data directory
corresponding to the current indexes of the table. That will probably be
the last time REINDEX was run.
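
To find which file to look at, something like this should work (index
name assumed):

test=> SELECT relfilenode FROM pg_class WHERE relname = 'my_index';

The file named by that number lives under the database's directory in
$PGDATA/base.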

Does that help?

Regards,
Jeff Davis


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PG on Debian 4.0.x ?

2007-05-08 Thread Jeff Davis
On Tue, 2007-05-08 at 18:09 +0200, Andreas wrote:
> Hi,
> 
> I compiled the latest and greatest PG server on SUSE systems until now.
> 
> Now I want to stay with the "official" binaries of the Debian project 
> because I plan to evaluate if it is really that good for a server as 
> everybody is telling me.
> It should get a server with only the minimum of programs that don't 
> belong to the dedicated task, like in this case "run a PG server".
> Especially I don't want any development stuff like gcc on this server 
> for security reasons. So I can't build it myself - at least not with 
> this box.
> 
> BTW ... the Debian installer had an anonymous option "SQL server" and 
> installed PostgreSQL 8.1.8 as default without further question. This is 
> cool because I - being a known pessimist - expected to find MySQL.   ;-)
> 
> Do you know where I find PG 8.2.4 and pgAdmin 1.6.3 binaries for Debian 
> 4.0.x ?
> 

You can get postgresql-8.2.4 from unstable, which is probably the best
place to get it. 

pgAdmin is client software -- why would you want it on a dedicated
postgresql server? 

Regards,
Jeff Davis


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] PITR and tar

2007-05-08 Thread Jeff Davis
On Tue, 2007-05-08 at 13:24 -0400, Merlin Moncure wrote:
> On 5/8/07, Jeff Davis <[EMAIL PROTECTED]> wrote:
> > On Tue, 2007-05-08 at 08:47 +0200, Albe Laurenz wrote:
> > > > The docs recommend using tar to perform a base backup for PITR.
> > > >
> > > > Usually, tar reports notices like:
> > > > "tar: Truncated write; file may have grown while being archived."
> > >
> > > Did you call pg_start_backup(text) before you started to archive?
> > >
> >
> > I was referring to the result of the tar itself being a corrupted gzip
> > file (that couldn't be uncompressed with gunzip).
> >
> > I did indeed call pg_start/stop_backup().
> 
> is fsync on?
> 

Yes. I have a battery-backed cache as well, and there were no power
failures involved. 

Regards,
Jeff Davis


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Idle session timeout?

2007-05-08 Thread Sean Murphy
Tom Lane wrote:
> Sean Murphy <[EMAIL PROTECTED]> writes:
>> I'm WAY out of my depth here, but my impression, based on the
>> circumstances, is that there is some sort of an idle session timeout
>> kicking in (most likely on the client side) and dropping the connection.
> 
> There's no such timeout in the Postgres server, for sure.  I would
> actually bet that your problem is in some router between the client and
> the server.  In particular, routers that do NAT address mapping
> typically have a timeout after which they will forget the mapping for an
> idle connection.  If you've got one of those, see if it'll let you
> change the timeout.
> 
> If you can't do that, you might think about teaching your client-side
> code to send dummy queries every so often.
> 
>   regards, tom lane
> 

I've already maxed out the connection timeout at the firewall... and
I've been using dummy queries every five minutes, but it just feels like
a crutch to do so.

Sean

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PITR and tar

2007-05-08 Thread Merlin Moncure

On 5/8/07, Jeff Davis <[EMAIL PROTECTED]> wrote:
> On Tue, 2007-05-08 at 08:47 +0200, Albe Laurenz wrote:
>>> The docs recommend using tar to perform a base backup for PITR.
>>>
>>> Usually, tar reports notices like:
>>> "tar: Truncated write; file may have grown while being archived."
>>
>> Did you call pg_start_backup(text) before you started to archive?
>
> I was referring to the result of the tar itself being a corrupted gzip
> file (that couldn't be uncompressed with gunzip).
>
> I did indeed call pg_start/stop_backup().


is fsync on?

merlin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PITR and tar

2007-05-08 Thread Jeff Davis
On Tue, 2007-05-08 at 08:47 +0200, Albe Laurenz wrote:
> > The docs recommend using tar to perform a base backup for PITR.
> > 
> > Usually, tar reports notices like:
> > "tar: Truncated write; file may have grown while being archived."
> 
> Did you call pg_start_backup(text) before you started to archive?
> 

I was referring to the result of the tar itself being a corrupted gzip
file (that couldn't be uncompressed with gunzip).

I did indeed call pg_start/stop_backup(). 

Regards,
Jeff Davis


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PITR and tar

2007-05-08 Thread Jeff Davis
On Tue, 2007-05-08 at 10:25 -0500, Jim Nasby wrote:
> On May 7, 2007, at 1:58 PM, Jeff Davis wrote:
> > Second, it seems that it can cause a bad backup to occur if you  
> > pass the
> > "z" option to tar. Instead, piping the output of tar through the
> > compression program seems to avoid that problem (i.e. "tar cf - ... |
> > gzip > ..."). I am using FreeBSD's tar, other implementations may be
> > different.
> 
> What *exactly* are you seeing there? If anything -z should be safer  
> than piping through gzip, since you could easily accidentally pipe  
> stderr through gzip as well, which *would* corrupt the backup.
> 

tar: Truncated write; file may have grown while being archived. 
tar: Truncated write; file may have grown while being archived. 
tar: GZip compression failed 

is the output from my cron script (which is emailed to me). This
happened several times in a row. When I tried to extract one of those
backups, I got errors like (some names have been changed):

$ tar zxf mybackup.tar.gz
data/base/16418/32309.1: Premature end of gzip compressed data:
Input/output error
tar: Premature end of gzip compressed data: Input/output error

and 
$ gzip -dc mybackup.tar.gz > /dev/null
gzip: ../mybackup.tar.gz: unexpected end of file
gzip: ../mybackup.tar.gz: uncompress failed


This may be specific to FreeBSD's tar. I remember testing in the past on
Linux and never had these problems.

When I changed to do it as a pipe instead of using the "z" flag, it
worked fine. I still get the stderr properly (which is also emailed to
me via cron) but only contains the "truncated write" warnings.

> > Are my observations correct, and if so, should they be documented as a
> > potential "gotcha" when making base backups?
> 
> I believe the bit about tar complaining about changed files is  
> already in there, no?

I was talking about using the "z" flag with tar causing potential bad
backups as described above, not just the warnings. If that's true, there
are probably other people with untrustworthy backups.

Regards,
Jeff Davis




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Idle session timeout?

2007-05-08 Thread Tom Lane
Sean Murphy <[EMAIL PROTECTED]> writes:
> I'm WAY out of my depth here, but my impression, based on the
> circumstances, is that there is some sort of an idle session timeout
> kicking in (most likely on the client side) and dropping the connection.

There's no such timeout in the Postgres server, for sure.  I would
actually bet that your problem is in some router between the client and
the server.  In particular, routers that do NAT address mapping
typically have a timeout after which they will forget the mapping for an
idle connection.  If you've got one of those, see if it'll let you
change the timeout.

If you can't do that, you might think about teaching your client-side
code to send dummy queries every so often.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Spliting a string in plpgsql

2007-05-08 Thread Andreas Kretschmer
Jasbinder Singh Bali <[EMAIL PROTECTED]> schrieb:

> Hi,
> I'm writing a function in plpgsql and i need to do the following:
> 
> I have a string in the following format.
> 
> mail.yahoo.com
> 
> In this string, i need to figure out the number of dots in it and split the

Number of dots:

test=*# select length(regexp_replace('mail.yahoo.com','[^\.]','','g'));
 length
--------
  2
(1 row)


> string into two
> on last but one dot.

I'm not sure if i understand you correctly, perhaps with split_part(),
see
http://www.postgresql.org/docs/current/interactive/functions-string.html
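
For example (split_part takes the string, the delimiter, and a 1-based
field number):

test=*# select split_part('mail.yahoo.com', '.', 2);
 split_part
------------
 yahoo
(1 row)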


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Idle session timeout?

2007-05-08 Thread Sean Murphy
I've got a bunch of users on VB applications connecting to PG 8.2.3 via
psqlodbc with SSL. For a variety of reasons, some good, some probably
bad, I have the app open a connection and leave it open, using it as
needed to run queries back and forth.

If I look in my logs, I see loads of little clusters of entries like this:
2007-05-08 08:44:57 [EMAIL PROTECTED] LOG:  SSL SYSCALL error:
Connection reset by peer
2007-05-08 08:44:57 [EMAIL PROTECTED] LOG:  could not receive
data from client: Connection reset by peer
2007-05-08 08:44:57 [EMAIL PROTECTED] LOG:  unexpected EOF on
client connection

These seem to correspond to dropped connections for my users, generally
if a user with an open app that hasn't used it for a while attempts to
use it. I've put some code in to re-establish the connection to mitigate
user frustration, but it doesn't do much to mitigate my frustration.

I'm WAY out of my depth here, but my impression, based on the
circumstances, is that there is some sort of an idle session timeout
kicking in (most likely on the client side) and dropping the connection.
Judging by the error messages I'd be inclined to say that it's happening
in openSSL, not Postgres, but I don't know.

I've searched documentation and newsgroups for Postgres, psqlodbc, and
openSSL, and haven't found any user-settable timeout parameters. The one
thing I did come across that seemed to my clueless eyes to be relevant
was an OpenSSL API call: SSL_CTX_set_timeout(). Even assuming this is
relevant, wouldn't this be the kind of thing that would need to be
called by Postgres code at the point that SSL is integrated?

Anyway, I appreciate any help or guidance anyone might be able to give.
It would be awesome if there were a user parameter somewhere on the
server or client that someone could point out to me to take care of
this. If this is a bug, please help me figure out who it belongs to. If
this is a feature, please let me know so I can properly account for it
in my code.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Spliting a string in plpgsql

2007-05-08 Thread Jasbinder Singh Bali

Hi,
I'm writing a function in plpgsql and i need to do the following:

I have a string in the following format.

mail.yahoo.com

In this string, i need to figure out the number of dots in it and split the
string into two
on last but one dot.

Is there any way to accomplish this.
Please let me know

Thanks
~Jas


Re: [GENERAL] Building Pg 8.2.4 on AIX 5.3 doesn't produce shared libs?

2007-05-08 Thread Tom Lane
Sic Transit Gloria Mundi <[EMAIL PROTECTED]> writes:
> I would like to use Perl, DBI, and DBD::Pg on AIX.  As I understand it, I 
> need a shared lib version of the client libs for this.

> When building on AIX5.3 ML04 (powerpc_power5, 64 bit), it seems only the 
> static libraries are built.  This seems true with either xlc (8.x) or gcc 
> (3.3.2).  Everything works, there are just no ./lib/*.so at the end of the 
> process.

IIRC, on AIX the .a files actually are shared libraries.  You don't need
anything named .so.  Check the -hackers and -patches archives from back
around mid-Sept 2006 for details.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] npgsql and standard_conforming_strings

2007-05-08 Thread William Garrison
I use npgsql 1.0 to access a PostgreSql 8.2.3 database.  Recently, I 
decided to test with standard_conforming_strings = on and I noticed that 
npgsql still sends double-backslashes, which corrupts the data.


This is especially bad with byte arrays: if I insert N bytes I get back 
4*N bytes because \123 (one byte) becomes \\123 (4 bytes).
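
If my reading of the escaping rules is right, the blowup is reproducible
in psql alone (a sketch; 8.2 may also emit escape warnings for the
second case depending on escape_string_warning):

test=> SET standard_conforming_strings = on;
test=> SELECT length('\\123'::bytea);  -- both backslashes reach bytea:
 length                                -- \\ + '1' '2' '3' = 4 bytes
--------
      4

test=> SET standard_conforming_strings = off;
test=> SELECT length('\\123'::bytea);  -- the literal collapses to \123,
 length                                -- which bytea reads as one octal byte
--------
      1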


Is this a bug in npgsql, or is there some way I can get npgsql to 
realize that postgres is using standard_conforming_strings?


Thanks,
Bill

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Building Pg 8.2.4 on AIX 5.3 doesn't produce shared libs?

2007-05-08 Thread Sic Transit Gloria Mundi

I would like to use Perl, DBI, and DBD::Pg on AIX.  As I understand it, I need 
a shared lib version of the client libs for this.

When building on AIX5.3 ML04 (powerpc_power5, 64 bit), it seems only the static 
libraries are built.  This seems true with either xlc (8.x) or gcc (3.3.2).  
Everything works, there are just no ./lib/*.so at the end of the process.

I see the 8.2 history mentions a fix was added to allow shared libs to build on 
aix.

This must be something basic I don't understand.  Any pointers to get on the 
right track would be greatly appreciated.

Tim


Re: [GENERAL] Some problem with warm standby server

2007-05-08 Thread Nico Sabbi

Simon Riggs wrote:
>> then I updated the master with a batch of inserts, but after a while
>> the slave stopped with these messages:
>>
>> LOG:  restored log file "00010021" from archive
>> LOG:  record with zero length at 0/2148
>> LOG:  invalid primary checkpoint record
>> LOG:  restored log file "00010020" from archive
>> LOG:  restored log file "00010021" from archive
>> LOG:  invalid resource manager ID in secondary checkpoint record
>> PANIC:  could not locate a valid checkpoint record
>> LOG:  startup process (PID 19619) was terminated by signal 6
>> LOG:  aborting startup due to startup process failure
>
> Please run pg_controldata to print out the control file.

Hi, sorry for the long delay.
First of all I had to stop postgres with pg_ctl stop -s immediate, or it
wouldn't die because of the ongoing replication.

This is the output of pg_controldata:

[EMAIL PROTECTED]:/usr/local/postgres_replica/data$ pg_controldata /usr/local/postgres_replica/data/

pg_control version number:            812
Catalog version number:               200510211
Database system identifier:           5001030714849737714
Database cluster state:               in recovery
pg_control last modified:             Fri 27 Apr 2007 13:20:46 CEST
Current log file ID:                  0
Next log file segment:                26
Latest checkpoint location:           0/190C7E04
Prior checkpoint location:            0/190C7DC0
Latest checkpoint's REDO location:    0/190C7E04
Latest checkpoint's UNDO location:    0/0
Latest checkpoint's TimeLineID:       1
Latest checkpoint's NextXID:          3698809
Latest checkpoint's NextOID:          68745
Latest checkpoint's NextMultiXactId:  1
Latest checkpoint's NextMultiOffset:  0
Time of latest checkpoint:            Fri 27 Apr 2007 11:53:47 CEST
Maximum data alignment:               4
Database block size:                  8192
Blocks per segment of large relation: 131072
Bytes per WAL segment:                16777216
Maximum length of identifiers:        64
Maximum columns in an index:          32
Date/time type storage:               floating-point numbers
Maximum length of locale name:        128
LC_COLLATE:                           C
LC_CTYPE:                             C

> Backup all the files in case we need to inspect them.

ok

> What was the ending log sequence number (e.g. x/) from the previous
> recovery? I'll see if I can re-create this.

judging from the logs I guess it is 0/190C7E04:

LOG:  restored log file "00010019.000C7E04.backup" from archive
LOG:  restored log file "00010019" from archive
LOG:  checkpoint record is at 0/190C7E04
LOG:  redo record is at 0/190C7E04; undo record is at 0/0; shutdown FALSE
LOG:  next transaction ID: 3698809; next OID: 68745
LOG:  next MultiXactId: 1; next MultiXactOffset: 0
LOG:  automatic recovery in progress
LOG:  redo starts at 0/190C7E48

>> What did I do wrong? Is there any other procedure to follow to restart
>> a stopped replication?
>
> You're right, using the trigger is not the right way to stop/start the
> standby. Just stop/start the standby server normally.

as above: a plain stop hangs

> The trigger means that you'd like to perform a failover.
>
> There is a patch not yet applied which will make a new version of
> pg_standby. pg_standby's official status right now is beta, so please
> expect, look for and report any issues you find. Thanks.

thank you

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] PG on Debian 4.0.x ?

2007-05-08 Thread Andreas

Hi,

I compiled the latest and greatest PG server on SUSE systems until now.

Now I want to stay with the "official" binaries of the Debian project 
because I plan to evaluate if it is really that good for a server as 
everybody is telling me.
It should get a server with only the minimum of programs that don't 
belong to the dedicated task, like in this case "run a PG server".
Especially I don't want any development stuff like gcc on this server 
for security reasons. So I can't build it myself - at least not with 
this box.


BTW ... the Debian installer had an anonymous option "SQL server" and 
installed PostgreSQL 8.1.8 as default without further question. This is 
cool because I - being a known pessimist - expected to find MySQL.   ;-)


Do you know where I find PG 8.2.4 and pgAdmin 1.6.3 binaries for Debian 
4.0.x ?



regards
Andreas

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] SELECT TIMESTAMP WITH TIME ZONE ... AT TIME ZONE has inverted meaning with UTC times...

2007-05-08 Thread Tom Lane
"Andreas Schultz" <[EMAIL PROTECTED]> writes:
> MST is UTC-07, so i would expect that i can replace MST with UTC-07, but:

> # SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME
> ZONE 'UTC-07';
>   timezone
> ---------------------
>  2001-02-17 08:38:40

> The time returned is at UTC+07

A time zone name in that form is a POSIX-spec timezone specification,
and the POSIX spec says that positive is west from Greenwich.
Everywhere else in Postgres we follow the SQL spec, which says that
positive is east from Greenwich.  Aren't standards wonderful?
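
To spell that out, the SQL-spec (east-positive) reading can be had by
passing an interval instead of a POSIX zone name (a quick sketch; output
per my arithmetic):

test=> SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05'
test->        AT TIME ZONE INTERVAL '-07:00';
      timezone
---------------------
 2001-02-16 18:38:40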

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Continuous Archiving for Multiple Warm Standby Servers

2007-05-08 Thread Simon Riggs
On Mon, 2007-05-07 at 15:10 -0500, Thomas F. O'Connell wrote:

> I'm attempting to design a postgres system whereby an authoritative
> primary server simultaneously feeds continuous archives to a number of
> warm standby servers that live both on the local network and on remote
> networks.

> The sticking point in my current thinking about such a system is what
> to do in the event that any of an array of possible nodes becomes
> unreachable. I would expect a custom archive_command to have the
> intelligence about network reachability and to report a nonzero status
> if it was unable to submit an archive to any particular node.

> What is the advised remedy for this scenario in general? 

There is no advised remedy. It is designed to allow you to program it
any way you choose.

You can choose to provide maximum availability, by allowing it to work
even when one node is down, or you can choose to provide maximum
protection, by ensuring that it does not work if any node is down. Or
anywhere in between, as requirements dictate.

One of the problems with providing the example as a simple cp command is
that it tends to limit one's thinking about how much flexibility and
control is being offered.

pg_standby is just one of a number of possibilities for integration in
this area. We can easily publish others.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Any "guide to indexes" exists?

2007-05-08 Thread Jim Nasby

On May 7, 2007, at 11:01 AM, Karsten Hilbert wrote:

On Mon, May 07, 2007 at 10:47:24AM -0500, Jim Nasby wrote:

GiST can also be useful if you have to query in multiple dimensions,
which can occur outside the normal case of geometry. Best example I
know of is a table containing duration information in the form of
start_time and end_time. Trying to query for what events happened on
5/28/2005 will generally be much cheaper with a GiST index than a b-
tree.

Are you referring to queries with

... where some_timestamp between some_start and some_end ...

or

	... where some_timestamp > some_start and some_timestamp <  
some_end ...


Yes. :)

Say the table spans 10 years of records. You're looking for a  
specific set of records that affect a day 5 years ago. Your where  
clause will look something like this:


WHERE start_time < now() - interval '5 years' AND end_time > now() - interval '5 years 1 day'

B-tree indexes will be nearly useless in this case, because each one  
is going to match on over half the table. But there are operators  
that would let you treat this as a 2 dimensional problem and then use  
GiST (or r-tree, but that's no longer recommended). For example, if  
you create an index on:


box(point(start_time, start_time), point(end_time, end_time))

Then you just need to query for all rows that overlap the box defined  
by:


box(point(now() - interval '5 years 1 day', now() - interval '5 years 1 day'),  
point(now() - interval '5 years', now() - interval '5 years'))


You'd have to double-check the exact logic and see which overlaps  
operator you want, but the basic idea is the same: you're converting  
2 one-dimensional timelines (start_time and end_time) into a two- 
dimensional timeline, and then using geometric functions to index that.
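
To make that concrete, here is an untested sketch. The table and column  
names are invented; point() takes double precision, so the timestamps are  
mapped to epoch seconds first, and the columns are assumed to be timestamp  
without time zone (extract() on timestamptz isn't immutable, so it can't  
go into an index expression):

CREATE INDEX events_period_idx ON events USING gist (
    box(point(extract(epoch FROM start_time), extract(epoch FROM start_time)),
        point(extract(epoch FROM end_time), extract(epoch FROM end_time)))
);

-- find rows whose [start_time, end_time] range overlaps 2005-05-28
SELECT *
FROM events
WHERE box(point(extract(epoch FROM start_time), extract(epoch FROM start_time)),
          point(extract(epoch FROM end_time), extract(epoch FROM end_time)))
      && box(point(extract(epoch FROM timestamp '2005-05-28 00:00:00'),
                   extract(epoch FROM timestamp '2005-05-28 00:00:00')),
             point(extract(epoch FROM timestamp '2005-05-29 00:00:00'),
                   extract(epoch FROM timestamp '2005-05-29 00:00:00')));

As said above, you'd still want to double-check the boundary logic and  
the exact overlap operator for your case.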

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Slow query and indexes...

2007-05-08 Thread Jim Nasby
There are other ways to influence the selection of a seqscan, notably  
effective_cache_size and random_page_cost.


First, you need to find out at what point a seqscan is actually  
faster than an index scan. That's going to be a trial and error  
search, but eventually if you're going back far enough in time the  
seqscan will be faster. EXPLAIN ANALYZE has its own overhead, so a  
better way to test this is with psql's timing command, and wrap the  
query into a count so you're not shoving a bunch of data across to psql:


SELECT count(*) FROM (... your query goes here ...) a;

(SELECT 1 might work too and would be more accurate)

Once you've found the break even point, you can tweak all the cost  
estimates. Start by making sure that effective_cache_size is set  
approximately to how much memory you have. Increasing that will favor  
an index scan. Decreasing random_page_cost will also favor an index  
scan, though I'd try not to go below 2 and definitely not below 1.  
You can also tweak the CPU cost estimates (lower numbers will favor  
indexes). But keep in mind that doing that at a system level will  
impact every query running in the system. You may have no choice but  
to explicitly set custom parameters for just this statement. SET  
LOCAL and wrapping the SELECT in a transaction is a less painful way  
to do that.
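
For instance, using the query from this thread (a sketch - the numbers  
are made up and need tuning for your box):

BEGIN;
-- these settings revert automatically at COMMIT/ROLLBACK
SET LOCAL effective_cache_size = 131072;  -- in 8kB pages, i.e. ~1GB
SET LOCAL random_page_cost = 2.0;
SELECT max(date_time) FROM data_values
WHERE date_time > (now() - '15 weeks'::interval)
GROUP BY data_logger_id;
COMMIT;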


On May 7, 2007, at 10:47 AM, Jonas Henriksen wrote:


Well thanks, but that don't help me much.

I've tried setting an extra condition using datetime>(now() - '14
weeks'::interval)

explain analyze
SELECT max(date_time) FROM data_values
where date_time > (now() - '14 weeks'::interval)
GROUP BY data_logger_id;

HashAggregate  (cost=23264.52..23264.55 rows=2 width=12) (actual
time=1691.447..1691.454 rows=3 loops=1)
 ->  Bitmap Heap Scan on data_values  (cost=7922.08..21787.31
rows=295442 width=12) (actual time=320.643..951.043 rows=298589
loops=1)
   Recheck Cond: (date_time > (now() - '98 days'::interval))
   ->  Bitmap Index Scan on data_values_data_date_time_index
(cost=0.00..7848.22 rows=295442 width=0) (actual time=319.708..319.708
rows=298589 loops=1)
 Index Cond: (date_time > (now() - '98 days'::interval))
Total runtime: 1691.598 ms

However, when I switch to using datetime>(now() - '15  
weeks'::interval) I get:

explain analyze
SELECT max(date_time) FROM data_values
where date_time > (now() - '15 weeks'::interval)
GROUP BY data_logger_id;

HashAggregate  (cost=23798.26..23798.28 rows=2 width=12) (actual
time=3237.816..3237.823 rows=3 loops=1)
 ->  Seq Scan on data_values  (cost=0.00..22084.62 rows=342728
width=12) (actual time=0.037..2409.234 rows=344111 loops=1)
   Filter: (date_time > (now() - '105 days'::interval))
Total runtime: 3237.944 ms

Doing "SET enable_seqscan=off" speeds up the query and forces the use
of the index, but I don't really love that solution...


regards Jonas:))




On 5/7/07, Peter Eisentraut <[EMAIL PROTECTED]> wrote:

On Monday, 7 May 2007 15:53, Jonas Henriksen wrote:
> while if I add a GROUP BY data_logger the query uses a seq scan and a
> lot of time:
>
> explain analyze SELECT max(date_time) FROM data_values GROUP BY
> data_logger_id;

I don't think there is anything you can do about this.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/



---(end of  
broadcast)---

TIP 5: don't forget to increase your free space map settings



--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Slow query and indexes...

2007-05-08 Thread Jim Nasby

On May 8, 2007, at 3:29 AM, PFC wrote:
Create a table which contains your list of loggers (since it's good  
normalization anyway, you probably have it already) and have your  
data table's logger_id REFERENCE it


BTW, you could do that dynamically with a subselect: (SELECT DISTINCT  
logger_id FROM data) AS loggers, though I'm not sure how optimal the  
plan would be.


BTW, I encourage you to not use 'id' as a field name; I've found it  
makes doing things like joins a lot trickier. Easier to just make  
every id field the same (logger_id in this case).

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] SELECT TIMESTAMP WITH TIME ZONE ... AT TIME ZONE has inverted meaning with UTC times...

2007-05-08 Thread Andreas Schultz

Hi,


From the documentation:


# SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST';
      timezone
---------------------
 2001-02-16 18:38:40

MST is UTC-07, so I would expect that I can replace MST with UTC-07, but:

# SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME
ZONE 'UTC-07';
      timezone
---------------------
 2001-02-17 08:38:40

The time returned is at UTC+07

Let's try UTC+07 instead:

# SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME
ZONE 'UTC+07';
      timezone
---------------------
 2001-02-16 18:38:40

Is this a bug or a feature, and if it is a feature, what's the rationale behind it?

Regards
Andreas

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] problem with a conditional statement

2007-05-08 Thread Kirk Wythers


On May 8, 2007, at 2:02 AM, Albe Laurenz wrote:


Kirk Wythers wrote:


I am struggling to get a CASE WHEN statement to work within another
CASE WHEN. Here is my original code:

SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id

ELSE

s.obs_id END AS obs_id,  site_near.station_id, site_near.longname,
w.year, w.doy, w.precip, w.tmin, w.tmax,

--replace missing solar values (-999) with the average of all solar
--values from that month (s.month)

--CASE  s.par WHEN -999 THEN AVG( s.par) ELSE s.par END
--FROM solar s
--GROUP BY s.month;

FROM site_near INNER JOIN solar s ON
site_near.ref_solar_station_id = s.station_id  AND
site_near.obs_year = s.year
INNER JOIN weather w ON site_near.ref_weather_station_id =
w.station_id AND site_near.obs_year = w.year AND s.date = w.date
WHERE w.station_id = 211630;

I have commented out the troublesome bits in the middle of the code.
All I am trying to do here is to replace missing values with averages



from the same day of the year for all years. Does anyone see what I
am buggering up here?




Thank you for the reply. I see what you are doing in the creation of  
avgsol. That should work perfectly. However, I am unsure how you are  
working it into the existing code.



The problem here is the AVG().
All columns that appear outside of group functions in the SELECT list
must be in the GROUP BY clause.

Maybe something like this could help you:

SELECT ..., w.tmax,


I think you mean adding "CASE s.par WHEN -999 THEN avgsol.par ELSE s.par  
END" after


"SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id ELSE
s.obs_id END AS obs_id,  site_near.station_id, site_near.longname,  
w.year, w.doy, w.precip, w.tmin, w.tmax,"


to look like this:

SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id ELSE
s.obs_id END AS obs_id,  site_near.station_id, site_near.longname,  
w.year, w.doy, w.precip, w.tmin, w.tmax,

--replace missing values (-999) with the monthly average
CASE s.par WHEN -999 THEN avgsol.par ELSE s.par END

Correct?


  CASE s.par WHEN -999 THEN avgsol.par ELSE s.par END
  ...
  FROM solar s INNER JOIN ...,


I can't quite figure out what you are suggesting here?


(SELECT month, AVG(par) FROM solar GROUP BY month) AS avgsol
  WHERE s.month = avgsol.month
AND ...


Do you mean:

FROM site_near INNER JOIN solar s ON
(SELECT month, AVG(par) FROM solar GROUP BY month) AS avgsol WHERE  
s.month = avgsol.month
AND site_near.ref_solar_station_id = s.station_id  AND  
site_near.obs_year = s.year
INNER JOIN weather w ON site_near.ref_weather_station_id =  
w.station_id AND site_near.obs_year = w.year AND s.date = w.date

WHERE w.station_id = 211630;

I think my trouble is figuring how to place the code snippet:

   (SELECT month, AVG(par) FROM solar GROUP BY month) AS avgsol
  WHERE s.month = avgsol.month
AND ...
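
My best guess at assembling the whole thing (untested) would be:

SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id
       ELSE s.obs_id END AS obs_id,
       site_near.station_id, site_near.longname,
       w.year, w.doy, w.precip, w.tmin, w.tmax,
       CASE s.par WHEN -999 THEN avgsol.par ELSE s.par END AS par
FROM site_near
     INNER JOIN solar s ON site_near.ref_solar_station_id = s.station_id
                       AND site_near.obs_year = s.year
     INNER JOIN weather w ON site_near.ref_weather_station_id = w.station_id
                         AND site_near.obs_year = w.year
                         AND s.date = w.date,
     (SELECT month, AVG(par) AS par FROM solar GROUP BY month) AS avgsol
WHERE s.month = avgsol.month
  AND w.station_id = 211630;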

Sorry for being so dull





In this statement I create a subselect "avgsol" that I use like
a table.

Be warned that there will probably be a sequential scan of the whole
table "solar" whenever you run the statement, because the averages  
have

to be calculated first!


That is ok, I won't be running this query so often that the  
performance will be an issue.





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Continuous Archiving for Multiple Warm Standby Servers

2007-05-08 Thread Jim Nasby

On May 7, 2007, at 3:10 PM, Thomas F. O'Connell wrote:
I'm attempting to design a postgres system whereby an authoritative  
primary server simultaneously feeds continuous archives to a number  
of warm standby servers that live both on the local network and on  
remote networks.


The sticking point in my current thinking about such a system is  
what to do in the event that any of an array of possible nodes  
becomes unreachable. I would expect a custom archive_command to  
have the intelligence about network reachability and to report a  
nonzero status if it was unable to submit an archive to any  
particular node.


The way I understand it, postgres would then resubmit the file that  
caused the nonzero status, which, if connectivity has been  
restored, is no problem for the node that caused the nonzero status  
in the first place. But then the issue becomes what to do with the  
nodes that were fine when the nonzero status was returned.


From the docs :


"It is advisable to test your proposed archive command to ensure  
that it indeed does not overwrite an existing file, and that it  
returns nonzero status in this case. We have found that cp -i does  
this correctly on some platforms but not others. If the chosen  
command does not itself handle this case correctly, you should add  
a command to test for pre-existence of the archive file."


What is the advised remedy for this scenario in general? And then  
what is it if nonzero status is returned by archive_command because  
the file already exists on nodes that stayed up after a scenario  
where nonzero status is returned because one or more nodes became  
unreachable?


AFAIK the bit about -i / not overwriting files is just a safety  
measure to ensure you don't accidentally set up the archive_command  
to over-write WAL files for a working backend, or some other PITR  
backup. As long as you're certain you've got all your paths setup  
correctly it should be safe to drop the -i bit.


A follow-on question is: Does it become the responsibility of  
archive_command in a scenario like this to track which files have  
been archived on which nodes? Is there any introspective way for a  
standby server to know that a file has been archived by primary? If  
not, is it safe to rely on using sequential numbering of WAL files  
for implicit introspection? I don't see any functions that provide  
introspection of this nature. I ask because it seems like network- 
to-network failures are a common enough occurrence that some  
mechanism for archive verification is a must-have. I'm just trying  
to determine how much of that functionality I'll have to build  
myself...


I don't think you'd have to have the archive_command track each copy  
of each WAL file... but you might want to do that anyway, just for  
status information. If only we had a database to store that info  
in... ;)


As for WAL file numbering, even if there were some things you could  
deduce from the file numbers, I wouldn't want to bet on that. Future  
changes to the WAL code could change the naming scheme, which could  
result in subtle breakage to your system (granted, odds of that  
happening are low, but still...)

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PITR and tar

2007-05-08 Thread Jim Nasby

On May 7, 2007, at 1:58 PM, Jeff Davis wrote:
Second, it seems that it can cause a bad backup to occur if you  
pass the

"z" option to tar. Instead, piping the output of tar through the
compression program seems to avoid that problem (i.e. "tar cf - ... |
gzip > ..."). I am using FreeBSD's tar, other implementations may be
different.


What *exactly* are you seeing there? If anything -z should be safer  
than piping through gzip, since you could easily accidentally pipe  
stderr through gzip as well, which *would* corrupt the backup.



Are my observations correct, and if so, should they be documented as a
potential "gotcha" when making base backups?


I believe the bit about tar complaining about changed files is  
already in there, no?

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Dangers of fsync = off

2007-05-08 Thread Andrew Sullivan
On Fri, May 04, 2007 at 08:54:10AM -0600, Joel Dice wrote:
> 
> My next question is this: what are the dangers of turning fsync off in the 
> context of a high-availablilty cluster using asynchronous replication?

My real question is why you want to turn it off.  If you're using a
battery-backed cache on your disk controller, then fsync ought to be
pretty close to free.  Are you sure that turning it off will deliver
the benefit you think it will?

> on Y.  Thus, database corruption on X is irrelevant since our first step 
> is to drop them.

Not if the corruption introduces problems for replication, which is
indeed possible.

A
-- 
Andrew Sullivan  | [EMAIL PROTECTED]
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
--Alexander Hamilton

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Tom Lane
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> On Tue, May 08, 2007 at 02:14:54PM +0200, Felix Kater wrote:
>> There is *no complete* substitute for foreign keys by using *indexes*
>> since I'd lose the referential integrity (whereas for unique constraints
>> there *is* a full replacement using indexes)?

> A unique index is not a "substitute" for a unique constraint, they're
> exactly the same thing. If you drop your constraint and create a unique
> index, you're back where you started. You neither added nor removed
> anything.

Well, actually you added or removed a pg_constraint entry associated
with the index ... but either way it's the unique index that really
does the work of enforcing uniqueness.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pg_constraint: 'action code' ?

2007-05-08 Thread Tom Lane
Felix Kater <[EMAIL PROTECTED]> writes:
> where are the 'action codes' (type char) of foreign keys defined for ON
> DELETE resp. ON UPDATE for the actions NO ACTION, RESTRICT, CASCADE, SET
> NULL, SET DEFAULT in pg_constraint?

Per the comment in pg_constraint.h:

/*
 * Valid values for confupdtype and confdeltype are the FKCONSTR_ACTION_xxx
 * constants defined in parsenodes.h.  Valid values for confmatchtype are
 * the FKCONSTR_MATCH_xxx constants defined in parsenodes.h.
 */
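
For reference, those constants map to 'a' = NO ACTION, 'r' = RESTRICT,
'c' = CASCADE, 'n' = SET NULL and 'd' = SET DEFAULT, so you can inspect
a table's foreign keys with something like:

SELECT conname, confupdtype, confdeltype
FROM pg_constraint
WHERE contype = 'f';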

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Martijn van Oosterhout
On Tue, May 08, 2007 at 02:14:54PM +0200, Felix Kater wrote:
> If I get you right:
> 
> There is *no complete* substitute for foreign keys by using *indexes*
> since I'd lose the referential integrity (whereas for unique constraints
> there *is* a full replacement using indexes)?

A unique index is not a "substitute" for a unique constraint, they're
exactly the same thing. If you drop your constraint and create a unique
index, you're back where you started. You neither added nor removed
anything.

On a certain level foreign keys are just triggers, specially coded to
do the work. Yes, you could write your own triggers to do exactly the
same thing, but why bother, when someone has written them for you and
made nice syntax to use them?
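
Incidentally, on 8.x you can see those triggers in the system catalogs
with something like:

SELECT tgname, tgconstrname FROM pg_trigger WHERE tgisconstraint;

The RI triggers show up with names like "RI_ConstraintTrigger_NNN".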

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




[GENERAL] pg_constraint: 'action code' ?

2007-05-08 Thread Felix Kater
Hi,

where are the 'action codes' (type char) of foreign keys defined for ON
DELETE resp. ON UPDATE for the actions NO ACTION, RESTRICT, CASCADE, SET
NULL, SET DEFAULT in pg_constraint?

In the manual (8.2) it is mentioned but not explained.

Thank You
Felix

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Felix Kater
On Tue, 8 May 2007 14:19:12 +0200
Peter Eisentraut <[EMAIL PROTECTED]> wrote:

> > I like to keep my pg interface small: Can I replace foreign keys by
> > using indexes somehow?
> 
> Not while preserving the semantics.

I am not bound to indexes; rather, I wonder whether foreign keys are
themselves non-atomic functionality. I mean: if foreign keys are based on
some other lower-level functionality, like indexes or anything else, I
could use that as a substitute--in whatever way. Of course, I want to
gain the same thing (referential integrity etc.).

If foreign keys are, however, something unique which can't be replaced
by any other pg function (I am of course not taking into account things
like multiple queries bound together by transactions...) then I have to
go through it and implement it in my pg interface (looking at the
information_schema: this seems to be quite a bunch of work...).

Thank You
Felix

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Raymond O'Donnell

On 08/05/2007 13:14, Felix Kater wrote:


There is *no complete* substitute for foreign keys by using *indexes*
since I'd lose the referential integrity (whereas for unique constraints
there *is* a full replacement using indexes)?


Here's my understanding: an index is just that (an index) and no more 
- it tells PostgreSQL where to look in a table to find a particular row 
or set of rows. A foreign key, on the other hand, ensures that 
referential integrity is enforced: it enforces the relationship between 
rows in a table which refer to rows in another table, depending on how 
the foreign key was specified in the first place (cf. the "ON UPDATE ... 
ON DELETE ..." etc. clauses).


When you have a foreign key, you can put an index on the foreign key 
column in the "child" table for performance reasons, but this *isn't* 
the same as the foreign key.
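
For example (a minimal sketch):

CREATE TABLE parent (
    id integer PRIMARY KEY
);

CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer NOT NULL REFERENCES parent (id) ON DELETE CASCADE
);

-- optional, purely for performance (joins, cascaded deletes etc.):
CREATE INDEX child_parent_id_idx ON child (parent_id);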


I don't know about the equivalence of unique constraints and indices - 
others on the list can answer that.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Time of previous REINDEX execution..

2007-05-08 Thread Anoo Pillai

Hi,

How to get the time at which the previous REINDEX command was run?

With Thanks in advance,
AnooS


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Peter Eisentraut
On Tuesday, 8 May 2007 13:32, Felix Kater wrote:
> I like to keep my pg interface small: Can I replace foreign keys by
> using indexes somehow?

Not while preserving the semantics.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Felix Kater
On Tue, 08 May 2007 12:48:30 +0100
Raymond O'Donnell <[EMAIL PROTECTED]> wrote:

> You can do that, but you'll lose the enforcement of referential 
> integrity, which is what foreign keys give you.

If I get you right:

There is *no complete* substitute for foreign keys by using *indexes*
since I'd lose the referential integrity (whereas for unique constraints
there *is* a full replacement using indexes)?

Felix

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Raymond O'Donnell

On 08/05/2007 12:32, Felix Kater wrote:


I like to keep my pg interface small: Can I replace foreign keys by
using indexes somehow? (This is at least possible for primary key
columns which can be replaced by suitable indexes.)


You can do that, but you'll lose the enforcement of referential 
integrity, which is what foreign keys give you.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Felix Kater
Hi,

I like to keep my pg interface small: Can I replace foreign keys by
using indexes somehow? (This is at least possible for primary key
columns which can be replaced by suitable indexes.)

Thank You
Felix

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] /var/run/postgresql ??

2007-05-08 Thread Manuel Preliteiro

You're a life saver :D

Thank you for the super fast answer
Manuel

On 5/8/07, Zoltan Boszormenyi <[EMAIL PROTECTED]> wrote:


Hi,

Manuel Preliteiro wrote:
> Hello, I have installed postgresql version 8.1.4-1.1 on Fedora 6

I have the same system.

> Since I'm very Linux inexperienced I used yum for this purpose.
>
> I have all the commands installed, like createdb (under
> /usr/bin/createdb, which is not the location specified in the online
> manual). I also have the psql "shell/interface" running and I can
> manipulate the databases I create. The postgresql daemon is up and
> running also.
>
> The problem is that the location "/var/run/postgresql" does not exist. I
> searched in some forums and it's mentioned a lot, so I don't know why I
> don't have it...
>
> I'm using a postgres OCaml library which has the parameter "
> unix_domain_socket_dir = "/var/run/postgresql" " and when I try to
> compile it I have the following error:

Set this in your OCaml config:

unix_domain_socket_dir = '/tmp'

since PostgreSQL on Fedora uses the default
/tmp/.s.PGSQL.5432 for its socket.

> File "", line 0, characters 0-1:
> Uncaught exception: Unix.Unix_error (20, "connect", "")
> Uncaught exception: Unix.Unix_error(20, "connect", "")
>
> I don't know if the problem comes from the above description (I think
> so, but as I said I'm inexperienced in Linux), but does anyone know
> how to fix it? Did I miss something during install? I installed the
> client and the server; the libs were already installed...
>
> Thank you
> Manuel

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] /var/run/postgresql ??

2007-05-08 Thread Zoltan Boszormenyi

Hi,

Manuel Preliteiro wrote:

Hello, I have installed postgresql version 8.1.4-1.1 on Fedora 6


I have the same system.


Since I'm very Linux inexperienced I used yum for this purpose.

I have all the commands installed, like createdb (under 
/usr/bin/createdb, which is not the location specified in the online 
manual). I also have the psql "shell/interface" running and I can 
manipulate the databases I create. The postgresql daemon is up and 
running also.


The problem is that the location "/var/run/postgresql" does not exist. I 
searched in some forums and it's mentioned a lot, so I don't know why I 
don't have it...


I'm using a postgres OCaml library which has the parameter " 
unix_domain_socket_dir = "/var/run/postgresql" " and when I try to 
compile it I have the following error:


Set this in your OCaml config:

unix_domain_socket_dir = '/tmp'

since PostgreSQL on Fedora uses the default
/tmp/.s.PGSQL.5432 for its socket.
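
(If in doubt, you can also ask a running server - over TCP, for
instance - where it puts its socket:

SHOW unix_socket_directory;

An empty result means the compiled-in default directory is used,
which is /tmp on this build.)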


File "", line 0, characters 0-1:
Uncaught exception: Unix.Unix_error (20, "connect", "")
Uncaught exception: Unix.Unix_error(20, "connect", "")

I don't know if the problem comes from the above description (I think 
so, but as I said I'm inexperienced in Linux), but does anyone know 
how to fix it? Did I miss something during install? I installed the 
client and the server; the libs were already installed...


Thank you
Manuel


--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] shmget fails on OS X with proper settings

2007-05-08 Thread Christopher S Martin

Just a final note:

Changing shmall did fix the problem. Thanks to everyone for the help.
The final working settings I used are:

kern.sysv.shmmax=4194304
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=4194304

I'm wondering how I managed to get it to work before, without setting shmall.

Thanks again to everyone

Thanks,
Chris

On 5/7/07, Jim Nasby <[EMAIL PROTECTED]> wrote:

On May 7, 2007, at 9:11 AM, Tom Lane wrote:
> I believe BTW that you
> need to do "sudo ipcs -a" to be sure of seeing everything; otherwise
> OS X's ipcs silently doesn't tell you about segments your userid
> doesn't
> have access to.

Actually, it seems that you don't get anything back when ipcs is run
as non-root...

[EMAIL PROTECTED]:41]~/pgsql/HEAD:58%ipcs -a|grep decibel
[EMAIL PROTECTED]:41]~/pgsql/HEAD:59%sudo ipcs -a|grep decibel
m 3080191001 --rw---  decibel  decibel  decibel
decibel  3 38133760  11042  11042  9:00:53 10:41:53  9:00:53
--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] /var/run/postgresql ??

2007-05-08 Thread Manuel Preliteiro

Hello, I have installed postgresql version 8.1.4-1.1 on Fedora 6

Since I'm very Linux inexperienced I used yum for this purpose.

I have all the commands installed, like createdb (under /usr/bin/createdb,
which is not the location specified in the online manual). I also have the
psql "shell/interface" running and I can manipulate the databases I create.
The postgresql daemon is up and running also.

The problem is that the location "/var/run/postgresql" does not exist. I
searched in some forums and it's mentioned a lot, so I don't know why I
don't have it...

I'm using a postgres OCaml library which has the parameter "
unix_domain_socket_dir = "/var/run/postgresql" " and when I try to compile
it I have the following error:

File "", line 0, characters 0-1:
Uncaught exception: Unix.Unix_error (20, "connect", "")
Uncaught exception: Unix.Unix_error(20, "connect", "")

I don't know if the problem comes from the above description (I think so, but
as I said I'm inexperienced in Linux), but does anyone know how to fix it?
Did I miss something during install? I installed the client and the server;
the libs were already installed...

Thank you
Manuel


Re: [GENERAL] Slow query and indexes...

2007-05-08 Thread PFC



Thanks for a good answer, I'll try to find a workaround. The number of
data_loggers will change, but not too frequently. I was actually hoping
to make a view showing the latest data for each logger, maybe I can
manage that with a stored procedure thingy...


	- Create a table which contains your list of loggers (since it's good  
normalization anyway, you probably have it already) and have your data  
table's logger_id REFERENCE it
	- You now have a simple way to get the list of loggers (just select from  
the loggers table which will contain 3 rows)

- Then, to get the most recent record for each logger_id, you do :

SELECT l.logger_id, (SELECT id FROM data d WHERE d.logger_id = l.logger_id  
ORDER BY d.logger_id DESC, d.date_time DESC LIMIT 1) AS last_record_id  
FROM loggers l


2 minute example :

forum_bench=> CREATE TABLE loggers (id SERIAL PRIMARY KEY, name TEXT );
CREATE TABLE

forum_bench=> INSERT INTO loggers (name) VALUES ('logger 1'),('logger  
2'),('logger 3');

INSERT 0 3

forum_bench=> CREATE TABLE data (id SERIAL PRIMARY KEY, logger_id INTEGER  
NOT NULL REFERENCES loggers( id ));

CREATE TABLE

forum_bench=> INSERT INTO data (logger_id) SELECT 1+floor(random()*3) FROM  
generate_series(1,1000000);


forum_bench=> SELECT logger_id, count(*) FROM data GROUP BY logger_id;
 logger_id | count
-----------+--------
         3 | 333058
         2 | 333278
         1 | 333664


NOTE : I use id rather than timestamp to get the last one

forum_bench=> EXPLAIN ANALYZE SELECT logger_id, max(id) FROM data GROUP BY  
logger_id;

                                 QUERY PLAN
----------------------------------------------------------------------------
 HashAggregate  (cost=19166.82..19169.32 rows=200 width=8) (actual  
time=1642.556..1642.558 rows=3 loops=1)
   ->  Seq Scan on data  (cost=0.00..14411.88 rows=950988 width=8) (actual  
time=0.028..503.308 rows=1000000 loops=1)

 Total runtime: 1642.610 ms

forum_bench=> CREATE INDEX data_by_logger ON data (logger_id, id);
CREATE INDEX

forum_bench=> EXPLAIN ANALYZE SELECT l.id, (SELECT d.id FROM data d WHERE  
d.logger_id=l.id ORDER BY d.logger_id DESC, d.id DESC LIMIT 1) FROM  
loggers l;
                                 QUERY PLAN
----------------------------------------------------------------------------
 Seq Scan on loggers l  (cost=0.00..3128.51 rows=1160 width=4) (actual  
time=0.044..0.074 rows=3 loops=1)

   SubPlan
 ->  Limit  (cost=0.00..2.68 rows=1 width=8) (actual time=0.020..0.020  
rows=1 loops=3)
   ->  Index Scan Backward using data_by_logger on data d   
(cost=0.00..13391.86 rows=5000 width=8) (actual time=0.018..0.018 rows=1  
loops=3)

 Index Cond: (logger_id = $0)
 Total runtime: 0.113 ms
(6 rows)

forum_bench=> SELECT l.id, (SELECT d.id FROM data d WHERE d.logger_id=l.id  
ORDER BY d.logger_id DESC, d.id DESC LIMIT 1) FROM loggers l;

 id | ?column?
----+----------
  1 |   99
  2 |  100
  3 |   90
(3 rows)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] tokenize string for tsearch?

2007-05-08 Thread Magnus Hagander
On Mon, May 07, 2007 at 05:31:02PM -0700, Ottavio Campana wrote:
> Hi, I'm trying to use tsearch2 for the first time and I'm having a
> problem setting up a query
> 
> If I execute
> 
> SELECT * from test_table where ts_desc @@ to_tsquery ('hello&world');
> 
> it works, but I'm having the problem that the string used for the query
> is not 'hello&world' but 'hello world', Moreover, it can have an
> arbitrary number of spaces between the words, so I cannot just
> substitute the spaces with &, because 'hello&&world' gives error.
> 
> What is the safest way to transform a string into a list of words "anded"
> together?

Look at plainto_tsquery().
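
Untested, but something along these lines should do it with tsearch2:

SELECT * FROM test_table
WHERE ts_desc @@ plainto_tsquery('hello   world');

plainto_tsquery() normalizes the whitespace itself and ANDs the
resulting lexemes, so the number of spaces between words doesn't matter.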

//Magnus


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] problem with a conditional statement

2007-05-08 Thread Albe Laurenz
Kirk Wythers wrote:

> I am struggling to get a CASE WHEN statement to work within another  
> CASE WHEN. Here is my original code:
> 
> SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id
ELSE
> s.obs_id END AS obs_id,  site_near.station_id, site_near.longname,  
> w.year, w.doy, w.precip, w.tmin, w.tmax,
> 
> --replace missing solar values (-999) with the average of all solar  
> --values from that month (s.month)
> 
> --CASE  s.par WHEN -999 THEN AVG( s.par) ELSE s.par END
> --FROM solar s
> --GROUP BY s.month;
> 
> FROM site_near INNER JOIN solar s ON
> site_near.ref_solar_station_id = s.station_id  AND 
> site_near.obs_year = s.year
> INNER JOIN weather w ON site_near.ref_weather_station_id =  
> w.station_id AND site_near.obs_year = w.year AND s.date = w.date
> WHERE w.station_id = 211630;
> 
> I have commented out the troublesome bits in the middle of the code.  
> All I am trying to do here is to replace missing values with averages

> from the same day of the year for all years. Does anyone see what I  
> am buggering up here?

The problem here is the AVG().
All columns that appear outside of group functions in the SELECT list
must be in the GROUP BY clause.

Maybe something like this could help you:

SELECT ..., w.tmax,
  CASE s.par WHEN -999 THEN avgsol.par ELSE s.par END
  ...
  FROM solar s INNER JOIN ...,
(SELECT month, AVG(par) FROM solar GROUP BY month) AS avgsol
  WHERE s.month = avgsol.month
AND ...

In this statement I create a subselect "avgsol" that I use like
a table.

Be warned that there will probably be a sequential scan of the whole
table "solar" whenever you run the statement, because the averages have
to be calculated first!

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly