[GENERAL]About selected data dump

2007-10-03 Thread longlong
Hello, all,

I want to transfer part of the data from one database to another,
meaning only selected rows from a table will be transferred.

I can select the data and then insert or update the rows one by one.
But is there a better way?


Re: [GENERAL] Easier string concat in PL funcs?

2007-10-03 Thread Merlin Moncure
On 10/3/07, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 03, 2007 at 11:47:26AM -0400, Jerry Sievers wrote:
> > Question: Am I overlooking a simple way of doing this?
>
> yes. use plpython or plperl to do the job.
>
> depesz
>


Here is a great example with pl/perl (search: printf):
http://people.planetpostgresql.org/greg/index.php?/categories/12-PlPerl

-- Parse a pipe-delimited string:
SELECT sprintf('Total grams: %3.3f Donuts: %s',
  '101.319472|chocolate and boston cream', '|');

 sprintf
-
 Total grams: 101.319 Donuts: chocolate and boston cream

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Design Question (Time Series Data)

2007-10-03 Thread Andreas Strasser

Hello,

I'm currently designing an application that will retrieve economic data 
(mainly time series) from different sources and distribute it to clients. 
It is supposed to manage around 20,000 different series with differing 
numbers of observations (some have only a few dozen observations, others 
several thousand), and I'm now faced with the decision of where and how 
to store the data.


So far, I've come up with 3 possible solutions:

1) Storing the observations in one big table with fields for the series, 
position within the series and the value (float)
2) Storing the observations in an array (either in the same table as the 
series or in an extra data-table)
3) Storing the observations in CSV-files on the hard disk and only 
putting a reference to it in the database
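
For reference, the first option can be sketched in a few lines of SQL
(table and column names here are illustrative, not from an actual
schema):

```sql
-- Solution 1: one big observations table.
CREATE TABLE observation (
    series_id integer NOT NULL,          -- which series
    position  integer NOT NULL,          -- position within the series
    value     double precision NOT NULL, -- the observation itself
    PRIMARY KEY (series_id, position)
);

-- Reading one series back is a single index range scan:
SELECT value
FROM observation
WHERE series_id = 42
ORDER BY position;
```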


I expect that around 50 series will be updated daily - which would mean 
that for solution nr. 1 around 50,000 rows would be deleted and appended 
(again) every day.


I personally prefer solution 1, because it is the easiest to implement 
(I need to make different calculations and be able to transform the data 
easily), but I'm concerned about performance and overhead. It effectively 
triples the space needed (over solution nr. 2) and will result in huge 
index files.


Are there any other storage methods which are better suited for this 
kind of data? How can I avoid trouble resulting from the daily updates 
(high number of deleted rows)? Which method would you prefer?


Thanks in advance!

Andreas

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Unexpected behavior with CASE statement

2007-10-03 Thread Tom Lane
"Jimmy Choi" <[EMAIL PROTECTED]> writes:
> select
>metric_type,
>case metric_type
>   when 0 then
>  sum (1 / val)
>   when 1 then
>  sum (val)
>end as result
> from metrics
> group by metric_type

The reason this does not work is that the aggregate functions are
aggregated without any knowledge of how they might ultimately be used
in the final output row.  The fact that the CASE might not actually
demand the value of an aggregate at the end doesn't stop the system from
having to compute it.

You could use a CASE *inside* the SUM() to prevent division by zero
while forming the sum, but on the whole this query seems rather badly
designed.  Consider

SELECT 0, sum(1/val) FROM metrics WHERE metric_type = 0
UNION ALL
SELECT 1, sum(val) FROM metrics WHERE metric_type = 1
UNION ALL
...
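
For completeness, the CASE-inside-SUM guard mentioned above could be
sketched like this (an illustrative rewrite of the original query; the
inner CASE returns NULL for zero values, and SUM() ignores NULLs, so
the division is never attempted):

```sql
SELECT metric_type,
       CASE metric_type
            WHEN 0 THEN sum(CASE WHEN val <> 0 THEN 1 / val END)
            WHEN 1 THEN sum(val)
       END AS result
FROM metrics
GROUP BY metric_type;
```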


regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Richard Huxton wrote:
>> Could you run Linux in a virtual-machine in OS X?

> I think it would be easier (and more performant) to define a new locale
> on OS/X (or on Linux) to match the behavior of the other system.
> (Perhaps define a new locale on both, with matching name and matching
> behavior).

Given that the OP doesn't seem to care about the difference in
behavior between Linux and OS/X interpretations of en_US, I'd think
that using C locale on both would suit him just fine.  (Of course,
that would require initdb on both :-()

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Alvaro Herrera
Richard Huxton wrote:
> Tom Lane wrote:
>> Brian Wipf <[EMAIL PROTECTED]> writes:
>>> PG tried to enforce the same LC_COLLATE and LC_CTYPE. On OS X, the  value 
>>> of en_US.utf8 didn't exist, so I created a soft link to  en_US.UTF-8 in 
>>> the /usr/share/locale/ directory. When I sort the  values of 
>>> product_id_from_source on both systems using the locales in  this manner 
>>> I get different orderings:
>
>>> I can happily live with rebuilding indexes if this is the only  problem I 
>>> can expect to encounter, and I would still prefer PITR over  replication.
>> The whole notion scares the daylights out of me.  If you really need
>> to use PITR between these two particular platforms, use a locale
>> with common behavior --- C/POSIX would work.
>
> Could you run Linux in a virtual-machine in OS X?

I think it would be easier (and more performant) to define a new locale
on OS/X (or on Linux) to match the behavior of the other system.
(Perhaps define a new locale on both, with matching name and matching
behavior).

-- 
Alvaro Herrerahttp://www.advogato.org/person/alvherre
"No hay hombre que no aspire a la plenitud, es decir,
la suma de experiencias de que un hombre es capaz"



Re: [GENERAL] Unexpected behavior with CASE statement

2007-10-03 Thread Alvaro Herrera
Jimmy Choi wrote:
> This will work for this particular example. But what if my case
> statement is more complicated than that? Example:
> 
> select
>metric_type,
>case metric_type
>   when 0 then
>  sum (1 / val)
>   when 1 then
>  sum (val)
>   when 2 then
>  max (val)
>   when 3 then
>  min (val)
>end as result
> from metrics
> group by metric_type

This doesn't make sense.  Use separate output columns for the different
aggregates.

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"Crear es tan difícil como ser libre" (Elsa Triolet)



Re: [GENERAL] Unexpected behavior with CASE statement

2007-10-03 Thread Jimmy Choi
This will work for this particular example. But what if my case
statement is more complicated than that? Example:

select
   metric_type,
   case metric_type
  when 0 then
 sum (1 / val)
  when 1 then
 sum (val)
  when 2 then
 max (val)
  when 3 then
 min (val)
   end as result
from metrics
group by metric_type

Thanks!

On 10/3/07, Rodrigo De León <[EMAIL PROTECTED]> wrote:
> On 10/3/07, Jimmy Choi <[EMAIL PROTECTED]> wrote:
> > I expect to get the following result set:
> >
> > metric_type | result
> > +---
> > 0   |   2
> > 1   |   3
>
> Try:
>
> SELECT   metric_type
>, SUM(CASE metric_type
>WHEN 0
>  THEN 1 / val
>WHEN 1
>  THEN val
>  END) AS RESULT
> FROM metrics
> GROUP BY metric_type
> ORDER BY metric_type
>



Re: [GENERAL] Unexpected behavior with CASE statement

2007-10-03 Thread Rodrigo De León
On 10/3/07, Jimmy Choi <[EMAIL PROTECTED]> wrote:
> I expect to get the following result set:
>
> metric_type | result
> +---
> 0   |   2
> 1   |   3

Try:

SELECT   metric_type
   , SUM(CASE metric_type
   WHEN 0
 THEN 1 / val
   WHEN 1
 THEN val
 END) AS RESULT
FROM metrics
GROUP BY metric_type
ORDER BY metric_type



Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Richard Huxton

Brian Wipf wrote:

On 3-Oct-07, at 12:46 PM, Richard Huxton wrote:

Could you run Linux in a virtual-machine in OS X?


That's an idea. Performance-wise though, I think we'd be better off 
wiping OS X and installing Linux. As an added bonus, we'll be able to 
get way better performance out of our Infortrend S16F-R/G1430 Fibre to 
SAS RAID box, which isn't getting near the I/O it's capable of under OS X.


Oh, fair enough. I assumed you had some OS X-specific app you were 
running on it.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] PITR and Compressed WALS

2007-10-03 Thread Brian Wipf

On 3-Oct-07, at 12:38 PM, Tom Lane wrote:

What this sounds like to me is a problem in your recovery procedures.
What exactly did you do to "bring the database out of recovery mode"?


The script looked for a trigger file and, once it was found, aborted.
Unfortunately, it would abort without doing the requested copy. I have
fixed the bug now. Thanks for your input, Tom.


Brian Wipf
ClickSpace Interactive Inc.
<[EMAIL PROTECTED]> 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Unexpected behavior with CASE statement

2007-10-03 Thread Jimmy Choi
Suppose I have the following table named "metrics":

metric_type | val
+-
 0  | 1
 0  | 1
 1  | 0
 1  | 3

Now suppose I run the following simple query:

select
   metric_type,
   case metric_type
  when 0 then
 sum (1 / val)
  when 1 then
 sum (val)
   end as result
from metrics
group by metric_type

I expect to get the following result set:

metric_type | result
+---
0   |   2
1   |   3

But in reality I get the following error:

ERROR: division by zero
SQL state: 22012

So it appears that Postgres executes all cases and selects the result
at the end. Is this expected behavior?

Thanks
- Jimmy

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Brian Wipf

On 3-Oct-07, at 12:46 PM, Richard Huxton wrote:

Tom Lane wrote:

Brian Wipf <[EMAIL PROTECTED]> writes:
PG tried to enforce the same LC_COLLATE and LC_CTYPE. On OS X,  
the  value of en_US.utf8 didn't exist, so I created a soft link  
to  en_US.UTF-8 in the /usr/share/locale/ directory. When I sort  
the  values of product_id_from_source on both systems using the  
locales in  this manner I get different orderings:


I can happily live with rebuilding indexes if this is the only   
problem I can expect to encounter, and I would still prefer PITR  
over  replication.

The whole notion scares the daylights out of me.  If you really need
to use PITR between these two particular platforms, use a locale
with common behavior --- C/POSIX would work.


Could you run Linux in a virtual-machine in OS X?


That's an idea. Performance-wise though, I think we'd be better off
wiping OS X and installing Linux. As an added bonus, we'll be able to
get way better performance out of our Infortrend S16F-R/G1430 Fibre
to SAS RAID box, which isn't getting near the I/O it's capable of
under OS X.


Brian Wipf
ClickSpace Interactive Inc.
<[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Richard Huxton

Tom Lane wrote:

Brian Wipf <[EMAIL PROTECTED]> writes:
PG tried to enforce the same LC_COLLATE and LC_CTYPE. On OS X, the  
value of en_US.utf8 didn't exist, so I created a soft link to  
en_US.UTF-8 in the /usr/share/locale/ directory. When I sort the  
values of product_id_from_source on both systems using the locales in  
this manner I get different orderings:


I can happily live with rebuilding indexes if this is the only  
problem I can expect to encounter, and I would still prefer PITR over  
replication.


The whole notion scares the daylights out of me.  If you really need
to use PITR between these two particular platforms, use a locale
with common behavior --- C/POSIX would work.


Could you run Linux in a virtual-machine in OS X?

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Feature Request - Defining default table space for Indexes in Conf file

2007-10-03 Thread Tom Lane
"Josh Tolley" <[EMAIL PROTECTED]> writes:
> On 10/1/07, S Sharma <[EMAIL PROTECTED]> wrote:
>> It would be nice to have a feature to define a default table space for
>> indexes in db conf file and all indexed are created in that table space.

> Although the most basic optimization suggested when using tablespaces
> is always "Put indexes on one and data on another to avoid disk
> contention", I doubt that the ideal optimization for many workloads,
> which means sticking such a thing in a config file might not be such a
> good idea. In other words, a DBA probably ought to think harder about
> optimizing his/her use of tablespaces than just "I'll put indexes on
> this one and data on another".

Yeah, I think that argument is why we did not provide such a setup to
begin with...

regards, tom lane



Re: [GENERAL] PITR and Compressed WALS

2007-10-03 Thread Tom Lane
Brian Wipf <[EMAIL PROTECTED]> writes:
> Last night, I brought the database out of its perpetual recovery  
> mode. Here are the lines from the log when this was done:
> [2007-10-01 23:43:03 MDT] LOG:  restored log file  
> "000104660060" from archive
> [2007-10-01 23:45:50 MDT] LOG:  could not open file "pg_xlog/ 
> 000104660061" (log file 1126, segment 97): No such file  
> or directory
> [2007-10-01 23:45:50 MDT] LOG:  redo done at 466/6070

> Which is all fine, since 000104660060.gz was the last  
> archived WAL file. The next entry in the log follows:

> [2007-10-01 23:45:50 MDT] PANIC:  could not open file "pg_xlog/ 
> 000104660060" (log file 1126, segment 96): No such file  
> or directory
> [2007-10-01 23:45:51 MDT] LOG:  startup process (PID 27624) was  
> terminated by signal 6
> [2007-10-01 23:45:51 MDT] LOG:  aborting startup due to startup  
> process failure
> [2007-10-01 23:45:51 MDT] LOG:  logger shutting down

> And the database would not start up. The issue appears to be that the  
> restore_command script itself ungzips the WAL to its destination %p,  
> and the WAL is left in the archive directory as  
> 000104660060.gz. By simply ungzipping the last few WALs  
> manually in the archive directory, the database replayed them and  
> started up successfully.

What this sounds like to me is a problem in your recovery procedures.
What exactly did you do to "bring the database out of recovery mode"?
It is expected that it would ask the restore_command script to fetch the
last WAL segment a second time, and I don't understand why that didn't
Just Work.

regards, tom lane



Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Tom Lane
Brian Wipf <[EMAIL PROTECTED]> writes:
> PG tried to enforce the same LC_COLLATE and LC_CTYPE. On OS X, the  
> value of en_US.utf8 didn't exist, so I created a soft link to  
> en_US.UTF-8 in the /usr/share/locale/ directory. When I sort the  
> values of product_id_from_source on both systems using the locales in  
> this manner I get different orderings:

Hmph, hadn't remembered that, but indeed it seems that en_US sorting
is ASCII order, or nearly so, on Darwin.  On Linux it's "dictionary
order", which means case-insensitive, spaces are second class citizens,
and some other strange rules.

Linux:

$ LANG=en_US.utf8 sort zzz
ZZ538264
zz barf
zzdangle
zz echo
ZZring
$

Darwin, same data:

$ LANG=en_US.UTF-8 sort zzz
ZZ538264
ZZring
zz barf
zz echo
zzdangle
$

> I can happily live with rebuilding indexes if this is the only  
> problem I can expect to encounter, and I would still prefer PITR over  
> replication.

The whole notion scares the daylights out of me.  If you really need
to use PITR between these two particular platforms, use a locale
with common behavior --- C/POSIX would work.

regards, tom lane



Re: [GENERAL] Generating TRUNCATE orders

2007-10-03 Thread Erik Jones

On Oct 3, 2007, at 12:19 PM, Scott Marlowe wrote:


On 10/3/07, Laurent ROCHE <[EMAIL PROTECTED]> wrote:

Would this work:

SELECT
'TRUNCATE TABLE ' ||
'my_schema.' ||
c.relname ||', '
FROM pg_namespace nc, pg_class c
  WHERE c.relnamespace = nc.oid
AND  c.relkind IN ('r' )
AND nc.nspname = 'my_schema'
ORDER BY relname


Or, just:

SELECT 'TRUNCATE ' || schemaname || '.' || tablename || ';'
FROM pg_tables
WHERE schemaname = 'my_schema'
ORDER BY tablename;

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Brian Wipf

On 3-Oct-07, at 8:07 AM, Tom Lane wrote:

PG 8.2 does store data in the pg_control file with which it can check
for the most common disk-format-incompatibility problems (to wit,
endiannness, maxalign, and --enable-integer-datetimes).  If Brian has
stumbled on another such foot-gun, it'd be good to identify it so we
can think about adding more checking.

Noting that one of the columns in the corrupted index was varchar,
I am wondering if the culprit could have been a locale/encoding
problem of some sort.  PG tries to enforce the same LC_COLLATE and
LC_CTYPE values (via pg_control entries) but when you are migrating
across widely different operating systems like this, identical
spelling of locale names proves damn near nothing.

What are the settings being used, anyway?  (pg_controldata can tell
you.)  Try using sort(1) to sort the values of product_id_from_source
on both systems, in that locale, and see if you get the same sort
ordering.


PG tried to enforce the same LC_COLLATE and LC_CTYPE. On OS X, the  
value of en_US.utf8 didn't exist, so I created a soft link to  
en_US.UTF-8 in the /usr/share/locale/ directory. When I sort the  
values of product_id_from_source on both systems using the locales in  
this manner I get different orderings:


Linux box:

select product_id_from_source from fed_product order by  
product_id_from_source desc limit 5;

 product_id_from_source

 ZZring
 ZZR0-70-720
 zzdangle
 ZZC0-68-320 -05
 ZZ538264
(5 rows)

OS X box:

select product_id_from_source from fed_product order by  
product_id_from_source desc limit 10;

 product_id_from_source

 zzdangle
 zz06
 zz05
 zz04
 zz03
(5 rows)

and all of these rows exist on both databases. The data appears to be  
okay. Is it possible the only issue is with indexes?


I can happily live with rebuilding indexes if this is the only  
problem I can expect to encounter, and I would still prefer PITR over  
replication. We tried PG Pool for replication and the performance was  
poor compared to a single standalone server. Slony-I worked better  
for us, but it is more difficult to maintain than PG's PITR and a  
warm standby is sufficient for us. It would be nice to be able to use  
the read-only warm stand-by PITR at some point as well, although with  
the different locale orderings, I suppose this wouldn't be possible.


Brian Wipf
ClickSpace Interactive Inc.
<[EMAIL PROTECTED]>

Here's the output from pg_controldata on both boxes:

Linux box:
pg_control version number:822
Catalog version number:   200611241
Database system identifier:   5087840078460068765
Database cluster state:   in production
pg_control last modified: Wed 03 Oct 2007 11:16:34 AM MDT
Current log file ID:  1126
Next log file segment:99
Latest checkpoint location:   466/6220
Prior checkpoint location:466/6120
Latest checkpoint's REDO location:466/6220
Latest checkpoint's UNDO location:0/0
Latest checkpoint's TimeLineID:   1
Latest checkpoint's NextXID:  0/1720940695
Latest checkpoint's NextOID:  506360
Latest checkpoint's NextMultiXactId:  16963
Latest checkpoint's NextMultiOffset:  41383
Time of latest checkpoint:Wed 03 Oct 2007 11:16:34 AM MDT
Minimum recovery ending location: 0/0
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Date/time type storage:   floating-point numbers
Maximum length of locale name:128
LC_COLLATE:   en_US.utf8
LC_CTYPE: en_US.utf8

OS X box:
pg_control version number:822
Catalog version number:   200611241
Database system identifier:   5087840078460068765
Database cluster state:   in production
pg_control last modified: Wed Oct  3 11:25:59 2007
Current log file ID:  1166
Next log file segment:48
Latest checkpoint location:   48E/2A09A428
Prior checkpoint location:48E/251024C8
Latest checkpoint's REDO location:48E/2A086140
Latest checkpoint's UNDO location:0/0
Latest checkpoint's TimeLineID:   1
Latest checkpoint's NextXID:  0/1750418938
Latest checkpoint's NextOID:  530936
Latest checkpoint's NextMultiXactId:  17655
Latest checkpoint's NextMultiOffset:  43050
Time of latest checkpoint:Wed Oct  3 11:23:31 2007
Minimum recovery ending location: 42B/701FDB0
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16

Re: [GENERAL] Backup single database roles and ACLs

2007-10-03 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Rui Lopes wrote:
> Hello,
> 
> How do I backup all the roles and ACLs that have permissions to a single
> database?
> 
> pg_dumpall -g does not do the trick because it dumps all the roles from
> all the databases.

Roles are part of the catalog/cluster, not a database, thus -g is your
option.

It has been discussed in the past to have per-database roles, but that
has not been coded.
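
As for the catalog question: pg_class is a per-database catalog, so
there is nothing to join against pg_database.oid. A sketch of how the
table ACLs could be listed (run while connected to the database of
interest; the schema filters are illustrative):

```sql
-- Ordinary tables with explicit grants in the current database.
SELECT n.nspname, c.relname, c.relacl
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND c.relacl IS NOT NULL
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```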

Joshua D. Drake


> 
> I've read the system catalogs documentation [1], but I didn't figure how
> to relate the database name with the table names...
> 
> So far I can get all the database names, owners oid and ACLs using:
> 
>   select oid,datname,datdba,datacl from pg_database;
> 
> But how do I relate pg_database.oid with pg_class (to get table ACLs)?
> or maybe its related with other table that I'm missing?
> 
> 
> TIA!
> 
> Best regards,
> Rui Lopes
> 
> [1] http://www.postgresql.org/docs/8.2/static/catalogs.html




- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHA9i9ATb/zqfZUUQRAqorAKCslzpZqeAyj79nYY5amFFWTuJ7dwCgkowS
8vza+ntivUnancUTZa6eaJQ=
=MUB8
-END PGP SIGNATURE-



[GENERAL] Importing MySQL dump into PostgreSQL 8.2

2007-10-03 Thread Jeff Lanzarotta
Hello,

I have a MySQL dump file that I would like to import into our PostgreSQL 8.2 
database. Is there a way to do this?

Thanks.


-Jeff

[GENERAL] Backup single database roles and ACLs

2007-10-03 Thread Rui Lopes
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello,

How do I backup all the roles and ACLs that have permissions to a single
database?

pg_dumpall -g does not do the trick because it dumps all the roles from
all the databases.

I've read the system catalogs documentation [1], but I couldn't figure
out how to relate the database name with the table names...

So far I can get all the database names, owner OIDs and ACLs using:

  select oid,datname,datdba,datacl from pg_database;

But how do I relate pg_database.oid with pg_class (to get table ACLs)?
Or maybe it's related through another table that I'm missing?


TIA!

Best regards,
Rui Lopes

[1] http://www.postgresql.org/docs/8.2/static/catalogs.html
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHA9UHg6ss8INEYd4RAhGJAJ9gcuPY/VoIPUGnkB6QLEWaf6tsYACeOxul
qqUTkoo1WewaLKDxJXSZfHE=
=GLyA
-END PGP SIGNATURE-



Re: [GENERAL] Easier string concat in PL funcs?

2007-10-03 Thread hubert depesz lubaczewski
On Wed, Oct 03, 2007 at 11:47:26AM -0400, Jerry Sievers wrote:
> Question: Am I overlooking a simple way of doing this?

yes. use plpython or plperl to do the job.

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)



Re: [GENERAL] Generating TRUNCATE orders

2007-10-03 Thread Scott Marlowe
On 10/3/07, Laurent ROCHE <[EMAIL PROTECTED]> wrote:

Would this work:

SELECT
'TRUNCATE TABLE ' ||
'my_schema.' ||
c.relname ||', '
FROM pg_namespace nc, pg_class c
  WHERE c.relnamespace = nc.oid
AND  c.relkind IN ('r' )
AND nc.nspname = 'my_schema'
ORDER BY relname



Re: [GENERAL] Generating TRUNCATE orders

2007-10-03 Thread Richard Huxton

Laurent ROCHE wrote:
 
So I wrote:
SELECT 'TRUNCATE TABLE ' 
UNION 

...

ORDER BY relname
 
And this fails with the following message:

ERROR: column "relname" does not exist



But I don't understand why this does not work: the 2 SELECTs produce a single 
char column, so from what I understand it should work!?
If somebody can explain, I will be grateful.


The "ORDER BY" is attached to the "UNION", not the second subquery. 
Catches everyone out from time to time.
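
One possible fix (a sketch, untested): parenthesize the second SELECT
so its ORDER BY binds to it alone. UNION ALL is used so the literal
header row isn't deduplicated or re-sorted against the table names:

```sql
SELECT 'TRUNCATE TABLE '
UNION ALL
(SELECT 'my_schema.' || c.relname || ', '
 FROM pg_namespace nc
 JOIN pg_class c ON c.relnamespace = nc.oid
 WHERE c.relkind = 'r'
   AND nc.nspname = 'my_schema'
 ORDER BY c.relname);
```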


--
  Richard Huxton
  Archonet Ltd



[GENERAL] Generating TRUNCATE orders

2007-10-03 Thread Laurent ROCHE
Hi,
 
I wanted to write a SELECT that generates a TRUNCATE TABLE for all the tables 
in a given schema.
 
So I wrote:
SELECT 'TRUNCATE TABLE ' 
UNION 
SELECT 'my_schema.' || c.relname ||', '
FROM pg_namespace nc, pg_class c
  WHERE c.relnamespace = nc.oid
AND  c.relkind IN ('r' )
AND nc.nspname = 'my_schema'
ORDER BY relname
 
And this fails with the following message:
ERROR: column "relname" does not exist
SQL state:42703

 
If I run only the SELECT after the UNION, it works as expected.
Of course, this is not a big deal, as I am copying and pasting this into a script 
file anyway (and I will add the TRUNCATE TABLE manually).
But I don't understand why this does not work: the 2 SELECTs produce a single 
char column, so from what I understand it should work!?
If somebody can explain, I will be grateful.
 
 
PS: Of course, I realise the code produced by the SELECTs and UNION would not 
work straight away, because of the trailing comma!
 
Have fun,
[EMAIL PROTECTED]
The Computing Froggy


  

Re: [GENERAL] Easier string concat in PL funcs?

2007-10-03 Thread Alvaro Herrera
Jerry Sievers wrote:
> After for the umpteenth time bashing my head against a wall developing
> some PL funcs that use dynamic SQL, going plain bonkers trying to
> build the query string; I came up with a function like the one below
> to take a string with placeholders, an array of values to be
> interpolated and a placeholder char.  (This may appear Pythonish to
> some.
> 
> Question: Am I overlooking a simple way of doing this?
> 
> As in; raise notice 'Foo % %', v_var1, v_var2;

No, you aren't.  AFAICT there isn't any way to do that, and I missed it
not too long ago.

I'm not sure about the exact syntax, and certainly I expect this to
become less of an issue with plan invalidation on 8.3, but IMHO it would
be good to have something like Python %(f)s string interpolation (or
just plain string interpolation like in other languages).

-- 
Alvaro Herrerahttp://www.advogato.org/person/alvherre
"Cuando no hay humildad las personas se degradan" (A. Christie)



[GENERAL] Easier string concat in PL funcs?

2007-10-03 Thread Jerry Sievers
After for the umpteenth time bashing my head against a wall developing
some PL funcs that use dynamic SQL, going plain bonkers trying to
build the query string; I came up with a function like the one below
to take a string with placeholders, an array of values to be
interpolated and a placeholder char.  (This may appear Pythonish to
some.)

Question: Am I overlooking a simple way of doing this?

As in; raise notice 'Foo % %', v_var1, v_var2;


create function make_string(v_template text, v_vars text[], v_placeholder char)
returns text
as $$

declare
    v_temp text[] := string_to_array(v_template, v_placeholder);
    v_output text[];

begin
    if array_upper(v_vars, 1) + 1 != array_upper(v_temp, 1) then
        raise exception 'Wrong number of vars; should be one fewer than placeholders "%" in string', v_placeholder;
    end if;

    for i in 2 .. array_upper(v_temp, 1) * 2 by 2 loop
        v_output[i - 1] := v_temp[i / 2];
        -- on the last iteration v_vars[i / 2] is out of range, yielding NULL,
        -- which array_to_string() skips
        v_output[i] := v_vars[i / 2];
    end loop;

    return array_to_string(v_output, '');

end

$$
language plpgsql;


The above function makes it possible to do something like the example
shown below, which for complex dynamic SQL strings can be a lot easier
to create than with the usual combo of string constants pasted together
with PL vars using ||.

execute make_string($$
create table fooschema.%
;
create rule %
as on insert  to fooschema.%
where %
do whatever
;
$$,
array [
v_tablename,
v_rulename,
v_tablename,
v_conditions
],
'%'
);
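The same split-and-interleave idea can be sketched in plain Python (a hypothetical illustration of the technique, not something PostgreSQL itself runs):

```python
def make_string(template, variables, placeholder):
    """Split template on the placeholder char and interleave the variables."""
    parts = template.split(placeholder)
    # One variable per placeholder; split yields one more part than placeholders.
    if len(variables) != len(parts) - 1:
        raise ValueError('number of variables must equal number of placeholders')
    result = parts[0]
    for var, part in zip(variables, parts[1:]):
        result += var + part
    return result

print(make_string('create table %; create rule % as on insert to %;',
                  ['foo', 'foo_rule', 'foo'], '%'))
# -> create table foo; create rule foo_rule as on insert to foo;
```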
-- 
...Still not exactly simple, I realize :-)

Thanks 

---
Jerry Sievers   732 365-2844 (work)   Production Database Administrator
305 321-1144 (mobile)  WWW E-Commerce Consultant

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Erik Jones

On Oct 3, 2007, at 6:47 AM, Richard Huxton wrote:


Sergey Konoplev wrote:

Don't forget to cc: the list.
Try not to top-post replies, it's easier to read if you reply below the
text you're replying to.

Thanx for your advice. I'm just absolutely worn out. Sorry.


Know that feeling - let's see if we can't sort this out.


1. Is it always the same query?
2. Does the client still think it's connected?
3. Is that query using up CPU, or just idling?
4. Anything odd in pg_locks for the problem pid?

1. No it isn't. I have a few functions (plpgsql, plpython) that cause
such situations more often than others, but they are called more often
also.
OK, so there's no real pattern. That would suggest it's not a particular
query-plan that's got something wrong.

Do you always get this problem inside a function?

As far as I remember I do.


Hmm - check Magnus' thoughts on pl/python. Can't comment on Python
myself. Are you sure it's not always the same few function(s) that
cause this problem?



2. The client just waits for query and buzz.
3. They are using CPU in the usual way and their pg_lock activity seems normal.

So the backend that appears "stuck" is still using CPU?

Yes, but the matter is that these procedures usually use CPU just a
little, so I can't find out if there is some oddity or not.


OK, so it's not that it's stuck in a loop wasting a lot of CPU


In order to get at least some idea of what these processes are (or are
not) doing, run an strace (or your OS's equivalent) on the process
before killing it. Let us know what you see there.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Tsearch2 Dutch snowball stemmer in PG8.1

2007-10-03 Thread Oleg Bartunov

On Wed, 3 Oct 2007, Alban Hertroys wrote:


Alban Hertroys wrote:

The only odd thing is that to_tsvector('dutch', 'some dutch text') now
returns '|' for stop words...

For example:
 select to_tsvector('nederlands', 'De beste stuurlui staan aan wal');
  to_tsvector
 -----------------------------------------------
 '|':1,5 'bes':2 'wal':6 'staan':4 'stuurlui':3


I found the cause. The stop words list I found contained comments
prefixed by '|' signs. Removing the comments and recreating the database
solved the problem. Just updating the reference didn't seem to help...


you need to recreate the tsvector field and index after changing any dicts.



There's undoubtedly some cleaner way to replace the stop words list, but
at the current stage of our project this was the simplest to achieve.




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Tsearch2 Dutch snowball stemmer in PG8.1

2007-10-03 Thread Oleg Bartunov

On Wed, 3 Oct 2007, Alban Hertroys wrote:


Oleg Bartunov wrote:

Alban,

the documentation you're referring to is for the upcoming 8.3 release.
For 8.1 and 8.2 you need to do all the machinery by hand. It's not
difficult, for example:


Thanks Oleg.
I think I managed to do this right, although I had to google for some of
the files (we don't have ispell installed).

You also seem to have mixed Russian and English dictionaries in your
example; I'm not sure that was on purpose?


yes, we index mixed content



Anyway, I changed your example to use dutch dictionaries and locale
where I thought it applicable, and I got something working apparently.
Quite some guess work was involved, so I have a few questions left.

The only odd thing is that to_tsvector('dutch', 'some dutch text') now
returns '|' for stop words...


Could you pack your dictionary files and .sql, so we can look at them in
our spare time?




For example:
select to_tsvector('nederlands', 'De beste stuurlui staan aan wal');
  to_tsvector
 -----------------------------------------------
 '|':1,5 'bes':2 'wal':6 'staan':4 'stuurlui':3


A minor nit... You ended the script with a hidden commit (END;). I would
have preferred to experiment with the results a bit before committing...


this is up to you. It was just an example



I mixed in a few questions below, if you could answer them please?


-- sample tsearch2 configuration for search.postgresql.org
-- Creates configuration 'pg' - default, should match server's locale !!!
-- Change 'ru_RU.UTF-8'

begin;

-- create special (default) configuration 'pg'
update pg_ts_cfg set locale=NULL where locale = 'ru_RU.UTF-8';


I suppose this disables a possibly existing stemmer for that locale?


no, it's just to have one (default) configuration 'pg' for 
locale 'ru_RU.UTF-8'. You can skip this.







insert into pg_ts_cfg values('pg','default','ru_RU.UTF8');

-- register 'pg_dict' dictionary using synonym template
-- postgrespg
-- pgsql   pg
-- postgresql  pg
insert into pg_ts_dict
(select 'pg_dict',dict_init,
'/usr/local/pgsql-dev/share/contrib/pg_dict.txt',
dict_lexize, 'pg-specific dictionary'
from pg_ts_dict
where dict_name='synonym'
);

-- register ispell dictionary, check paths and stop words
-- I used iconv for the English files, since there is some Cyrillic stuff
insert into pg_ts_dict
(SELECT 'en_ispell', dict_init,
'DictFile="/usr/local/share/dicts/ispell/utf8/english-utf8.dict",'
 'AffFile="/usr/local/share/dicts/ispell/utf8/english-utf8.aff",'
 'StopFile="/usr/local/share/dicts/ispell/utf8/english-utf8.stop"',
 dict_lexize
 FROM pg_ts_dict
 WHERE dict_name = 'ispell_template'
 );


I actually use a .lat file here. I have no idea whether that's
compatible (but it appears to have worked).


it's just filenames, it doesn't matter (for 8.1, 8.2)


I got my .lat and .aff files from:
http://fmg-www.cs.ucla.edu/geoff/ispell-dictionaries.html#Dutch-dicts


You can use myspell dictionaries.




My stop words file is from:
http://snowball.tartarus.org/algorithms/dutch/stop.txt


 -- use the same stop-word list as 'en_ispell' dictionary
UPDATE pg_ts_dict set dict_initoption='/usr/local/share/dicts/english.stop'
where dict_name='en_stem';


Why change the stop words for the English dictionary? I skipped this
step. Is that right?


I wanted to have the same list of stop words for ispell and snowball.




-- default token<->dicts mappings
insert into pg_ts_cfgmap  select 'pg', tok_alias, dict_name from
public.pg_ts_cfgmap where ts_name='default';

-- modify mappings for latin words for configuration 'pg'
update pg_ts_cfgmap set dict_name = '{pg_dict,en_ispell,en_stem}'
where tok_alias in ( 'lword', 'lhword', 'lpart_hword' )
and ts_name = 'pg';

-- we won't index/search some tokens
update pg_ts_cfgmap set dict_name = NULL
--where tok_alias in ('email', 'url', 'sfloat', 'uri', 'float','word')
where tok_alias in ('email', 'url', 'sfloat', 'uri', 'float')
and ts_name = 'pg';

end;

-- testing

select * from ts_debug('
PostgreSQL, the highly scalable, SQL compliant, open source
object-relational
database management system, is now undergoing beta testing of the next
version of our software: PostgreSQL 8.2.
');


Oleg





Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Tom Lane
Magnus Hagander <[EMAIL PROTECTED]> writes:
> Does pl/python listen to SIGINT during execution of functions? If not,
> that'd be an explanation - if it's stuck inside a pl/python function...
> AFAIK, pl/pgsql does listen for SIGINT during execution, but I don't know
> about plpython.

It does not, unless the function contains SPI calls that'll return
control into PG code.

Short answer: don't write infinite loops in plpython.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> Brian Wipf wrote:
>> Both servers have identical Intel processors and both are running 64-bit 
>> PostgreSQL 8.2.4. The original server is running 64-bit openSUSE 10.2 
>> (Linux 2.6.18.2-34-default #1 SMP Mon Jul 16 01:16:32 GMT 2007 x86_64 
>> x86_64 x86_64 GNU/Linux) and the new server is running Mac OS X Leopard 
>> Server.

> This isn't necessarily safe. If your setup isn't *identical* then you 
> need to do a lot of checking to make sure this will work.

PG 8.2 does store data in the pg_control file with which it can check
for the most common disk-format-incompatibility problems (to wit,
endianness, maxalign, and --enable-integer-datetimes).  If Brian has
stumbled on another such foot-gun, it'd be good to identify it so we
can think about adding more checking.

Noting that one of the columns in the corrupted index was varchar,
I am wondering if the culprit could have been a locale/encoding problem
of some sort.  PG tries to enforce the same LC_COLLATE and LC_CTYPE
values (via pg_control entries) but when you are migrating across
widely different operating systems like this, identical spelling of
locale names proves damn near nothing.

What are the settings being used, anyway?  (pg_controldata can tell
you.)  Try using sort(1) to sort the values of product_id_from_source on
both systems, in that locale, and see if you get the same sort ordering.
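As a quick illustration of why identically spelled locale names can still disagree, here is the kind of byte-wise vs. linguistic collation difference sort(1) can expose (hypothetical two-line data; the real check would sort the dumped column values on both machines and diff the results):

```shell
# In the C locale, sort compares raw bytes: uppercase ASCII sorts before
# lowercase, so 'B' comes out before 'a'.
printf 'a\nB\n' | LC_ALL=C sort
# A linguistic locale such as en_US.UTF-8 typically puts 'a' before 'B',
# so two machines with differently implemented locales can order the same
# varchar data differently - corrupting any index built under the other order.
```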

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Tsearch2 Dutch snowball stemmer in PG8.1

2007-10-03 Thread Alban Hertroys
Alban Hertroys wrote:
> The only odd thing is that to_tsvector('dutch', 'some dutch text') now
> returns '|' for stop words...
> 
> For example:
>  select to_tsvector('nederlands', 'De beste stuurlui staan aan wal');
>   to_tsvector
>  -----------------------------------------------
>  '|':1,5 'bes':2 'wal':6 'staan':4 'stuurlui':3

I found the cause. The stop words list I found contained comments
prefixed by '|' signs. Removing the comments and recreating the database
solved the problem. Just updating the reference didn't seem to help...

There's undoubtedly some cleaner way to replace the stop words list, but
at the current stage of our project this was the simplest to achieve.
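One such cleaner approach could be to strip the comments from the stop file before installing it, for instance with sed (a sketch; dutch-stop.txt is a hypothetical filename for the downloaded snowball list, where '|' starts a comment):

```shell
# Remove '|' comments (and any whitespace before them), then drop the
# blank lines left behind, producing a plain one-word-per-line stop file.
sed -e 's/[[:space:]]*|.*$//' -e '/^[[:space:]]*$/d' dutch-stop.txt > dutch.stop
```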

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Tsearch2 Dutch snowball stemmer in PG8.1

2007-10-03 Thread Alban Hertroys
Oleg Bartunov wrote:
> Alban,
> 
> the documentation you're referring to is for the upcoming 8.3 release.
> For 8.1 and 8.2 you need to do all the machinery by hand. It's not
> difficult, for example:

Thanks Oleg.
I think I managed to do this right, although I had to google for some of
the files (we don't have ispell installed).

You also seem to have mixed Russian and English dictionaries in your
example; I'm not sure that was on purpose?

Anyway, I changed your example to use dutch dictionaries and locale
where I thought it applicable, and I got something working apparently.
Quite some guess work was involved, so I have a few questions left.

The only odd thing is that to_tsvector('dutch', 'some dutch text') now
returns '|' for stop words...

For example:
 select to_tsvector('nederlands', 'De beste stuurlui staan aan wal');
  to_tsvector
 -----------------------------------------------
 '|':1,5 'bes':2 'wal':6 'staan':4 'stuurlui':3


A minor nit... You ended the script with a hidden commit (END;). I would
have preferred to experiment with the results a bit before committing...

I mixed in a few questions below, if you could answer them please?

> -- sample tsearch2 configuration for search.postgresql.org
> -- Creates configuration 'pg' - default, should match server's locale !!!
> -- Change 'ru_RU.UTF-8'
> 
> begin;
> 
> -- create special (default) configuration 'pg'
> update pg_ts_cfg set locale=NULL where locale = 'ru_RU.UTF-8';

I suppose this disables a possibly existing stemmer for that locale?

> insert into pg_ts_cfg values('pg','default','ru_RU.UTF8');
> 
> -- register 'pg_dict' dictionary using synonym template
> -- postgrespg
> -- pgsql   pg
> -- postgresql  pg
> insert into pg_ts_dict
> (select 'pg_dict',dict_init,
> '/usr/local/pgsql-dev/share/contrib/pg_dict.txt',
> dict_lexize, 'pg-specific dictionary'
> from pg_ts_dict
> where dict_name='synonym'
> );
> 
> -- register ispell dictionary, check paths and stop words
> -- I used iconv for the English files, since there is some Cyrillic stuff
> insert into pg_ts_dict
> (SELECT 'en_ispell', dict_init,
> 'DictFile="/usr/local/share/dicts/ispell/utf8/english-utf8.dict",'
>  'AffFile="/usr/local/share/dicts/ispell/utf8/english-utf8.aff",'
>  'StopFile="/usr/local/share/dicts/ispell/utf8/english-utf8.stop"',
>  dict_lexize
>  FROM pg_ts_dict
>  WHERE dict_name = 'ispell_template'
>  );

I actually use a .lat file here. I have no idea whether that's
compatible (but it appears to have worked).

I got my .lat and .aff files from:
http://fmg-www.cs.ucla.edu/geoff/ispell-dictionaries.html#Dutch-dicts

My stop words file is from:
http://snowball.tartarus.org/algorithms/dutch/stop.txt

>  -- use the same stop-word list as 'en_ispell' dictionary
> UPDATE pg_ts_dict set dict_initoption='/usr/local/share/dicts/english.stop'
> where dict_name='en_stem';

Why change the stop words for the English dictionary? I skipped this
step. Is that right?

> -- default token<->dicts mappings
> insert into pg_ts_cfgmap  select 'pg', tok_alias, dict_name from
> public.pg_ts_cfgmap where ts_name='default';
> 
> -- modify mappings for latin words for configuration 'pg'
> update pg_ts_cfgmap set dict_name = '{pg_dict,en_ispell,en_stem}'
> where tok_alias in ( 'lword', 'lhword', 'lpart_hword' )
> and ts_name = 'pg';
> 
> -- we won't index/search some tokens
> update pg_ts_cfgmap set dict_name = NULL
> --where tok_alias in ('email', 'url', 'sfloat', 'uri', 'float','word')
> where tok_alias in ('email', 'url', 'sfloat', 'uri', 'float')
> and ts_name = 'pg';
> 
> end;
> 
> -- testing
> 
> select * from ts_debug('
> PostgreSQL, the highly scalable, SQL compliant, open source
> object-relational
> database management system, is now undergoing beta testing of the next
> version of our software: PostgreSQL 8.2.
> ');
> 
> 
> Oleg

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Find min year and min value

2007-10-03 Thread Michael Glaesemann


On Oct 3, 2007, at 1:29 , Stefan Schwarzer wrote:

As others have noted, the query *can* be written. But it appears to me
that you are struggling against your table layout.


The current schema he has is commonly called EAV (entity-attribute-value)
and is generally frowned upon. Now, in his particular case it may be
justified if the "value" column values are actually all of the same type,
such as currency amounts for each category. If this is the case, I suggest
renaming the column to be more descriptive of what is actually stored;
likewise the id_variable column.


Having 500 statistical global national variables for about 240
countries/territories. Need to do regional aggregations, per capita
calculations and some completeness computations on-the-fly.




id_variable   |year|value |id_country


Both Steve and I have given you alternatives and reasons for choosing an
alternative schema. You haven't provided any additional information to
really help us guide you in any particular direction from what we already
have. For example, in the section from me which you quoted above, I wrote
that this schema may be appropriate if the "value" column values are
actually all of the same type (e.g., all currency amounts, all masses,
all counts). You haven't said whether or not this is the case. We can't
read your mind :)


Again, one thing that would help is if you use a more descriptive column
name than "value" that gives an indication of what *kind* of values are
in the column.


I thought (and did ask) about the possibility of nevertheless putting
the variables into different tables with the new table design, but
nobody really got me on track for that.


Steve first suggested it and I provided an example of what that would
look like (using "gdp" and "fish_catches" tables) in the same post you
quoted from above.


http://archives.postgresql.org/pgsql-general/2007-10/msg00108.php

Is this not what you mean?

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] reporting tools

2007-10-03 Thread Geoffrey

Andrus wrote:

Use

www.fyireporting.com

Open source, uses excellent PostgreSQL npgsql drivers.
Use standard RDL format


I guess I should have noted that we will need to run this on Linux clients.

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Tsearch2 Dutch snowball stemmer in PG8.1

2007-10-03 Thread Oleg Bartunov

Alban,

the documentation you're referring to is for the upcoming 8.3 release.
For 8.1 and 8.2 you need to do all the machinery by hand. It's not
difficult, for example:


-- sample tsearch2 configuration for search.postgresql.org
-- Creates configuration 'pg' - default, should match server's locale !!!
-- Change 'ru_RU.UTF-8'

begin;

-- create special (default) configuration 'pg'
update pg_ts_cfg set locale=NULL where locale = 'ru_RU.UTF-8';
insert into pg_ts_cfg values('pg','default','ru_RU.UTF8');

-- register 'pg_dict' dictionary using synonym template
-- postgrespg
-- pgsql   pg
-- postgresql  pg
insert into pg_ts_dict
(select 'pg_dict',dict_init,
'/usr/local/pgsql-dev/share/contrib/pg_dict.txt',
dict_lexize, 'pg-specific dictionary'
from pg_ts_dict
where dict_name='synonym'
);

-- register ispell dictionary, check paths and stop words
-- I used iconv for the English files, since there is some Cyrillic stuff
insert into pg_ts_dict
(SELECT 'en_ispell', dict_init,
'DictFile="/usr/local/share/dicts/ispell/utf8/english-utf8.dict",'
 'AffFile="/usr/local/share/dicts/ispell/utf8/english-utf8.aff",'
 'StopFile="/usr/local/share/dicts/ispell/utf8/english-utf8.stop"',
 dict_lexize
 FROM pg_ts_dict
 WHERE dict_name = 'ispell_template'
 );

 -- use the same stop-word list as 'en_ispell' dictionary
UPDATE pg_ts_dict set dict_initoption='/usr/local/share/dicts/english.stop'
where dict_name='en_stem';



-- default token<->dicts mappings
insert into pg_ts_cfgmap  select 'pg', tok_alias, dict_name from 
public.pg_ts_cfgmap where ts_name='default';

-- modify mappings for latin words for configuration 'pg'
update pg_ts_cfgmap set dict_name = '{pg_dict,en_ispell,en_stem}'
where tok_alias in ( 'lword', 'lhword', 'lpart_hword' )
and ts_name = 'pg';

-- we won't index/search some tokens
update pg_ts_cfgmap set dict_name = NULL
--where tok_alias in ('email', 'url', 'sfloat', 'uri', 'float','word')
where tok_alias in ('email', 'url', 'sfloat', 'uri', 'float')
and ts_name = 'pg';

end;

-- testing

select * from ts_debug('
PostgreSQL, the highly scalable, SQL compliant, open source object-relational
database management system, is now undergoing beta testing of the next
version of our software: PostgreSQL 8.2.
');


Oleg
On Wed, 3 Oct 2007, Alban Hertroys wrote:


Hello,

I'm trying to get a Dutch snowball stemmer in Postgres 8.1, but I can't
find how to do that.

I found CREATE FULLTEXT DICTIONARY commands in the tsearch2 docs on
http://www.sai.msu.su/~megera/postgres/fts/doc/index.html, but these
commands are apparently not available on PG8.1.

I also found the tables pg_ts_(cfg|cfgmap|dict|parser), but I have no
idea how to add a Dutch stemmer to those.

I did find some references to stem.[ch] files that were suggested to
compile into the postgres sources, but I cannot believe that's the right
way to do this (besides that I don't have sufficient privileges to
install such a version).

So... How do I do this?

The system involved is some version of Debian Linux (2.6 kernel); are
there any packages for a Dutch stemmer maybe?

I'm in a bit of a hurry too, as we're on a tight deadline :(

Regards,



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Alvaro Herrera
Sergey Konoplev wrote:

> > AFAIK, pl/pgsql does listen for SIGINT during execution, but I don't know
> > about plpython.
> 
> How can we find it out?

Let's see one of the functions to find out if anyone else can reproduce
the problem.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Tsearch2 Dutch snowball stemmer in PG8.1

2007-10-03 Thread Alban Hertroys
Hello,

I'm trying to get a Dutch snowball stemmer in Postgres 8.1, but I can't
find how to do that.

I found CREATE FULLTEXT DICTIONARY commands in the tsearch2 docs on
http://www.sai.msu.su/~megera/postgres/fts/doc/index.html, but these
commands are apparently not available on PG8.1.

I also found the tables pg_ts_(cfg|cfgmap|dict|parser), but I have no
idea how to add a Dutch stemmer to those.

I did find some references to stem.[ch] files that were suggested to
compile into the postgres sources, but I cannot believe that's the right
way to do this (besides that I don't have sufficient privileges to
install such a version).

So... How do I do this?

The system involved is some version of Debian Linux (2.6 kernel); are
there any packages for a Dutch stemmer maybe?

I'm in a bit of a hurry too, as we're on a tight deadline :(

Regards,
-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Richard Huxton

Sergey Konoplev wrote:

Don't forget to cc: the list.
Try not to top-post replies, it's easier to read if you reply below the
text you're replying to.


Thanx for your advice. I'm just absolutely worn out. Sorry.


Know that feeling - let's see if we can't sort this out.


1. Is it always the same query?
2. Does the client still think it's connected?
3. Is that query using up CPU, or just idling?
4. Anything odd in pg_locks for the problem pid?

1. No it isn't. I have a few functions (plpgsql, plpython) that cause
such situations more often than others, but they are called more often
also.

OK, so there's no real pattern. That would suggest it's not a particular
query-plan that's got something wrong.

Do you always get this problem inside a function?


As far as I remember I do.


Hmm - check Magnus' thoughts on pl/python. Can't comment on Python 
myself. Are you sure it's not always the same few function(s) that cause 
this problem?



2. The client just waits for query and buzz.
3. They are using CPU in the usual way and their pg_lock activity seems normal.

So the backend that appears "stuck" is still using CPU?


Yes, but the matter is that these procedures usually use CPU just a
little, so I can't find out if there is some oddity or not.


OK, so it's not that it's stuck in a loop wasting a lot of CPU


So - the symptoms are:

[snip]

Exactly.


So - we need to solve two mysteries
1. Why are these functions not returning?
2. Why does SIGINT not interrupt them?


Are you happy that your hardware and drivers are OK? There aren't
problems with any other servers on this machine?


Yes, I'm quite happy. My hardware is: 2 dual-core Xeons, 8GB RAM,
RAID5. As for other software... it's a dedicated PG server, so I have
no problem with it.


Well, the places I'd look would be:
1. Hardware (you're happy that's fine, and it's not quite the problems
I'd expect)
2. Drivers (same as #1)
3. Client connectivity (but you say the client is fine)
4. External interactions (see below)
5. Bug in PG extension (pl/python)
6. Bug in PG core code

Do any of your functions interact with the outside world - fetch 
webpages or similar? It could be they're waiting for that. If you're 
using a library that could hang waiting for a response and also block 
SIGINT at the same time that would explain everything.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Sergey Konoplev
> > Don't forget to cc: the list.
> > Try not to top-post replies, it's easier to read if you reply below the
> > text you're replying to.
> >
> > Sergey Konoplev wrote:
> > >>1. Is it always the same query?
> > >>2. Does the client still think it's connected?
> > >>3. Is that query using up CPU, or just idling?
> > >>4. Anything odd in pg_locks for the problem pid?
> >
> > >1. No it isn't. I have a few functions (plpgsql, plpython) that cause
> > >such situations more often than others, but they are called more often
> > >also.
> >
> > OK, so there's no real pattern. That would suggest it's not a particular
> > query-plan that's got something wrong.
> >
> > Do you always get this problem inside a function?
>
> Does pl/python listen to SIGINT during execution of functions? If not,
> that'd be an explanation - if it's stuck inside a pl/python function...
>
> AFAIK, pl/pgsql does listen for SIGINT during execution, but I don't know
> about plpython.

How can we find it out?

> > 4. You have to cancel the query from the command-line using "kill -9
> > "
>
> That's not cancel, that's taking a sledgehammer to your server :(

Yes I know it but I have no choice :(

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Feature Request - Defining default table space for Indexes in Conf file

2007-10-03 Thread Josh Tolley
On 10/1/07, S Sharma <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> The default table space defined in db conf file is used for all database
> tables as well as indexes. So putting the indexes on another table space
> requires manually dropping and re-creating indexes.
> It would be nice to have a feature to define a default table space for
> indexes in db conf file and all indexed are created in that table space.
> This would allow creating a good database architecture to avoid disc
> contention easily.
>
> Thanks
> Data_arch

Although the most basic optimization suggested when using tablespaces
is always "Put indexes on one and data on another to avoid disk
contention", I doubt that's the ideal optimization for many workloads,
which means sticking such a thing in a config file might not be such a
good idea. In other words, a DBA probably ought to think harder about
optimizing his/her use of tablespaces than just "I'll put indexes on
this one and data on another".  See
http://www.depesz.com/index.php/2007/09/30/finding-optimum-tables-placement-in-2-tablespace-situation/
and http://people.planetpostgresql.org/xzilla/ for two recent blog
posts on the subject. But now I'll be quiet, because I have no
evidence to prove any of the above :)

 - Josh

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Magnus Hagander
On Wed, Oct 03, 2007 at 11:18:32AM +0100, Richard Huxton wrote:
> Don't forget to cc: the list.
> Try not to top-post replies, it's easier to read if you reply below the 
> text you're replying to.
> 
> Sergey Konoplev wrote:
> >>1. Is it always the same query?
> >>2. Does the client still think it's connected?
> >>3. Is that query using up CPU, or just idling?
> >>4. Anything odd in pg_locks for the problem pid?
> 
> >1. No it isn't. I have a few functions (plpgsql, plpython) that cause
> >such situations more often than others, but they are called more often
> >also.
> 
> OK, so there's no real pattern. That would suggest it's not a particular 
> query-plan that's got something wrong.
> 
> Do you always get this problem inside a function?

Does pl/python listen to SIGINT during execution of functions? If not,
that'd be an explanation - if it's stuck inside a pl/python function...

AFAIK, pl/pgsql does listen for SIGINT during execution, but I don't know
about plpython.

> 4. You have to cancel the query from the command-line using "kill -9 
> "

That's not cancel, that's taking a sledgehammer to your server :(

//Magnus

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] datestyle question

2007-10-03 Thread Diego Gil
On Wed, 2007-10-03 at 00:27 -0500, Erik Jones wrote:
> On Oct 2, 2007, at 8:56 PM, Diego Gil wrote:
> 
> > On Thu, 2007-09-27 at 10:32 +0200, Alban Hertroys wrote:
> >> Diego Gil wrote:
> >>> Hi,
> >>>
> >>> I have a file to import to postgresql that has an unusual date
> >>> format. For example, Jan 20 2007 is 20022007, in DDMMYYYY format,
> >>> without any separator. I know that a 20072002 (YYYYMMDD) is ok,
> >>> but I don't know how to handle the DDMMYYYY dates.
> >>
> >> You could try importing those fields in a text field in a temporary
> >> table and then convert them from there into your final tables  
> >> using the
> >> to_date() function.
> >>
> >> If 20022007 really means 20 Jan instead of 20 Feb, try something  
> >> like:
> >>
> > No, it really means 20 Feb. My mistake!
> >
> >
> >> insert into my_table (my_date_field)
> >> select to_date(my_date_text_field, 'DDMM') - interval '1 month'
> >>   from my_temp_table;
> >>
> >> Regards,
> >
> > I finally ended coding a dirty C program to reverse the order of date
> > fields. Here is the code, in case anyone need it.
> 
> I'm glad you got something working.  However, out of morbid  
> curiosity I have to ask:  why did you use C for that when you could  
> have done it with at most a three-line script or even one line  
> directly from the shell?

I am a lot more used to working with C (25+ years doing it). In fact, it
is easier for me to write it in C. That is all! Could you post an
equivalent script, so I can try it and learn?

"It is hard to teach new tricks to an old dog".

Regards,
Diego.
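
[Editor's note: not part of the original thread.] A minimal sketch, in Python, of the kind of short script Erik alluded to; the function name `ddmmyyyy_to_iso` is illustrative, and it assumes each input value is a clean 8-digit DDMMYYYY string:

```python
def ddmmyyyy_to_iso(s: str) -> str:
    """Reverse a DDMMYYYY string (e.g. '20022007') into ISO 'YYYY-MM-DD'."""
    s = s.strip()
    dd, mm, yyyy = s[0:2], s[2:4], s[4:8]
    return f"{yyyy}-{mm}-{dd}"

# '20022007' is 20 Feb 2007 in the DDMMYYYY format from the thread.
print(ddmmyyyy_to_iso("20022007"))  # 2007-02-20
```

Once the dates are rewritten in ISO form, PostgreSQL accepts them unambiguously on input regardless of the datestyle setting, so COPY can load them directly.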





Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Richard Huxton

Don't forget to cc: the list.
Try not to top-post replies; it's easier to read if you reply below the 
text you're replying to.


Sergey Konoplev wrote:

1. Is it always the same query?
2. Does the client still think it's connected?
3. Is that query using up CPU, or just idling?
4. Anything odd in pg_locks for the problem pid?



1. No it isn't. I have a few functions (plpgsql, plpython) that cause
such situations more often than others, but they are also called more
often.


OK, so there's no real pattern. That would suggest it's not a particular 
query-plan that's got something wrong.


Do you always get this problem inside a function?


2. The client just waits for the query and buzzes.
3. They are using CPU in the usual way and their pg_locks activity seems normal.


So the backend that appears "stuck" is still using CPU?


4. No I haven't noticed anything odd.


So - the symptoms are:
1. Client hangs, waiting for the result of a query
2. You notice this
3. You issue pg_cancel_backend() which sends a SIGINT which doesn't do 
anything
4. You have to cancel the query from the command-line using "kill -9 
<pid>"



Are you happy that your hardware and drivers are OK? There aren't 
problems with any other servers on this machine?


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Richard Huxton

Sergey Konoplev wrote:

I'm sorry, I meant KILL, not HUP


Hmm...


 datname  | usename  | procpid |  current_query  | waiting |
   query_start
---+--+-+-+-+---
 transport | belostotskaya_la |   20530 | select * from c | f   |
2007-10-02 05:05:28.908687+04



transport=# select pg_catalog.pg_cancel_backend(20530);
 pg_cancel_backend
---
 t



 datname  | usename  | procpid |  current_query  | waiting |
   query_start
---+--+-+-+-+---
 transport | belostotskaya_la |   20530 | select * from c | f   |
2007-10-02 05:05:28.908687+04


1. Is it always the same query?
2. Does the client still think it's connected?
3. Is that query using up CPU, or just idling?
4. Anything odd in pg_locks for the problem pid?

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Sergey Konoplev
I'm sorry, I meant KILL, not HUP

2007/10/3, Sergey Konoplev <[EMAIL PROTECTED]>:
> Hi all,
>
> I often run into "buzzed" (stuck) queries (see below). I've looked through
> the pg manual and a huge number of forums and mail archives and found nothing.
> The only solution is to restart the postgres server. Moreover, I have to
> terminate the process using the HUP signal to stop the server.
>
> transport=# select version();
>   version
> -
>  PostgreSQL 8.2.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
> 3.3.3 (SuSE Linux)
> (1 row)
>
> transport=# select datname, usename, procpid, current_query::char(15),
> waiting, query_start from pg_catalog.pg_stat_activity where procpid =
> 20530;
>  datname  | usename  | procpid |  current_query  | waiting |
>query_start
> ---+--+-+-+-+---
>  transport | belostotskaya_la |   20530 | select * from c | f   |
> 2007-10-02 05:05:28.908687+04
> (1 row)
>
> transport=# select pg_catalog.pg_cancel_backend(20530);
>  pg_cancel_backend
> ---
>  t
> (1 row)
>
> transport=# select datname, usename, procpid, current_query::char(15),
> waiting, query_start from pg_catalog.pg_stat_activity where procpid =
> 20530;
>  datname  | usename  | procpid |  current_query  | waiting |
>query_start
> ---+--+-+-+-+---
>  transport | belostotskaya_la |   20530 | select * from c | f   |
> 2007-10-02 05:05:28.908687+04
> (1 row)
>
> --
> Regards,
> Sergey Konoplev
>


-- 
Regards,
Sergey Konoplev



Re: [GENERAL] Select too many ids..

2007-10-03 Thread Alban Hertroys
Rodrigo De León wrote:
> On 10/1/07, Abandoned <[EMAIL PROTECTED]> wrote:
>> Hi..
>> I have an id list and the id list has 2 million dynamic elements..
>> I want to select which ids have a point..
>> I try:
>>
>> SELECT id, point FROM table WHERE id in (IDLIST)
>>
>> This is working but too slowly and I need better performance..
>>
>> I'm sorry for my bad English.
>> Kind regards..
> 
> DDL please...

I guess the OP's problem is with the time it takes to push a query
containing 2 million ID's through the SQL parser.

A few things that may help:

- See if you can find any uninterrupted ranges of values in your ID's
and combine them into separate where clauses (i.e. where id between x and
y). This won't help you much for ranges where y = x+1, but it does help
if (for example) y = x+10...

- Try COPY-ing your ID's into a temp table and joining your table with
that instead of using a huge IN list, and see if it helps.

- If you're still using PG 7.4, try PG 8. There have been significant
performance improvements with IN lists in queries in the 8 series.
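
[Editor's note: not part of the original thread.] To illustrate the first suggestion (collapsing uninterrupted ID ranges into BETWEEN clauses), a rough Python sketch; the helper names `compact_ranges` and `where_clause` are hypothetical, and it assumes integer ids:

```python
def compact_ranges(ids):
    """Collapse a collection of integer ids into sorted (lo, hi) ranges."""
    ranges = []
    for i in sorted(set(ids)):
        if ranges and i == ranges[-1][1] + 1:
            # Extend the current run by one.
            ranges[-1] = (ranges[-1][0], i)
        else:
            # Start a new run.
            ranges.append((i, i))
    return ranges

def where_clause(ids, col="id"):
    """Build a WHERE fragment from BETWEEN ranges plus a small IN list."""
    parts, singles = [], []
    for lo, hi in compact_ranges(ids):
        if hi > lo:
            parts.append(f"{col} BETWEEN {lo} AND {hi}")
        else:
            singles.append(str(lo))
    if singles:
        parts.append(f"{col} IN ({', '.join(singles)})")
    return " OR ".join(parts)

print(where_clause([1, 2, 3, 7, 10, 11]))
# id BETWEEN 1 AND 3 OR id BETWEEN 10 AND 11 OR id IN (7)
```

This only pays off when the ids cluster into long runs; for 2 million scattered ids, the temp-table join above is usually the better bet.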

Regards,

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //




[GENERAL] pg_cancel_backend() does not work with buzz queries

2007-10-03 Thread Sergey Konoplev
Hi all,

I often run into "buzzed" (stuck) queries (see below). I've looked through
the pg manual and a huge number of forums and mail archives and found nothing.
The only solution is to restart the postgres server. Moreover, I have to
terminate the process using the HUP signal to stop the server.

transport=# select version();
   version
-
 PostgreSQL 8.2.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
3.3.3 (SuSE Linux)
(1 row)

transport=# select datname, usename, procpid, current_query::char(15),
waiting, query_start from pg_catalog.pg_stat_activity where procpid =
20530;
  datname  | usename  | procpid |  current_query  | waiting |
query_start
---+--+-+-+-+---
 transport | belostotskaya_la |   20530 | select * from c | f   |
2007-10-02 05:05:28.908687+04
(1 row)

transport=# select pg_catalog.pg_cancel_backend(20530);
 pg_cancel_backend
---
 t
(1 row)

transport=# select datname, usename, procpid, current_query::char(15),
waiting, query_start from pg_catalog.pg_stat_activity where procpid =
20530;
  datname  | usename  | procpid |  current_query  | waiting |
query_start
---+--+-+-+-+---
 transport | belostotskaya_la |   20530 | select * from c | f   |
2007-10-02 05:05:28.908687+04
(1 row)

-- 
Regards,
Sergey Konoplev



Re: [GENERAL] PITR Recovery and out-of-sync indexes

2007-10-03 Thread Richard Huxton

Brian Wipf wrote:
We are running a production server off of a new database that was 
synchronized using PITR recovery. We found that many of the btree 
indexes were out of sync with the underlying data after bringing the new 
server out of recovery mode, but the data itself appeared to be okay.


Both servers have identical Intel processors and both are running 64-bit 
PostgreSQL 8.2.4. The original server is running 64-bit openSUSE 10.2 
(Linux 2.6.18.2-34-default #1 SMP Mon Jul 16 01:16:32 GMT 2007 x86_64 
x86_64 x86_64 GNU/Linux) and the new server is running Mac OS X Leopard 
Server.


This isn't necessarily safe. If your setup isn't *identical* then you 
need to do a lot of checking to make sure this will work. In particular 
you'd want to make sure that all your ./configure options are compatible 
(e.g. --enable-integer-datetimes can change on-disk representations).


You also need to make sure there aren't any differences in behaviour in 
any OS libraries used. That's not implausible since you're contrasting 
Linux with a BSD-based system.


In the Continuous Archiving Point-In-Time Recovery section of the docs, 
one of the caveats listed is:
"Operations on hash indexes are not presently WAL-logged, so replay will 
not update these indexes. The recommended workaround is to manually 
REINDEX each such index after completing a recovery operation"


Is it possible there are issues with btree indexes being maintained 
properly as well? Any other ideas?


If there is a problem then it's a bug. However, it's quite likely that 
you're seeing underlying platform differences. Sounds like you want 
replication rather than PITR.


--
  Richard Huxton
  Archonet Ltd
