Re: [GENERAL] Missing database entry in pg_database

2006-01-27 Thread Robert Korteweg

Robert Korteweg wrote:

Robert Korteweg robert ( at ) sambalbij ( dot ) nl writes:


I have a problem with a database I'm maintaining. I first noticed
the problem because I could not make a backup of the database; I got
the following error:




pg_dump: missing pg_database entry for database "xxx"




I verified this by selecting from pg_database. The entry was indeed gone.



I did some more digging and noticed that when doing a describe (\d
table) of a table, I could see none or only some of the columns in
the table, while for a few tables I could still see the correct
layout. It looks random.




This sounds suspiciously like a transaction ID wraparound problem.


Yes, I read about that, but I thought this was not my problem because I
vacuumed as I thought I should.



The database is very active. It is running on PostgreSQL
7.3 and gets a VACUUM FULL ANALYZE every night.



The *entire* database ... or are you just vacuuming the user tables 
and not the system catalogs?  Daily vacuuming of the catalogs should 
have prevented any such problem (unless you are managing to exceed 1 
billion transactions per day...)



VACUUM FULL ANALYZE is the exact query I run on the database every night. 
I do not know if Postgres will vacuum the system tables as well with this 
command. And I do not believe the database exceeds a billion 
transactions a day.
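
As an aside: a database-wide VACUUM issued by a superuser does cover the
system catalogs, while per-table vacuums do not. A minimal sketch of the
wraparound check from the 7.x-era maintenance docs (assuming pg_database
carries datfrozenxid in this version):

  -- run in each database; an age approaching 2 billion means trouble
  SELECT datname, age(datfrozenxid) FROM pg_database;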


I also do not see the template databases, but I do not know if this is 
important.


Any thoughts? I'm about ready to do an initdb on this database, but 
would prefer to fix the problem.


--
You can't reach second base, and keep your foot on first.

Groeten,
Robert

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] xmin system column

2006-01-27 Thread Marko Kreen
On 1/26/06, Eric B. Ridge [EMAIL PROTECTED] wrote:
 Outside of VACUUM FREEZE, is there any way the xmin column in a
 relation can change, assuming of course the tuple is never updated
 again?  I'm considering using this as a way to identify all tuples
 modified in the same transaction (in an effort to group them
 together), and am wondering if there's any way tuples from different
 transactions could end up with the same xmin value.

I had the same problem - how to identify rows by transaction.  I solved
it by using the xxid from Slony-I and making it 8-byte.

http://gborg.postgresql.org/pipermail/slony1-general/2006-January/003668.html
http://gborg.postgresql.org/pipermail/slony1-general/2006-January/003685.html

It has only 2 slight gotchas:

- the function will fail if there are more than 2G tx'es between calls
- you need to bump the epoch if you reload a dump.

Otherwise it seems to work fine.

Btw it uses TopTransactionId, so subtransactions should not be a problem.
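
For illustration, a minimal sketch of working with xmin directly; the
table name t is hypothetical:

  -- every row carries the id of the transaction that created it
  SELECT xmin, * FROM t;

  -- fetch all rows created by one particular transaction
  -- (relies on the built-in xid = integer comparison operator)
  SELECT * FROM t WHERE xmin = 12345;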

--
marko

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Error: could not read from statistics collector pipe

2006-01-27 Thread Stephan Vollmer
Hello!

In my PostgreSQL logfile, I see that the following entries occur
very frequently:

2006-01-27 10:37:29 FATAL:  could not read from statistics collector
   pipe: No error
2006-01-27 10:37:30 LOG:  statistics collector process (PID 5940)
   was terminated by signal 1

(PostgreSQL 8.1.2, WinXP SP2, Athlon64 X2 3800, 2 GB RAM)

These errors occur every 1-2 minutes when the DB is active. What do
these messages mean and how can I stop them from appearing?

I also noticed that while inserting lots of records, the DB becomes
increasingly slower the more records are inserted. For example, the
first 100000 records take 15 minutes, but records 300000-400000 take
3 hours. Could this be related to the messages above?

I tried to execute ANALYZE every 100000 records, but this did not
seem to help.

Thanks in advance for your help!

- Stephan



signature.asc
Description: OpenPGP digital signature


[GENERAL] question about large databases

2006-01-27 Thread Sergey Karin
Hi, List!

Some time ago this list discussed the question: which databases are small and which are large?

The answer was:
1-2 GB - small
50 GB and around - large

As I think, hundreds of GB and more - very large (VLDB).

How does Postgres work with VLDB? And what about speed? Can I expect that
Postgres works with VLDB no worse than the commercial DBMSs (DB2, Oracle,
Informix)? Maybe someone has comparison results for Postgres, DB2,
Oracle and Informix (or Postgres and any commercial DBMS) managing a
VLDB? I would VERY, VERY much appreciate that information...

My organisation plans to use a GIS based on PostGIS/Postgres. We think the size of our database could be 50, 100 GB or more.

Thanks in advance!

Sergey Karin


Re: [GENERAL] incremental backups

2006-01-27 Thread Csaba Nagy
OK, that was before going home from work, so it could be excusable :-D
I read your mail now in more detail, and I can't answer it other than
to say that we use a standby database here, based on WAL log shipping, and the
procedure of building the standby finishes with a script
inserting/deleting a few 1000s of lines in a bogus table, so there is for
sure a WAL file archived. That might fit your needs or might not...
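
A minimal sketch of that bogus-table trick, with hypothetical names; the
point is just to push enough traffic through the WAL that at least one
segment gets archived:

  CREATE TABLE wal_filler (junk text);
  INSERT INTO wal_filler SELECT 'x' FROM generate_series(1, 100000);
  DELETE FROM wal_filler;
  DROP TABLE wal_filler;

generate_series() assumes an 8.0-or-later server, and the row count needed
depends on your WAL segment size.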

Cheers,
Csaba.


On Thu, 2006-01-26 at 18:48, Rick Gigger wrote:
 Um, no you didn't read my email at all.  I am aware of all of that  
 and it is clearly outlined in the docs.  My email was about a  
 specific detail in the process.  Please read it if you want to know  
 what my actual question was.
 
 Thanks,
 
 Rick
 
 On Jan 26, 2006, at 10:41 AM, Csaba Nagy wrote:
 
  I didn't read your mail very carefully, but I guess you want:
 
- turn on WAL archiving, and archive all WAL logs;
- take the file system backup at regular time points, optionally you
  can keep them also for point in time recovery;
 
  Then you always have all the WAL files you need to recover to any  
  point
  in time you need. You can then supply all the WAL files which are  
  needed
  by the last file system backup to recover after a crash, or you can
  supply all the WAL files up to the time point just before your student
  DBA deleted all your data.
 
  HTH,
  Csaba.
 
 
  On Thu, 2006-01-26 at 18:33, Rick Gigger wrote:
  I am looking into using WAL archiving for incremental backups.  It
  all seems fairly straightforward except for one thing.
 
  So you set up the archiving of the WAL files.  Then you set up cron
  or something to regularly do a physical backup of the data
  directory.  But when you do the physical backup you don't have the
  last WAL file archived yet that you need to restore that physical
  backup.  So you always need to keep at least two physical backups
  around so that you know that at least one of them has the WAL files
  needed for recovery.
 
  The question I have is: how do I know if I can use the latest one?
  That is if I first do physical backup A and then later do physical
  backup B and then I want to do a restore.  How do I know when I've
  got the files I need to use B so that I don't have to go all the way
  back to A?
 
  My initial thoughts are that I could:
 
  a) just before or after calling pg_stop_backup check the file system
  to see what the last archived WAL file is on disk and make sure that
  that I get the next one before I try restoring from that backup.
 
  b) just before or after calling pg_stop_backup check postgres to see
  to see what the current active WAL file is and make sure it has been
  archived before I try to restore from that backup.
 
  c) Always just use backup A.
 
   Now (c) seems the easiest, but is that even fail-safe?  I realize it
   wouldn't really ever happen in an active production environment that
   was set up right, but say you did backup A and backup B and during
   that whole time you had so few writes in postgres that you never filled
   up a whole WAL file, so both of the backups are invalid.  Then you
   would have to always go to option a or b above to verify that a given
   backup was good so that any previous backups could be deleted.
 
  Wouldn't it make things a lot easier if the backup history file not
  only gave you the name of the first file that you need but also the
  last one?  Then you could look at a given backup and say I need this
  start file and this end file.  Then you could delete all archived WAL
  files before start file.  And you could delete any old physical dumps
  because you know that your last physical dump was good.  It would
  just save you the step in the backups process of figuring out what
  that file is.  And it seems like pg_stop_backup could determine that
  on it's own.
 
  Does that make sense?  Am I totally off base here?
 
  Rick
 
  ---(end of  
  broadcast)---
  TIP 6: explain analyze is your friend
 
 
 


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] incremental backups

2006-01-27 Thread Richard Huxton

Rick Gigger wrote:
Um, no you didn't read my email at all.  I am aware of all of that and 
it is clearly outlined in the docs.  My email was about a specific 
detail in the process.  Please read it if you want to know what my 
actual question was.


I'm not sure your email is quite right as regards the process. You need:
  1. the filesystem backup
  2. the WAL file indicated in the history-file
  3. all the WAL files later than that
to get up to now.

If you don't want to replay up to now then you will not need some of 
the more recent WAL files. You can't afford to throw them away though 
since you've got a rolling backup system running and the whole point is 
so you can recover to any point you like.


You can however throw away any WAL files older than that indicated in 
the history file for your current filesystem-backup. You can then only 
restore from that point in time forward.


There is no "last one" in the WAL set unless you know the time you want 
to restore to. Indeed, the "last one" might not be full yet, and 
therefore not archived, if you want to restore to 10 seconds ago.


Or am I misunderstanding your email too?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] xmin system column

2006-01-27 Thread Christian Kratzer

Hi,

On Thu, 26 Jan 2006, Eric B. Ridge wrote:
snip/

long pause

hahaha, *blush*.  I could just use now(), right?  The pg 8.1 docs say that 
now()/CURRENT_TIMESTAMP return the start time of the current transaction; 
their values do not change during the transaction.  I could use a composite 
of (now(), GetTopTransactionId()) to get batch uniqueness.


Or use a tuple of (now(), pg_backend_pid()) for this kind of stuff.
pg_backend_pid() should sufficiently disambiguate now() to make the above
tuple unique.
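
A sketch of that tuple wired in as column defaults; the table and column
names are hypothetical:

  CREATE TABLE batch_rows (
      payload   text,
      batch_ts  timestamptz DEFAULT now(),            -- fixed for the whole transaction
      batch_pid integer     DEFAULT pg_backend_pid()  -- disambiguates concurrent sessions
  );

Rows inserted in one transaction then share the same (batch_ts, batch_pid)
pair.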

Greetings
Christian

--
Christian Kratzer   [EMAIL PROTECTED]
CK Software GmbHhttp://www.cksoft.de/
Phone: +49 7452 889 135 Fax: +49 7452 889 136

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] question about large databases

2006-01-27 Thread Richard Huxton

Sergey Karin wrote:

Maybe someone have a comparision results of Postgres, DB2, Oracle
and Informix (or postgres and any of commertial DBMS) when this DBMS manage
with VLDB? I will VERY, VERY respect to that man for the information...


It is against the rules of your Oracle licence to publish performance 
comparisons with other databases. The same with most other commercial RDBMS.


If you can provide more details about your setup, someone else might 
have experience to share though.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Alternative to knoda, kexi and rekall?

2006-01-27 Thread Ben Trewern
For administration try pgAdmin III, but to make applications you could try 
Gambas (see: http://gambas.sourceforge.net/) or even Lazarus (see: 
http://www.lazarus.freepascal.org/).

For internet stuff try Ruby on Rails.  It has a bit of a steep learning 
curve to start with, but it's a RAD tool once you get the hang of it.

Ben

BTW, what's the problem with Rekall?  I thought it could just use Qt.

Michelle Konzack [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Hello,

 I have been using PostgreSQL for more than 6 years now, and rekall for
 1 1/2 years.  Now there is a problem with the crappy QT and I have no
 frontend anymore which works WITHOUT (!!!) KDE or GNOME, which I hate!

 Currently I am using pgAccess to check my PostgreSQL but it is very
 limited.

 Does anyone know a frontend for PostgreSQL which I can use to design
 and admin a very huge database (over 160 GByte and growing; the biggest
 table is over 120 GByte)?

 I need it urgently under plain X, without GNOME and KDE.

 If OSS is not available, a commercial product?

 I am not a PostgreSQL guru, but since I lost my two Iranian programmers
 last year, I am working alone and need support in the form of good
 software.

 Please note that I am using Debian GNU/Linux 3.0 and 3.1 and NO, I WILL
 NOT SWITCH TO WINDOWS, EVEN IF THERE ARE VERY GOOD GUIs FOR POSTGRESQL.

 I wish such GUIs existed under Linux!

 Greetings
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant


 -- 
 Linux-User #280138 with the Linux Counter, http://counter.li.org/
 # Debian GNU/Linux Consultant #
 Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSM LinuxMichi
 0033/3/8845235667100 Strasbourg/France   IRC #Debian (irc.icq.com)

 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Logging statements and parameter values

2006-01-27 Thread Ted Powell
Our development group needs to have the option of logging all SQL
statements including substituted parameter values. Getting output in the
form:
... WHERE contact.login_con = $1 AND company.login_co = $2

was no problem, but nothing that I tried turning on in the config file
yielded values for $1 and $2.

Digging into the source for 8.1.1 brought me to this code in
.../backend/tcop/postgres.c (lines 1449+)

/* We need to output the parameter values someday */
if (log_statement == LOGSTMT_ALL)
ereport(LOG,
(errmsg("statement: BIND %s", portal_name)));

/*
 * Fetch parameters, if any, and store in the portal's memory context.
 */
if (numParams > 0)

It seems to me that a point near the bottom of the loop over parameters
(1564+)
params[i].kind = PARAM_NUM;
params[i].id = i + 1;
params[i].ptype = ptype;
params[i].isnull = isNull;

i++;
}

(params[i].value is set in a couple of places higher up in the loop)
would be a good place to log each parameter, but...

Has this not been done simply because nobody has gotten around to it, or
are there pitfalls? Although I've been using PostgreSQL for several years,
this is my first venture into its source code beyond watching it build.

Also, the Datum params[i].value, does it necessarily hold displayable
text, even when its content is the output of a binary input converter?
Is there a utility log routine somewhere that I can simply feed a
Datum to?


-- 
Ted Powell [EMAIL PROTECTED]   http://psg.com/~ted/
GPL code ... It's the difference between
owning your own home and just renting.  --PJ

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Accessing an old database from a new OS installation.

2006-01-27 Thread Matthew Henderson
Dear all,

I am a novice user, so any help with the following problem
would be greatly appreciated.

I had Suse 9.1 installed on one hard disk in my machine
(/dev/hda). On this machine Postgresql 7.4 was installed
and I had one database which, I believe, was installed
in the default location.

This hard disk no longer boots and so I have
installed Suse 10.0 on another hard disk in the same
machine (/dev/hdb). This OS has Postgresql 8.1 installed
and I would like to be able to connect to the database on
the old hard disk (/dev/hda) so that I can dump it to
make a backup and eventually transfer it to the new hard disk.

How should I go about doing this? Are there any problems
I should be aware of in moving from version 7.4 to 8.1?


thanks,
Matthew Henderson

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Change SERIAL to INTEGER

2006-01-27 Thread Pedro Monjo Florit

Hi everybody:

On a computer which is in production, I have a PostgreSQL 7.4 database. 
In a couple of tables, I have just realised that I made a small mistake: 
the primary key is SERIAL but what I really need is an INTEGER, since I 
do not need the auto-increment feature or the sequence. Until now, this 
has not created any problem, since all INSERTs set the primary key 
value, but I am not sure whether in the future this could cause any problem. 
I cannot drop and recreate the table or the column, since this column 
is used as a foreign key in other tables.


I have already tried to do the following:

ALTER TABLE mytable ALTER COLUMN mytableid DROP DEFAULT;
DROP SEQUENCE mytable_mytableid_seq;

but the last command fails, saying that mytable still uses this sequence.

Is there any way to change a SERIAL type to an INTEGER? I think that it 
should be easy, since SERIAL is, in fact, an INTEGER with some conditions.


Thanks!

Pedro

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] PG_RESTORE and database size

2006-01-27 Thread Marcus Couto



Here's a basic question. On working with backing up and restoring, it 
seems like if I keep doing it repeatedly, the backup file size keeps 
increasing. It almost doubles in size for every backup/restore. I want 
the restore to overwrite the database and not add to it. Is there another 
procedure I'm missing or a parameter I'm not using right with pg_restore 
and pg_dump?

Thank you for any help.


[GENERAL] How to find a temporary table

2006-01-27 Thread Emil Rachovsky

Hi,
I am using PostgreSQL 8.1.0.  How can I find a temp
table from my session, given the name of the table?
Can anyone show me what query I should execute?  I've
tried some things but I receive mixed results of
tables from different sessions, which is strange.

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] How to find a temporary table

2006-01-27 Thread Pandurangan R S
http://archives.postgresql.org/pgsql-general/2006-01/msg01259.php
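
One common approach, sketched here for 8.1 (not necessarily what the
linked message shows): other sessions' temp schemas are not visible to you,
so a visibility test isolates your own temp table.

  SELECT n.nspname, c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relname = 'mytemp'          -- hypothetical temp table name
    AND n.nspname LIKE 'pg_temp%'     -- temp schemas are named pg_temp_N
    AND pg_table_is_visible(c.oid);   -- true only for this session's table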


On 1/27/06, Emil Rachovsky [EMAIL PROTECTED] wrote:

 Hi,
 I am using PostgreSQL 8.1.0 . How can I find a temp
 table from my session, having the name of the table?
 Can anyone show me what query should I execute? I've
 tried some things but I receive mixed results of
 tables from different sessions, which is strange.

 __
 Do You Yahoo!?
 Tired of spam?  Yahoo! Mail has the best spam protection around
 http://mail.yahoo.com

 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] REPOST:Memory Allocation error using pg_dump on 7.4

2006-01-27 Thread frank church

I repeatedly get this error whenever I try to backup a database

The command used is:

pg_dump -Fc -O -U username tablename > tablename.20060122

pg_dump: ERROR:  invalid memory alloc request size 4294967290
pg_dump: SQL command to dump the contents of table cc_ratecard failed:
PQendcopy() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request size
4294967290
pg_dump: The command was: COPY public.tablename(id, ...

Is there a bug somewhere in there?

Is there something which needs doing in regard to my memory allocation settings?

The table involved is one of the larger tables, but at only 400,000 records it
shouldn't be a problem.



This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Change SERIAL to INTEGER

2006-01-27 Thread Pandurangan R S
Hi,

Is there any way to change a SERIAL type to an INTEGER? I think that it
should be easy, since SERIAL is, in fact, an INTEGER with some conditions.

The serial column is just an integer with its default set to nextval from
a sequence, so there is no necessity to change the datatype of the
column. You have already successfully removed the default value for this
column with your first SQL statement.

I hope this condition should not cause any problem, except that the
sequence lingers in the database.
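
For reference, a rough sketch of what SERIAL expands to in this era (names
as in the example below):

  CREATE SEQUENCE mytable_mytableid_seq;
  CREATE TABLE mytable (
      mytableid integer NOT NULL
          DEFAULT nextval('mytable_mytableid_seq')
  );

plus a pg_depend entry tying the sequence to the column, which is exactly
what makes the plain DROP SEQUENCE complain.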

However, if you are determined to get rid of the sequence, then you
may follow the procedure below

Disclaimer: It worked for me, and I hope that this tinkering with the
pg_depend table will not cause any adverse effects, but you might want
to make sure that there will not be any side effects; I am not
responsible for any damage caused :)

select * from pg_depend where refobjid = (select oid from pg_class
where relname = 'mytable') and objid = (select oid from pg_class where
relname = 'mytable_mytableid_seq');

delete from pg_depend where refobjid = (select oid from pg_class where
relname = 'mytable') and objid = (select oid from pg_class where
relname = 'mytable_mytableid_seq');

DROP SEQUENCE mytable_mytableid_seq;

Regards
Pandu

On 1/27/06, Pedro Monjo Florit [EMAIL PROTECTED] wrote:
 Hi everybody:

 In a computer which is un production, I have a PostgreSQL 7.4 database.
 In a couple of tables, I have just realised that I made a small mistake:
 the primary key is SERIAL but what I really need is an INTEGER, since I
 do not need the auto-increment feature or the sequence. Until now, this
 has not created any problem, since all INSERTs set the primary key
 value, but I am not sure if in the future this could cause any problem.
 I cannot drop and recreate the table or the column, since this column is
 a used as foreign keys in other tables.

 I have already tried to do the following:

 ALTER TABLE mytable ALTER COLUMN mytableid DROP DEFAULT;
 DROP SEQUENCE mytable_mytableid_seq;

 but the last command fails, saying that mytable still uses this sequence.

 Is there any way to change a SERIAL type to an INTEGER? I think that it
 should be easy, since SERIAL is, in fact, an INTEGER with some conditions.

 Thanks!

 Pedro

 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Accessing an old database from a new OS installation.

2006-01-27 Thread Peter Eisentraut
On Friday, 27 January 2006 at 11:30, Matthew Henderson wrote:
 How should I go about doing this? Are there any problems
 I should be aware of in moving from version 7.4 to 8.1?

For one thing you won't be able to read the 7.4 database using 8.1 binaries, 
so you need to get 7.4 installed first in order to make a dump.  If there are 
no packages available you will have to compile it by hand.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Finding missing records

2006-01-27 Thread Stefano B.



Hi,

I have two identical tables:

table1 (f1,f2,f3,f4 primary key (f1,f2,f3,f4))

table2 (g1,g2,g3,g4 primary key (g1,g2,g3,g4))

How can I find the difference between the two tables?
table1 has 10000 records
table2 has 9900 records (these records are in table1 as well)

I'd like to find the 100 missing records.
I have tried this query:

select f1,f2,f3,f4 from table1 where (f1,f2,f3,f4) 
NOT IN (select f1,f2,f3,f4 from table2)

but it does not seem to work (as I want). It returns no 
records. If I use the IN clause it returns all 10000 table1 
records.

Thanks in advance
Stefano


Re: [GENERAL] Finding missing records

2006-01-27 Thread A. Kretschmer
On 27.01.2006 at 14:59:47 +0100, Stefano B. wrote the following:
 How can I find the difference between the two tables?
 table1 has 10000 records
 table2 has 9900 records (these records are in table1 as well)
 
 I'd like to find the 100 missing records.
 I have tried this query:
 
 select f1,f2,f3,f4 from table1 where (f1,f2,f3,f4) NOT IN (select f1,f2,f3,f4 
 from table2)

select f1,f2,f3,f4 from table1 except select f1,f2,f3,f4 from table2;


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Finding missing records

2006-01-27 Thread Pandurangan R S
select f1,f2,f3,f4 from table1 EXCEPT ALL select f1,f2,f3,f4 from table2

http://www.postgresql.org/docs/8.1/static/sql-select.html

On 1/27/06, Stefano B. [EMAIL PROTECTED] wrote:

 Hi,

 I have two identical tables

 table1 (f1,f2,f3,f4 primary key (f1,f2,f3,f4))

 table2 (g1,g2,g3,g4 primary key (g1,g2,g3,g4))

 How can I find the difference between the two tables?
 table1 has 10000 records
 table2 has 9900 records (these records are in table1 as well)

 I'd like to find the 100 missing records.
 I have tried this query:

 select f1,f2,f3,f4 from table1 where (f1,f2,f3,f4) NOT IN (select
 f1,f2,f3,f4 from table2)

 but it does not seem to work (as I want). It returns no records. If I use the IN
 clause it returns all 10000 table1 records.

 Thanks in advance
 Stefano

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Finding missing records

2006-01-27 Thread John D. Burger


On Jan 27, 2006, at 08:59, Stefano B. wrote:

select f1,f2,f3,f4 from table1 where (f1,f2,f3,f4) NOT IN (select 
f1,f2,f3,f4 from table2)

but it does not seem to work (as I want). It returns no records. If I use 
the IN clause it returns all 10000 table1 records.


The standard way to do this is:

  select f1,f2,f3,f4 from table1
  except
  select f1,f2,f3,f4 from table2;

Note that IN and EXCEPT are essentially set operators - if you have 
duplicates in either table, you might not get what you expect.  Your 
last comment above seems to indicate that this is indeed the case.


If what you want is the =bag= difference of the two tables, you'll have 
to do something more complicated.  Possible solutions might involve 
counting duplicates in both tables with a COUNT(*) and GROUP BY, and 
then joining on the four columns and subtracting the counts.
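
A sketch of that counting approach, using the column names from the
original post:

  SELECT t1.f1, t1.f2, t1.f3, t1.f4,
         t1.n - COALESCE(t2.n, 0) AS missing_copies
  FROM (SELECT f1, f2, f3, f4, count(*) AS n
        FROM table1 GROUP BY f1, f2, f3, f4) AS t1
  LEFT JOIN (SELECT g1, g2, g3, g4, count(*) AS n
             FROM table2 GROUP BY g1, g2, g3, g4) AS t2
    ON t1.f1 = t2.g1 AND t1.f2 = t2.g2
   AND t1.f3 = t2.g3 AND t1.f4 = t2.g4
  WHERE t1.n > COALESCE(t2.n, 0);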


- John D. Burger
  MITRE

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PG_RESTORE and database size

2006-01-27 Thread Richard Huxton

Marcus Couto wrote:

Here's a basic question. On working with backing up and restoring, it
seems like if I keep doing it repeatedly, the backup file size keeps
increasing. It almost doubles in size for every
backup/restore. I want the restore to overwrite the
database and not add to it. Is there another procedure I'm missing or
a parameter I'm not using right with pg_restore and pg_dump?


If I were to ask you about vacuum-ing would you know what I was talking 
about? If not, check the manuals and read up on the topic.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] TSearch2 / German compound words / UTF-8

2006-01-27 Thread Alexander Presber
Tsearch/ispell is not able to break this word into parts, because  
of the "s" in Produktion/s/intervall. Misspelling the word as  
"Produktionintervall" fixes it:

It should be affixes marked as 'affix in middle of compound word';
the flag is '~'. For an example, look in the norsk dictionary:

flag ~\\:
[^S]   S  #~ advarsel > advarsels-

BTW, we develop and debug compound word support on norsk  
(norwegian) dictionary, so look for example there. But we don't  
know Norwegian, norwegians helped us :)


Hello everyone!

I cannot get this to work, neither in a German version nor with the  
Norwegian example supplied on the tsearch website.
That means, just like Hannes, I can get compound word support without  
the inserted 's' in German and Norwegian:
"Vertragstrafe" works, but not "Vertragsstrafe", which is the correct  
form.

So I tried it the other way around: My dictionary consists of two words:

---
vertrag/zs
strafe/z
 ---

My affixes file just switches on compounds and allows for s-insertion  
as described in the norwegian tutorial:


---
compoundwords controlled z
suffixes
flag s:
  [^S]  S  # does not end in s: append s and include in  
compound-check (Recht > Rechts-)

---

ts_debug yields:

tstest=# SELECT tsearch2.ts_debug('vertragstrafe strafevertrag  
vertragsstrafe');

                                   ts_debug
---------------------------------------------------------------------------
 (german,lword,"Latin word",vertragstrafe,"{ispell_de,simple}","'strafe' 'vertrag'")
 (german,lword,"Latin word",strafevertrag,"{ispell_de,simple}","'strafe' 'vertrag'")
 (german,lword,"Latin word",vertragsstrafe,"{ispell_de,simple}",'vertragsstrafe')
(3 rows)

I would say the ispell compound support does not honor the s-flag in  
compounds.
Could it be that this feature got lost in a regression? It must have  
worked for Norwegian once. (Take the "overtrekksgrilldresser" example  
from the tsearch2 compounds tutorial, which I cannot reproduce.)


Any hints?

Alexander

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Finding missing records

2006-01-27 Thread John D. Burger

I wrote:

Note that IN and EXCEPT are essentially set operators - if you have 
duplicates in either table, you might not get what you expect.


If what you want is the =bag= difference of the two tables, you'll 
have to do something more complicated.


and then I immediately saw Pandurangan's message indicating that ALL 
can be used to turn the set operators into bag operators, e.g., EXCEPT 
ALL.  Cool!  (And not complicated at all.)
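
A tiny illustration of the set-versus-bag distinction, with hypothetical
one-column tables: if t1 holds the values 1, 1, 2 and t2 holds 1, then

  SELECT x FROM t1 EXCEPT     SELECT x FROM t2;   -- yields 2
  SELECT x FROM t1 EXCEPT ALL SELECT x FROM t2;   -- yields 1 and 2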


- John D. Burger
  MITRE


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Accessing an old database from a new OS installation.

2006-01-27 Thread Matthew Henderson
Okay, so if I have 7.4 installed and I have the old
hard disk mounted under /mnt/hda, can I do something
like 

  pg_dump /mnt/hda/path_to_old_database > dump.txt

???




On Fri, Jan 27, 2006 at 02:47:21PM +0100, Peter Eisentraut wrote:
 Am Freitag, 27. Januar 2006 11:30 schrieb Matthew Henderson:
  How should I go about doing this? Are there any problems
  I should be aware of in moving from version 7.4 to 8.1?
 
 For one thing you won't be able to read the 7.4 database using 8.1 binaries, 
 so you need to get 7.4 installed first in order to make a dump.  If there are 
 no packages available you will have to compile it by hand.
 
 -- 
 Peter Eisentraut
 http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] TSearch2 / German compound words / UTF-8

2006-01-27 Thread Alexander Presber
I should add that, with the minimal dictionary and .aff file,  
"vertrags" gets reduced alright, dropping the trailing 's':

tstest=# SELECT tsearch2.ts_debug('vertrags');
                           ts_debug
-----------------------------------------------------------------
 (german,lword,"Latin word",vertrags,"{ispell_de,simple}",'vertrag')
(1 row)

The affix is just not applied while looking for compound words.

Sincerely yours
Alexander Presber

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Accessing an old database from a new OS installation.

2006-01-27 Thread A. Kretschmer
On 27.01.2006 at 14:21:31 +0000, Matthew Henderson wrote the following:
 Okay, so if I have 7.4 installed and I have the old
 harddisk mounted under /mnt/hda, can I do something
 like 
 
   pg_dump /mnt/hda/path_to_old_database > dump.txt

No, this is impossible IMHO. You need a PG server of that version
(7.4) to read the data. pg_dump is only a client for the DB; it can't
read the files directly.


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] REPOST:Memory Allocation error using pg_dump on 7.4

2006-01-27 Thread Tom Lane
frank church [EMAIL PROTECTED] writes:
 I repeatedly get this error whenever I try to backup a database

 pg_dump: ERROR:  invalid memory alloc request size 4294967290
 pg_dump: SQL command to dump the contents of table cc_ratecard failed:
 PQendcopy() failed.

Looks like a corrupt-data problem to me.  Please see the PG list archives
concerning ways to narrow down where the bad data is and get rid of it.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] table is not a table

2006-01-27 Thread Ilja Golshtein
Hi!

On Wed, Jan 25, 2006 at 11:26:39AM -0500, Tom Lane wrote:
 Ilja Golshtein [EMAIL PROTECTED] writes:
  postgres=# create table ddd(f1 int4);
  CREATE TABLE
  postgres=# drop table ddd;
  ERROR:  "ddd" is not a table
 
 That's just plain bizarre.  Would you try it with \set VERBOSITY verbose
 so we can see exactly where the error is coming from?

No extra information. Just ERROR:  "ddd" is not a table.

 Has this installation been working for you before?  

Not really.

 I'm wondering about
 a corrupt backend executable file, or some such ...

Looks like.

The only special thing (and the only thing to blame)
I can imagine about this installation - ICU patch.
It was applied without any visible troubles.

Could a corrupt catalog be responsible?  

I've initialised the DB cluster from scratch. 
The problem persists.

Might a query like the
following reveal anything?

SELECT c.ctid, c.xmin, c.xmax, c.oid, c.relname, c.relkind,
   n.ctid, n.xmin, n.xmax, n.oid, n.nspname
FROM pg_class AS c
LEFT JOIN pg_namespace AS n ON n.oid = c.relnamespace
WHERE c.relname ~* '^ddd';

SELECT c.ctid, c.xmin, c.xmax, c.oid, c.relname, c.relkind,
   n.ctid, n.xmin, n.xmax, n.oid, n.nspname
FROM pg_class AS c
LEFT JOIN pg_namespace AS n ON n.oid = c.relnamespace
WHERE c.relname ~* '^ddd';
postgres-# postgres-# postgres-# postgres-#
  ctid  | xmin | xmax |  oid  | relname | relkind | ctid  | xmin | xmax | oid  | nspname 
--------+------+------+-------+---------+---------+-------+------+------+------+---------
 (5,12) |  621 |    0 | 16386 | ddd     | r       | (0,6) |    2 |    0 | 2200 | public
(1 row)

-- 
Best regards
Ilja Golshtein

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Logging statements and parameter values

2006-01-27 Thread Tom Lane
Ted Powell [EMAIL PROTECTED] writes:
 Has this not been done simply because nobody has gotten around to it, or
 are there pitfalls?

What are you going to do with binary parameter values?  Calling the
type's output converter is possible but not very pleasant.

 Also, the Datum params[i].value, does it necessarily hold displayable
 text, even when its content is the output of a binary input converter?

Datums are guaranteed *not* to be displayable text.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PG_RESTORE and database size

2006-01-27 Thread Tom Lane
Marcus Couto [EMAIL PROTECTED] writes:
 Here's a basic question. On working with backing up and restoring it =
 seems like if I keep on doing it in a row, the backup file size keeps on =
 increasing in size. It almost doubles in size for every backup/restore. =
 I want the restore the backup to overwrite the database and not add to =
 it. Is there another procedure I'm missing or a parameter I'm not using =
 right with pg_restore and pg_dump?

Why are you restoring into a live database?  It sounds to me like you
are probably ignoring a lot of object-already-exists errors from
pg_restore and then having it add on duplicate data to the tables.

Usually people drop the database and recreate it empty before running
pg_restore.  There is a switch named --clean or something like that
to make pg_restore emit DROP commands before recreating objects, but
hardly anyone uses it because it's usually slower than dropping the
whole database at once.
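
Sketched with a hypothetical database name (the pg_restore invocation is a
shell command, shown here as a comment):

  DROP DATABASE mydb;
  CREATE DATABASE mydb;
  -- then, from the shell:
  --   pg_restore -d mydb mydb.dump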

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] table is not a table

2006-01-27 Thread Tom Lane
Ilja Golshtein [EMAIL PROTECTED] writes:
 On Wed, Jan 25, 2006 at 11:26:39AM -0500, Tom Lane wrote:
 That's just plain bizarre.  Would you try it with \set VERBOSITY verbose
 so we can see exactly where the error is coming from?

 No extra information. Just ERROR:  "ddd" is not a table.

Not possible unless you mistyped it.  You should get something like

regression=# create sequence s;
CREATE SEQUENCE
regression=# drop table s;
ERROR:  "s" is not a table
HINT:  Use DROP SEQUENCE to remove a sequence.
regression=# \set VERBOSITY verbose
regression=# drop table s;
ERROR:  42809: "s" is not a table
HINT:  Use DROP SEQUENCE to remove a sequence.
LOCATION:  DropErrorMsgWrongType, utility.c:133
regression=#

 The only special thing (and the only thing to blame)
 I can imagine about this installation - ICU patch.
 It was applied without any visible troubles.

Perhaps you messed up the patch, or failed to do a full rebuild after
applying it?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Allowing Custom Fields

2006-01-27 Thread Aaron Colflesh

Hello folks,
I've run into a challenge that doesn't appear to have been discussed in 
the archives anywhere.


I'm designing a database that users need to have the ability to 
customize somewhat. They just need the ability to add extra fields to an 
existing table (oh, and they can't touch the predefined fields). So the 
database schema so far is table A (the existing table), table B (contains a 
list of custom field names and other metadata) and table C 
(an intersection table between A and B containing the values of the custom 
fields for each row). That works really well and all, but we have 
problems with retrieving the data. Due to other requirements related to 
reporting, we need to be able to present the data in table A along with 
any custom fields as if the custom fields were actually 
fields on A. I only know of two ways of doing this, and I'm hoping one 
of you knows of a third way (I've tried to use a function to do it but 
it just doesn't seem to work).


1. Build the virtual table outside the database in application code
2. Use triggers on table B to actually create and remove custom fields 
on A as they are inserted/removed from B.


#2 would seem to be the simplest, except I'm really not too keen on the 
idea of manipulating a table like that on the fly (even though I did 
a proof of concept, and it seems simple enough to be fairly safe 
if adequate checks on entries in table B are put into the system). Does 
anyone know of a third way of doing it? It seems like this shouldn't be 
that uncommon a task, so I'm hoping there is some slick way of maybe 
putting together a function or view to return data rows with a flexible 
field layout. So far all the in-db tricks I've come up with have 
required me to know what the field names were to generate the final 
query anyway, so they don't really gain me anything.


Thanks,
Aaron C.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Allowing Custom Fields

2006-01-27 Thread Bruno Wolff III
On Fri, Jan 27, 2006 at 10:25:00 -0600,
  Aaron Colflesh [EMAIL PROTECTED] wrote:
 
 #2 would seem to be the simplest except I'm really not too keen on the 
 idea of manipulating a table like that on the fly (even though I did 
 proof of concept it and it seems to be simple enough to be fairly safe 
 if adequate checks for entries on table B are put into the system). Does 
 anyone know of a 3rd way of doing it? It seems like this shouldn't be an 
 all that uncommon task, so I'm hoping there is some slick way of maybe 
 putting together a function or view to return data rows with a flexible 
 field layout. So far all the in-db tricks I've come up with have 
 required me to know what the field names were to generate the final 
 query anyway, so they don't really gain me anything.

Couldn't you let the user create a view joining A and B?

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] incremental backups

2006-01-27 Thread Rick Gigger
Sorry for my sharp reply!  It looks like we are after the same thing,  
so that does help a little, although it doesn't really answer my  
question.  I set up my backup system using pg_dump back in 7.3  
because that's all there was.  I am finally moving to 8.1 and want to  
switch to doing incrementals because the dumps are just getting too  
big.  If you didn't mind showing me your scripts, it would probably be  
a good starting point for me.


Thanks,

Rick

On Jan 27, 2006, at 3:32 AM, Csaba Nagy wrote:


OK, that was before going home from work, so it could be excusable :-D
I read your mail now in more detail, and I can't answer it other than
that we use here a standby data base based on WAL log shipping, and  
the

procedure of building the standby finishes with a script
inserting/deleting a few 1000s of lines in a bogus table so there  
is for

sure a WAL file archived. That might fit your needs or might not...

Cheers,
Csaba.


On Thu, 2006-01-26 at 18:48, Rick Gigger wrote:

Um, no you didn't read my email at all.  I am aware of all of that
and it is clearly outlined in the docs.  My email was about a
specific detail in the process.  Please read it if you want to know
what my actual question was.

Thanks,

Rick

On Jan 26, 2006, at 10:41 AM, Csaba Nagy wrote:


I didn't read your mail very carefully, but I guess you want:

  - turn on WAL archiving, and archive all WAL logs;
  - take the file system backup at regular time points,  
optionally you

can keep them also for point in time recovery;

Then you always have all the WAL files you need to recover to any
point
in time you need. You can then supply all the WAL files which are
needed
by the last file system backup to recover after a crash, or you can
supply all the WAL files up to the time point just before your  
student

DBA deleted all your data.

HTH,
Csaba.


On Thu, 2006-01-26 at 18:33, Rick Gigger wrote:

I am looking into using WAL archiving for incremental backups.  It
all seems fairly straightforward except for one thing.

So you set up the archiving of the WAL files.  Then you set up cron
or something to regularly do a physical backup of the data
directory.  But when you do the physical backup you don't have the
last WAL file archived yet that you need to restore that physical
backup.  So you always need to keep at least two physical backups
around so that you know that at least one of them has the WAL files
needed for recovery.

The question I have is: how do I know if I can use the latest one?
That is if I first do physical backup A and then later do physical
backup B and then I want to do a restore.  How do I know when I've
got the files I need to use B so that I don't have to go all the  
way

back to A?

My initial thoughts are that I could:

a) just before or after calling pg_stop_backup check the file  
system
to see what the last archived WAL file is on disk and make sure  
that

that I get the next one before I try restoring from that backup.

b) just before or after calling pg_stop_backup check postgres to  
see
to see what the current active WAL file is and make sure it has  
been

archived before I try to restore from that backup.

c) Always just use backup A.

No c seems the easiest but is that even fail safe?  I realize it
wouldn't really ever happen in an active production environment  
that

was set up right but say you did backup A and backup B and during
that whole time you had few writes in postgres that you never  
filled

up a whole WAL file so both of the backups are invalid.  Then you
would have to always go to option a or b above to verify that a  
given

backup was good so that any previous backups could be deleted.

Wouldn't it make things a lot easier if the backup history file not
only gave you the name of the first file that you need but also the
last one?  Then you could look at a given backup and say I need  
this
start file and this end file.  Then you could delete all  
archived WAL
files before start file.  And you could delete any old physical  
dumps

because you know that your last physical dump was good.  It would
just save you the step in the backups process of figuring out what
that file is.  And it seems like pg_stop_backup could determine  
that

on it's own.

Does that make sense?  Am I totally off base here?

Rick

---(end of
broadcast)---
TIP 6: explain analyze is your friend











---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Allowing Custom Fields

2006-01-27 Thread Aaron Colflesh




Bruno Wolff III wrote:

 On Fri, Jan 27, 2006 at 10:25:00 -0600,
 Aaron Colflesh [EMAIL PROTECTED] wrote:

 #2 would seem to be the simplest, except I'm really not too keen on the 
 idea of manipulating a table like that on the fly (even though I did 
 a proof of concept, and it seems simple enough to be fairly safe 
 if adequate checks on entries in table B are put into the system). Does 
 anyone know of a third way of doing it? It seems like this shouldn't be 
 that uncommon a task, so I'm hoping there is some slick way of maybe 
 putting together a function or view to return data rows with a flexible 
 field layout. So far all the in-db tricks I've come up with have 
 required me to know what the field names were to generate the final 
 query anyway, so they don't really gain me anything.

 Couldn't you let the user create a view joining A and B?

I have yet to find a way to make a query that will take the individual
row values of one table and make them appear to be columns (either by
themselves or as part of a join to another table). If someone can tell
me how to do that, then yes, a view would be ideal.
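
One way to fake it in plain SQL is a pivot by hand, a sketch that still
requires knowing the field names up front, which is exactly the limitation
described above; all names are hypothetical:

  SELECT a.id,
         max(CASE WHEN c.field_name = 'color' THEN c.value END) AS color,
         max(CASE WHEN c.field_name = 'size'  THEN c.value END) AS size
  FROM table_a a
  LEFT JOIN table_c c ON c.a_id = a.id
  GROUP BY a.id;

contrib/tablefunc's crosstab() is a generic version of the same idea, but
it too needs the output column list declared in the calling query.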
Thanks,
AaronC




Re: [GENERAL] Allowing Custom Fields

2006-01-27 Thread Uwe C. Schroeder
On Friday 27 January 2006 08:25, Aaron Colflesh wrote:
 Hello folks,
 I've run into a challenge that doesn't appear to have been discussed in
 the archives anywhere.

 I'm designing a database that users need to have the ability to
 customize some. They just need the ability to add extra fields to an
 existing table (oh and they can't touch the predefined fields). So the
 database schema so far is table A (existing table), table B (contains a
 list of custom field names and other meta data) and table C
 (intersection table between A  B containing the values for the custom
 fields for each row). That works really well and all but we have
 problems with retrieving the data. Due to other requirements related to
 reporting we need to be able to present the data in table A along with
 any custom fields in a table as if the custom fields were actually
 fields on A. I only know of two ways of doing this, and I'm hoping one
 of you knows of a third way (I've tried to use a function to do it but
 it just doesn't seem to work).

You could have the two tables linked with a key, say

table A ( custom_key int )
table B (custom_key int) - and this custom_key references A
I'd probably go for a view that is recreated by a trigger on table B.
Second alternative would be to just use a join on the tables. I don't know 
what kind of reporting software you use, but I don't know any that can't do a 
join on two tables. The worst case scenario would look like

SELECT a.*,b.* FROM a JOIN b ON b.custom_key=a.custom_key

that will give you one result set.

There is a third option. If you know the maximum number of custom columns and 
possibly their data type, you could add those columns statically, like in

table B (custom_key int,
         cust_field_1 int,
         cust_field_2 int,
         ...
)

and then use a third table to label the custom fields, e.g.

table C (cfield1_label varchar(80), cfield2_label varchar(80), ...)

Your application then can grab the label for the field dynamically and the 
fields in table B wouldn't have to change at all.
 


 1. Build the virtual table outside the database in application code
 2. Use triggers on table B to actually create and remove custom fields
 on A as they are inserted/removed from B.

 #2 would seem to be the simplest except I'm really not too keen on the
 idea of manipulating a table like that on the fly (even though I did
 proof of concept it and it seems to be simple enough to be fairly safe
 if adequate checks for entries on table B are put into the system). Does
 anyone know of a 3rd way of doing it? It seems like this shouldn't be an
 all that uncommon task, so I'm hoping there is some slick way of maybe
 putting together a function or view to return data rows with a flexible
 field layout. So far all the in-db tricks I've come up with have
 required me to know what the field names were to generate the final
 query anyway, so they don't really gain me anything.

 Thanks,
 Aaron C.

 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly

-- 
UC

--
Open Source Solutions 4U, LLC   1618 Kelly St
Phone:  +1 707 568 3056 Santa Rosa, CA 95401
Cell:   +1 650 302 2405 United States
Fax:+1 707 568 6416

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] TSearch2 / German compound words / UTF-8

2006-01-27 Thread Oleg Bartunov

Alexander,

could you try tsearch2 from CVS HEAD  ?
tsearch2 in 8.1.X doesn't support UTF-8 and works for someone
only by accident :)

Oleg
On Fri, 27 Jan 2006, Alexander Presber wrote:

Tsearch/ispell is not able to break this word into parts, because of the 
"s" in Produktion/s/intervall. Misspelling the word as 
"Produktionintervall" fixes it:

It should be affixes marked as 'affix in middle of compound word',
Flag is '~', example look in norsk dictionary:

flag ~\\:
   [^S]   S  #~ advarsel  advarsels-

BTW, we develop and debug compound word support on norsk (norwegian) 
dictionary, so look for example there. But we don't know Norwegian, 
norwegians helped us :)


Hello everyone!

I cannot get this to work, neither in a German version nor with the 
Norwegian example supplied on the tsearch website.
That means, just like Hannes, I can get compound word support without the inserted 
's' in German and Norwegian:

"Vertragstrafe" works, but not "Vertragsstrafe", which is the correct form.

So I tried it the other way around: My dictionary consists of two words:

---
vertrag/zs
strafe/z
---

My affixes file just switches on compounds and allows for s-insertion as 
described in the norwegian tutorial:


---
compoundwords controlled z
suffixes
flag s:
[^S]  S  # does not end in s: append s and include in 
compound-check (Recht > Rechts-)

---

ts_debug yields:

tstest=# SELECT tsearch2.ts_debug('vertragstrafe strafevertrag 
vertragsstrafe');

ts_debug
-
(german,lword,Latin word,vertragstrafe,{ispell_de,simple},'strafe' 
'vertrag')
(german,lword,Latin word,strafevertrag,{ispell_de,simple},'strafe' 
'vertrag')
(german,lword,Latin 
word,vertragsstrafe,{ispell_de,simple},'vertragsstrafe')

(3 Zeilen)

I would say, the ispell compound support does not honor the s-Flag in 
compounds.
Could it be, that this feature got lost in a regression? It must have worked 
for norwegian once. (Take the overtrekksgrilldresser example from the 
tsearch2:compounds tutorial, that I cannot reproduce).


Any hints?

Alexander

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Accessing an old database from a new OS installation.

2006-01-27 Thread Doug McNaught
A. Kretschmer [EMAIL PROTECTED] writes:

 On 27.01.2006 at 14:21:31 +0000, Matthew Henderson wrote the following:
 Okay, so if I have 7.4 installed and I have the old
 harddisk mounted under /mnt/hda, can I do something
 like 
 
   pg_dump /mnt/hda/path_to_old_database > dump.txt

 No, this is impossible IMHO. You need a PG server of that version
 (7.4) to read the data. pg_dump is only a client for the DB; it can't
 read the files directly.

In addition, it's considered best practice to run (in this case) the
7.4 server against the old database, and use the 8.X pg_dump to dump
it out for loading into the 8.X server.

-Doug

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] incremental backups

2006-01-27 Thread Rick Gigger
I guess my email wasn't all that clear.  I will try to rephrase.  I  
am moving from using the old-style pg_dump for backups to using  
incrementals, and want to make sure I understand the process before I  
go about writing a bunch of scripts.


To me setting up incremental backup consists of the following  
components:


1) Setting up the WAL archiving.  This one is trivial.
2) Doing physical dumps of the $PGDATA directory.  This one is once  
again trivial (a sketch follows this list).
3) Knowing which physical dumps are Good and Not Good.  For a given  
physical dump D there are WAL archive files Dstart and Dend, and you  
must have Dstart and Dend and all files in between.  If you  
have all those files then the physical dump is Good.  If you don't  
have them then the dump is worthless to you.
4) Knowing which dumps and which archive files can be deleted.  This  
depends on a number of factors.

a) How far back do you want to be able to do PITR
b) How much space do you have / want to use for PITR
c) Which physical dumps are Good and which are Not Good. (see #3)
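
For reference, a minimal sketch of components 1 and 2 using the 8.x
functions; the label and paths are hypothetical:

  -- 1) in postgresql.conf: archive_command = 'cp %p /archive/%f'
  -- 2) then, around the filesystem copy of $PGDATA:
  SELECT pg_start_backup('nightly');
  -- ... copy $PGDATA with tar/rsync/cp ...
  SELECT pg_stop_backup();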

Now I think I have a pretty good plan here except for #3 (and so #4  
then also suffers).


Just as an example, let's say I'm not concerned so much with PITR as  
with recovering from a db crash. I've got all the backup files saved  
to my backup db server so I can fail over to it if my primary db  
server dies.  I just want to make sure I've got one physical dump  
that is good.  (This is not my actual situation but it illustrates my  
point better.)


Now when I do a physical dump it is not a Good dump.  That is, I don't  
have the end archive file necessary to recover from that physical  
dump.  That is to say, when I call pg_start_backup(), then copy  
$PGDATA, then call pg_stop_backup(), postgres might be on, say, WAL  
archive file #5.  Once the physical dump is completed, WAL archive  
file #5 hasn't been archived yet.  I only have up to #4.  So if I  
delete my old physical dumps, and all I've got is this most recent one,  
and my database crashes before #5 gets archived, then I am hosed.  I  
have no good physical backups to start from.


My main question is about the best way to figure out when a physical  
dump is Good.


One strategy is to always keep around lots of physical dumps.  If you  
keep around 100 dumps you can be pretty sure that in the space of  
time that those physical dumps take place that at least one WAL file  
was archived.  In fact if you keep 2 physical dumps you can be fairly  
certain of this.  If not then you really need to space out your dumps
more.


Is this making sense at this point?

The problem is that the WAL archiving is triggered by postgres and  
the rate at which the db is updated.  The physical dumps are  
triggered by cron on a purely time-based schedule.  So in theory
if you had the physical dumps happening once a day but for some odd
reason no one updated the database for 4 days then all of a sudden
you'd have 2 physical backups and neither of them is good.  If
your db crashes during that time you are hosed.


Maybe I am arguing a point that is just stupid because this will  
never happen in real life.  But since it is my backup system that
I will be using to recover from complete and total disaster I just  
want to have all my bases covered.


So my ideas on how to determine if a physical dump is Good are as  
follows.


1) When you do the physical backup (after dumping the $PGDATA dir but  
before calling pg_stop_backup() ) determine the current WAL archive  
file.  Mark somewhere in the backed up physical dump the last file  
needed for the dump to be considered good.  Then your deletion  
scripts can look at the WAL archive files you have and the last one  
required for the dump to be Good and determine if the dump is Good or  
not.


2) After doing the physical dump but before calling pg_stop_backup()  
just copy the current WAL file to the physical dump.  If that file  
later gets archived then the restore command overwrites your
partially completed one so it doesn't hurt but you know that when you  
call pg_stop_backup() that that physical dump is good.  (Is it ok to  
copy the current WAL file while it is still in use?)


Is anyone taking one of these or any other precautions to make sure  
they've got a good physical dump or does everyone just keep a whole  
bunch of dumps around, and then actually restore the dump to see if  
it is good and if not go back to a previous dump?
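
For concreteness, a minimal SQL sketch of idea 1 — the label is
illustrative, and this assumes the 8.x base-backup functions:

  SELECT pg_start_backup('nightly');  -- returns the starting WAL offset
  -- copy $PGDATA here, outside the database (tar, rsync, ...)
  SELECT pg_stop_backup();            -- returns the WAL offset the dump
                                      -- must be able to replay up to

After pg_stop_backup() the archiver also receives a backup history file
(e.g. 000000010000000000000005.00612B20.backup) recording the start and
stop WAL locations, so a deletion script can parse it to decide whether
a given dump is Good.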


I hope that makes more sense.

Thanks,

Rick

On Jan 27, 2006, at 3:33 AM, Richard Huxton wrote:


Rick Gigger wrote:
Um, no you didn't read my email at all.  I am aware of all of that  
and it is clearly outlined in the docs.  My email was about a  
specific detail in the process.  Please read it if you want to  
know what my actual question was.


I'm not sure your email is quite right as regards the process. You  
need:

  1. the filesystem backup
  2. the WAL file indicated in the 

Re: [GENERAL] Finding missing records

2006-01-27 Thread Stephan Szabo
On Fri, 27 Jan 2006, Stefano B. wrote:

 Hi,

 I have two identical tables

 table1 (f1,f2,f3,f4 primary key (f1,f2,f3,f4))
 table2 (g1,g2,g3,g4 primary key (g1,g2,g3,g4))

 How can I find the difference between the two tables?
 table1 has 10000 records
 table2 has 9900 records (these records are in table1 as well)

 I'd like to find 100 missing records.
 I have tried this query

 select f1,f2,f3,f4 from table1 where (f1,f2,f3,f4) NOT IN (select f1,f2,f3,f4 
 from table2)

Is there a reason you've used f1-f4 in the table2 subselect rather than
g1-g4? From the definitions above, I think the f1-f4 in the subselect are
becoming outer references which isn't what you want.
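
For the record, a corrected query plus two common alternatives (a
sketch using the definitions above; since f1-f4 and g1-g4 are
primary-key columns and therefore NOT NULL, all three forms return the
100 missing rows):

  select f1,f2,f3,f4 from table1
  where (f1,f2,f3,f4) not in (select g1,g2,g3,g4 from table2);

  -- outer-join form, often faster on big tables:
  select t1.f1, t1.f2, t1.f3, t1.f4
  from table1 t1
  left join table2 t2
    on t1.f1 = t2.g1 and t1.f2 = t2.g2
   and t1.f3 = t2.g3 and t1.f4 = t2.g4
  where t2.g1 is null;

  -- set-difference form:
  select f1,f2,f3,f4 from table1
  except
  select g1,g2,g3,g4 from table2;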

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] TSearch2 / German compound words / UTF-8

2006-01-27 Thread Harald Armin Massa
Teodor,

  To all: Maybe we should put all snowball's stemmers (for all
  available languages and encodings) into the tsearch2 directory?

Yes, that would be VERY helpful. Up to now I have not dared to use
tsearch2 because it is always: get the stemmer here, get the
dictionary there...

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607


Re: [GENERAL] Allowing Custom Fields

2006-01-27 Thread Bruno Wolff III
On Fri, Jan 27, 2006 at 10:40:05 -0600,
  Aaron Colflesh [EMAIL PROTECTED] wrote:
 Bruno Wolff III wrote:
 On Fri, Jan 27, 2006 at 10:25:00 -0600,
   Aaron Colflesh [EMAIL PROTECTED] wrote:
   
 #2 would seem to be the simplest except I'm really not too keen on the 
 idea of manipulating a table like that on the fly (even though I did 
 proof of concept it and it seems to be simple enough to be fairly safe 
 if adequate checks for entries on table B are put into the system). Does 
 anyone know of a 3rd way of doing it? It seems like this shouldn't be an 
 all that uncommon task, so I'm hoping there is some slick way of maybe 
 putting together a function or view to return data rows with a flexible 
 field layout. So far all the in-db tricks I've come up with have 
 required me to know what the field names were to generate the final 
 query anyway, so they don't really gain me anything.
 
 
 Couldn't you let the user create a view joining A and B?
   
 I have yet to find a way to make a query that will take the individual 
 row values of one table and make them appear to be columns (either by 
 themselves or as part of a join to another table). If someone can tell 
 me how to do that, then yes a view would be ideal.

I missed that part.

There is a way to do this with one of the contrib packages (tablefunc).
To use a view though, you would need to know how many columns you were
going to get at the time you created the view. So this might not work
for you.
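
For what it's worth, a minimal sketch of the tablefunc approach
(assumes a tablefunc version providing the two-argument crosstab form;
the table and column names are made up — say table b holds
(record_id, field_name, field_value) rows):

  SELECT * FROM crosstab(
    'SELECT record_id, field_name, field_value FROM b ORDER BY 1',
    'SELECT DISTINCT field_name FROM b ORDER BY 1'
  ) AS ct(record_id int, color text, size text);
  -- the output column list (color, size, ...) must be written out when
  -- the query is created, which is exactly the limitation noted above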



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] WAL questions

2006-01-27 Thread Steve Oualline
We have a system with 1202 files in the WAL directory (pg_xlog).
When we start postmaster, it goes into the starting state for 5 minutes
and then crashes.

Questions:

1) What is the biggest number of WAL files you've seen and what were
you doing to the database at the time?

2) When postmaster starts, it replays the WAL files. During this time
any connection is rejected with an error indicating that the database
is starting up. What's the longest amount of time that you'd expect
postmaster to be in the starting-up state?





Re: [GENERAL] TSearch2 / German compound words / UTF-8

2006-01-27 Thread Teodor Sigaev

contrib_regression=# insert into pg_ts_dict values (
 'norwegian_ispell',
  (select dict_init from pg_ts_dict where dict_name='ispell_template'),
  'DictFile="/usr/local/share/ispell/norsk.dict",'
  'AffFile="/usr/local/share/ispell/norsk.aff"',
 (select dict_lexize from pg_ts_dict where dict_name='ispell_template'),
 'Norwegian ISpell dictionary'
   );
INSERT 16681 1
contrib_regression=# select lexize('norwegian_ispell','politimester');
  lexize
--
 {politimester,politi,mester,politi,mest}
(1 row)

contrib_regression=# select lexize('norwegian_ispell','sjokoladefabrikk');
lexize
--
 {sjokoladefabrikk,sjokolade,fabrikk}
(1 row)

contrib_regression=# select lexize('norwegian_ispell','overtrekksgrilldresser');
 lexize
-
 {overtrekk,grill,dress}
(1 row)
% psql -l
         List of databases
        Name        | Owner  | Encoding
--------------------+--------+----------
 contrib_regression | teodor | KOI8
 postgres   | pgsql  | KOI8
 template0  | pgsql  | KOI8
 template1  | pgsql  | KOI8
(4 rows)


I'm afraid that is a UTF-8 problem. We just committed multibyte support
for tsearch2 to CVS HEAD, so you can try it.


Please notice that the dict, aff and stopword files should be in the
server encoding. Snowball sources for German (and other languages) in
UTF-8 can be found in
http://snowball.tartarus.org/dist/libstemmer_c.tgz


To all: Maybe we should put all snowball's stemmers (for all available
languages and encodings) into the tsearch2 directory?


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] xml_valid function

2006-01-27 Thread John Gray
On Wed, 25 Jan 2006 17:11:04 -0800, George Pavlov wrote:

 Not sure what the correct forum for pgxml/xml2 questions is. I was
 wondering what is the definition of valid that the xml_valid(text)
 function that is part of that module uses? It seems different from the
 W3C definition of valid XML (is there an implicit DTD?) Maybe it is
 more akin to well-formed?
 

It is indeed well-formed. That just seemed a long name for the function!
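
For anyone wanting to check the behavior, a quick sketch (assumes the
contrib xml2 module is installed):

  select xml_valid('<a><b/></a>');   -- t: well-formed
  select xml_valid('<a><b></a>');    -- f: mismatched tags
  -- no DTD is consulted, so "valid" here really means "well-formed"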

Regards

John


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] WAL questions

2006-01-27 Thread Tom Lane
Steve Oualline [EMAIL PROTECTED] writes:
 We have a system with 1202 files in the WAL directory (pg_xlog).
 When we start postmaster, it goes into the starting state for 5 minutes
 and then crashes.

Define crash.  If you don't show us the *exact* messages you're
seeing, it's difficult to guess what's going on.

Also, what happened when the postmaster stopped the first time?  The
most interesting part of this from my point of view is how did you get
into this state in the first place --- unless you had set insanely high
values for checkpoint_segments and checkpoint_timeout, you should not
have gotten up to that many files in pg_xlog.  A plausible guess is that
something was preventing checkpoints from completing, but any such
problem should have left traces in the postmaster log.  If you've still
got the pre-crash log it would be very interesting to see.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Are rules transaction safe?

2006-01-27 Thread Oliver Fürst

Hi all,

I have a question regarding rules on views. Are the commands inside an
ON INSERT (or UPDATE) DO INSTEAD rule treated as an (implicit)
transaction by postgres? I tried to put BEGIN; and COMMIT; around the
commands in an ON ... DO INSTEAD ( ) block, but keep getting a syntax
error.


The following example might explain what I'm trying to find out:

-
Two tables, a and b. b is referencing a via a_id...

  CREATE TABLE a
  (
 id serial,
 foo varchar(255),
 CONSTRAINT aid PRIMARY KEY (id)
  );

  CREATE TABLE b
  (
 id serial,
 a_id int4 not null,
 foo varchar(255),
 CONSTRAINT bid PRIMARY KEY (id),
 CONSTRAINT bfk FOREIGN KEY (a_id)
REFERENCES a (id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
  );


...a view ab for a combination of the before mentioned tables...

  CREATE OR REPLACE VIEW ab AS
SELECT a.id AS main_id, a.foo AS from_a, b.foo AS from_b
FROM a, b
WHERE a.id = b.a_id;

...and a rule ab_insert...

  CREATE OR REPLACE RULE ab_insert AS ON INSERT TO ab DO INSTEAD (
INSERT INTO a (foo) VALUES (new.from_a);
INSERT INTO b (foo,a_id) VALUES (new.from_b,currval('a_id_seq'));
  );
-

As you can see, the ab_insert rule inserts into a first and then
takes the current value of a.id's sequence to set the reference a_id
in b.


Can I assume that this will always work as expected or is it possible 
that in a multi-user scenario two or more concurrent inserts on the view 
will lead to undesirable results?


As mentioned in the beginning, putting BEGIN; and COMMIT; didn't work. 
Is this kind of creating a relation between two or more tables and 
relying on a sequence generally a good practice?


Regards,
Oliver

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Are rules transaction safe?

2006-01-27 Thread Doug McNaught
Oliver Fürst [EMAIL PROTECTED] writes:

 Hi all,

 I have a question regarding rules on views. Are the commands inside a
 ON INSERT (or UPDATE) DO INSTEAD (implicit) transactions for postgres?
 I tried to put BEGIN; and COMMIT; around the commands in a ON ... DO
 INSTEAD ( ) block, but keep getting a syntax error.

Everything that happens in Postgres is inside either an implicit or
explicit transaction, so you can't do BEGIN/COMMIT inside rules or
functions. 

You might be able to use savepoints, depending on what you're actually
trying to do.

-Doug

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] parameter substitution problem in jdbc driver? (8.1)

2006-01-27 Thread Dave E Martin

When a query of this form:

/* from xxx where yyy = ? */ select a,b,c,d from xxx where yyy = ?

is sent to the jdbc driver (as a prepared statement), it complains that 
parameter 2 is not set. This is a query from hibernate3.1, with 
hibernate's generate comments in sql option turned on.


I'm not really sure who is at fault here, or if this is one of those 
unspecified things, but I would assume that parameter substitution 
should not apply to comments?


Looking at the prepared statement in the debugger, it did indeed try to 
do parameter substitution inside the comment.


Is the problem in the JDBC driver, in Hibernate, or in the JDBC spec?

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] parameter substitution problem in jdbc driver? (8.1)

2006-01-27 Thread Kris Jurka



On Fri, 27 Jan 2006, Dave E Martin wrote:


When a query of this form:

/* from xxx where yyy = ? */ select a,b,c,d from xxx where yyy = ?

is sent to the jdbc driver (as a prepared statement), it complains that 
parameter 2 is not set. This is a query from hibernate3.1, with hibernate's 
generate comments in sql option turned on.


Is the problem in the JDBC driver, in Hibernate, or in the JDBC spec?



This is the JDBC driver's fault.  There are a number of places it does not 
correctly detect placeholders vs. comments/text, for example dollar 
quoting is another one.


Kris Jurka

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Are rules transaction safe?

2006-01-27 Thread Oliver Fürst

Hi Doug,

On 27.01.2006 21:01, Doug McNaught wrote:

Oliver Fürst [EMAIL PROTECTED] writes:


I have a question regarding rules on views. Are the commands inside a
ON INSERT (or UPDATE) DO INSTEAD (implicit) transactions for postgres?
I tried to put BEGIN; and COMMIT; around the commands in a ON ... DO
INSTEAD ( ) block, but keep getting a syntax error.


Everything that happens in Postgres is inside either an implicit or
explicit transaction, so you can't do BEGIN/COMMIT inside rules or
functions. 


You might be able to use savepoints, depending on what you're actually
trying to do.


Actually I'm just worried that something like the ON INSERT ... DO 
INSTEAD rule on a view (as stated in my example)...



   CREATE OR REPLACE RULE ab_insert AS ON INSERT TO ab DO INSTEAD (
 INSERT INTO a (foo) VALUES (new.from_a);
 INSERT INTO b (foo,a_id) VALUES (new.from_b,currval('a_id_seq'));
   );


...could yield unexpected results. Imagine two overlapping inserts on 
the view in a heavy load situation. (C1, C2 - two concurrent connections):


C1 - INSERT INTO ab (from_a,from_b) VALUES ('foo','bar');
C2 - INSERT INTO ab (from_a,from_b) VALUES ('hello','world');

...should translate to...

C1 - INSERT INTO a (foo) VALUES ('foo'); -- id == 1
C1 - INSERT INTO b (foo,a_id) VALUES ('bar',1);

C2 - INSERT INTO a (foo) VALUES ('hello'); -- id == 2
C2 - INSERT INTO b (foo,a_id) VALUES ('world',2);

...but could translate to...

C1 - INSERT INTO a (foo) VALUES ('foo'); -- id == 1
C2 - INSERT INTO a (foo) VALUES ('hello'); -- id == 2

C1 - INSERT INTO b (foo,a_id) VALUES ('bar',2);
C2 - INSERT INTO b (foo,a_id) VALUES ('world',2);

Basically I'm worried that relying on the last value of a
sequence isn't such a great idea.


(By the way, did I pick the wrong mailing list for that topic and should 
I move to pgsql-sql?)


Regards,
Oliver

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] xml_valid function

2006-01-27 Thread John Gray
On Fri, 2006-01-27 at 12:32 -0800, Roger Hand wrote:
 John Gray wrote on
 Friday, January 27, 2006 12:24 PM
  On Wed, 25 Jan 2006 17:11:04 -0800, George Pavlov wrote:
  
  Not sure what the correct forum for pgxml/xml2 questions is. I was
  wondering what is the definition of valid that the xml_valid(text)
  function that is part of that module uses? It seems different from the
  W3C definition of valid XML (is there an implicit DTD?) Maybe it is
  more akin to well-formed?
  
  
  It is indeed well-formed. That just seemed a long name for the function!
 
  John
 
 Valid means it's been checked against, and conforms to, a DTD. If it hasn't 
 been then it can't
 be said to be valid.
 

I know that - my point was just that when I was naming the functions, I
(perhaps foolishly, in hindsight) decided that xml_wellformed seemed a
longish name for a basic function. The README does in fact state that it
checks well-formedness and not validity. It's easily changed in the SQL
file if you'd rather have a different name for your installation.

As for changing it in the distribution, I can see some
backward-compatibility issues (I suspect it may be in production use
under that name) - but if there were to be a version which validated a
document against a DTD it would be a two parameter version which would
therefore have a different signature for PG.

Regards

John



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Are rules transaction safe?

2006-01-27 Thread Doug McNaught
Oliver Fürst [EMAIL PROTECTED] writes:

 Basically I'm worried that relying on the last value of a
 sequence isn't such a great idea.

'currval()' is specifically written to Do The Right Thing.  See the
docs.
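
A minimal illustration (a sketch, not real session output; see the
sequence-function docs):

  -- currval() is local to the session that last called nextval() on
  -- the sequence, so concurrent inserts cannot see each other's ids:
  -- session C1:
  INSERT INTO a (foo) VALUES ('foo');    -- nextval('a_id_seq') -> 1
  -- session C2, meanwhile:
  INSERT INTO a (foo) VALUES ('hello');  -- nextval('a_id_seq') -> 2
  -- back in session C1:
  SELECT currval('a_id_seq');            -- returns 1, not 2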

-Doug

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] xml_valid function

2006-01-27 Thread Scott Marlowe
On Fri, 2006-01-27 at 15:21, John Gray wrote:

 I know that - my point was just that when I was naming the functions, I
 (perhaps foolishly, in hindsight) decided that xml_wellformed seemed a
 longish name for a basic function. The README does in fact state that it
 checks well-formedness and not validity. It's easily changed in the SQL
 file if you'd rather have a different name for your installation.
 
 As for changing it in the distribution, I can see some
 backward-compatibility issues (I suspect it may be in production use
 under that name) - but if there were to be a version which validated a
 document against a DTD it would be a two parameter version which would
 therefore have a different signature for PG.

A lot of validators make it clear that you can validate xml with or
without a DTD.  So, it makes sense to have an overloaded xml_valid()
function that accepts a single argument (plain xml) and a two parter
that takes xml and a dtd as the two arguments.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Basic questions about PQprepare()

2006-01-27 Thread Alexander Farber
Hello,

I'm trying to write an Apache 1.3.29 module connecting to PostgreSQL 8.1.0
on OpenBSD -current and have few probably simple questions:

When an Apache child is initialized, I'd like to establish connection to
the database and to prepare 2 queries. And then later in the repeating
response phase I'd like to execute those prepared queries.

1) If PQconnectdb fails, do I still need to PQfinish the returned pointer?

cfg->conn = PQconnectdb(cfg->conninfo);
if (NULL == cfg->conn) {
    ap_log_error(APLOG_MARK, APLOG_ERR, s,
        "Connection to database '%s' failed: out of memory",
        cfg->conninfo);
    exit(1);
}
if (PQstatus(cfg->conn) != CONNECTION_OK) {
    ap_log_error(APLOG_MARK, APLOG_ERR, s,
        "Connection to database '%s' failed: %s",
        cfg->conninfo, PQerrorMessage(cfg->conn));
    PQfinish(cfg->conn);
    exit(1);
}

2) Similarly, if PQprepare fails, do I still need to PQclear its result?
 And what value is returned on PQprepare success, is it always
 PGRES_COMMAND_OK (I've got that value, but will it always be so)?

#define SQL_BANNED_USER \
    "select message, expire from bans where username = $1 and " \
    "(expire is null or expire > extract(epoch from localtime))"

res = PQprepare(cfg->conn, "sql_banned_user", SQL_BANNED_USER, 1, NULL);
if (NULL == res) {
    ap_log_error(APLOG_MARK, APLOG_ERR, s,
        "Preparing statement '%s' failed: out of memory",
        SQL_BANNED_USER);
    PQfinish(cfg->conn);
    exit(1);
}
if (PQresultStatus(res) != PGRES_COMMAND_OK) {
    ap_log_error(APLOG_MARK, APLOG_ERR, s,
        "Preparing statement '%s' failed: %s",
        SQL_BANNED_USER, PQerrorMessage(cfg->conn));
    PQclear(res);
    PQfinish(cfg->conn);
    exit(1);
}

3) Do I have to PQclear(res) inbetween if I want to prepare another query?

4) How do I set the last PQprepare argument, the const Oid *paramTypes?
The FAQ says an OID is a unique int. I'm confused how to use it here.
For example I know that the argument to my prepared  statement will be
a string (a username). What is the OID then?
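
For reference, a small sketch for question 4: built-in type OIDs can be
looked up in pg_type, and passing NULL for paramTypes lets the server
infer the parameter types instead.

  select oid, typname from pg_type
  where typname in ('text', 'varchar', 'int4');
  -- the oid reported for text is what goes into paramTypes[0]
  -- for a string parameter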

I couldn't find any good examples with PQprepare() yet,
does anybody please have a pointer to nice short examples?

Regards
Alex

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Are rules transaction safe?

2006-01-27 Thread Oliver Fürst

Hi Doug,

thanks for your answers so far. I think I should try to discuss that 
matter in [pgsql-sql] instead.


On 27.01.2006 21:21, Doug McNaught wrote:

Oliver Fürst [EMAIL PROTECTED] writes:


Basically I'm worried that relying on the last value of a
sequence isn't such a great idea.


'currval()' is specifically written to Do The Right Thing.  See the
docs.


My problem isn't answered in the manual, otherwise I wouldn't have
asked. I know that sequences work correctly with transactions
(either implicit or explicit).


But it is nowhere stated whether (multiple) commands inside a rule are
treated as an implicit transaction as a whole.


Regards,
Oliver

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Are rules transaction safe?

2006-01-27 Thread Doug McNaught
Oliver Fürst [EMAIL PROTECTED] writes:

 But is nowhere stated if (multiple) commands inside a rule are treated
 as an implicit transaction as a whole.

If you don't specifically open a transaction at the top level (i.e. in
'psql' or SQL from your application's code), PG will encapsulate every
query you execute within its own transaction.  The statements in any
rules called will execute in the context of that transaction, along
with any other operations such as trigger calls, table updates,
whatever.  If something in that implicit transaction fails, it will be
rolled back; otherwise, it will be committed once the statement is
finished. 
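
To make that concrete, a small sketch using the ab_insert rule from
earlier in the thread (a sketch, not real session output):

  -- one top-level statement, one implicit transaction:
  INSERT INTO ab (from_a, from_b) VALUES ('foo', 'bar');
  -- the rule rewrites this into the two INSERTs on a and b; if the
  -- second one fails (say, a constraint violation), the first is
  -- rolled back with it, so a never keeps a row that b lacks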

Also, you need to think of rules as 'query rewrites' rather than
'code that executes', because that's what they are.

I hope that helps.

-Doug

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] xml_valid function

2006-01-27 Thread Roger Hand

John Gray wrote on
Friday, January 27, 2006 12:24 PM
 On Wed, 25 Jan 2006 17:11:04 -0800, George Pavlov wrote:
 
 Not sure what the correct forum for pgxml/xml2 questions is. I was
 wondering what is the definition of valid that the xml_valid(text)
 function that is part of that module uses? It seems different from the
 W3C definition of valid XML (is there an implicit DTD?) Maybe it is
 more akin to well-formed?
 
 
 It is indeed well-formed. That just seemed a long name for the function!

 John

Valid means it's been checked against, and conforms to, a DTD. If it hasn't 
been then it can't
be said to be valid.

-Roger

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org



[GENERAL] SQL Lint

2006-01-27 Thread Roderick A. Anderson
Just rejoined after a year or so of being away and have a question.  The
search on the PostgreSQL.org mail-archives site isn't working (or maybe
not working like I think it should): I get a 503 server error.


The question just came up, as we start the process of moving from MySQL
to PostgreSQL, whether there is a 'lint' for SQL syntax.  Looking around
using google.com I found only one decent reference but couldn't find
anything further.


Anyone aware of a lint for SQL ( 92, 99, PostgreSQL )?  We want to look
quickly at the code we're porting for any MySQL-isms so we can figure
out what it is supposed to be so the final port will go smoother.



TIA,
Rod
--


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Postgres 8.1.2, Java, JDO, and case sensitivity woes

2006-01-27 Thread Matthew Hixson
I'm trying to get a large Java application which makes use of an  
Oracle JDO layer to work with Postgres.  Set aside for a moment the  
discussion of whether or not that is going to work.
  What I have found is that different parts of this application are  
referring to a table in all uppercase and in other parts referring to  
the table all in lowercase.  Is there a way to configure Postgres so  
that it does not treat FOO and foo as two different tables?

  Thanks,
   -M@

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Postgres 8.1.2, Java, JDO, and case sensitivity woes

2006-01-27 Thread Roger Hand
testdb=# CREATE TABLE foo (field_one int4);
CREATE TABLE

testdb=# INSERT INTO foo VALUES (1);
INSERT 0 1
testdb=# INSERT INTO foo VALUES (2);
INSERT 0 1

testdb=# SELECT * FROM foo;
 field_one 
---
 1
 2
(2 rows)

testdb=# SELECT * FROM FOO;
 field_one 
---
 1
 2
(2 rows)

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Matthew Hixson
Sent: Friday, January 27, 2006 3:53 PM
To: Postgres General
Subject: [GENERAL] Postgres 8.1.2, Java, JDO, and case sensitivity woes


I'm trying to get a large Java application which makes use of an  
Oracle JDO layer to work with Postgres.  Set aside for a moment the  
discussion of whether or not that is going to work.
   What I have found is that different parts of this application are  
referring to a table in all uppercase and in other parts referring to  
the table all in lowercase.  Is there a way to configure Postgres so  
that it does not treat FOO and foo as two different tables?
   Thanks,
-M@

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Importing Many XML Records

2006-01-27 Thread George Pavlov
 I'm sure that this has been asked before but I can't find any 
 reference to it in google, and the search facility on 
 postgresql.org is currently down.

http://groups.google.com/groups?q=group%3Apgsql.*

provides the same with a slight delay but arguably a better user
interface.

 I have a large number of entries (possibly 10,000+) in an XML 
 file that I need to import into the database (7.4 on Debian) 
 on a daily basis. Does anyone have any recommendations 
 concerning the best way to do this? Is there some tool I 
 should use or should I create the code in java to parse and 
 import the data?
 
 If anyone has done this before, I would appreciate hearing 
 how they did this.

This is generally outside the scope of this list. I am guessing (since I
don't know much about your data format or goals), but you probably want
to first transform the XML into a format suitable for importation into
the database using COPY, or (much less desirable) a bunch of insert
statements. In either case you should become familiar with XSLT
processing and write yourself an XSLT template to do the job.

I deal with a similar task using Saxon and TagSoup (which I highly
recommend for XML that is not well-formatted) and create a CSV file out
of a multitude of XML files (or a single XML file), which can then be
COPY-ed into a PG table. Instead of a CSV file one could create a SQL
script file of INSERT statements. I recommend Jeni Tennison's Beginning
XSLT book as an excellent reference on the subject of XSLT. 

Depending on what your XML looks like you may get away without XSLT at
all, but just preprocess it with awk, sed, perl (Template::Extract is a
useful module) or whatever strikes your fancy.
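
If the target is a regular table, the final load step can be as simple
as the following (a sketch — the table, columns and file path are made
up, and note that COPY's CSV mode only arrived in 8.0, so on your 7.4
the default tab-delimited text format is the safe choice):

  -- hypothetical target table for the transformed records
  CREATE TABLE entries (id integer, title text, body text);
  -- server-side load of a tab-delimited file produced by the transform
  COPY entries FROM '/tmp/entries.txt';
  -- or client-side via psql (no server filesystem access needed):
  -- \copy entries from 'entries.txt'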

Other questions to answer are: do you want the records to stay as XML
in the database or do you want to import them into a regular table
format? If the former you may want to get familiar with the pgxml (aka
xml2 module) so you can query the XML data once inside your database.

George




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Postgres 8.1.2, Java, JDO, and case sensitivity woes

2006-01-27 Thread Thomas Kellerer

Matthew Hixson wrote on 28.01.2006 00:53:
I'm trying to get a large Java application which makes use of an Oracle 
JDO layer to work with Postgres.  Set aside for a moment the discussion 
of whether or not that is going to work.
  What I have found is that different parts of this application are 
referring to a table in all uppercase and in other parts referring to 
the table all in lowercase.  Is there a way to configure Postgres so 
that it does not treat FOO and foo as two different tables?


Simply don't use quotes, never. Then case will not matter.
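
For illustration, a minimal sketch of the folding rules:

  -- unquoted names are folded to lowercase; these all hit one table:
  CREATE TABLE Foo (id int4);
  SELECT * FROM foo;
  SELECT * FROM FOO;
  -- a quoted name keeps its case and is a different object:
  CREATE TABLE "FOO" (id int4);   -- a second, distinct table
  SELECT * FROM "FOO";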

Thomas


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Postgres 8.1.2, Java, JDO, and case sensitivity woes

2006-01-27 Thread Matthew Hixson
The problem is that the JDO layer is looking for the table name in a  
system table like so:


SELECT n.nspname, c.relname, a.attname, a.atttypid, a.attnotnull,
a.atttypmod, a.attlen, a.attnum, def.adsrc, dsc.description
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid)
JOIN pg_catalog.pg_attribute a ON (a.attrelid = c.oid)
LEFT JOIN pg_catalog.pg_attrdef def
  ON (a.attrelid = def.adrelid AND a.attnum = def.adnum)
LEFT JOIN pg_catalog.pg_description dsc
  ON (c.oid = dsc.objoid AND a.attnum = dsc.objsubid)
LEFT JOIN pg_catalog.pg_class dc
  ON (dc.oid = dsc.classoid AND dc.relname = 'pg_class')
LEFT JOIN pg_catalog.pg_namespace dn
  ON (dc.relnamespace = dn.oid AND dn.nspname = 'pg_catalog')
WHERE a.attnum > 0 AND NOT a.attisdropped
  AND c.relname LIKE 'FOO' AND a.attname LIKE '%'
ORDER BY nspname, relname, attnum


That query returns nothing.  Changing FOO to foo returns the 4 rows  
that the JDO layer is looking for.


So, on second thought its not a problem with case insensitive table  
names, its a problem with the JDO layer looking for uppercase table  
names listed in the system tables even though Postgres table names  
are lowercase by default.  Completely different problems.


I renamed table foo to "FOO", which makes the above query return
correctly, and now I see this in the log, later in the initialization:


LOG:  statement: PREPARE unnamed AS SELECT ns.helper AS ns_helper,  
ns.name AS ns_name, ns.foo_id AS ns_foo_id, ns.title AS ns_title FROM  
foo ns

ERROR:  relation "foo" does not exist

So, it would seem that table names are case insensitive in select  
statements, but case sensitive in prepare statements.

  Can someone confirm or refute that?
  -M@


On Jan 27, 2006, at 3:59 PM, Roger Hand wrote:


testdb=# CREATE TABLE foo (field_one int4);
CREATE TABLE

testdb=# INSERT INTO foo VALUES (1);
INSERT 0 1
testdb=# INSERT INTO foo VALUES (2);
INSERT 0 1

testdb=# SELECT * FROM foo;
 field_one
---
 1
 2
(2 rows)

testdb=# SELECT * FROM FOO;
 field_one
---
 1
 2
(2 rows)

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Matthew Hixson
Sent: Friday, January 27, 2006 3:53 PM
To: Postgres General
Subject: [GENERAL] Postgres 8.1.2, Java, JDO, and case sensitivity  
woes



I'm trying to get a large Java application which makes use of an
Oracle JDO layer to work with Postgres.  Set aside for a moment the
discussion of whether or not that is going to work.
   What I have found is that different parts of this application are
referring to a table in all uppercase and in other parts referring to
the table all in lowercase.  Is there a way to configure Postgres so
that it does not treat FOO and foo as two different tables?
   Thanks,
-M@

---(end of  
broadcast)---

TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

---(end of  
broadcast)---

TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Postgres 8.1.2, Java, JDO, and case sensitivity woes

2006-01-27 Thread Matthew Hixson

On Jan 27, 2006, at 4:10 PM, Thomas Kellerer wrote:


Matthew Hixson wrote on 28.01.2006 00:53:
I'm trying to get a large Java application which makes use of an  
Oracle JDO layer to work with Postgres.  Set aside for a moment  
the discussion of whether or not that is going to work.
  What I have found is that different parts of this application  
are referring to a table in all uppercase and in other parts  
referring to the table all in lowercase.  Is there a way to  
configure Postgres so that it does not treat FOO and foo as  
two different tables?


Simply don't use quotes, never. Then case will not matter.


These queries are being put together by the JDO layer.
  -M@

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Postgres 8.1.2, Java, JDO, and case sensitivity woes

2006-01-27 Thread Kris Jurka



On Fri, 27 Jan 2006, Matthew Hixson wrote:

The problem is that the JDO layer is looking for the table name in a system 
table like so:


When using DatabaseMetaData calls to determine what tables and columns are 
available you must be aware of what case the search terms need to be 
passed in with.  You must know if your tables were created with or 
without quotes, but then you can use the DatabaseMetaData methods 
stores[XXX]Case[Quoted]Identifiers() to determine what case to pass
parameters to methods like getTables().  It's unclear what control (if 
any) you have over this, but that's how it's supposed to work.


Kris Jurka


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Importing Many XML Records

2006-01-27 Thread Ron St-Pierre
Thanks George. I just returned from the bookstore and was looking at an XSLT 
solution in one of the books there. I want to import the data into the DB as 
regular data, not as XML. I'll look into Saxon and TagSoup as well as the perl 
module you mentioned. As far as this being outside the scope of the list, I 
wasn't sure whether or not there were postgres modules to deal with this.

Thanks for pointing me to possible solutions.

Ron



 - Original Message -
 From: George Pavlov [EMAIL PROTECTED]
 To: Ron St-Pierre [EMAIL PROTECTED]
 Subject: Re: [GENERAL] Importing Many XML Records
 Date: Fri, 27 Jan 2006 16:03:20 -0800
 
 
  I'm sure that this has been asked before but I can't find any 
  reference to it in google, and the search facility on 
  postgresql.org is currently down.
 
 http://groups.google.com/groups?q=group%3Apgsql.*
 
 provides the same with a slight delay but arguably a better user
 interface.
 
  I have a large number of entries (possibly 10,000+) in an XML 
  file that I need to import into the database (7.4 on Debian) on a 
  daily basis. Does anyone have any recommendations concerning the 
  best way to do this? Is there some tool I should use or should I 
  create the code in java to parse and import the data?
 
  If anyone has done this before, I would appreciate hearing how they did 
  this.
 
 This is generally outside the scope of this list. I am guessing (since I
 don't know much about your data format or goals), but you probably want
 to first transform the XML into a format suitable for importation into
 the database using COPY, or (much less desirable) a bunch of insert
 statements. In either case you should become familiar with XSLT
 processing and write yourself an XSLT template to do the job.
 
 I deal with a similar task using Saxon and TagSoup (which I highly
 recommend for XML that is not well-formatted) and create a CSV file out
 of a multitude of XML files (or a single XML file), which can then be
 COPY-ed into a PG table. Instead of a CSV file one could create a SQL
 script file of INSERT statements. I recommend Jeni Tennison's Beginning
 XSLT book as an excellent reference on the subject of XSLT.
 
 Depending on what your XML looks like you may get away without XSLT at
 all, but just preprocess it with awk, sed, perl (Template::Extract is a
 useful module) or whatever strikes your fancy.
 
  Other questions to answer are: do you want the records to stay as XML
 in the database or do you want to import them into a regular table
 format? If the former you may want to get familiar with the pgxml (aka
 xml2 module) so you can query the XML data once inside your database.
 
 George
 
 
 
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq