Re: [GENERAL] How feasible is this?

2010-05-21 Thread Craig Ringer

On 21/05/2010 9:22 AM, Chris Smith wrote:

I'm writing in desperate hope that something like this exists... because
if so, it would make my life a lot easier.  I want to be able to:

a) Roll back a transaction

b) Receive a notification when retrying the exact same transaction might
cause different data to be returned from something that was done up to
the point of the rollback; i.e., some result set, update count, etc.
might be different.


I don't see any way to do that without polling. You need to be able to 
discover every record that the query results are generated from and 
watch for one of them to change.


My non-expert feeling is that you could possibly extend a predicate 
locking scheme to do this. It's something that might be possible by 
hooking into the predicate locking schemes being designed to 
support true serializability in Pg (see periodic discussion on -hackers), 
but those locking schemes aren't in the main PG code yet. Even if they 
were, using them for this would be a significant amount of C coding work 
to extend the server.


It might be a good idea to take a few steps back and look at what you 
are trying to achieve with this. Why do you want it? What for? What 
problem will it solve for you?


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-05-21 Thread Glyn Astill
Well I've only just gotten round to taking another look at this; response inline 
below:

--- On Fri, 30/4/10, Tom Lane  wrote:

> Glyn Astill 
> writes:
> > The schema is fairly large, but I will try.
> 
> My guess is that you can reproduce it with not a lot of
> data, if you can
> isolate the trigger condition.
> 

Hmm, tried reducing the amount of data and the issue goes away. Could this 
indicate some issue with the file, like an issue with its size (~5 GB)? Or 
could it be an issue with the data itself?

> > One thing I forgot to mention is that in the restore
> script I drop the indexes off my tables between restoring
> the schema and the data. I've always done this to speed up
> the restore, but is there any chance this could be causing
> the issue?
> 
> Possibly.  I think there must be *something* unusual
> triggering the
> problem, and maybe that is it or part of it.

I've removed this faffing with indexes in between, but the problem still persists.

> 
> > I guess what would help is some insight into what the
> error message means. 
> 
> It's hard to tell.  The likely theories are (1) we're
> doing things in an
> order that requires seeking backwards in the file, and for
> some reason
> pg_restore thinks it can't do that; (2) there's a bug
> causing the code
> to search for a item number that isn't actually in the
> file.
> 
> One of the previous reports actually turned out to be pilot
> error: the
> initial dump had failed after emitting a partially complete
> file, and
> so the error from pg_restore was essentially an instance of
> (2).  But
> with three or so reports I'm thinking there's something
> else going on.
> 

So I'm still at a loss as to why it's happening.

I've tried to dig a little deeper (and I may just be punching thin air here) by 
adding the value of id to the error message in die_horribly(). It gives me 
id 7550, which is the first table in the TOC entry list when I do a pg_restore 
-l; everything above it is a sequence.

Here's a snip of pg_restore -l:

7775; 0 0 SEQUENCE SET website ui_content_id_seq pgcontrol
7550; 0 22272 TABLE DATA _main_replication sl_archive_counter slony

And the output if I run it under gdb:

GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu"...
(gdb) set args -U postgres --disable-triggers -j 4 -c -d SEE 
Way5a-pgsql-SEE-data.gz
(gdb) break die_horribly
Breakpoint 1 at 0x4044b0: file pg_backup_archiver.c, line 1384.
(gdb) run
Starting program: /usr/local/pgsql/bin/pg_restore -U postgres 
--disable-triggers -j 4 -c -d SEE Way5a-pgsql-SEE-data.gz
[Thread debugging using libthread_db enabled]
[New Thread 0x7f72480eb700 (LWP 4335)]
pg_restore: [custom archiver] dumping a specific TOC data block out of order is 
not supported without ID on this input stream (fseek required)
hasSeek = 1 dataState = 1 id = 7550
[Switching to Thread 0x7f72480eb700 (LWP 4335)]

Breakpoint 1, die_horribly (AH=0x61c210, modulename=0x4171f6 "archiver", 
fmt=0x4167d8 "worker process failed: exit code %d\n") at 
pg_backup_archiver.c:1384
1384{
(gdb) pg_restore: [custom archiver] dumping a specific TOC data block out of 
order is not supported without ID on this input stream (fseek required)
hasSeek = 1 dataState = 1 id = 7550
pg_restore: [custom archiver] dumping a specific TOC data block out of order is 
not supported without ID on this input stream (fseek required)
hasSeek = 1 dataState = 1 id = 7550
pg_restore: [custom archiver] dumping a specific TOC data block out of order is 
not supported without ID on this input stream (fseek required)
hasSeek = 1 dataState = 1 id = 7550

(gdb) bt
#0  die_horribly (AH=0x61c210, modulename=0x4171f6 "archiver", fmt=0x4167d8 
"worker process failed: exit code %d\n") at pg_backup_archiver.c:1384
#1  0x00408f14 in RestoreArchive (AHX=0x61c210, ropt=0x61c0d0) at 
pg_backup_archiver.c:3586
#2  0x00403737 in main (argc=10, argv=0x7fffd5b8) at 
pg_restore.c:380
(gdb) step
pg_restore: [archiver] worker process failed: exit code 1

Program exited with code 01.


Any further ideas of where I should dig would be appreciated.

Thanks
Glyn




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-05-21 Thread Alban Hertroys
On 21 May 2010, at 11:58, Glyn Astill wrote:

> Well I've only just gotten round to taking another look at this; response 
> inline below:
> 
> --- On Fri, 30/4/10, Tom Lane  wrote:
> 
>> Glyn Astill 
>> writes:
>>> The schema is fairly large, but I will try.
>> 
>> My guess is that you can reproduce it with not a lot of
>> data, if you can
>> isolate the trigger condition.
>> 
> 
> Hmm, tried reducing the amount of data and the issue goes away. Could this 
> indicate some issue with the file, like an issue with its size (~5 GB)? Or 
> could it be an issue with the data itself?

The file-size in combination with an "out of order" error smells of a 32-bit 
integer wrap-around problem.

And indeed, from the documentation 
(http://www.postgresql.org/docs/8.4/interactive/lo-intro.html):
"One remaining advantage of the large object facility is that it allows values 
up to 2 GB in size"

So I guess your large object is too large.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-05-21 Thread Glyn Astill
--- On Fri, 21/5/10, Alban Hertroys  wrote:

> On 21 May 2010, at 11:58, Glyn Astill
> wrote:
> 
> > Well I've only just gotten round to taking another look
> at this; response inline below:
> > 
> > --- On Fri, 30/4/10, Tom Lane 
> wrote:
> > 
> >> Glyn Astill 
> >> writes:
> >>> The schema is fairly large, but I will try.
> >> 
> >> My guess is that you can reproduce it with not a
> lot of
> >> data, if you can
> >> isolate the trigger condition.
> >> 
> > 
> > Hmm, tried reducing the amount of data and the issue
> goes away. Could this indicate some issue with the file,
> like an issue with its size (~5 GB)? Or could it be an
> issue with the data itself?
> 
> The file-size in combination with an "out of order" error
> smells of a 32-bit integer wrap-around problem.
> 
> And indeed, from the documentation 
> (http://www.postgresql.org/docs/8.4/interactive/lo-intro.html):
> "One remaining advantage of the large object facility is
> that it allows values up to 2 GB in size"
> 
> So I guess your large object is too large.

Hmm, we don't use any large objects though; all our data is pretty much just 
date, text and numeric fields etc.

Glyn.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to lose transaction history (xmin values, WAL, etc.)?

2010-05-21 Thread Sam Mason
On Thu, May 20, 2010 at 09:33:23PM -0500, Peter Hunsberger wrote:
> On Thu, May 20, 2010 at 8:03 PM, Richard Walker  
> wrote:
> > If the hacker gets root access so they can read
> > the raw database files, they most likely also
> > have access to the means to decrypt any
> > encrypted data. This is particularly so if
> > the database updates are being done by stored
> > procedures.
> 
> Only if they also get at the keys.

It's very difficult (with a conventional OS) to remove the *entirety* of
the server hardware and software from the TCB.  Hence you should assume
that if PG ever sees a key it's also possible for an attacker to see the
same key.

The options are pretty much: do all crypto away from the database server
(either client side, or on another server that you can trust), or trust
(also in the technical sense) the database server itself, at which point
things become greatly simplified.

> > If encryption/decryption happens
> > on a separate (not-also-compromised) client,
> > then OK.  Do you know of a way to deal with
> > this if the application is on the same computer
> > as the database?
> 
> Can you use an external key store?

How does this help? If the database has been compromised, what would
stop the attacker from inserting some code that records the responses
from this "external key store"?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-05-21 Thread Alban Hertroys
On 21 May 2010, at 12:44, Glyn Astill wrote:

>> So I guess your large object is too large.
> 
> Hmm, we don't use any large objects though, all our data is pretty much just 
> date, text and numeric fields etc


Doh! Seems I mixed up a few threads here. It was probably the mention of a 
5GB file that threw me off; I hadn't realised you were referring to a dump file 
there.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How feasible is this?

2010-05-21 Thread Chris Smith
On Fri, 2010-05-21 at 16:08 +0800, Craig Ringer wrote:
> My non-expert feeling is that you could possibly extend a predicate 
> locking scheme to do this. It's something that might be possible by 
> hooking into the predicate locking schemes being designed to 
> support true serializability in Pg (see periodic discussion on -hackers), 
> but those locking schemes aren't in the main PG code yet.

Thanks for that.  Gives me a place to start looking!

> It might be a good idea to take a few steps back and look at what you 
> are trying to achieve with this. Why do you want it? What for? What 
> problem will it solve for you?

It's not a typical application-level problem.  I'm playing with the idea
of extending the transactional memory system in the Haskell programming
language so that database access can be done inside of an application
atomic block.  Currently, database access counts as I/O, and therefore
must be done outside of atomic blocks, and this leads to a somewhat
strained programming model for applications combining transactional
memory with databases.

Haskell's transactional memory provides exactly the feature I'm asking
for: specifically, there is a "retry" action, which rolls back a
transaction, blocks until there's some change that makes it likely that
the transaction will behave differently in the future, and then retries
it.  This turns out to be very useful for transactional memory.  Whether
it's useful for database access or not may be an open question, but it
seems very messy to say "don't retry if you've touched the database",
since the point here is to be composable and not make people worry about
the implementation details of some other part of their transactions.

-- 
Chris Smith



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres startup failure

2010-05-21 Thread Tom Lane
Matt Bartolome  writes:
> On Thu, May 20, 2010 at 3:36 PM, Tom Lane  wrote:
>> So right now I'm wondering whether you're not failing to notice
>> a relevant log message(s).  You've evidently managed to launch
>> the syslogger --- where is it configured to write the postmaster
>> log?

> Gosh, you know I never even bothered looking in pg_log. Stupid! I was
> expecting to see some sort of error when doing:
> $ postgres -d 3 -D /data/postgres/

I wonder whether it'd be a good idea to write a log message on the order
of "switching to syslogger" just before cutting over the log output
during startup.  That's probably not quite the best wording, but
something like that might help to remind people where to look.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] last and/or first in a by group

2010-05-21 Thread Dino Vliet
From: "Thomas Kellerer"
To: ""

Dino Vliet wrote on 16.05.2010 18:07:
> Dear postgresql experts,
>
> I want to know if postgresql has facilities for getting the first and/or
> the last in a by group.
>
> Suppose I have the following table:
>
> resnr,dep,arr,cls,dbd meaning reservationsnumber, departure station,
> arrival station, the class of the reservation and the
> daysbeforedeparture and records like:
> xxx,NYC,BRA,C,80
> xxx,NYC,BRA,M,75
> xxx,NYC,BRA,Q,50
> yyy,WAS,LIS,T,55
> zzz,NYC,LIS,Z,40
> zzz,NYC,LIS,J,39
>
> I want to select only the most recent records being:
> xxx,NYC,BRA,Q,50
> yyy,WAS,LIS,T,55
> zzz,NYC,LIS,J,39
>

Something like this?

SELECT *
FROM your_table t1
WHERE dbd = (SELECT min(dbd)
             FROM your_table t2
             WHERE t2.dep = t1.dep
               AND t2.arr = t1.arr
               AND t2.resnr = t1.resnr)


Regards
Thomas



Thanks for your answer; from a functionality point of view, this does the trick.

However, my table t1 (and hence t2) contains 6 million records AND I'm planning 
to do this repeatedly (with a scripting language, for various moments in time), 
so this will end up being a very slow solution. 

How can I speed these kinds of queries up? By using indices, and if so, on which 
columns? Or by constructing, one time, a table with the relevant information 
that I can join against instead of using this subquery construction?
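
For reference, the usual PostgreSQL idiom for this kind of greatest-per-group 
query is DISTINCT ON, which makes a single pass over the table and can use a 
matching index. A sketch against the example table from the quoted message 
("your_table" and the column names are the placeholders used above):

SELECT DISTINCT ON (resnr, dep, arr)
       resnr, dep, arr, cls, dbd
FROM your_table
ORDER BY resnr, dep, arr, dbd;   -- smallest dbd wins within each group

-- a matching index should help both this form and the correlated subquery:
CREATE INDEX your_table_group_idx ON your_table (resnr, dep, arr, dbd);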

Thanks
Dino



  

[GENERAL] Select max(primary_key) taking a long time

2010-05-21 Thread Francisco Reyes

Postgres 8.4.1
CentOS 5.4

I am trying to do
select max(primary_key) from some_table;

The explain looks like:
explain select max(primary_key) from some_table;
                                QUERY PLAN
--------------------------------------------------------------------------
 Result  (cost=0.15..0.16 rows=1 width=0)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.00..0.15 rows=1 width=8)
           ->  Index Scan Backward using some_table_pkey on some_table
                 (cost=0.00..161797059.16 rows=1086279613 width=8)
                 Filter: (trans_id IS NOT NULL)

I checked there wasn't a lock of any kind
select waiting from pg_stat_activity;
waiting
-
f
f
f
f
f
f
f
(7 rows)

IOwait on the machine is around 20%
sar 10 5
Linux 2.6.18-128.el5 (trans05.afs)  05/21/2010
10:56:49 AM  CPU    %user    %nice  %system  %iowait   %steal    %idle
10:56:59 AM  all     5.90     0.00     2.04    20.67     0.00    71.39
10:57:09 AM  all     5.90     0.00     1.99    23.36     0.00    68.75
10:57:19 AM  all     5.87     0.00     2.10    22.56     0.00    69.47
10:57:29 AM  all     5.84     0.00     2.09    23.56     0.00    68.51
10:57:39 AM  all     6.30     0.00     2.23    21.53     0.00    69.94
Average:     all     5.96     0.00     2.09    22.34     0.00    69.61

Any ideas why the select would be taking so long? It has gone on for minutes 
with no answer. I can just look at the value of the sequence for the primary 
key, but I am curious why something that is usually sub-second is taking so 
long.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is postgres installed?

2010-05-21 Thread christophe . andre
Hi,

How do I know whether Postgres is already installed or not on a machine (on
Linux and Windows)?
I found that pg_ctl --version could be used, and I also tried to check the
registry (for Windows, HKLM\SOFTWARE\PostgreSQL\Installations); however,
with Windows Server 2003 pg_ctl is not recognized (probably not in the
path) and the registry keys are not written (nothing appears under
HKLM\SOFTWARE\).

Is there another check I can do?

Thanks
Christophe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select max(primary_key) taking a long time

2010-05-21 Thread Tom Lane
Francisco Reyes  writes:
> I am trying to do
> select max(primary_key) from some_table;

Are there a whole lot of nulls in that column?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is postgres installed?

2010-05-21 Thread Rob Richardson
What about searching your hard drive for pg_ctl.exe? 

RobR

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] NOTICE: ignoring incomplete trigger group for constraint

2010-05-21 Thread erobles




Hi!

I have postgres 8.3.1 running and a dump file from postgres 7.2 :-P, but
when I tried to restore the dump I got the following message
(by the way, I made the dump file using pg_dump from postgresql 8.3):


CREATE CONSTRAINT TRIGGER "valida_ent_a_sal" AFTER DELETE ON "ent_a"  
INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_noaction_del" ('valida_ent_a_sal', 'sal_d', 'ent_a', 'FULL',
'tagname_ed', 'tagname_ea');


psql:lostriggers:10: NOTICE:  ignoring incomplete trigger group for
constraint "valida_ent_a_sal" FOREIGN KEY sal_d(tagname_ed) REFERENCES
ent_a(tagname_ea)


DETAIL:  Found referenced table's DELETE trigger.


CREATE TRIGGER


and this:


CREATE CONSTRAINT TRIGGER "<unnamed>" AFTER UPDATE ON "scenes" 
FROM "scenes_sub"  INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_noaction_upd" ('<unnamed>', 'scenes_sub', 'scenes',
'UNSPECIFIED', 'scene', 'scene', 'tag', 'tag');


psql:lostriggers:965: NOTICE:  ignoring incomplete trigger group for
constraint "<unnamed>" FOREIGN KEY scenes_sub(scene,tag)
REFERENCES scenes(scene,tag)


DETAIL:  Found referenced table's UPDATE trigger.

CREATE TRIGGER



Why are some triggers unnamed?

Which is the best way to solve this? Searching on Google I found that I must
create foreign keys, but I don't know whether creating the foreign key is
enough, or whether I must create the foreign key and the constraint too.

Is there an 'automagic' way to convert a constraint into a foreign key?  ;-)


thanks!






Re: [GENERAL] How to lose transaction history (xmin values, WAL, etc.)?

2010-05-21 Thread Peter Hunsberger
On Fri, May 21, 2010 at 6:40 AM, Sam Mason  wrote:
> On Thu, May 20, 2010 at 09:33:23PM -0500, Peter Hunsberger wrote:
>> On Thu, May 20, 2010 at 8:03 PM, Richard Walker  
>> wrote:
>> > If the hacker gets root access so they can read
>> > the raw database files, they most likely also
>> > have access to the means to decrypt any
>> > encrypted data. This is particularly so if
>> > the database updates are being done by stored
>> > procedures.
>>
>> Only if they also get at the keys.
>
> It's very difficult (with a conventional OS) to remove the *entirety* of
> the server hardware and software from the TCB.  Hence you should assume
> that if PG ever sees a key it's also possible for an attacker to see the
> same key.

Fair enough. The scenario the OP was describing seemed to be a concern
with browsing the database, not with inspecting running code, but who
knows; he hasn't said what he is really trying to achieve.  There are
some hardware-level workarounds for parts of this, but if you're running
everything on the same server I doubt you're also looking at that kind
of scenario.

> The options are pretty much do all crypto away from the database server
> (either client side, or on another server that you can trust) or you
> have to trust (also in the technical sense) the database server itself
> and things become greatly simplified.

Absolutely the best idea; the OP seems to have ruled that out, however.
As such, you can only make things reasonably safe from direct
inspection.

>
>> > If encryption/decryption happens
>> > on a separate (not-also-compromised) client,
>> > then OK.  Do you know of a way to deal with
>> > this if the application is on the same computer
>> > as the database?
>>
>> Can you use an external key store?
>
> How does this help? if the database has been compromised, what would
> stop the attacker from inserting some code that records the responses
> from this "external key store"?
>

Again, I had the impression that code injection did not seem to be the
issue here.  The issue seemed to be inspection of the data in the
database after the fact.  Like I said, it would be good to know what
the real requirements are...

However, that aside, to answer your question: among other things, the
key request includes a timestamped hash of internal memory to ensure a
non-compromised server, and the key store returns functions with
embedded one-time keys to do the actual work. The keys are composite,
with portions that must match a compile-time stored hash (you can't
have a compromised server requesting the key), and at run time the
same hash must be yielded (impossible to know without a previously
compromised server), or you will get garbage.  Replay attacks won't
work since the server will check the time stamp on the original
request (which we already know can't be compromised) before yielding
up the decryption function.  Much of the key exchange process is
essentially standard Kerberos, with the client and the server
authenticating themselves to each other as usual, but you do need some
extensions to manage the extra integrity checks and to create and
manage the additional message contents.

-- 
Peter Hunsberger

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] NOTICE: ignoring incomplete trigger group for constraint

2010-05-21 Thread Tom Lane
erobles  writes:
> 

[ Please don't send all-html mail to the lists ]

> I have postgres 8.3.1 running and a dump file from postgres 7.2
> :-P, but when I tried to restore the dump I got the following
> message:

> psql:lostriggers:10: NOTICE:  ignoring incomplete trigger group for
> constraint "valida_ent_a_sal" FOREIGN KEY sal_d(tagname_ed) REFERENCES
> ent_a(tagname_ea)

Yeah, this is known to happen in some cases where there was a broken
(incompletely enforced) foreign key constraint in your old database.
The odds are good that what you should do is nothing at all, because
you probably didn't even realize you still had the FK constraint in the
old database: the most common error cases weren't enforced.  It's likely
that if you try to add the FK constraint now, you'll find it fails
because the data doesn't even satisfy the constraint.  So you could
just leave things alone and the new database will behave approximately
like the old one did.  But if you really want to add the FK constraint
back in, ALTER TABLE ADD FOREIGN KEY is the way.
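
A sketch of what that could look like here, using the table and column names 
from the NOTICE quoted above; as noted, it will only succeed if the existing 
data actually satisfies the constraint:

ALTER TABLE sal_d
  ADD CONSTRAINT valida_ent_a_sal
  FOREIGN KEY (tagname_ed) REFERENCES ent_a (tagname_ea);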

BTW, the known cases for this are follow-on damage from a bug in 7.0
pg_dump.  Does the ancestry of this database go back that far?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] cleaning wal files from postgres

2010-05-21 Thread erobles




Which is the right procedure to clean WAL files
after a recovery?





[GENERAL] getting all constraint violations

2010-05-21 Thread Gauthier, Dave
Hi:

I have a table with many constraints.  A user tries to insert a record that 
violates many of them.  The error message I get back lists the first violation. 
How can I (or can I) get them all?

I'm running 8.3.4 on linux, running through perl/DBI, getting the error message 
from $dbh->errstr.

Thanks in Advance for any help :-)


Re: [GENERAL] getting all constraint violations

2010-05-21 Thread A. Kretschmer
In response to Gauthier, Dave :
> Hi:
> 
>  
> 
> I have a table with many constraints.  A user tries to insert a record that
> violates many of them.  The error message I get back lists the first 
> violation.
> How can I (or can I) get them all?

I think that isn't possible: the first violation throws the error and the
transaction is stopped.


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting all constraint violations

2010-05-21 Thread Merlin Moncure
On Fri, May 21, 2010 at 1:27 PM, Gauthier, Dave  wrote:
> Hi:
>
>
>
> I have a table with many constraints.  A user tries to insert a record that
> violates many of them.  The error message I get back lists the first
> violation. How can I (or can I) get them all?

The database stops processing after the first error.  The only way to
generate a list is to check it on the client first, or do something
like make a 'before' trigger to pre-scan.
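
A minimal sketch of that pre-scan idea, with hypothetical table, column and 
rule names (the tests inside the function would have to mirror whatever 
constraints the real table carries):

CREATE OR REPLACE FUNCTION report_all_violations() RETURNS trigger AS $$
DECLARE
    problems text[] := '{}';
BEGIN
    -- restate each constraint as an explicit test and collect every
    -- failure instead of stopping at the first one
    IF NEW.qty < 0 THEN
        problems := problems || 'qty must be >= 0';
    END IF;
    IF NEW.price IS NULL THEN
        problems := problems || 'price must not be null';
    END IF;
    IF problems <> '{}' THEN
        RAISE EXCEPTION 'violations: %', array_to_string(problems, '; ');
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER prescan_orders BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE report_all_violations();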

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting all constraint violations

2010-05-21 Thread Tom Lane
"Gauthier, Dave"  writes:
> I have a table with many constraints.  A user tries to insert a record that 
> violates many of them.  The error message I get back lists the first 
> violation. How can I (or can I) get them all?

You can't, it stops running the command at the first error.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select max(primary_key) taking a long time

2010-05-21 Thread Francisco Reyes

Tom Lane writes:
> Francisco Reyes writes:
>> I am trying to do
>> select max(primary_key) from some_table;
>
> Are there a whole lot of nulls in that column?

Zero nulls. It is a primary key.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select max(primary_key) taking a long time

2010-05-21 Thread Tom Lane
Francisco Reyes  writes:
> Tom Lane writes:
>> Francisco Reyes  writes:
>>> I am trying to do
>>> select max(primary_key) from some_table;

>> Are there a whole lot of nulls in that column?

> Zero nulls. It is a primary key.

Huh.  The proposed plan should have run in basically zero time then.
You might try strace'ing the backend or attaching to it with gdb to
see what it's doing.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting all constraint violations

2010-05-21 Thread Gauthier, Dave
Is there a way to temporarily suspend constraint checking for a particular 
constraint inside of the transaction, try the insert again, capture the next 
violation, then the next, etc... then rollback after all have been collected?

-dave 

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Friday, May 21, 2010 1:36 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] getting all constraint violations 

"Gauthier, Dave"  writes:
> I have a table with many constraints.  A user tries to insert a record that 
> violates many of them.  The error message I get back lists the first 
> violation. How can I (or can I) get them all?

You can't, it stops running the command at the first error.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is postgres installed?

2010-05-21 Thread Raymond O'Donnell
On 21/05/2010 14:43, christophe.an...@elsys-design.com wrote:

> How I know whether Postgres is already installed or not on a machine(on
> Linux and Windows)?
> I found that pg_ctl --version could be used or I also tried to check into
> the registry (for windows HKLM\SOFTWARE\PostgreSQL\Installations), however
> with Windows Server 2003 pg_ctl is not recognized (probably not in the
> path) and the registries are not written (nothing appears under
> HKLM\SOFTWARE\)

On Windows, you could just have a look in c:\program files\postgresql.
There are usually some items in the Start Menu also.

On Linux, you'll see if it's actually running by doing

  ps ax | grep postgres

or the like... If it's not running, where you'd look for the files
varies from one distro to another, but using "find" should throw up some
of them.

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting all constraint violations

2010-05-21 Thread Richard Broersma
On Fri, May 21, 2010 at 11:46 AM, Gauthier, Dave
 wrote:
> Is there a way to temporarily suspend constraint checking for a particular 
> constraint inside of the transaction, try the insert again, capture the next 
> violation, then the next, etc... then rollback after all have been collected?

If the constraint has the ability to be defined as deferrable, you can do this.

From the online PG documentation:

http://www.postgresql.org/docs/8.4/interactive/sql-createtable.html

[ CONSTRAINT constraint_name ]
{ NOT NULL |
  NULL |
  UNIQUE index_parameters |
  PRIMARY KEY index_parameters |
  CHECK ( expression ) |
  REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL |
MATCH SIMPLE ]
[ ON DELETE action ] [ ON UPDATE action ] }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]


DEFERRABLE
NOT DEFERRABLE

This controls whether the constraint can be deferred. A constraint
that is not deferrable will be checked immediately after every
command. Checking of constraints that are deferrable can be postponed
until the end of the transaction (using the SET CONSTRAINTS command).
NOT DEFERRABLE is the default. Only foreign key constraints currently
accept this clause. All other constraint types are not deferrable.
INITIALLY IMMEDIATE
INITIALLY DEFERRED

If a constraint is deferrable, this clause specifies the default
time to check the constraint. If the constraint is INITIALLY
IMMEDIATE, it is checked after each statement. This is the default. If
the constraint is INITIALLY DEFERRED, it is checked only at the end of
the transaction. The constraint check time can be altered with the SET
CONSTRAINTS command.



IIRC,  there is an exception to the deferrable rule.  I believe that
constraint triggers can also be made to be deferrable.

http://www.postgresql.org/docs/8.4/interactive/sql-createconstraint.html
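
As an illustration of the deferrable case (table, column and constraint names 
here are made up; per the docs quoted above, only foreign key constraints 
accept DEFERRABLE):

ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS orders_customer_fk DEFERRED;
-- inserts here are only checked against the FK when the transaction ends
INSERT INTO orders (id, customer_id) VALUES (1, 42);
COMMIT;   -- or ROLLBACK once the violations have been collected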

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting all constraint violations

2010-05-21 Thread Tom Lane
"Gauthier, Dave"  writes:
> Is there a way to temporarily suspend constraint checking for a particular 
> constraint inside of the transaction, try the insert again, capture the next 
> violation, then the next, etc... then rollback after all have been collected?

You could do something like

BEGIN;
ALTER TABLE DROP CONSTRAINT ...
INSERT ...
ROLLBACK;

The major deficiency of this is that the ALTER TABLE would grab
exclusive lock on the table, so this doesn't scale if you need to
have several insertions happening in parallel.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting all constraint violations

2010-05-21 Thread Gauthier, Dave
I'm thinking more along the lines of creating a bunch of temp tables, each with 
one of the constraints.  Then, in a loop, throw the record at each of these 
temp tables and collect up the violations. 

Exploring now how to get the pieces I need from the metadata tables to do this.
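
One possible starting point for that (a sketch against the system catalogs; 
"my_table" is a placeholder) is to pull each check constraint's definition out 
of pg_constraint and then test the candidate row against them one at a time:

SELECT conname,
       pg_get_constraintdef(oid) AS definition
FROM   pg_constraint
WHERE  conrelid = 'my_table'::regclass
  AND  contype  = 'c';   -- 'c' = check constraints, 'f' = foreign keys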


-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Friday, May 21, 2010 3:10 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] getting all constraint violations 

"Gauthier, Dave"  writes:
> Is there a way to temporarily suspend constraint checking for a particular 
> constraint inside of the transaction, try the insert again, capture the next 
> violation, then the next, etc... then rollback after all have been collected?

You could do something like

BEGIN;
ALTER TABLE DROP CONSTRAINT ...
INSERT ...
ROLLBACK;

The major deficiency of this is that the ALTER TABLE would grab
exclusive lock on the table, so this doesn't scale if you need to
have several insertions happening in parallel.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cleaning wal files from postgres

2010-05-21 Thread Andreas Schmitz

erobles wrote:

which is the right procedure to clean wal files after a  recovery ??

Which kind of recovery? PITR or just a crash?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cleaning wal files from postgres

2010-05-21 Thread Andreas Schmitz

erobles wrote:

which is the right procedure to clean wal files after a  recovery ??

What kind of recovery?

regards

andreas

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cleaning wal files from postgres

2010-05-21 Thread erobles


 PITR  recovery

On 05/21/2010 03:04 PM, Andreas Schmitz wrote:

erobles wrote:

which is the right procedure to clean wal files after a  recovery ??

what kind of recovery ?

regards

andreas



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cleaning wal files from postgres

2010-05-21 Thread Andreas Schmitz


Maybe I don't understand the problem. There is no need to clean WAL 
files after recovery. Where exactly is the problem?


regards

andreas


erobles wrote:


 PITR  recovery

On 05/21/2010 03:04 PM, Andreas Schmitz wrote:

erobles wrote:

which is the right procedure to clean wal files after a  recovery ??

what kind of recovery ?

regards

andreas






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general