[GENERAL] inet value validation

2006-04-16 Thread [EMAIL PROTECTED]


hello,

I want to write a function to validate an inet value, but I figure that
postgres should already have such a function to use before inserting
values into inet columns.
Is it possible to access postgres's validation function for inet types?
I have snooped through the catalogs but I cannot find such a function.



thanks,
Razvan Radu





[GENERAL] unique index on variable time

2006-04-16 Thread [EMAIL PROTECTED]


hello,

I am interested to know whether I can define a unique index on a timestamp
column that rejects values within one hour of each other.


insert into table(timestamp_col) values(LOCALTIMESTAMP);
insert into table(timestamp_col) values(LOCALTIMESTAMP + '5 minutes'::INTERVAL);

I want the second insert to fail with "unique violation".

Is it possible to do something like that? I am not interested in "date_trunc",
because I want it to also work at minute 59, i.e. across the hour boundary.


thanks,
Razvan Radu





Re: [GENERAL] unique index on variable time

2006-04-16 Thread Michael Fuhr
On Sun, Apr 16, 2006 at 07:07:11PM +0300, [EMAIL PROTECTED] wrote:
> I am interested to know if I can define an unique index on a timestamp 
> column to reject values within one hour.

Last month I posted an idea for enforcing unique constraints on
date ranges by using a composite type and a custom comparison
operator:

http://archives.postgresql.org/pgsql-sql/2006-03/msg00055.php

The warning about the code being only minimally tested still stands
but maybe it'll give you some ideas.

-- 
Michael Fuhr



Re: [GENERAL] inet value validation

2006-04-16 Thread Andreas Kretschmer
[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> 
> hello,
> 
> I want to write a function to validate an inet data type, but I figure that 
> postgres should already have such a function to use before inserting values 
> in inet type columns.
> is it possible to access postgres's validation function for inet types ? I 
> have snooped through the catalogs but I can not find such a function.

You can try casting a string to inet, like this:

test=# select '127.0.0.255'::inet;
    inet
-------------
 127.0.0.255
(1 row)

test=# select '127.0.0.256'::inet;
ERROR:  invalid input syntax for type inet: "127.0.0.256"


Now you can write a little function to do this:

,----[  a little function  ]
| create or replace function check_ip(varchar) returns bool as $$
| declare
| i inet;
| begin
| i := $1::inet;
| return 't'::bool;
| EXCEPTION WHEN invalid_text_representation then
| return 'f'::bool;
| end;
| $$ language plpgsql immutable strict;
`----

You can now use this function safely inside a transaction.
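
For example, two hypothetical ways to use it, standalone and in a CHECK
constraint (the table and column names are made up):

select check_ip('192.168.0.1');    -- t
select check_ip('192.168.0.999');  -- f

create table hosts (
    addr varchar check (check_ip(addr))
);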



HTH, Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] inet value validation

2006-04-16 Thread [EMAIL PROTECTED]


yes, this is a good example, but I do not want to use an exception block
because of the warning in the documentation regarding exception blocks:

"Tip: A block containing an EXCEPTION clause is significantly more
expensive to enter and exit than a block without one. Therefore, don't
use EXCEPTION without need."


I intend to use this function heavily.


Razvan Radu

Andreas Kretschmer wrote:

[...]






Re: [GENERAL] inet value validation

2006-04-16 Thread Andreas Kretschmer
[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> 
> yes, this is a good example, but I do not want to use an exception block 
> because of the warning present on documentation regarding exception blocks

No problem, I wanted to show a transaction-safe solution. You can use
the cast without the exception block, but if you get an error, the whole
transaction will fail.
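
If you want to avoid the EXCEPTION block entirely for heavy use, one option is
a plain pre-check that never throws. A rough sketch (not from the original
message; the name check_ip_noexc is made up, and it accepts only plain
dotted-quad IPv4, no netmask and no IPv6, unlike the inet type itself):

create or replace function check_ip_noexc(varchar) returns bool as $$
    select case
        -- accept only four dot-separated groups of 1 to 3 digits
        when $1 ~ '^([0-9]{1,3}[.]){3}[0-9]{1,3}$'
        then split_part($1, '.', 1)::int <= 255
         and split_part($1, '.', 2)::int <= 255
         and split_part($1, '.', 3)::int <= 255
         and split_part($1, '.', 4)::int <= 255
        else false
    end;
$$ language sql immutable strict;

Note that this is stricter than the inet input routine (it rejects values such
as '10.1.2.3/24'), so it is only useful as a cheap pre-filter.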

> Andreas Kretschmer wrote:
> >[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >  
> >>hello,

Argh. Please no silly fullquote below the text.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] Asking advice on speeding up a big table

2006-04-16 Thread felix-accts-pgsql
On Sat, Apr 15, 2006 at 10:31:26AM -0400, Francisco Reyes wrote:
> [EMAIL PROTECTED] writes:
> 
> >Usage is to match data from the key and val tables to fetch the data
> >value from the sid table.
> 
> What is the relation between key and val tables?
> Will key.id and val.id be equal?

This benchmark explores an idea for a simple berkeley-db-like lookup,
but faster and allowing ranges, and specialized for looking up info in
some other tables I have.  The key table data is table.column, and
1000 is a rough guess on how many unique column names there might be.
The val table is the contents of those columns, and 100K is another
rough guess.  The end result, the sid table, is a generic ID I have,
could be anything, like lat/lon, room-bookshelf-shelf-book, etc.

key.id and val.id have no bearing on each other.
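
For reference, a rough sketch of the schema, pieced together from the EXPLAIN
output in the "Query runs fast or slow" thread below (column types and the
primary key are guesses; only the table, column, and index names come from
that output):

create table key  (id integer, data text);
create table val  (id integer, data integer);
create table sid  (id integer primary key, data text);  -- sid_pkey shows up in the plans
create table glue (key integer, val integer, sid integer);

create index key_data_key on key (data);
create index val_data_key on val (data);
create index glue_val_idx on glue (val);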

I have made some minor changes and sped things up to around 15-20
lookups/sec, good enough but not exciting :-) and in the process have
come across some odd misbehavior.  I have a writeup almost ready to
post, but I want to make sure I cross my Is and dot my Ts properly.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



[GENERAL] Query runs fast or slow

2006-04-16 Thread felix-accts-pgsql
I have a benchmark test which runs a query very slowly under certain
circumstances.  I used Ethereal to capture the packet traffic, and
also enabled debug5 logging out of curiosity.  While the slow query
is in progress, there is no log or packet activity, but the cpu is
busy.  These packets are below; look for SLOW PACKETS HERE to skip all
this explanatory drudge.

This WHERE clause is fast under all conditions:

... AND (val.data = $2) AND ...

This WHERE clause is fast as a simple query, but is excruciatingly
slow as prepare / execute / fetch:

... AND (val.data > $2 AND val.data < $3) AND ...

My test program is in Perl and uses DBI/DBD::Pg.  Postgresql version
is 8.0.3 on a dual core dual opteron with 2G of RAM.  DBI is version
1.48.  DBD::Pg is version 1.42.  The OS is rPath Linux 2.6.15.

The test runs each SQL statement three times, first as a simple query
to preload caches, then as prepare / execute / fetch, and lastly as a
simple query again.

$sth = $dbh->prepare(sql_with_placeholders);
$dbh->selectall_arrayref(sql_with_values_substituted);
$sth->execute(@values);
$sth->fetchall_arrayref();
$dbh->selectall_arrayref(sql_with_values_substituted);

I captured packet traffic and tailed the log while these were running.

Everything is fine except one query, which took 75 seconds to run,
when the others took 3 milliseconds.  During this 75 seconds, the
postmaster log showed no activity, but top showed the postmaster quite
busy.

75 seconds!  That's an eternity.  I can't imagine any circumstances
where it makes sense.  EXPLAIN ANALYZE doesn't show the slow timing
because it requires values, not $n placeholders, and it is the prepare
/ execute operation which is so slow.  I will be glad to send the log,
the packet capture file, the test program itself, and anything else
which helps.  Here are the EXPLAIN statements in case it helps.

EXPLAIN for the equality WHERE clause:

felix=> explain analyze SELECT sid.data, glue.key, glue.val, glue.sid
        FROM key, val, sid, glue
        WHERE (key.data = 'x6ATArB_k1cgLp1mD5x2nzVVf2DQw4Lw1-Ow5NCzzs5Pupg6K' AND key.id = glue.key)
          AND (val.data = 357354306) AND val.id = glue.val AND glue.sid = sid.id;

                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=5.82..1119.29 rows=1 width=60) (actual time=2.271..36.184 rows=1 loops=1)
   ->  Hash Join  (cost=5.82..1116.27 rows=1 width=16) (actual time=2.079..35.976 rows=1 loops=1)
         Hash Cond: ("outer"."key" = "inner".id)
         ->  Nested Loop  (cost=0.00..1105.43 rows=1001 width=16) (actual time=0.315..31.820 rows=1000 loops=1)
               ->  Index Scan using val_data_key on val  (cost=0.00..6.01 rows=1 width=4) (actual time=0.119..0.123 rows=1 loops=1)
                     Index Cond: (data = 357354306)
               ->  Index Scan using glue_val_idx on glue  (cost=0.00..702.58 rows=31747 width=16) (actual time=0.181..24.438 rows=1000 loops=1)
                     Index Cond: ("outer".id = glue.val)
         ->  Hash  (cost=5.82..5.82 rows=1 width=4) (actual time=0.292..0.292 rows=0 loops=1)
               ->  Index Scan using key_data_key on "key"  (cost=0.00..5.82 rows=1 width=4) (actual time=0.266..0.271 rows=1 loops=1)
                     Index Cond: (data = 'x6ATArB_k1cgLp1mD5x2nzVVf2DQw4Lw1-Ow5NCzzs5Pupg6K'::text)
   ->  Index Scan using sid_pkey on sid  (cost=0.00..3.01 rows=1 width=52) (actual time=0.179..0.183 rows=1 loops=1)
         Index Cond: ("outer".sid = sid.id)
 Total runtime: 37.880 ms
(14 rows)

EXPLAIN for the range WHERE clause:

felix=> explain analyze SELECT sid.data, glue.key, glue.val, glue.sid
        FROM key, val, sid, glue
        WHERE (key.data = 'kOSkZ5iN6sz-KqGo51aTwqZnvCKQRUH2SZ8k' AND key.id = glue.key)
          AND (val.data > 183722006 AND val.data < 183722206)
          AND val.id = glue.val AND glue.sid = sid.id;

                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=5.82..1119.30 rows=1 width=60) (actual time=15.016..15.525 rows=1 loops=1)
   ->  Hash Join  (cost=5.82..1116.27 rows=1 width=16) (actual time=14.879..15.374 rows=1 loops=1)
         Hash Cond: ("outer"."key" = "inner".id)
         ->  Nested Loop  (cost=0.00..1105.43 rows=1001 width=16) (actual time=0.211..11.666 rows=1000 loops=1)
               ->  Index Scan using val_data_key on val  (cost=0.00..6.01 rows=1 width=4) (actual time=0.071..0.076 rows=1 loops=1)
                     Index Cond:

Re: [GENERAL] Query runs fast or slow

2006-04-16 Thread Tom Lane
[EMAIL PROTECTED] writes:
> EXPLAIN ANALYZE doesn't show the slow timing
> because it requires values, not $n placeholders,

To analyze the plan used for a parameterized query, try

PREPARE foo(...) AS SELECT ... $n ...

EXPLAIN ANALYZE EXECUTE foo(...)
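
For instance, plugging in the range query from the earlier post (the statement
name and the parameter types here are guesses):

PREPARE rangeq(text, integer, integer) AS
    SELECT sid.data, glue.key, glue.val, glue.sid
    FROM key, val, sid, glue
    WHERE key.data = $1 AND key.id = glue.key
      AND val.data > $2 AND val.data < $3
      AND val.id = glue.val AND glue.sid = sid.id;

EXPLAIN ANALYZE EXECUTE rangeq('kOSkZ5iN6sz-KqGo51aTwqZnvCKQRUH2SZ8k',
                               183722006, 183722206);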

But I already know what you're going to find: the planner's estimates
for the range query are not going to be very good when it has no idea
what the range bounds are.  This is a situation where it may be best
to absorb the hit of re-planning each time instead of using a generic
parameterized plan.

regards, tom lane



Re: [GENERAL] unique index on variable time

2006-04-16 Thread Tom Lane
Michael Fuhr <[EMAIL PROTECTED]> writes:
> On Sun, Apr 16, 2006 at 07:07:11PM +0300, [EMAIL PROTECTED] wrote:
>> I am interested to know if I can define an unique index on a timestamp 
>> column to reject values within one hour.

Perhaps

	CREATE UNIQUE INDEX foo ON tab (date_trunc('hour', col));

It's not clear whether this does exactly what you wished for.
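
For illustration, a minimal sketch of how that behaves (reusing the names
above; the sample timestamps are made up):

CREATE TABLE tab (col timestamp);
CREATE UNIQUE INDEX foo ON tab (date_trunc('hour', col));

INSERT INTO tab VALUES ('2006-04-16 10:59');  -- succeeds
INSERT INTO tab VALUES ('2006-04-16 11:01');  -- also succeeds: different truncated
                                              -- hour, although only 2 minutes later
INSERT INTO tab VALUES ('2006-04-16 11:55');  -- fails with a unique violation:
                                              -- same truncated hour as 11:01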

> Last month I posted an idea for enforcing unique constraints on
> date ranges by using a composite type and a custom comparison
> operator:
> http://archives.postgresql.org/pgsql-sql/2006-03/msg00055.php
> The warning about the code being only minimally tested still stands
> but maybe it'll give you some ideas.

I don't think that actually works; the problem is that its idea of
"equality" is not transitive, which is going to confuse btree terribly.

regards, tom lane



Re: [GENERAL] Query runs fast or slow

2006-04-16 Thread felix
On Sun, Apr 16, 2006 at 04:32:25PM -0400, Tom Lane wrote:

> To analyze the plan used for a parameterized query, try
> 
>   PREPARE foo(...) AS SELECT ... $n ...
> 
>   EXPLAIN ANALYZE EXECUTE foo(...)
> 
> But I already know what you're going to find: the planner's estimates
> for the range query are not going to be very good when it has no idea
> what the range bounds are.  This is a situation where it may be best
> to absorb the hit of re-planning each time instead of using a generic
> parameterized plan.

I will try this Monday, but isn't 75 seconds an awfully long time?  It
almost seems like even the worst plan could find records faster than
that, and if it were actually scanning everything sequentially, there
would be a fair amount of variation, say 25 seconds, 50 seconds, 100
seconds.  The most I have seen is a range of, say, 75-77.  That just
seems way too slow.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



[GENERAL] ERROR: could not access status of transaction

2006-04-16 Thread Thomas F. O'Connell

I recently saw this in some postgres logs:

ERROR:  could not access status of transaction ...
DETAIL:  could not open file "pg_clog/...": No such file or directory
STATEMENT:  COPY ...

The copy command was issued amongst a number of other queries from an  
application layer at the same time as a pg_dumpall was running.


Based on this:

http://archives.postgresql.org/pgsql-committers/2006-01/msg00287.php

I would not have thought this cluster was vulnerable. It's running  
8.1.3. It makes me a little nervous that the nightly backup was  
interrupted, but it makes me a little more nervous that it seems  
indicative of an abstruse error condition. Any thoughts from the  
hackers?


--
Thomas F. O'Connell
Database Architecture and Programming
Sitening, LLC

http://www.sitening.com/
3004 B Poston Avenue
Nashville, TN 37203-1314
615-260-0005 (cell)
615-469-5150 (office)
615-469-5151 (fax)




[GENERAL] hard shutdown of system

2006-04-16 Thread surabhi.ahuja
The user does kill -9 -1 and logs in again.

In the startup script I do the following:

    /sbin/pidof -s postmaster

and it still displays some value; however, ps -aef | grep postmaster does not
display anything.

Is it ok if I do the following?

    pid1=`/sbin/pidof -s postmaster`
    pid2=`ps -eaf | grep postmaster | grep -v grep | tail -1 | awk '{print $2}'`

    if ($pid1 and $pid2)
    => postmaster is already running

otherwise:

I check whether postmaster.pid exists; if it does, I delete it, and then start
postmaster by doing:

    $PGCTL -l $POSTGRES_LOG -D $PGDATA -p $POSTMASTER -o '-p ${PGPORT}' start > /dev/null 2>&1

Is that ok?

thanks,
regards
Surabhi