Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Martijn van Oosterhout
On Fri, Aug 17, 2007 at 03:18:30PM +0930, Tyson Lloyd Thwaites wrote:
 It seems like something that would be fairly easy to change... I don't 
 know. I read someone talking about putting automatic savepoints on 
 every statement that goes through the JDBC driver to get around this 
 issue. Perhaps this is the only way... unless pg could be put in a mode 
 where it did this for you... like autocommit, but auto-savepoint.

FWIW, this is exactly how the psql client handles ON_ERROR_ROLLBACK.
There is resistance to building it into the protocol itself since the
autocommit fiasco. The backward compatibility issues are daunting and,
as pointed out, it can be handled adequately inside the driver
(jdbc/odbc/etc).

(That's not to say it couldn't be made easier for the clients also. For
example, as part of the query request, have a flag that says: start a
savepoint prior to execution.)
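The savepoint-per-statement idea looks roughly like this. A minimal Python sketch, using SQLite purely as a stand-in because it shares PostgreSQL's SAVEPOINT / ROLLBACK TO / RELEASE syntax and needs no server; the function and table names are invented for the example:

```python
import sqlite3

def execute_with_savepoint(conn, sql):
    """Run one statement inside a savepoint; on error, roll back only
    that statement so the surrounding transaction can continue."""
    conn.execute("SAVEPOINT stmt")
    try:
        conn.execute(sql)
        conn.execute("RELEASE SAVEPOINT stmt")
    except sqlite3.Error:
        conn.execute("ROLLBACK TO SAVEPOINT stmt")
        conn.execute("RELEASE SAVEPOINT stmt")
        raise  # the caller may swallow this, ON_ERROR_ROLLBACK-style

# isolation_level=None: the sqlite3 module stays out of transaction control
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
execute_with_savepoint(conn, "INSERT INTO t VALUES (1)")
try:
    execute_with_savepoint(conn, "INSERT INTO t VALUES (1)")  # duplicate key fails
except sqlite3.IntegrityError:
    pass  # the failed statement is rolled back; the transaction survives
conn.execute("COMMIT")

survivors = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(survivors)  # prints 1
```

The good row commits even though a later statement in the same transaction failed, which is the behaviour the driver would be emulating.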

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Tyson Lloyd Thwaites



The auditing is an interesting question, to which I do not have an
answer.  Seems impossible to do in a transaction, by definition (ie
not product specific).  Thoughts?


We do this with MSSQL. I have never given it a second thought until now. 
If anything goes wrong we send an audit event, usually in a catch block, 
all in the same transaction. If we have caught the exception, Spring 
will commit, and the 'good' statements will stick.


The best example of this is if we can't delete a record, we will 'mark' 
it instead, for example:


try {
 delete record
} catch (DataIntegrityViolationException e) {
 mark record deleted/invisible
}

These deleted records are then cleaned up at a later stage. This works 
fine for MSSQL; I must admit we are not actively maintaining our Oracle 
support.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Tyson Lloyd Thwaites

Martijn van Oosterhout wrote:

On Fri, Aug 17, 2007 at 03:18:30PM +0930, Tyson Lloyd Thwaites wrote:
 It seems like something that would be fairly easy to change... I don't
 know. I read someone talking about putting automatic savepoints on
 every statement that goes through the JDBC driver to get around this
 issue. Perhaps this is the only way... unless pg could be put in a mode
 where it did this for you... like autocommit, but auto-savepoint.

FWIW, this is exactly how the psql client handles ON_ERROR_ROLLBACK.
There is resistance to building it into the protocol itself since the
autocommit fiasco. The backward compatibility issues are daunting and,
as pointed out, it can be handled adequately inside the driver
(jdbc/odbc/etc).

(That's not to say it couldn't be made easier for the clients also. For
example, as part of the query request, have a flag that says: start a
savepoint prior to execution.)

Have a nice day,

I am not familiar with the autocommit fiasco, but I can use my 
imagination... :)


You mention it can be built into the driver - is this a 'could be' or 
'has been', ie is this a mainstream practice, or would such a thing be 
considered risky? I would think it would be wonderful to allow users to 
choose to override this behaviour - and suffer the consequences if 
necessary! It's like playing with the fsync setting... turn it off at 
your own risk.


--
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd

M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au




Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Tyson Lloyd Thwaites
Our app uses system state. We scan filesystems and record file 
information in a database.


Here is one example:

txn started by Spring in web container
- insert 250 files
- update some stats (MUST work even if insert fails)
- update agent last-contact time (also must work so we know it's not awol)
txn committed by Spring in web container

When last insert has finished, server will kick into summary mode:

txn started by Spring in web container
- wait for asynchronous inserts to finish
- summarise data
- do some misc operations
- switch back to 'Idle' state
txn committed by Spring in web container

That last step is where the trick is: if anything goes wrong with the 
previous operations, we MUST be able to put the system back into an 
'Idle' state. Currently we do this in a catch block, ie:


try {
do the summary stuff
} catch {
switch back to idle
}

Now of course that won't work in pg. We use the same logic all through 
the system, so you can see my problem. For example, if the user deletes 
a drive record that still has referential links to it, we do this:


try {
delete drive
} catch (ReferentialIntegrityException e) {
mark drive as deleted so it becomes invisible, is deleted later
}

We rely on the fact that we can still do things and commit a transaction 
even if a single statement has failed.
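Under PostgreSQL's semantics that mark-instead-of-delete fallback needs a savepoint around the failing delete. A hedged sketch in Python, with SQLite standing in for PostgreSQL (same SAVEPOINT syntax; table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs
conn.executescript("""
CREATE TABLE drive (id INTEGER PRIMARY KEY, deleted INTEGER NOT NULL DEFAULT 0);
CREATE TABLE file  (id INTEGER PRIMARY KEY,
                    drive_id INTEGER NOT NULL REFERENCES drive(id));
INSERT INTO drive (id) VALUES (1);
INSERT INTO file (id, drive_id) VALUES (10, 1);
""")

conn.execute("BEGIN")
conn.execute("SAVEPOINT before_delete")
try:
    # Fails: a file row still references this drive
    conn.execute("DELETE FROM drive WHERE id = 1")
    conn.execute("RELEASE SAVEPOINT before_delete")
except sqlite3.IntegrityError:
    # Undo only the failed delete, then mark instead; the txn stays usable
    conn.execute("ROLLBACK TO SAVEPOINT before_delete")
    conn.execute("UPDATE drive SET deleted = 1 WHERE id = 1")
conn.execute("COMMIT")

marked = conn.execute("SELECT deleted FROM drive WHERE id = 1").fetchone()[0]
print(marked)  # prints 1
```

The transaction still commits: only the statement inside the savepoint was rolled back, which is the behaviour the catch block relies on.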


The above drive delete case is where I first noticed the problem when 
switching from MSSQL: instead of the server handling the exception and 
doing something else instead, the 'something else' also threw an 
exception (txn is aborted), which propagated to the client end.


UPDATE: Webb has proposed that this behaviour may be MSSQL misbehaving.

--
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd

M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Martijn van Oosterhout
On Fri, Aug 17, 2007 at 04:10:24PM +0930, Tyson Lloyd Thwaites wrote:
 I am not familiar with the autocommit fiasco, but I can use my 
 imagination... :)

The changed transaction semantics caused much havoc with libraries and
drivers, because client programs could change the setting and the drivers
no longer had any idea when their queries would commit.

 You mention it can be built into the driver - is this a 'could be' or 
 'has been', ie is this a mainstream practice, or would such a thing be 
 considered risky? I would think it would be wonderful to allow users to 
 choose to override this behaviour - and suffer the consequences if 
 necessary! It's like playing with the fsync setting... turn it off at 
 your own risk.

Looking at the archives it's definitely been discussed, but it looks like
it's waiting for someone to implement it... (Odd, I wouldn't have
thought it was so difficult.)

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Tyson Lloyd Thwaites
It looks like it would be best if we re-worked our transactions and 
controlled them manually for the portions that need it. It looks like we 
have inadvertently been relying on a nasty 'quirk' ;) in MSSQL. I would 
rather not go down the path of doing workarounds to make pgsql work like 
mssql.


It is still a possible point of confusion, but I am starting to think 
that pgsql has it right, and mssql has it wrong. (I feel betrayed!) This 
issue probably deserves a prominent place in the FAQ!


Thanks for your help everyone.

Our app uses system state. We scan filesystems and record file 
information in a database.


Here is one example:

txn started by Spring in web container
- insert 250 files
- update some stats (MUST work even if insert fails)
- update agent last-contact time (also must work so we know it's not 
awol)

txn committed by Spring in web container

When last insert has finished, server will kick into summary mode:

txn started by Spring in web container
- wait for asynchronous inserts to finish
- summarise data
- do some misc operations
- switch back to 'Idle' state
txn committed by Spring in web container

That last step is where the trick is: if anything goes wrong with the 
previous operations, we MUST be able to put the system back into an 
'Idle' state. Currently we do this in a catch block, ie:


try {
do the summary stuff
} catch {
switch back to idle
}

Now of course that won't work in pg. We use the same logic all through 
the system, so you can see my problem. For example, if the user 
deletes a drive record that still has referential links to it, we do 
this:


try {
delete drive
} catch (ReferentialIntegrityException e) {
mark drive as deleted so it becomes invisible, is deleted later
}

We rely on the fact that we can still do things and commit a 
transaction even if a single statement has failed.


The above drive delete case is where I first noticed the problem when 
switching from MSSQL: instead of the server handling the exception and 
doing something else instead, the 'something else' also threw an 
exception (txn is aborted), which propagated to the client end.


UPDATE: Webb has proposed that this behaviour may be MSSQL misbehaving.




--
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd

M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Procedural Code Profiling

2007-08-17 Thread Pavel Stehule
2007/8/6, Lewis Cunningham [EMAIL PROTECTED]:
 Hi all,

 What is the best tool for an app to profile procedural code in
 postgres?  I want to instrument my code and trace it so that I can
 see which code bits are sucking up the cpu and time.  I know I can
 stick messages in the code and gather my own stats but I was
 wondering is there are any good tools to automate this.

 Specifically working with pl/pgSQL and 8.2 (on Linux or windows) if
 that helps.

 Thanks,

 LewisC


pgfouine is the best

Regards
Pavel Stehule

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [BUGS] some information

2007-08-17 Thread Pavel Stehule
Hello

it isn't a bug!

PostgreSQL's driver for Perl:
http://search.cpan.org/~dbdpg/DBD-Pg-1.49/Pg.pm

Look at postgresql.conf (the port setting) and pg_hba.conf (to enable access).

Regards
Pavel Stehule

2007/8/17, rakesh kumar [EMAIL PROTECTED]:

 How do I connect to a postgresql database with perl?

 Our server ip is 192.168.0.1,
 server name=abcd.
 We are using Slackware 10.2, kernel 2.6.17.
 Which postgresql version does it support?

 rakesh


  





[GENERAL] Vacuum problem

2007-08-17 Thread Kari Lavikka

Hello!

Our 300GB database requires a database-wide vacuum about once a week to 
prevent transaction ID wraparound. The database is heavily updated and 
sometimes vacuuming takes over 12 hours.


We have a major problem with a table that tracks online users 
(user_online). A row is inserted when a user logs in, and its 'seen' stamp 
is updated every five minutes if he/she loads any pages. A periodic cron 
job deletes all idle users - rows whose 'seen' stamp is older than 20 
minutes. Frequent vacuuming is required to remove old row versions. We 
usually have about 20k users online.


However, the database-wide vacuum prevents the user_online vacuum from 
removing old row versions, and the table gets incredibly bloated. Queries 
involving user_online practically cease to work.


Any suggestions for this problem?

Btw, we have PostgreSQL 8.1.


|\__/|
( oo )Kari Lavikka - [EMAIL PROTECTED] - (050) 380 3808
__ooO(  )Ooo___ _ ___ _ _  _   __  _  _
  



Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Gregory Stark
Tyson Lloyd Thwaites [EMAIL PROTECTED] writes:

 Gregory Stark wrote:

  Tyson Lloyd Thwaites [EMAIL PROTECTED] writes:

   Normally if we catch the exception, other dbs (Oracle, MSSQL) will let us
   keep going.

  How do you catch exceptions in these other dbs?

 plain java try/catch. In other dbs, if I am in a txn, and I run 3 statements,
 and one of them fails, Spring's jdbc wrapper will throw an exception up to the
 wrapping template, which will roll the txn back. However if I wrap one of those
 statements in a try/catch(RuntimeException), then the exception will not
 propagate, but the db will not automatically fry the whole transaction either.

The way to do it in Postgres is with the SAVEPOINT command.

 Yeah, but that is totally unrealistic in our case. I can't go sprinkling "if
 current db is pgsql then savepoint here" code all through the app. It's a bit
 annoying that pg insists on being so different to others in this respect,
 especially since the spec does not mandate it, and appears even to say that 
 the
 transaction should be allowed to continue. (I read this in another pg thread, 
 I
 will have to find the argument there, it was a good one). I wish it could be
 turned off - it seems a bit draconian and heavy-handed to me.

Well you could suggest to your driver that it should wrap your queries in a
savepoint and throw a java exception in case it gets an error.

For example, if something goes wrong, I can't even write an event row to our
auditing table!

This is actually a somewhat more complex example than handling an expected
error. For audit records you really want to be able to commit the audit record
independently of the rest of the transaction. These are called autonomous
transactions and there's no built-in support for them in Postgres but you can
put together something equivalent using dblink or a second client connection.
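The second-connection approach can be sketched like this in Python, with SQLite standing in for two PostgreSQL sessions (file and table names are invented; note SQLite allows only one writer at a time, so this sketch rolls the main transaction back before writing the audit row, whereas PostgreSQL would let the two connections interleave freely):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database: "main" for the real work,
# "audit" so audit rows commit independently of the main transaction.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
main = sqlite3.connect(path, isolation_level=None)
audit = sqlite3.connect(path, isolation_level=None)
main.execute("CREATE TABLE work (id INTEGER)")
main.execute("CREATE TABLE audit_log (msg TEXT)")

main.execute("BEGIN")
try:
    main.execute("INSERT INTO work VALUES (1)")
    raise RuntimeError("something broke")  # simulate a mid-transaction failure
except RuntimeError:
    main.execute("ROLLBACK")  # the work row is discarded...
    # ...but the audit row autocommits on its own connection and sticks
    audit.execute("INSERT INTO audit_log VALUES ('hey, it broke')")

work_rows = main.execute("SELECT count(*) FROM work").fetchone()[0]
audit_rows = main.execute("SELECT count(*) FROM audit_log").fetchone()[0]
print(work_rows, audit_rows)  # prints 0 1
```

The audit record survives the rollback precisely because it was committed by a different session, which is the "poor man's autonomous transaction" being described.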
  

 it is more like this:

 try {
  do something
 } catch (Exception e) {
  insert into audit table: hey, it broke
 }

Well that doesn't work if you have more statements to execute within the same
transaction. You would still want the audit record to be committed even if you
subsequently roll back the transaction.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Tyson Lloyd Thwaites

Gregory Stark wrote:

 Tyson Lloyd Thwaites [EMAIL PROTECTED] writes:

  Gregory Stark wrote:

   Tyson Lloyd Thwaites [EMAIL PROTECTED] writes:

    Normally if we catch the exception, other dbs (Oracle, MSSQL) will let us
    keep going.

   How do you catch exceptions in these other dbs?

  plain java try/catch. In other dbs, if I am in a txn, and I run 3 statements,
  and one of them fails, Spring's jdbc wrapper will throw an exception up to the
  wrapping template, which will roll the txn back. However if I wrap one of those
  statements in a try/catch(RuntimeException), then the exception will not
  propagate, but the db will not automatically fry the whole transaction either.

 The way to do it in Postgres is with the SAVEPOINT command.

  Yeah, but that is totally unrealistic in our case. I can't go sprinkling "if
  current db is pgsql then savepoint here" code all through the app. It's a bit
  annoying that pg insists on being so different to others in this respect,
  especially since the spec does not mandate it, and appears even to say that the
  transaction should be allowed to continue. (I read this in another pg thread, I
  will have to find the argument there, it was a good one). I wish it could be
  turned off - it seems a bit draconian and heavy-handed to me.

 Well you could suggest to your driver that it should wrap your queries in a
 savepoint and throw a java exception in case it gets an error.

I am not sure how one would go about suggesting such a thing to the 
driver? That sounds good though.

  For example, if something goes wrong, I can't even write an event row to our
  auditing table!

 This is actually a somewhat more complex example than handling an expected
 error. For audit records you really want to be able to commit the audit record
 independently of the rest of the transaction. These are called autonomous
 transactions and there's no built-in support for them in Postgres but you can
 put together something equivalent using dblink or a second client connection.

  it is more like this:

  try {
   do something
  } catch (Exception e) {
   insert into audit table: hey, it broke
  }

 Well that doesn't work if you have more statements to execute within the same
 transaction. You would still want the audit record to be committed even if you
 subsequently roll back the transaction.

Exactly - it won't work for pgsql but it WILL work for MSSQL (although 
it probably shouldn't). This is my problem exactly.



--
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd

M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au




Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Tyson Lloyd Thwaites

You are right, it is a Java webapp.

I could post code, but the actual statements I am running are just plain 
sql (wrapped in wrappers of wrapped wrappers...) which are run in a DAO 
object in the third layer of the app. I would have to post reams of 
code, which would break my non-disclosure agreement unfortunately. The 
real issue is the fact that Spring intercepts certain requests and wraps 
them in a transaction (a big one around many smaller statements), but I 
want the luxury of allowing some of those statements to fail, but keep 
going with the 'grand picture' transaction, if you catch my drift. To 
post any useful code I would probably have to code up a simple JDBC test 
case that demonstrates the problem.


Here is a real world example, though: after we have filled up a certain 
table with records, we then create all the indexes on it. The problem is 
that because of the design, there is no guarantee that this has not 
already happened. So in this case, I would like to be able to say "try 
creating indexes on the table, and if you get an error because they are 
already there, continue on". I think this can only be done using 
savepoints.
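That "create the indexes, tolerate the already-exists error" step maps directly onto a savepoint. A hedged Python sketch, with SQLite standing in for PostgreSQL (same SAVEPOINT syntax; index, table, and function names are invented):

```python
import sqlite3

def ensure_index(conn):
    """Create the index inside a savepoint; if it already exists,
    roll back just this statement and carry on with the transaction."""
    conn.execute("SAVEPOINT mk_index")
    try:
        conn.execute("CREATE INDEX records_id_idx ON records (id)")
        conn.execute("RELEASE SAVEPOINT mk_index")
        return True   # created it this time
    except sqlite3.OperationalError:  # "index ... already exists"
        conn.execute("ROLLBACK TO SAVEPOINT mk_index")
        conn.execute("RELEASE SAVEPOINT mk_index")
        return False  # already there; the transaction is still usable

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE records (id INTEGER)")

conn.execute("BEGIN")
first = ensure_index(conn)
second = ensure_index(conn)  # the duplicate-index error is absorbed
conn.execute("INSERT INTO records VALUES (1)")  # txn still works afterwards
conn.execute("COMMIT")

rows = conn.execute("SELECT count(*) FROM records").fetchone()[0]
print(first, second, rows)  # prints True False 1
```

The second attempt fails inside its savepoint, but the enclosing transaction keeps going and commits, so the step is safe to run whether or not the indexes already exist.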


I am not opposed to introducing savepoints to our API, but it would be 
nicer if I didn't have to. At the moment I have resigned myself to 
turning off Spring declarative txns for certain methods, and handling 
them manually by doing multiple txn blocks. In the above example, 
however, the bit that I want to allow to fail is inside a method that 
would have to be wrapped in a transaction... see the web of 
complexity that is growing?



  It is still a possible point of confusion, but I am starting to think
  that pgsql has it right, and mssql has it wrong. (I feel betrayed!) This
  issue probably deserves a prominent place in the FAQ!

 Betrayed? yes.  Surprised?  I hope not :)

 I think your driver (JDBC? or?) is doing autocommit (though I am not
 sure), and in the discussion we are confounding rollbacks
 (in the database) with exception handling (in the application, which, I
 presume, is Java?).

 You may be best reworking your transactions, but in order to clear up
 the confusion you will probably need to post some actual code here (we
 can handle it, don't worry...).

 -W




--
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd

M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au




Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Greg Smith

On Fri, 17 Aug 2007, Tyson Lloyd Thwaites wrote:

It looks like it would be best if we re-worked our transactions and 
controlled them manually for the portions that need it.


I am glad you have moved so quickly through grief and into acceptance.

It is still a possible point of confusion, but I am starting to think that 
pgsql has it right, and mssql has it wrong. (I feel betrayed!) This issue 
probably deserves a prominent place in the FAQ!


In whose FAQ?  Would need more details about what MS-SQL is doing badly to 
cover it on this side of things.  There's been some collecting of trivia 
in that category lately, haven't gotten to issues with their product much 
so far.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Tyson Lloyd Thwaites

Greg Smith wrote:


On Fri, 17 Aug 2007, Tyson Lloyd Thwaites wrote:

It looks like it would be best if we re-worked our transactions and 
controlled them manually for the portions that need it.



I am glad you have moved so quickly through grief and into acceptance.


Heh heh - maybe I've had good counsellors...



It is still a possible point of confusion, but I am starting to think 
that pgsql has it right, and mssql has it wrong. (I feel betrayed!) 
This issue probably deserves a prominant place in the FAQ!



In whose FAQ?  Would need more details about what MS-SQL is doing 
badly to cover it on this side of things.  There's been some 
collecting of trivia in that category lately, haven't gotten to issues 
with their product much so far.



Pg's... like a warning that you can't do this:

begin
insert 1 --works
insert 2 --fails
commit

row 1 will exist in db (yes, no kidding). This will not work in pg, 
which I now see is obviously correct.







--
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd

M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au




Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Webb Sprague
 Pgs... like a warning that you can't do this;

 begin
 insert 1 --works
 insert 2 --fails
 commit

 row 1 will exist in db (yes, no kidding). This will not work in pg,
 which I now see is obviously correct.

This should be a FAQ either for MS-SQL or for Spring, but since PG does it
canonically it doesn't actually get asked very frequently ...

Sounds like a nightmare specifically designed for vendor lock-in.  My
condolences.

I am not sure how you can insert into a log even with savepoints,
unless you put the logging statement first and then follow it with the
insert.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] CREATE RULE.. RETURNING?

2007-08-17 Thread Hannes Dorbath

In a rule for INSERT, UPDATE, or DELETE on a view, you can add a RETURNING 
clause that emits the view's columns. This clause will be used to compute the 
outputs if the rule is triggered by an INSERT RETURNING, UPDATE RETURNING, or 
DELETE RETURNING command respectively.


Can someone enlighten me about the exact syntax to use this feature? I 
just end up with syntax errors.


Thanks in advance.


--
Regards,
Hannes Dorbath



Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Mikko Partio
On 8/16/07, Douglas McNaught [EMAIL PROTECTED] wrote:

 Devrim GÜNDÜZ [EMAIL PROTECTED] writes:

  What I'm pondering here is that is the cluster able to keep the
  postmasters synchronized at all times so that the database won't get
  corrupted.
 
  Keep all the $PGDATA in the shared disk. That would minimize data loss
  (Of course, there is still a risk of data loss -- the postmasters are
  not aware of each other and they don't share each other's buffers, etc.)

 It would be much better to have the cluster software only run one
 postmaster at a time, starting up the secondary if the primary fails.
 That's the usual practice with shared storage.



This was my original intention. I'm still quite hesitant to trust the
fencing device's ability to guarantee that only one postmaster at a time is
running, because of the disastrous possibility of corrupting the whole
database.

Maybe I'm just better off using the more simple (crude?) method of drbd +
heartbeat?

Regards

MP


Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Hannes Dorbath

On 17.08.2007 11:12, Mikko Partio wrote:

Maybe I'm just better off using the more simple (crude?) method of drbd +
heartbeat?


Crude? Use what you like to use, but you should keep one thing in mind: 
If you don't know the software you are running in each and every detail, 
how it behaves in each and every situation you can think of, it's a bad 
idea to use it in a HA setup.


You don't want to be one of those admins who configured something in a 
few days, moved production onto it, and then failed to recover from a 
split-brain situation. Setting up an HA environment is something you do 
in months, not days, at least if you want to do it right. There is so 
much that can go wrong, and so much to learn. Keep it simple.



--
Regards,
Hannes Dorbath



Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Mikko Partio
On 8/17/07, Hannes Dorbath [EMAIL PROTECTED] wrote:

 On 17.08.2007 11:12, Mikko Partio wrote:
  Maybe I'm just better off using the more simple (crude?) method of drbd
 +
  heartbeat?

 Crude? Use what you like to use, but you should keep one thing in mind:
 If you don't know the software you are running in each and every detail,
 how it behaves in each and every situation you can think of, it's a bad
 idea to use it in a HA setup.

 You don't want to be one of those admins that just configured something
 in a few days, moved production stuff on it and fail to recover from a
 split brain situation. Setting up a HA environment is something you do
 in months, not days, at least if you want to do it right. There is so
 much that can go wrong, and so much to learn. Keep it simple.



Exactly my thoughts, as I have some experience with drbd and I know it
works. My point was that since I have access to a SAN environment, shared
storage would be a more elegant solution, but as you pointed out it's
probably better to stick to the method that feels most comfortable.

Thanks for your thoughts.

Regards

MP


[GENERAL] serial grows on failed requests

2007-08-17 Thread rihad

Hi, my table is defined as:
CREATE TABLE users (
id integer NOT NULL,
...
);

CREATE SEQUENCE users_id_seq
INCREMENT BY 1
NO MAXVALUE
NO MINVALUE
CACHE 1;

ALTER SEQUENCE users_id_seq OWNED BY users.id;

Although it's just a more verbose way to say
create table users (id serial primary key);
:)

When I do an insert that fails (like FK inconsistency, illegal value, 
etc.) the users.id grows nonetheless... This is unacceptable for my 
current needs. Any way to prevent that while still maintaining ease of 
use? Using PostgreSQL 8.2.4


Thanks.



Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Alban Hertroys
Tyson Lloyd Thwaites wrote:
 I am not opposed to introducing checkpoints to our API, but it would be
 nicer if I didn't have to. At the moment I have resigned myself to
 turning off spring declarative txns for certain methods, and handling
 them manually by doing multiple txn blocks. In the above example
 however, the bit that I want to allow to fail is inside a method that
 would have to be wrapped in a transaction  see the web of
 complexity that is growing?

Isn't the 'try' statement rather similar to a 'savepoint' command? I
realize it would be difficult to override the behaviour of try {...}
catch (...) {...}, but it shouldn't be too hard to wrap it somehow for
exceptions in database code.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



[GENERAL] Postmaster does not shut down

2007-08-17 Thread Jeff Amiel
A 'bad' thing happened yesterday.
Postgresql 8.1.X FreeBSD 6.0
At some point in the day, ran out of space on the root
filesystem.  (db is elsewhere)

Took about 10 minutes to clear enough space to make
processes stop freaking out and to slow my heart-rate
down to below 200 beats per minute.

Everything 'seemed' to be fine after that.  Database
was constantly utilized with no errors for several
more hours.

Later that evening (during our peak activity time), a
process was inadvertently run that when combined with
peak activity time caused the load average on the box
to spike into the stratosphere (19).  *sigh*

Database activity was taking so long (heavy web app
server use), that db connection handles skyrocketed
(over 120 when normally stays below 30) and started
getting complaints from pool management: "No
ManagedConnections available within configured
blocking timeout ( 3 [ms] )"

At that point, started stopping all unnecessary
activity and even restarted app server to clear all
the connections.  However the db connections all
stayed in some sort of 'frozen' state... showing that
they were either idle (in transaction) or processing
SELECTs or whatever they were last doing.  (Normally
restarting app server will cause those connections to
reset pretty quickly.)  Tried killing a few of the idle
ones manually... but it was like pissing on a forest
fire.

Hmm.  It was time to do something I hadn't done in a
LOOONG time: restart postgres.
Uh oh... postmaster does not shut down.
I tried fast, I tried immediate.  No dice.  Postmaster
refused to die.  I don't know how many times I've
heard "don't kill -9 the postmaster", but that's what
I did (on the 'head' of the snake referenced in the
pid file)... it died, but all the other processes
remained.
*grumble*.  Couldn't get a killall to get rid of the
rest (not sure why) so I was back to pissing on the
forest fire typing kill kill kill.  My co-hort in the
office claimed that more processes were showing up for
each one I killed... (I assume he was delirious at
that point) but this was taking too long anyway.  So I
bounced the box.

I prayed.  

Came back up fine...postgres, slony, etc.
no errors reported when the database started.

I see no errors in my logs through the night when
autovacuum and backups occurred, so the database
'appears' to be intact.

I do notice that all of my logs (all.log, etc) are
'missing' between the filesystem-full issue earlier in
the day and the time I rebooted (I assume the OS was
NOT happy with me letting it run out of space on /).

So... other than the half-dozen things that I
obviously did wrong, was there anything else I could
have done with postgres when it refused to shut down?







   


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Alban Hertroys
Webb Sprague wrote:
 I am not sure how you can insert into a log even with savepoints,
 unless you put the logging statement first and then follow it with the
 insert.

and delete it after success?

Alternatively you could use one connection for your normal queries, and
another for auditing. Your application will need twice as many
connections that way, though... In that case you shouldn't commit
records on the 'normal' connection before the audit records have been
committed I suppose?

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] serial grows on failed requests

2007-08-17 Thread Andreas Kretschmer
rihad [EMAIL PROTECTED] schrieb:

 When I do an insert that fails (like FK inconsistency, illegal value, etc.) 
 the users.id grows nonetheless... This is unacceptable for my current 

That's normal behavior: sequence values are handed out outside of
transactions and are never rolled back.

 needs. Any way to prevent that while still maintaining ease of use? Using 
 PostgreSQL 8.2.4

Look at:
http://www.varlena.com/GeneralBits/130.php


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] DML value format

2007-08-17 Thread Kev
On Aug 10, 6:56 am, [EMAIL PROTECTED] (Alejandro Torras) wrote:
 -- English --
 Hi,

 Is there some way to put values in an INSERT statement
 without taking care of apostrophes?

 For example:
 INSERT INTO persons VALUES ('Harry', 'O'Callaghan');
   ^^^

 I think some kind of length marker could be used
 to help with parsing the value.

 For example:
 INSERT INTO persons VALUES ('Harry', @11:O'Callaghan);

 I think this approach could help reduce sql
 injection.

If you're worried about sql injections, I'm not sure what kind of
client library you're using to access your pgsql database, but most of
them should have placeholder capability, I think, which definitely
takes care of that.  Your statement would become INSERT INTO persons
VALUES (?, ?) and then you would pass the execution function your two
values and it would escape them properly for you.  At least that's how
it works with Perl and DBI.

Kev




[GENERAL] Problem Designing Index

2007-08-17 Thread Alan J Batsford

Hello,

I'm doing some select statements on my table that look like:
SELECT * FROM table WHERE prod_num = '1234567' AND transaction_timestamp >
'2007-07-18 21:29:57' OR prod_num > '1234567' ORDER BY prod_num ASC,
transaction_timestamp ASC LIMIT 1;

I've added two indices, one for prod_num and another for transaction_timestamp.
This table has 151,000 rows and the above statement returns in less than a
millisecond. If I change the above statement from '>' to '<' it takes 8
seconds to complete. Prod_num '1234567' is towards the end of the 151k
rows. If I use a prod_num like '123' towards the front, the problem is
reversed with '<' and '>'.

I tried adding a third index that uses both prod_num and
transaction_timestamp. The average performance at each end of the data for
both '>' and '<' improved, but the problem wasn't resolved. Selects at the
end of the data with '>' conditions (like the original statement) then
become broken and take 500 ms to finish, which is unacceptable for the
application.

I did analyze on the table with no effect.

Is it possible to design an index that can account for all the scenarios?
Thanks for any help you can provide.

-Alan
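[Editor's note: the `WHERE a = x AND b > y OR a > x` shape above is exactly a lexicographic (row-wise) comparison on `(prod_num, transaction_timestamp)`, which a single two-column index on `(prod_num, transaction_timestamp)` can serve as one range scan; newer PostgreSQL versions (8.2 and later, if I recall correctly) accept the row-value form `(prod_num, transaction_timestamp) > ('1234567', '2007-07-18 21:29:57')` directly. A small Python sketch with invented data, showing the two predicates are the same condition:]

```python
# The OR-form predicate from the query vs. a lexicographic tuple comparison.
# Python tuples compare element by element, the same way SQL row values do,
# which is also the order a composite b-tree index stores its entries in.
rows = [
    ("1234566", "2007-07-18 09:00:00"),
    ("1234567", "2007-07-18 21:29:57"),
    ("1234567", "2007-07-19 08:00:00"),
    ("1234568", "2007-07-01 00:00:00"),
]
key = ("1234567", "2007-07-18 21:29:57")

def or_form(r):
    p, ts = r
    return (p == key[0] and ts > key[1]) or p > key[0]

def row_form(r):
    return r > key  # lexicographic, like (prod_num, ts) > (x, t) in SQL

assert all(or_form(r) == row_form(r) for r in rows)

# The LIMIT 1 result of the original query is simply the next entry
# in composite-index order after the key.
next_row = min((r for r in rows if row_form(r)), default=None)
print(next_row)
```

Because the condition is one contiguous range of the composite index, the same index works equally well for the '<' direction (scanned backwards), which is what the two single-column indexes cannot do.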




Re: [GENERAL] non superuser creating flat files

2007-08-17 Thread Terri Reid
Tom, 

Thank you very much, that works now.

Terri Reid
01925 732359
 

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: 13 August 2007 15:50
To: Terri Reid
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] non superuser creating flat files 

Terri Reid [EMAIL PROTECTED] writes:
 I have data that is being updated in a table that I need to export to a
flat
 file via a database trigger on insert or update. The user performing the
 update will not be a superuser. I've tried to use COPY TO, but that
doesn't
 work for non-superusers.

It will if invoked in a trigger function that's marked SECURITY DEFINER
and owned by a superuser.

regards, tom lane




[GENERAL] Postgres : Close cursor / PerformPortalClose

2007-08-17 Thread Eric Rasoa
Hello,
 
I have a question about the PerformPortalClose function associated with the SQL 
command Close Cursor. Sorry if you have already answered it, but I didn't find 
the answer:
- With releases < 7.4, the function listed above produced only a warning 
message when the cursor to be closed didn't exist (the documentation says 
this follows the SQL92 standard).
- Since release 7.4, the same function produces an error message.
 
1/ Can you tell me why this has changed?
2/ Is there any way, via an option or something similar at compile time 
or run time, to choose whether PerformPortalClose raises an error or 
just a warning?
 
Thanks in advance for your help.
 
Eric Rasoa


[GENERAL] Non-superuser creating a flat file

2007-08-17 Thread Terri Reid
I have data that is being updated in a table that I need to export to a flat
file via a database trigger on insert or update. The user performing the
update will not be a superuser. I've tried to use COPY TO, but that doesn't
work for non-superusers. Is there some other functionality that can write
out flat files? I'm looking for something similar to the Oracle UTL file
functionality, as ideally I'd want to append the data to an existing file
and be able to manipulate the data before I wrote it out.

 

Thanks

Terri

 

Terri Reid

BCT Software Solutions 

Mail: [EMAIL PROTECTED]

Tel: 01925 732359

 



Re: [GENERAL] SQL question: checking all required items

2007-08-17 Thread Carlos Ortíz
Try
 
Select * from people where person_id not in (
  Select ifp.person_id from Items_for_people ifp
    join Items i on i.item_id = ifp.item_id and i.is_required = true
  group by ifp.person_id
  Having Count(*) = (
   Select count(*) from Items Where is_required = true))
 
Or something like that; that's the idea. Try it and tell us.
(The syntax may not be correct, but I'm new to postgresql. In sql server 
it's ok.)
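[Editor's note: the idea here is relational division — compare each person's count of required items against the total number of required items. The same logic, sketched in plain Python with sets (names and sample data invented):]

```python
# Items where is_required = true
required = {"badge", "laptop"}

# person_id -> items that person holds (rows of items_for_people)
items_for_people = {
    "alice": {"badge", "laptop", "phone"},
    "bob":   {"badge"},
    "carol": set(),
}

# People missing at least one required item -- what the original poster
# asked for.  `required <= held` is the subset test; the count-based SQL
# expresses the same thing, provided the counted rows are restricted to
# required items only.
missing = sorted(p for p, held in items_for_people.items()
                 if not required <= held)
print(missing)
```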
 



From: [EMAIL PROTECTED] on behalf of Raymond O'Donnell
Sent: Fri 10/08/2007 03:07 p.m.
To: 'PostgreSQL'
Subject: [GENERAL] SQL question: checking all required items



Hi all,

Given the following tables -

   create table people (
 person_id text primary key,
 person_name text,
 [...etc...]
   );

   create table items (
 item_id text primary key,
 item_name text,
 is_required boolean,
 [...etc...]
   );

   create table items_for_people (
 person_id text,
 item_id text,
 primary key (person_id, item_id),
 foreign key (person_id) references people(person_id),
 foreign key (item_id) references items(item_id)
   );


- how can I find those people who don't have _all_ of the items which
are marked required?

In other words, how do I select those rows in people which don't have
a corresponding row in items_for_people for *each* row in items
which has is_required=true?

Many thanks,

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---





Re: [GENERAL] Confusing performance of specific query

2007-08-17 Thread Adam Endicott
On Aug 9, 10:47 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
 Do you have comparable work_mem settings on both machines?  Another
 thing to look at, if any of the sort key columns are textual, is whether
 the lc_collate settings are the same.

work_mem is commented out in both postgresql.conf files:
desktop: #work_mem = 1MB
production: #work_mem = 1024

So I guess they use the defaults?

Ahh, but lc_collate is different (assuming I'm finding it right). On
my machine it's set to C, and on the production server it's set to
en_US.UTF-8. I'm going to have to do some research to figure out
what that does.




[GENERAL] How can I avoid PGPool as a single point of failure?

2007-08-17 Thread [EMAIL PROTECTED]
My application currently has a single PG 8.2 database server, and I'm
bringing more boxes online to mitigate the risk of a single point of
failure.

I'm interested in using PGPool to do the load balancing, and it was
suggested that I put one server running PGPool in front of two
database servers. This only seems to move my risk to the PGPool
machine.

Is it possible to point two servers running PGPool at the same two
database servers? If so, I seem to recall reading about being able to
use some kind of virtual IP address to split the traffic to both.

Any ideas are greatly appreciated!




[GENERAL] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Tyson Lloyd Thwaites

Hi,

I know this issue has been discussed at length before, but postgresql's 
behaviour of forcing a rollback when any error occurs is making life 
very difficult for me. We use Spring's transaction proxies, which are 
applied to methods in web controllers. In the backend code, if a runtime 
exception occurs and we let it propagate, then Spring will catch it and 
roll back the transaction for us. However, sometimes we need to catch 
exceptions on certain ops that are allowed to fail, and let the rest of 
the transaction proceed.


For example, in our app there may be many operations that are performed 
in the course of a single web request: changing system state, updating 
last request times, writing logs records, etc. Normally if we catch the 
exception, other dbs (Oracle, MSSQL) will let us keep going. However 
with postgresql, if something goes wrong that we would normally ALLOW, 
it bombs the whole request! This is no good! :(


What has effectively happened is that postgresql has taken away my right 
to allow certain non-fatal SQL errors to occur during a transactional 
request. For example, if something goes wrong, I can't even write an 
event row to our auditing table! Argh!! We do not have the option to 
turn off the transactions at the top level; this would require a major 
rework, and require us to put manual transaction blocks everywhere - 
exactly what Spring helps us avoid.


Is there some way to turn this behaviour off without having to resort to 
pg-specific code (ie sprinkling savepoints everywhere)? We allow 
customers to plug their chosen db into our app for a backend. This has 
been fine until someone asked for postgresql... this behaviour is 
different to the other big dbs. Is there any reason this behaviour is 
not at least optional?


Thanks for listening. My research on other threads has not given me much 
hope, but maybe things have changed recently?
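[Editor's note: the savepoint pattern referred to in this thread looks like the sketch below. It uses Python's standard sqlite3 module only because that also understands SAVEPOINT and runs standalone; against PostgreSQL the same SAVEPOINT / ROLLBACK TO SAVEPOINT / RELEASE statements, issued through the driver, are what keep the enclosing transaction usable after an allowed-to-fail statement. Table names are invented.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage the transaction by hand
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE audit (msg TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO t VALUES (1)")

# Wrap only the operation that is allowed to fail in a savepoint.
cur.execute("SAVEPOINT allowed_to_fail")
try:
    cur.execute("INSERT INTO t VALUES (1)")   # duplicate key -> error
except sqlite3.IntegrityError:
    # Roll back just to the savepoint; the outer transaction survives,
    # so the audit row can still be written inside it.
    cur.execute("ROLLBACK TO SAVEPOINT allowed_to_fail")
    cur.execute("INSERT INTO audit VALUES ('duplicate insert ignored')")
cur.execute("RELEASE SAVEPOINT allowed_to_fail")

cur.execute("COMMIT")
print(cur.execute("SELECT count(*) FROM t").fetchone()[0])      # 1
print(cur.execute("SELECT count(*) FROM audit").fetchone()[0])  # 1
```

This is effectively what the psql ON_ERROR_ROLLBACK behaviour mentioned elsewhere in the thread does per statement, and what a JDBC-driver-level option would have to do under the hood.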


--
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd

M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au




[GENERAL] Help creating a function

2007-08-17 Thread Madison Kelly

Hi all,

  I'm using ulogd with PostgreSQL which stores IP addresses as 32bit 
unsigned integers. So when I select some data I get something like:


ulogd=> SELECT id, ip_saddr, ip_daddr, raw_pktlen, ip_totlen, tcp_window 
FROM ulog LIMIT 20;

 id |  ip_saddr  |  ip_daddr  | raw_pktlen | ip_totlen | tcp_window
----+------------+------------+------------+-----------+------------
  1 | 3232235874 | 1074534522 |         46 |        46 |      25825

  Where 'ip_saddr' and 'ip_daddr' are 'bigint'. I know I can convert 
these numbers to dotted-decimal in perl with a small script like:


-=-=-
#!/usr/bin/perl

# This would be the number read from the DB
my $num=3232235874;

# Now do the math
my $temp=$num/256;
my $D=256*($temp-int($temp));
$temp=(int($temp))/256;
my $C=256*($temp-int($temp));
$temp=(int($temp))/256;
my $B=256*($temp-int($temp));
my $A=int($temp);
my $ip=$A.".".$B.".".$C.".".$D;

# Print the results
print "'num': [$num] - 'IP': [$ip]\n";
-=-=-

  What I would like to do is create a function that would do the same 
thing so I could read out the IP addresses as standard dotted-decimal 
format. Could anyone help me with this? I am quite the n00b when it 
comes to functions. :)


Thanks all!

Madi
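[Editor's note: the arithmetic in that script just splits the 32-bit value into four bytes. A more direct version of the same math — shown in Python here for illustration; the identical shift-and-mask steps translate line for line into a PL/pgSQL function:]

```python
def int_to_ip(num: int) -> str:
    """Convert an unsigned 32-bit integer to dotted-decimal notation.

    Each byte is extracted by shifting it to the low position and
    masking off everything else; this replaces the repeated
    divide-by-256 dance in the Perl script.
    """
    return ".".join(str((num >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(int_to_ip(3232235874))  # -> 192.168.1.98
```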



Re: [GENERAL] Persistent connections in PHP

2007-08-17 Thread Torsten Zühlsdorff

Hannes Dorbath schrieb:

On 14.08.2007 23:13, Dmitry Koterov wrote:

Pconnects are absolutely necessary if we use tsearch2, because it
initializes its dictionaries on a first query in a session. It's a very
heavy process (500 ms and more). So, if we do not use pconnect, we waste
about 500 ms on each DB connection. Too much pain.


We've been using pconnect for exactly the same reason, though startup 
time for our dictionary is even higher (around 2 seconds). The problem 
is that persistent connections in PHP are not cleanly implemented; they 
can get randomly garbage collected. The problem seems well known, though 
I'm unaware of any fix. I think it's best to use pgbouncer and plain 
connect ATM. Additionally, as mentioned earlier, using pconnect under 
the Apache webserver is not a good idea at all, at least not with its 
current architecture.


If the dictionary is not too large, you could store it directly in the 
memory of the server. For that you can use Shared Memory 
(http://www.php.net/shmop, http://de3.php.net/manual/en/ref.sem.php).


Another advantage of this solution is that you have one dictionary for 
all PHP children - so you do not waste memory by loading the dictionary 
on each request.


Greetings,
Torsten



[GENERAL] Re: Best practice for: ERROR: invalid byte sequence for encoding UTF8

2007-08-17 Thread Lew

Phoenix Kiula wrote:

I am not advocating what others should do. But I know what I need my
DB to do. If I want it to store data that does not match puritanical
standards of textual storage, then it should allow me to...


Instead you want it to store tyrannically-chosen alternatives to the user's 
invalid input without regard for their intent.


Calling rejection of invalid input puritanical is not an engineering 
evaluation.

--
Lew



Re: [GENERAL] Blobs in Postgresql

2007-08-17 Thread Trent Shipley
On Wednesday 2007-08-15 05:52, Gregory Stark wrote:
 Ron Olson [EMAIL PROTECTED] writes:
  Hi all-
 
  I am evaluating databases for use in a large project that will hold image
  data as blobs. I know, everybody says to just store pointers to files on
  the disk...

 Well not everyone. I usually do, but if you're not handling these blobs
 under heavy load independent of the database (like web servers) then either
 approach works.

I've always wondered how you keep transactions working when you only store 
pointers to large data.  Do you need an external transaction manager to 
ensure that the file doesn't get deleted when you delete the data via the 
pointer?  Do you need an external application that handles all deletes, 
inserts, and updates?



Re: [GENERAL] Writing most code in Stored Procedures

2007-08-17 Thread Mohd Kamal Bin Mustafa
On 8/16/07, Steve Manes [EMAIL PROTECTED] wrote:
 On 8/15/07, Rohit [EMAIL PROTECTED] wrote:

 Another is because I typically do my web application programming in PHP5
 but the offline scripts in Perl.  Both can call the same stored
 procedures so I don't have multiple copies of database code to maintain
 and debug.
I also thought the same thing once before: do as much logic in
the database as possible, so that at the application layer most of it is just a
SELECT ... call. It allows me to experiment with different kinds of
language (PHP, Python, Ruby) without having to rewrite the business
logic of the application. But using this kind of approach makes it hard
to debug and maintain. Changing a simple thing would mean updating the
database schema, and it always frightened me that some silly mistake
could affect the integrity of the data. In the end, I slowly moved the
logic out of the db and decided on one platform for the application
(PHP).

-- 
kamal, www.k4ml.com



[GENERAL] Migrating SqlServer to Postgres

2007-08-17 Thread IVAN JOSE RIVERA URIARTE

  Well, can you give me a hand migrating a database
from SQL Server to Postgres?
 
 Regards

Ivan Rivera
Tel#(505)8074075
Managua, Nicaragua




[GENERAL] 8.3 performance

2007-08-17 Thread Andy
I came upon this article

http://www.internetnews.com/dev-news/article.php/3647376

The last 2 paragraphs caught my eyes:

Among the improvements expected in PostgreSQL 8.3 are
further performance gains.

'The most exciting of these is an optimization that
would improve performance on OLTP systems between 50
percent and 200 percent depending on the application,'
Berkus noted. 'However, that feature involves some
difficult engineering that may not be ready in time.'


- what kind of optimization was he referring to?
- does it really improve OLTP performance across the
board by 50 - 200%, or does it only work in special
cases?
- Will the optimization make it to 8.3?

Thanks.


   




Re: [GENERAL] Persistent connections in PHP

2007-08-17 Thread Torsten Zühlsdorff

Hannes Dorbath schrieb:

On 15.08.2007 10:53, Torsten Zühlsdorff wrote:
If the dictionary is not too large, you should store it directly in 
the memory of the server. Therefore you can use Shared Memory 
(http://www.php.net/shmop, http://de3.php.net/manual/en/ref.sem.php).


Uhm, but how does TSearch get it from there? And even if it does, it 
will still held its own copy?


In this case I misunderstood you. I believed that you fetch a dictionary 
from pg and then use it yourself.
Because people often load a dictionary from the db to get the translation of 
$content, shared memory can make this very fast. My advice was based on that 
assumption.


Greetings,
Torsten



Re: [GENERAL] Bytea question with \208

2007-08-17 Thread Lew

Decibel! wrote:

On Thu, Aug 09, 2007 at 04:16:15PM -0400, Woody Woodring wrote:

My bad, the table I was looking at (8.7) had the first column as the
decimal representation, and I did notice that the numbers changed as they
moved right.

Is there a way for bytea to take a hex number, or do I need to convert the
bit stream to octal numbers?


to_hex()?


to_hex() produces the text representation of the hex digits, not the actual 
bytes with those values.


What the OP wants is to conveniently represent the bytestream to send to the 
bytea column.  They're looking for an escape sequence to embed into a text 
representation that will map to the exact byte contents to insert, not a text 
representation of that representation.


Say,

Also supported are \digits, where digits represents an octal byte value, and 
\xhexdigits, where hexdigits represents a hexadecimal byte value. (It is your 
responsibility that the byte sequences you create are valid characters in the 
server character set encoding.)

http://www.postgresql.org/docs/8.2/static/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS

Amazing how useful the docs are.

--
Lew



[GENERAL] Some frustrations with admin tasks on PGSQL database

2007-08-17 Thread Phoenix Kiula
I'm loving the fact that while I am doing some one-time updates to the
DB, users can still SELECT away to glory. This is a major boon in
comparison to my experience with another major opensource database.

However, I am a little frustrated by the amount of time PGSQL takes to
complete tasks. Just to accommodate these tasks, my conf file has the
following:

autovacuum = off
wal_buffers=64
checkpoint_segments=1000
checkpoint_timeout=900
fsync = off
maintenance_work_mem = 128MB


[PS: I will enable fsync after these operations, and decrease the
checkpoint_segments.]

I have dropped all indexes on my table, except for the
primary key. Still, when I run the query:

UPDATE mytable SET mycolumn = lower(mycolumn);

This has, at the time of this writing, taken well over 35 minutes!
On a table of a mere 6 million rows (to quote one discussion on
this mailing list).

I am on a 4GB RAM machine with two Intel Dual Core processors. Albeit
this is not a dedicated db server, another comparable FOSS database
never took these kinds of times to perform its operations.

Suspecting that locking may be the cause of this, I read up on
http://www.postgresql.org/docs/8.2/static/explicit-locking.html and
found nothing specific that would help a person starting out on the DB
to actually do meaningful explicit locking that the UPDATE command
does not already do.

I am now trying something like

UPDATE mytable SET mycolumn = lower(mycolumn)
WHERE id BETWEEN x AND y ;

This is way too laborious and untenable because I want to put the
fsync back on as soon as possible; this is a production database!

What else can I do to make this go fast enough to be normal!? Penny
for any thoughts and tips.
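[Editor's note: one way to make the BETWEEN approach above manageable is to generate the batches rather than typing them. A sketch that just emits the statements — table and column names are from the post, the batch size is arbitrary:]

```python
def batched_updates(min_id, max_id, step=100_000):
    """Yield UPDATE statements covering [min_id, max_id] in id ranges,
    so each batch can be committed separately and locks, WAL volume,
    and dead-row buildup per transaction stay bounded."""
    for lo in range(min_id, max_id + 1, step):
        hi = min(lo + step - 1, max_id)
        yield ("UPDATE mytable SET mycolumn = lower(mycolumn) "
               f"WHERE id BETWEEN {lo} AND {hi};")

stmts = list(batched_updates(1, 250_000))
print(len(stmts))   # -> 3
print(stmts[-1])
```

Also worth noting: adding `AND mycolumn <> lower(mycolumn)` to each statement skips rows that are already lower-case, which avoids rewriting (and later vacuuming) row versions that would not change.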



Re: [GENERAL] Are these two creation commands functionally identical?

2007-08-17 Thread dterrors
On Aug 11, 5:21 am, [EMAIL PROTECTED] (Ragnar) wrote:
 no these 2 are not fuctionally identical, because the second one
 does not have a NOT NULL constraint on the foreign keys,
 allowing you to insert:
 INSERT INTO sales (saleid,userid,parent_saleid)
 VALUES (100,null,100);


OK thank you.  Besides the not null, are they otherwise identical?
The first statement is from a graphical db model tool I want to use,
the second is from my own handwritten sql. I want to see if it's
essentially the same.  thanks.




[GENERAL] Windows XP installation problem

2007-08-17 Thread Gilbert Albero
Hi!

I downloaded postgresql-8.2.4-1.zip (from
http://wwwmaster.postgresql.org/download/mirrors-ftp?file=%2Fbinary%2Fv8.2.4%2Fwin32%2Fpostgresql-8.2.4-1.zip)
and installed it on Windows XP Service Pack 2, but I'm encountering this error:

--
This installation package cannot be opened. Verify that the package exists
and you can access it, or contact the application vendor to verify that this
is a valid Windows Installer package.

I checked the file, and it's there. My Windows login account has
administrator rights. I tried re-downloading the installer but still get the
same error.

I would very much appreciate your help on this matter.

Lastly, my apologies if I posted to the wrong postgresql list.

Thank!

xgee.one


Re: [GENERAL] why it doesn't work? referential integrity

2007-08-17 Thread Janning Vygen
On Saturday 11 August 2007 12:28:45 Pavel Stehule wrote:
 Hello

 I found strange postgresql's behave. Can somebody explain it?

 Regards
 Pavel Stehule

 CREATE TABLE users (
   id integer NOT NULL,
   name VARCHAR NOT NULL,
   PRIMARY KEY (id)
 );

 INSERT INTO users VALUES (1, 'Jozko');
 INSERT INTO users VALUES (2, 'Ferko');
 INSERT INTO users VALUES (3, 'Samko');

 CREATE TABLE tasks (
   id integer NOT NULL,
   owner INT REFERENCES  users (id) ON UPDATE CASCADE ON DELETE SET NULL,
   worker INT REFERENCES users (id) ON UPDATE CASCADE ON DELETE SET NULL,
   checked_by INT REFERENCES users (id) ON UPDATE CASCADE ON DELETE SET
 NULL, PRIMARY KEY (id)
 );
 INSERT INTO tasks VALUES (1,1,NULL,NULL);
 INSERT INTO tasks VALUES (2,2,2,NULL);
 INSERT INTO tasks VALUES (3,3,3,3);

 DELETE FROM users WHERE id = 1; -- works simple
 DELETE FROM users WHERE id = 2; -- works ok
 DELETE FROM users WHERE id = 3; -- doesn't work, why?

 ERROR:  insert or update on table tasks violates foreign key
 constraint tasks_checked_by_fkey
 DETAIL:  Key (checked_by)=(3) is not present in table users.
 CONTEXT:  SQL statement UPDATE ONLY public.tasks SET worker =
 NULL WHERE $1 OPERATOR(pg_catalog.=) worker

looks strange to me too, but I have never had foreign keys to the same table. 
It works if you define your checked_by FK deferrable with 

checked_by INT REFERENCES users (id) ON UPDATE CASCADE ON DELETE SET NULL  
DEFERRABLE INITIALLY DEFERRED,

It seems that postgresql does its job in a procedural way instead of a 
relational one. 

kind regards,
Janning





Re: [GENERAL] Compound Indexes

2007-08-17 Thread Lew

Phoenix Kiula writes:

I have a table with ten columns. My queries basically use one column as
the first WHERE condition, so an index on that column is certain. But
the columns after that one vary depending on end-user's choice (this
is a reporting application) and so does the sorting order.

In MySQL world, I had sort_buffer in the config file, and I made a
compound index with the columns most often used in these types of
queries. So my index looked like:

  INDEX idx_trades(id, t_id, c_id, s_id,  t_brief, created_on);

This has five columns in it. While reading the pgsql documentation, I
gather than anything beyond three columns offers diminishing benefits.

My queries will look like these:

   SELECT * from trades where id = 9
   and c_id = 
   ORDER by s_id;

   SELECT * from trades where id = 9
   and s_id = 0
   ORDER by created_on desc ;

   SELECT * from trades where id = 9
   and s_id = 0
   and t_brief ~* 'more|than|one|word'
   ORDER by created_on desc ;



The documentation at
http://www.postgresql.org/docs/8.2/interactive/indexes-multicolumn.html
also points out that your multi-column index will not help much with the 
columns after id, certainly not with the columns after t_id since t_id 
isn't involved in any of the conditions or ordering.



The exact rule is that equality constraints on leading columns, plus any 
inequality constraints on the first column that does not have an equality 
constraint, will be used to limit the portion of the index that is scanned. 
Constraints on columns to the right of these columns are checked in the index, 
so they save visits to the table proper, but they do not reduce the portion of 
the index that has to be scanned.


and goes on to say,

Multicolumn indexes should be used sparingly. In most situations, an index on a 
single column is sufficient and saves space and time.


The planner can make use of single-column indexes in combination (if your PG 
is recent enough).

http://www.postgresql.org/docs/8.2/interactive/indexes-bitmap-scans.html

Sometimes multicolumn indexes are best, but sometimes it's better to create 
separate indexes and rely on the index-combination feature.


The selectivity of each column is also relevant.  If you have ten million rows 
with s_id values of only either 0 or 1, an index on s_id is not 
going to help much.
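The scanned-portion rule quoted above can be mimicked with a sorted list of tuples standing in for a b-tree on (id, s_id, created_on); the data below is invented:

```python
import bisect

# A stand-in for a b-tree index on (id, s_id, created_on): sorted tuples.
index = sorted(
    [(7, 0, 0), (11, 2, 2)]
    + [(9, s, c) for s in (0, 1) for c in range(5)]
)

# An equality constraint on the leading column (id = 9) limits the
# scanned portion of the index to one contiguous slice.
start = bisect.bisect_left(index, (9,))
stop = bisect.bisect_left(index, (10,))
scanned = index[start:stop]
print(len(scanned))   # -> 10: only the id = 9 entries are visited

# A condition on a later column (created_on >= 3) does not shrink the
# slice; it is merely checked against each scanned entry.
matches = [t for t in scanned if t[2] >= 3]
print(len(matches))   # -> 4
```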


--
Lew



[GENERAL] Enterprise Wide Deployment

2007-08-17 Thread john_sm

Hey guys, for an enterprise-wide deployment, what would you suggest and why
among Red Hat Linux, SUSE Linux and Ubuntu Linux? Also, do you think we
can negotiate the support pricing down?
-- 
View this message in context: 
http://www.nabble.com/Enterprise-Wide-Deployment-tf4265690.html#a12139899
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Finding my database

2007-08-17 Thread apitt

Hi 
 
I initially had the Windows version of PostgreSQL v8.0 running on my C drive
and was connected to a database in the data subdirectory below PostgreSQL.  I
needed to move it to the D Drive.  After completely messing everything up, I
am now unable to get it working.  I have uninstalled everything and now have
v8.2 installed on the D drive, and I put the data directory back within that
installation.  On windows start up I am unable to get the service to start
and it generates the following message:

Error #1
Unable to connect to PostgreSQL (Port#:5432) to get the list of databases.
The error was:
101 - SQLSTATE = 08001

 
My question is, how do I get it to see the database, which is still right
there, so this message goes away and all is normal again?  I am totally
stumped, as it must be the most unintuitive program I have ever come across,
and trying to find anything in the 1600+ page manual is fruitless.  I am
hoping I have not deleted any critical files.  

Can users give me some guidance please, as I am totally unfamiliar with the
complexities of the program.  I'm just an end user of the database within a
separate program. 
 
Thanks,
Adrian



Re: [GENERAL] Writing most code in Stored Procedures

2007-08-17 Thread Lew

Rohit wrote:

(4) Is it faster to work at application level or at the database level?


Richard Huxton wrote:
Probably faster in the database, assuming you have only one machine. If 
you have more than one machine then you can have each machine designed 
for its purpose. Of course, faster to run might be at the cost of more 
difficult to write. It might also mean your process code can crash the 
database server.


Which would you rather have, a wrong answer really quickly, a correct answer 
slowly, or no answer at all?


Be clear on the metric for faster before devising strategies.  Personally I 
prefer more correct first.  As Richard hinted, reliability and stability are 
usually important also, I'd say more important.


Faster to run also depends on the degree of client concurrency.  Sometimes 
middleware can scale logic better than the database engine can.  Systems can 
achieve higher aggregate throughput with perhaps a reduction in individual 
response time.



If you put logic where it belongs that can help scalability.  The database 
tends to be a central resource for all concurrent clients, so logic in that 
layer is potentially a bottleneck.  Danger in that layer is danger to everyone.


Business logic that relies on data but performs session-specific processing 
should live in the middleware in many cases.  One can throw extra hardware at 
that layer to parallelize client services, and errors tend to not escape a 
specific client session.


Business logic that is actually data-integrity logic could break the system if 
it were in the middleware layer.  Deadlocks, race conditions and other 
nastiness argue that that type of logic belongs in a more central location, 
with the data.  One can then exploit the database engine capabilities for this 
stuff.


As Richard mentioned, there are plenty of boundary cases that require a 
judgment call.  Try to analyze which approach will have more risk in such 
cases; sometimes that helps discriminate.  It ain't always easy; that's why 
they pay us the big bucks.


--
Lew

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Help with this query (some join stuff I think)

2007-08-17 Thread Pat Maddox
I've got a bunch of companies, each associated with several videos.
The videos have different statuses.  I want to select all the
companies in the database and order them by the number of videos that
have a 'complete' status.

Here's what I have so far

SELECT
  companies.id,
  companies.name,
  companies.nickname,
  COUNT(company_id) AS num_videos
FROM companies
LEFT JOIN videos ON companies.id=videos.company_id
GROUP BY
  companies.id,
  companies.name,
  companies.nickname
ORDER BY num_videos DESC
LIMIT 10

This orders the companies by number of videos...but it says nothing
about the video status.  If I include a
WHERE videos.status='complete'

then it filters out the companies that have no complete videos.  I
want to include those companies in the result set though, saying that
num_videos is 0.

Anyone know how to get that done?

Pat
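One common fix for this pattern (moving the status filter from the WHERE clause into the join condition) keeps companies with no complete videos in the result with a count of 0. A sketch against the schema above, untested:

```sql
SELECT
  companies.id,
  companies.name,
  companies.nickname,
  -- COUNT(col) counts only non-NULL values, so unmatched outer-join rows give 0
  COUNT(videos.company_id) AS num_videos
FROM companies
LEFT JOIN videos
       ON companies.id = videos.company_id
      AND videos.status = 'complete'   -- filter here, not in WHERE, to keep unmatched companies
GROUP BY companies.id, companies.name, companies.nickname
ORDER BY num_videos DESC
LIMIT 10;
```

Putting the condition in WHERE would run after the join and discard the NULL-extended rows, which is exactly the behaviour described above.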

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Help creating a function

2007-08-17 Thread Madison Kelly
Note: This is being sent again (in case it shows up later). It never 
seemed to have made it to the list.


Hi all,

  I'm using ulogd with PostgreSQL which stores IP addresses as 32bit
unsigned integers. So when I select some data I get something like:

ulogd=> SELECT id, ip_saddr, ip_daddr, raw_pktlen, ip_totlen, tcp_window
FROM ulog LIMIT 20;
 id |  ip_saddr  |  ip_daddr  | raw_pktlen | ip_totlen | tcp_window
----+------------+------------+------------+-----------+------------
  1 | 3232235874 | 1074534522 |         46 |        46 |      25825

  Where 'ip_saddr' and 'ip_daddr' are 'bigint'. I know I can convert
these numbers to dotted-decimal in perl with a small script like:

-=-=-
#!/usr/bin/perl

# This would be the number read from the DB
my $num=3232235874;

# Now do the math
my $temp=$num/256;
my $D=256*($temp-int($temp));
$temp=(int($temp))/256;
my $C=256*($temp-int($temp));
$temp=(int($temp))/256;
my $B=256*($temp-int($temp));
my $A=int($temp);
my $ip=$A.".".$B.".".$C.".".$D;

# Print the results
print "'num': [$num] - 'IP': [$ip]\n";
-=-=-

  What I would like to do is create a function that would do the same
thing so I could read out the IP addresses as standard dotted-decimal
format. Could anyone help me with this? I am quite the n00b when it
comes to functions. :)

Thanks all!

Madi
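The same conversion can be done inside the database. A minimal SQL-function sketch (the function name is made up; this needs 8.0+ for dollar quoting and assumes the stored value fits in the unsigned 32-bit range):

```sql
CREATE OR REPLACE FUNCTION bigint_to_ip(num bigint) RETURNS text AS $$
  -- extract each octet with shifts and masks, then join with dots
  SELECT ((num >> 24) & 255)::text || '.' ||
         ((num >> 16) & 255)::text || '.' ||
         ((num >> 8)  & 255)::text || '.' ||
         ( num        & 255)::text;
$$ LANGUAGE sql IMMUTABLE;

-- SELECT bigint_to_ip(3232235874);  -- 192.168.1.98
```

It can then be used directly in the SELECT above, e.g. `SELECT id, bigint_to_ip(ip_saddr), bigint_to_ip(ip_daddr) FROM ulog`.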


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] memory optimization

2007-08-17 Thread Patrick TJ McPhee
In article [EMAIL PROTECTED],
Sabin Coanda [EMAIL PROTECTED] wrote:

[...]
% So, what is better from the postgres memory point of view: to use temporary 
% objects, or to use common variables ?

Temp tables can cause serious bloat in some of the system catalog tables.
-- 

Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Trigger Procedure Error: NEW used in query that is not in a rule

2007-08-17 Thread Lew

Javier Fonseca V. wrote:

Hello.

I'm doing a Trigger Procedure in pl/pgSQL.  It makes some kind of auditing.

I think that it's working alright except for the next line:

EXECUTE 'INSERT INTO ' || quote_ident(somedynamictablename) || ' SELECT 
new.*';


PostgreSQL keeps telling me: ERROR: NEW used in query that is not in a 
rule.  I think that this NEW problem is because of the scope of the 
EXECUTE statement (outside the scope of the trigger), so it doesn't 
recognize the NEW record.


Maybe I could fix it concatenating column names and the 'new' values but 
I want to do my trigger as flexible as possible (I have several tables 
to audit).


Somebody has any suggestion?
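For what it's worth, on PostgreSQL 8.4 and later plpgsql's EXECUTE accepts parameters via USING, which sidesteps the NEW-scope problem entirely (audit_table here is a placeholder variable, not part of the original trigger):

```sql
EXECUTE 'INSERT INTO ' || quote_ident(audit_table) || ' SELECT ($1).*'
  USING NEW;
```

On older releases the usual workaround is, as suggested, concatenating quoted column values into the dynamic statement.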


You got some suggestions in response to your multipost of this question in 
pgsql.sql.


--
Lew

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] copy command - date

2007-08-17 Thread Raj A
Thanks again guys =)
I've managed to use temp table to load the data and create new table/s
Now, how do I convert a text field with 'YY/MM/DD' to date field 'DD/MM/YY'?
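A sketch of one way to do it, assuming the text really is in YY/MM/DD form (txtcol and mytable are placeholder names): parse with to_date(), then render with to_char():

```sql
-- parse the text into a real date value
SELECT to_date('07/08/13', 'YY/MM/DD');   -- 2007-08-13

-- or format it back out as DD/MM/YY text
SELECT to_char(to_date(txtcol, 'YY/MM/DD'), 'DD/MM/YY') FROM mytable;
```

Storing the parsed value in a proper date column is usually preferable to keeping DD/MM/YY text, since the display format can always be applied on output.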

On 13/08/07, Tom Lane [EMAIL PROTECTED] wrote:
 Paul Lambert [EMAIL PROTECTED] writes:
  novice wrote:
  db5=> \copy maintenance FROM test.txt

  I don't think copy allows you to leave columns out of your input file -
  even if they belong to a sequence.

 Well, it does, but you have to specify which ones are being provided,
 eg \copy tab(col1,col4,col7, ...

 But the long and the short of it is that COPY doesn't see any column
 delimiters at all in this file.  We're guessing as to what the OP
 intends the columns to be, but whatever he wants, he needs something
 other than an uncertain number of spaces to separate them ...

 regards, tom lane

 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Repeat posts

2007-08-17 Thread Raymond O'Donnell

Hi all,

Is it just me? :-) from time to time I get repeat broadcasts from 
various PG mailing lists - posts that I've already received several days 
previously are sent again.


It's not a major problem, nor even annoying in any way... I was just 
wondering if anyone else has noticed it.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Finding my database

2007-08-17 Thread Raymond O'Donnell

On 17/08/2007 13:32, Raymond O'Donnell wrote:

Was the previous installation of Postgres also 8.2? If not - if it was 
an earlier version - I'd put the old version back, point it at the data 
directory, then use pg_dump to export the data if you want to upgrade at 
that point.


I meant to add that you can get Win32 versions back as far as 8.0 on the 
PostgreSQL home page (http://www.postgresql.org/) - top right of the 
page - click on the Binary link beside the version number.


To check the version number of your data files, look in the PG_VERSION file.

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] tsearch2 query question

2007-08-17 Thread Roy Buyuksimkesyan

Hi,

SELECT contentid, title, (rank(to_tsvector(body),q) + 
rank(to_tsvector(title),q) + rank(to_tsvector(subtitle),q)) AS Score

FROM content, to_tsquery('parkyeri') AS q
WHERE statusid = 1
AND ispublished = 1
AND (to_tsvector(body) @@ q
   OR to_tsvector(title) @@ q
   OR to_tsvector(subtitle) @@ q )
ORDER BY Score

I have such a query. I'm not very sure if it will work, but that's not 
the point of the question. As you can see, I'm using the to_tsvector() 
function a lot, which I believe will not be good for performance. So I 
thought of changing my query to something like this:


SELECT contentid, title, (rank(fts_body, q) + rank(fts_title,q) + 
rank(fts_subtitle,q) ) AS Score

FROM content,
   to_tsquery('search & string') AS q,
   to_tsvector(body) AS fts_body,
   to_tsvector(title) AS fts_title,
   to_tsvector(subtitle) AS fts_subtitle
WHERE statusid = 1
AND ispublished = 1
AND ( fts_body @@ q
   OR fts_title @@ q
   OR fts_subtitle @@ q )
ORDER BY Score

So when I change to this, will the to_tsvector part be evaluated for 
every row, or computed just once? I mean, in the first query the WHERE 
part is executed for every row, so I'm sure it will be evaluated for 
all the rows. But when I put that in the FROM part, will it compute the 
value once and use the same value for all the WHERE clauses? If that's 
the case, what will the value of fts_body be? The tsvector of every 
row's data, or just one row's data?
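One common alternative, sketched here under the assumption that you can alter the table: precompute the tsvector into a real column (kept current by a trigger on INSERT/UPDATE) and index it, so to_tsvector() runs once per row at write time instead of on every query:

```sql
-- add a stored tsvector column and populate it once
ALTER TABLE content ADD COLUMN fts_body tsvector;
UPDATE content SET fts_body = to_tsvector(body);

-- with the tsearch2 opclass installed, a GiST index makes @@ searches fast
CREATE INDEX content_fts_body_idx ON content USING gist (fts_body);
```

The query can then use `fts_body @@ q` directly, with the same treatment applied to title and subtitle.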


thank you for your time and patience

roy simkes

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Repeat posts

2007-08-17 Thread Alban Hertroys
Raymond O'Donnell wrote:
 Hi all,
 
 Is it just me? :-) from time to time I get repeat broadcasts from
 various PG mailing lists - posts that I've already received several days
 previously are sent again.
 
 It's not a major problem, nor even annoying in any way... I was just
 wondering if anyone else has noticed it.

I just finished going through my new mail since this morning, which
contained several fresh duplicates of messages I already read. So yes,
it happens to me too.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Some frustrations with admin tasks on PGSQL database

2007-08-17 Thread Martijn van Oosterhout
On Fri, Aug 17, 2007 at 07:49:08PM +0800, Phoenix Kiula wrote:
 However, I am a little frustrated by the amount of time PGSQL takes to
 complete tasks. Just to accommodate these tasks, my conf file has the
 following:
 
 autovacuum = off
 wal_buffers=64
 checkpoint_segments=1000
 checkpoint_timeout=900
 fsync = off
 maintenance_work_mem = 128MB

I note you did not give either the shared_buffers setting or the
work_mem setting, both of which would have a major impact on your
problem.

 I have dropped all indexes/indicises on my table, except for the
 primary key. Still, when I run the query:
 
 UPDATE mytable SET mycolumn = lower(mycolumn);
 
 This is, at the time of this writing, has taken well over 35 minutes!
 On a table of a mere 6 million rows (quoted from one discussion on
 this mailing list).

How big is the actual table itself (in bytes).

 Suspecting that locking may be the cause of this, I read up on
 http://www.postgresql.org/docs/8.2/static/explicit-locking.html and
 found nothing specific that would help a person starting out on the DB
 to actually do meaningful explicit locking that the UPDATE command
 does not already do.

I doubt it's locking.

 What else can I do to make this go fast enough to be normal!? Penny
 for any thoughts and tips.

Check your I/O throughput. You should be maxing out your disk...
Although, with 6million rows, it might even fit in memory. Can you see
(in ps) what it's actually doing?

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Repeat posts

2007-08-17 Thread Martijn van Oosterhout
On Fri, Aug 17, 2007 at 01:13:33PM +0100, Raymond O'Donnell wrote:
 Hi all,
 
 Is it just me? :-) from time to time I get repeat broadcasts from 
 various PG mailing lists - posts that I've already received several days 
 previously are sent again.

AIUI, posts from non-subscribers can get held for moderation. Because
the senders CC the other participants directly, the thread keeps going
in the meantime. Later on the moderator approves the held messages and
they get sent out again.

Hope this helps,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Some frustrations with admin tasks on PGSQL database

2007-08-17 Thread Phoenix Kiula
On 17/08/07, Martijn van Oosterhout [EMAIL PROTECTED] wrote:
 On Fri, Aug 17, 2007 at 07:49:08PM +0800, Phoenix Kiula wrote:
  However, I am a little frustrated by the amount of time PGSQL takes to
  complete tasks. Just to accommodate these tasks, my conf file has the
  following:
 
  autovacuum = off
  wal_buffers=64
  checkpoint_segments=1000
  checkpoint_timeout=900
  fsync = off
  maintenance_work_mem = 128MB

 I note you did not give either the shared_buffers setting or the
 work_mem setting, both of which would have a major impact on your
 problem.



Oh, sorry, here are the rest of them:

max_connections = 250
shared_buffers = 6
effective_cache_size = 6
work_mem = 6
max_fsm_relations = 1500
max_fsm_pages = 8
sort_mem = 16348
temp_buffers = 4096
authentication_timeout = 10s
ssl = off


Are these too low?




 How big is the actual table itself (in bytes).



Where should I find this?

I tried /usr/lib/pgsql/data and /var/lib/pgsql/data but there is
no folder that seems to show the data files.

TIA!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Accessing pg_timezone_names system view

2007-08-17 Thread Naz Gassiep
I was wondering if there is any reason that accessing the system view 
pg_timezone_names is extremely slow relative to other queries. The 
following query:


   SELECT * FROM pg_timezone_names;

Executes in between 29ms and 32ms on my server. It takes about the same 
when I put a


   WHERE name = 'some/timezone'

clause in it. To put this into perspective, on the pages that execute 
this, it accounts for something like 3/4 of my DB execution time.


Here's a screenshot to show you what I'm talking about:

   http://www.mrnaz.com/dbetime.gif

As you can see, the execution of that single fetch dwarfs all other 
processing loads. I've run this a few times, and the timings are always 
roughly the same. Is there a way for me to speed this up? Would I be 
better off loading these into a static table and executing from there? 
It seems kinda purpose defeating to do that though. Perhaps this has 
been addressed in 8.3 ? I eagerly await.


Regards,
- Naz

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Repeat posts

2007-08-17 Thread Raymond O'Donnell

On 17/08/2007 13:48, Martijn van Oosterhout wrote:


AIUI, posts from non-subscribers can get held for moderation. Because
they CC the other people the thread kept going. Later on the moderator
approves the messages and they get sent out again.


Ah - I see. As I said, it wasn't a problem... just curious. - Thanks for 
explaining.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] SELECT question

2007-08-17 Thread Michelle Konzack
*
* Do not Cc: me, because I am on THIS list, if I write here.*
* Keine Cc: an mich, bin auf DIESER Liste wenn ich hier schreibe.   *
* Ne me mettez pas en Cc:, je suis sur CETTE liste, si j'ecris ici. *
*

Hello, 

I have a very large table of references from websites (html, pics, ...)
where the elements are stored on a fileserver using SHA-384.

The index table holds all filenames and download dates, but now I would
like to get a snapshot as of a particular day.

How must I create the SELECT statement to get ALL files valid on a
particular day?

Note:  There can be a new index.html at any time, for example, but
   images change only once a month...

   So I need all elements valid on the particular day, which means
   I need to select the LAST version of each element...
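Assuming columns like filename and download_date (the real schema isn't shown), DISTINCT ON is one way to pick the last version of each element as of a given day:

```sql
SELECT DISTINCT ON (filename)
       filename, download_date
FROM indextable
WHERE download_date <= DATE '2007-08-01'   -- the snapshot day
ORDER BY filename, download_date DESC;     -- newest row per filename wins
```

DISTINCT ON keeps the first row of each filename group in the given sort order, which here is the most recent download on or before the chosen day.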

Greetings
Michelle Konzack


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSN LinuxMichi
0033/6/6192519367100 Strasbourg/France   IRC #Debian (irc.icq.com)


signature.pgp
Description: Digital signature


Re: [GENERAL] Accessing pg_timezone_names system view

2007-08-17 Thread Magnus Hagander
On Fri, Aug 17, 2007 at 11:51:52PM +1000, Naz Gassiep wrote:
 I was wondering if there is any reason that accessing the system view 
 pg_timezone_names is extremely slow relative to other queries. The 
 following query:
 
SELECT * FROM pg_timezone_names;
 
 Executes in between 29ms and 32ms on my server. It takes about the same 
 when I put a
 
WHERE name = 'some/timezone'
 
 clause in it. To put this into perspective, on the pages that execute 
 this, it accounts for something like 3/4 of my DB execution time.

This view is backed by a set returning function that will enumerate all the
files in the timezone directory. The WHERE clause doesn't apply until after
the function has already traversed all files.

 As you can see, the execution of that single fetch dwarfs all other 
 processing loads. I've run this a few times, and the timings are always 
 roughly the same. Is there a way for me to speed this up? Would I be 
 better off loading these into a static table and executing from there? 

Yes, much better if it's something you're querying regularly. 
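A minimal sketch of that static-table approach (the table name is arbitrary; the copy must be refreshed whenever the server's timezone data changes):

```sql
-- materialize the view once into a plain table
CREATE TABLE static_timezone_names AS
  SELECT * FROM pg_timezone_names;

-- an index makes the WHERE name = ... lookups cheap
CREATE INDEX static_tz_name_idx ON static_timezone_names (name);

SELECT * FROM static_timezone_names WHERE name = 'some/timezone';
```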

//Magnus

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] I get a error msg when i try to connect to the DB

2007-08-17 Thread Rajaram J

Hi

I tried this option but I still get the same message. Are there any more 
options I can try?


Regards
Rajaram J
- Original Message - 
From: Brad Nicholson [EMAIL PROTECTED]

To: Rajaram J [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Thursday, August 16, 2007 9:35 PM
Subject: Re: [GENERAL] I get a error msg when i try to connect to the DB



On Thu, 2007-08-16 at 21:16 +0530, Rajaram J wrote:

Hi

I have installed PostgreSQL 7.4.17 and when I try to connect to the db
I get the message:

psql: FATAL: no pg_hba.conf entry for host ???, user sfmdb,
database LOGDB, SSL off


You are allowing local, non-TCP/IP connections, and connections through
the loopback, but you are not allowing remote connections.  You need an
entry of some form like:

host all all ??? 255.255.255.255 trust

btw, trust is usually bad, you should use some form of authentication.


My pg_hba.conf entry is as below



# TYPE  DATABASE  USER  IP-ADDRESS       IP-MASK           METHOD

# Disable default configuration

local   all       all                                      trust
# IPv4-style local connections:
host    all       all   127.0.0.1        255.255.255.255   trust
# IPv6-style local connections:
host    all       all   ::1   ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff   trust


can you please guide me on what the problem might be.

Regards

Rajaram



--
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Hannes Dorbath

On 17.08.2007 15:59, Tom Lane wrote:

On the other side of the coin, I have little confidence in DRBD
providing the storage semantics we need (in particular guaranteeing
write ordering).  So that path doesn't sound exactly risk-free either.


To my understanding DRBD provides this. I think a discussion about that 
with the DRBD developers would be very useful for many users searching 
for a solution to replicate PostgreSQL, so I'm cross posting this to 
DRBD list. Maybe you can make clear in detail what requirements 
PostgreSQL has.



--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Some frustrations with admin tasks on PGSQL database

2007-08-17 Thread hubert depesz lubaczewski
On Fri, Aug 17, 2007 at 10:22:55PM +0800, Phoenix Kiula wrote:
 Wow, smartest advice of the day! Yes, a lot of our data in that column
 has dots and numbers (800,000 compared to 6 million), so I wanted to
 get only to the stuff that was pure alphabets, but just didn't think
 of how.

what i really meant was: check how many records will actually be modified
- i.e. contain upper-case letters.

the fact that some records contain only dots and digits is nice, but i
suspect that you also have a lot of records with only lower-case letters
in the field. or am i wrong?

depesz

-- 
quicksil1er: postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV! :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Sander Steffann
Hi,
 
 On the other side of the coin, I have little confidence in DRBD
 providing the storage semantics we need (in particular guaranteeing
 write ordering).  So that path doesn't sound exactly risk-free either.

DRBD seems to enforce strict write ordering on both sides of the link
according to the docs. I didn't look at the code, but my plug-pulling tests
on a busy PostgreSQL server didn't cause any problems. No conclusive
evidence, but useful at least in my use case. (And yes: I make pg_dumps
often, just in case.)

- Sander


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Tom Lane
Mikko Partio [EMAIL PROTECTED] writes:
 This was my original intention. I'm still quite hesitant to trust the
 fencing device's ability to guarantee that only one postmaster at a time
 is running, because of the disastrous possibility of corrupting the whole
 database.

Making that guarantee is a fencing device's only excuse for existence.
So I think you should trust that a properly-implemented fence will do
what it's claimed to do.

On the other side of the coin, I have little confidence in DRBD
providing the storage semantics we need (in particular guaranteeing
write ordering).  So that path doesn't sound exactly risk-free either.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Some frustrations with admin tasks on PGSQL database

2007-08-17 Thread hubert depesz lubaczewski
On Fri, Aug 17, 2007 at 07:49:08PM +0800, Phoenix Kiula wrote:
 I have dropped all indexes/indicises on my table, except for the
 primary key. Still, when I run the query:
 UPDATE mytable SET mycolumn = lower(mycolumn);

can you please check this:

select count(*) from mytable;
select count(*) from mytable where mycolumn ~ '[A-Z]';

and if the second is lower than first make the update:
update mytable set mycolumn = lower(mycolumn) where mycolumn ~ '[A-Z]';

of course if your data contain national characters you have to include
them (upper case only) in this regexp.

depesz

-- 
quicksil1er: postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV! :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Some frustrations with admin tasks on PGSQL database

2007-08-17 Thread hubert depesz lubaczewski
On Fri, Aug 17, 2007 at 09:50:42PM +0800, Phoenix Kiula wrote:
  How big is the actual table itself (in bytes).
 Where should I find this?

select pg_relation_size('mytable');

depesz

-- 
quicksil1er: postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV! :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Repeat posts

2007-08-17 Thread Tom Lane
Raymond O'Donnell [EMAIL PROTECTED] writes:
 Is it just me? :-) from time to time I get repeat broadcasts from 
 various PG mailing lists - posts that I've already received several days 
 previously are sent again.

There are people on the lists with broken mail software that resubmits
old traffic back to the list.  The listserv has some defenses against
that --- it's supposed to reject message-ids that it's seen before ---
but count on Microsoft to mangle mail messages badly enough to get
around that.

The latest example, which I too just got, looks on inspection of the
headers to be blamable on [EMAIL PROTECTED].  The time delay suggests
it was held for moderation, so maybe the software *did* recognize it as
a dup and it was moderator error to let it through.

regards, tom lane

--- Forwarded Message

Return-Path: [EMAIL PROTECTED]
Delivery-Date: Fri Aug 17 08:31:02 2007
Received: from postgresql.org (postgresql.org [200.46.204.71])
by sss.pgh.pa.us (8.14.1/8.14.1) with ESMTP id l7HCUw15003684
for [EMAIL PROTECTED]; Fri, 17 Aug 2007 08:31:01 -0400 (EDT)
Received: from localhost (maia-1.hub.org [200.46.204.191])
by postgresql.org (Postfix) with ESMTP id E647F9FB91B;
Fri, 17 Aug 2007 09:17:47 -0300 (ADT)
Received: from postgresql.org ([200.46.204.71])
 by localhost (mx1.hub.org [200.46.204.191]) (amavisd-maia, port 10024)
 with ESMTP id 12766-01-6; Fri, 17 Aug 2007 09:17:47 -0300 (ADT)
Received: from postgresql.org (postgresql.org [200.46.204.71])
by postgresql.org (Postfix) with ESMTP id BAD949FC15F;
Fri, 17 Aug 2007 09:04:42 -0300 (ADT)
Received: from localhost (maia-5.hub.org [200.46.204.182])
by postgresql.org (Postfix) with ESMTP id 2B8409F983B
for [EMAIL PROTECTED]; Tue, 14 Aug 2007 21:32:16 -0300 (ADT)
Received: from postgresql.org ([200.46.204.71])
 by localhost (mx1.hub.org [200.46.204.182]) (amavisd-maia, port 10024)
 with ESMTP id 15822-01 for [EMAIL PROTECTED];
 Tue, 14 Aug 2007 21:32:06 -0300 (ADT)
X-Greylist: from auto-whitelisted by SQLgrey-1.7.5
Received: from mx3.hub.org (206-223-169-73.beanfield.net [206.223.169.73])
by postgresql.org (Postfix) with ESMTP id B3CD39F9838
for pgsql-general@postgresql.org; Tue, 14 Aug 2007 21:32:10 -0300 
(ADT)
X-Greylist: from auto-whitelisted by SQLgrey-1.7.5
Received: from avscan-cl-a.dmv-online.net (avscan-cl-a.dmv-online.net 
[216.240.97.7])
by mx3.hub.org (Postfix) with ESMTP id 329CC37CC65
for pgsql-general@postgresql.org; Tue, 14 Aug 2007 21:32:10 -0300 
(ADT)
Received: from avscan-cl-a.dmv-online.net (localhost.dmv-online.net [127.0.0.1])
by avscan-cl-a.dmv-online.net (8.13.8/8.13.6) with ESMTP id 
l7ENS25C060794;
Tue, 14 Aug 2007 19:28:02 -0400 (EDT)
(envelope-from [EMAIL PROTECTED])
Received: (from [EMAIL PROTECTED])
by avscan-cl-a.dmv-online.net (8.13.8/8.13.6/Submit) id l7ENS2H9060793;
Tue, 14 Aug 2007 19:28:02 -0400 (EDT)
(envelope-from postmaster)
Received: from mx2.hub.org (mx2.hub.org [200.46.204.254])
by mgw-cl-a.dmv.com (8.13.6/8.13.6) with ESMTP id l7EJ2gc8017686
for [EMAIL PROTECTED]; Tue, 14 Aug 2007 15:02:48 -0400 (EDT)
(envelope-from [EMAIL PROTECTED])
Received: from postgresql.org (postgresql.org [200.46.204.71])
by mx2.hub.org (Postfix) with ESMTP id C20728B367E;
Tue, 14 Aug 2007 16:02:40 -0300 (ADT)
Received: from localhost (maia-5.hub.org [200.46.204.182])
by postgresql.org (Postfix) with ESMTP id 826889FA117
for [EMAIL PROTECTED]; Tue, 14 Aug 2007 15:59:54 -0300 (ADT)
Received: from postgresql.org ([200.46.204.71])
 by localhost (mx1.hub.org [200.46.204.182]) (amavisd-maia, port 10024)
 with ESMTP id 11907-01 for [EMAIL PROTECTED];
 Tue, 14 Aug 2007 15:59:47 -0300 (ADT)
X-Greylist: domain auto-whitelisted by SQLgrey-1.7.5
Received: from rv-out-0910.google.com (rv-out-0910.google.com [209.85.198.188])
by postgresql.org (Postfix) with ESMTP id B385D9FA0A7
for pgsql-general@postgresql.org; Tue, 14 Aug 2007 15:59:48 -0300 
(ADT)
Received: by rv-out-0910.google.com with SMTP id f1so1312494rvb
for pgsql-general@postgresql.org; Tue, 14 Aug 2007 11:59:43 -0700 
(PDT)
DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed;
d=gmail.com; s=beta;

h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references;

b=HBo76On4+5ImPvzBDlrWomsojLb/BaU/+0YrO09J55NTtUddXNyU9Ju+wW7DuUC2fSxsZmPlzrc4yPGJ9xNcbP8vfoqYqnxy3U2N21tBOLCWIAdQCQTBj+vCHQG7U+ttJW1ITm+og8OXWEE3pHnFwoDo8RInExAwhniSaMid+TM=
DomainKey-Signature: a=rsa-sha1; c=nofws;
d=gmail.com; s=beta;

h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references;


Re: [GENERAL] Some frustrations with admin tasks on PGSQL database

2007-08-17 Thread Phoenix Kiula
On 17/08/07, hubert depesz lubaczewski [EMAIL PROTECTED] wrote:
 On Fri, Aug 17, 2007 at 07:49:08PM +0800, Phoenix Kiula wrote:
  I have dropped all indexes/indicises on my table, except for the
  primary key. Still, when I run the query:
  UPDATE mytable SET mycolumn = lower(mycolumn);

 can you please check this:

 select count(*) from mytable;
 select count(*) from mytable where mycolumn ~ '[A-Z]';

 and if the second is lower than first make the update:
 update mytable set mycolumn = lower(mycolumn) where mycolumn ~ '[A-Z]';

 of course if your data contain national characters you have to include
 them (upper case only) in this regexp.


Wow, smartest advice of the day! Yes, a lot of our data in that column
has dots and numbers (800,000 compared to 6 million), so I wanted to
get only to the stuff that was pure alphabets, but just didn't think
of how.

[Slithers away to get dunce cap].

Thanks much!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Some frustrations with admin tasks on PGSQL database

2007-08-17 Thread Phoenix Kiula
On 17/08/07, hubert depesz lubaczewski [EMAIL PROTECTED] wrote:
 On Fri, Aug 17, 2007 at 10:22:55PM +0800, Phoenix Kiula wrote:
  Wow, smartest advice of the day! Yes, a lot of our data in that column
  has dots and numbers (800,000 compared to 6 million), so I wanted to
  get only to the stuff that was pure alphabets, but just didn't think
  of how.

 what i really meant was: check how many record will actually be modified
 - i.e. contain upper-case letters.

 the fact that some records contain only dots and digits is nice, but i
 suspect that you also have a lot of record with only lower-case letters
 in the field. or am i wrong?




You guessed it right. The query makes perfect sense. Only update those
records that have uppercase, and therefore require updating! Many
thanks.
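The pattern depesz suggests, restricting the UPDATE to rows that actually need the change, can be sketched outside PostgreSQL too. A minimal Python/sqlite3 illustration (table and data are invented; sqlite's GLOB character class stands in for the `~ '[A-Z]'` regexp):

```python
import sqlite3

# Hypothetical table: only rows containing an upper-case letter are
# rewritten, so already-lower-case rows skip the write entirely.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mycolumn TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)",
                 [("ABC",), ("abc",), ("123-def",), ("MiXeD",)])

cur = conn.execute(
    "UPDATE mytable SET mycolumn = lower(mycolumn) "
    "WHERE mycolumn GLOB '*[A-Z]*'")  # GLOB plays the role of ~ '[A-Z]'
print(cur.rowcount)  # only 2 of the 4 rows needed updating
```

The saving is exactly what the thread describes: rows already in the target form generate no rewrite at all.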

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Magnus Hagander
On Fri, Aug 17, 2007 at 04:19:57PM +0200, Hannes Dorbath wrote:
 On 17.08.2007 15:59, Tom Lane wrote:
 On the other side of the coin, I have little confidence in DRBD
 providing the storage semantics we need (in particular guaranteeing
 write ordering).  So that path doesn't sound exactly risk-free either.
 
 To my understanding DRBD provides this. I think a discussion about that 
 with the DRBD developers would be very useful for many users searching 
 for a solution to replicate PostgreSQL, so I'm cross posting this to 
 DRBD list. Maybe you can make clear in detail what requirements 
 PostgreSQL has.

It does, AFAIK, if you configure it properly. I think it's the protocol
parameter you need to set to C, which is the slowest, but it's the only one
that waits for the block to hit *both* disks.

//Magnus


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Where are the Server error logs

2007-08-17 Thread Rajaram J
Hi

when i try to use psql i get message

bash-2.01# /opt/sfmdb/pgsql/bin/psql -U sfmdb
psql: FATAL:  missing or erroneous pg_hba.conf file
HINT:  See server log for details.

where can the server log files be found. if i need to set some parameter in 
which file do i do that.

Regards
Rajaram J

Re: [GENERAL] Where are the Server error logs

2007-08-17 Thread Raymond O'Donnell

On 17/08/2007 15:47, Rajaram J wrote:

where can the server log files be found. if i need to set some parameter 
in which file do i do that.


The server logging method and log files are set in postgresql.conf - 
this is well documented in the file itself, as well as the PostgreSQL docs.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---
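As a concrete illustration of what Ray describes, a minimal postgresql.conf fragment for stderr logging in the 8.x series might look like the following (values here are examples to adapt, and note `redirect_stderr` was renamed `logging_collector` in 8.3):

```
# Where to send log output: stderr, syslog, or (on Windows) eventlog
log_destination = 'stderr'

# Capture stderr into rotating log files under the data directory
redirect_stderr = on                  # 'logging_collector' from 8.3 onward
log_directory = 'pg_log'              # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_min_messages = notice
```

With these settings, the server log the pg_hba.conf HINT refers to would be under pg_log/ in the data directory.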

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] why it doesn't work? referential integrity

2007-08-17 Thread Pavel Stehule

 looks strange to me too, but i never had foreign keys to the same table.
 it works if you define your chekced_by FK deferrable with

 checked_by INT REFERENCES users (id) ON UPDATE CASCADE ON DELETE SET NULL
 DEFERRABLE INITIALLY DEFERRED,

 it seems that postgresql does its job in a procedural way instead of
 relational.


It is solved. It was PostgreSQL bug repaired
http://archives.postgresql.org/pgsql-committers/2007-08/msg00207.php
Thank you
have a nice day
Pavel Stehule

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Ted Byers
Sorry for top posting - but this is an annoying aspect of this web interface to 
email.  :-(
   
  Isn't what you're doing here a misuse of the idea of a transaction.  I don't 
claim to be an expert in this, but I thought the idea of a transaction was that 
you bundle a group of statements together that must all succeed or all fail 
together; so if any one of them fails, all the preceding work completed in the 
transaction gets rolled back.  Did I misunderstand something here, then?  If 
not, then the operations you describe for your first transaction really ought 
not be in the same transaction.
   
  Now, when I run a script of SQL statements in MS SQL Server, a statement that 
encounters an error will simply stop, and then control can be passed to the 
next SQL statement in the script, BUT THIS IS NOT IN A TRANSACTION!!!  It is 
only a script, and in the context in which I was working, I didn't need to use 
transactions.  And, I can provide additional options in the statements that 
will modify this behaviour, so the script aborts at the error condition rather 
than continuing with the next SQL statement in the script.  With some 
statements, what I would regard as an error requiring the script to abort seem 
to be regarded as merely a warning by default.  For example, if in a bulk load of 
data into a table there is a problem with the data in the tenth field of 
the second row, then the first row, which is correct, is stored, the statement 
aborts at the problem row, and control is passed to the next SQL statement.  In 
my situations, I had other programs that would clean up the
 data if this sort of problem arises, so where the problem is seen by default 
as warranting only a warning, I could upgrade it to be regarded as an error.  
And of course, there are options for controlling how the script behaves when an 
error occurs.  Perhaps that is involved in the behaviour you are reporting for 
MS SQL Server??  I haven't investigated this myself as I haven't had the 
problem you describe.
   
  I didn't quite understand your description, in another post, of how Spring 
treats your database statements.  Am I to understand it puts all your SQL 
statements into a single transaction?  If so, either they badly mishandle 
transactions or they are working with a very different concept of what a 
transaction is.
   
  One last question.  You describe part of your problem as being correct 
addition of data to an audit table.  If I haven't misunderstood what you're 
doing, isn't it incomplete if you record only statement failures?  When I deal 
with audits, I put the logic into triggers whenever possible.  And I'd maintain 
an independent error log from my application code, probably in a catch clause, 
and this either goes to a flat file or uses an independent connection to the 
database.  And my practice is to use separate try/catch blocks for each 
statement that can throw an exception, to improve the granularity of error 
handling logic.  That is the only way to have a chance of getting one statement 
to execute regardless of whether or not a preceding statement throws an 
exception.
   
  I have a special interest in this because I am just beginning to look at 
Spring (I downloaded it just a few days ago).
   
  Cheers,
   
  Ted

Tyson Lloyd Thwaites [EMAIL PROTECTED] wrote:
  Our app uses system state. We scan filesystems and record file 
information in a database.

Here is one example:


- insert 250 files
- update some stats (MUST work even if insert fails)
- update agent last-contact time (also must work so we know it's not awol)


When last insert has finished, server will kick into summary mode:


- wait for asynchronous inserts to finish
- summarise data
- do some misc operations
- switch back to 'Idle' state


That last step is where the trick is: if anything goes wrong with the 
previous operations, we MUST be able to put the system back into an 
'Idle' state. Currently we do this in a catch block, ie:

try {

} catch {

}

Now of course that won't work in pg. We use the same logic all through 
the system, so you can see my problem. For example, if the user deletes 
a drive record that still has referential links to it, we do this:

try {

} catch (ReferentialIntegrityException e) {

}

We rely on the fact that we can still do things and commit a transaction 
even if a single statement has failed.

The above drive delete case is where I first noticed the problem when 
switching from MSSQL: instead of the server handling the exception and 
doing something else instead, the 'something else' also threw an 
exception (txn is aborted), which propagated to the client end.

UPDATE: Webb has proposed that this behaviour may be MSSQL misbehaving.

-- 
Tyson Lloyd Thwaites
Lead Software Developer
Allianza Services Pty Ltd

M: 0404 456 858
P: (+618) 8232 5525
F: (+618) 8232 8252
W: www.allianza.com.au


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match

Re: [GENERAL] FInding corrupt values in UTF-8 tables (regexp question, I think)

2007-08-17 Thread Michael Glaesemann


On Aug 17, 2007, at 10:58 , Phoenix Kiula wrote:


What would be the SQL to find data of this nature? My column can only
have alphanumeric data, and the only symbols allowed are "-" and "_",
so I tried this regexp query:

select id, t_code
from traders
where t_code ~ '[^A-Za-z1-9\-]'


If you're including "-" in a range as a character, doesn't it have to  
go first?

Try this:

WHERE t_code ~ $re$[^-A-Za-z1-9_]$re$

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] FInding corrupt values in UTF-8 tables (regexp question, I think)

2007-08-17 Thread Phoenix Kiula
I'm noticing that some of my data has been imported as junk text:

For instance:

klciã«

What would be the SQL to find data of this nature? My column can only
have alphanumeric data, and the only symbols allowed are "-" and "_",
so I tried this regexp query:

select id, t_code
from traders
where t_code ~ '[^A-Za-z1-9\-]'
limit 100;

But this starts to return values such as "181xn-807199", which is valid
as per the above regexp? Also, when I try to include the underscore,
as follows...

select id, t_code
from traders
where t_code ~ '[^A-Za-z1-9\-\_]'
limit 100;

This gives me an error: ERROR:  invalid regular expression: invalid
character range.

What am I missing? Does this have something to do with erroneous
encodings? I want my data to be utf-8 but I do want to find it with
latin1 queries when the text in columns is supposed to be only latin1
characters! Or is a-z in utf-8 considered different from a-z in
latin1?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] FInding corrupt values in UTF-8 tables (regexp question, I think)

2007-08-17 Thread Manuel Sugawara
Phoenix Kiula [EMAIL PROTECTED] writes:


 select id, t_code
 from traders
 where t_code ~ '[^A-Za-z1-9\-\_]'
 limit 100;

 This gives me an error: ERROR:  invalid regular expression: invalid
 character range.

Put the dash at the start of the character class: [^-A-Za-z1-9_]


 What am I missing? 

In a character class expression the dash has an special meaning. If
you need to match a dash it has to be the first character.

Regards,
Manuel.
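The dash rule is easy to check with any POSIX-style engine. A small Python sketch (sample strings taken from the thread; note the class below permits 0-9, since the thread's 1-9 would accidentally flag zeros as corrupt):

```python
import re

# A leading "-" in a character class is literal; anywhere else it
# denotes a range like A-Z, hence the "invalid character range" error.
invalid = re.compile(r'[^-A-Za-z0-9_]')  # matches any *disallowed* character

print(invalid.search("181xn-807199"))  # letters/digits/dash all pass: no match
print(bool(invalid.search("klci\u00e3\u00ab")))  # the corrupt bytes are caught
```

The same negated class, dash first, is what the `$re$[^-A-Za-z0-9_]$re$` pattern expresses on the PostgreSQL side.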

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] FInding corrupt values in UTF-8 tables (regexp question, I think)

2007-08-17 Thread Belinda M. Giardine



On Fri, 17 Aug 2007, Michael Glaesemann wrote:



On Aug 17, 2007, at 10:58 , Phoenix Kiula wrote:


What would be the SQL to find data of this nature? My column can only
have alphanumeric data, and the only symbols allowed are "-" and "_",
so I tried this regexp query:

   select id, t_code
   from traders
   where t_code ~ '[^A-Za-z1-9\-]'


If you're including "-" in a range as a character, doesn't it have to go first?
Try this:

WHERE t_code ~ $re$[^-A-Za-z1-9_]$re$

Michael Glaesemann
grzm seespotcode net



How about
WHERE t_code ~ $re$[^-A-Za-z0-9_]$re$

So that zeros are allowed?

Belinda


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] More Time Zone fun

2007-08-17 Thread Naz Gassiep
I'm making (slow) progress in my timezone system, and I just noticed 
this little behavioral nugget, which surely is a bug. In the system view 
pg_timezone_names there are a few timezones that use leap seconds. An example 
which I tested is Asia/Riyadh87. When I attempt to SET TIME ZONE using 
this timezone, I get this:


ERROR: time zone Asia/Riyadh87 appears to use leap seconds DETAIL: 
PostgreSQL does not support leap seconds.


Surely it's a bug to have timezones in the list that result in errors 
being thrown? Also, is there a reason that PostgreSQL does not support 
leap seconds?

Regards,
- Naz

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Enterprise Wide Deployment

2007-08-17 Thread Scott Marlowe
On 8/14/07, john_sm [EMAIL PROTECTED] wrote:

 Hey guys, for an enterprise wide deployment, what will you suggest and why
 among - Red Hat Linux, Suse Linux and Ubuntu Linux, also, do you think, we
 can negotiate the support pricing down?

It's more about your skill set and familiarity than whether any one of
those will work well enough.

If you're considering Ubuntu, you should be looking at debian, the
parent distro for ubuntu.  The server version of debian is very stable
and reliable.

As for cost, Debian and Ubuntu are free, and RedHat is available for
free as either Centos or White Box, with slightly delayed updates.

I lean towards RHEL because that's what I know.

No matter what you pick, you need to test it with your setup of course.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Help creating a function

2007-08-17 Thread Steve Atkins


On Aug 16, 2007, at 9:35 AM, Madison Kelly wrote:

Note: This is being sent again (in case it shows up later). It  
never seemed to have made it to the list.


Hi all,

  I'm using ulogd with PostgreSQL which stores IP addresses as 32bit
unsigned integers. So when I select some data I get something like:

ulogd=> SELECT id, ip_saddr, ip_daddr, raw_pktlen, ip_totlen, tcp_window
ulogd-> FROM ulog LIMIT 20;
 id |  ip_saddr  |  ip_daddr  | raw_pktlen | ip_totlen | tcp_window
----+------------+------------+------------+-----------+------------
  1 | 3232235874 | 1074534522 |         46 |        46 |      25825

  Where 'ip_saddr' and 'ip_daddr' are 'bigint'. I know I can convert
these numbers to dotted-decimal in perl with a small script like:

-=-=-
#!/usr/bin/perl

# This would be the number read from the DB
my $num=3232235874;

# Now do the math
my $temp=$num/256;
my $D=256*($temp-int($temp));
$temp=(int($temp))/256;
my $C=256*($temp-int($temp));
$temp=(int($temp))/256;
my $B=256*($temp-int($temp));
my $A=int($temp);
my $ip=$A.".".$B.".".$C.".".$D;

# Print the results
print "'num': [$num] -> 'IP': [$ip]\n";
-=-=-

  What I would like to do is create a function that would do the same
thing so I could read out the IP addresses as standard dotted-decimal
format. Could anyone help me with this? I am quite the n00b when it
comes to functions. :)


These functions convert between signed 32 bit integers (with a -2^31
offset) and dotted quads. You should be able to tweak them pretty
easily:

create or replace function ip2int(text) returns int as '
DECLARE
  a int;
  b int;
  c int;
  d int;
BEGIN
  a := split_part($1, ''.'', 1);
  b := split_part($1, ''.'', 2);
  c := split_part($1, ''.'', 3);
  d := split_part($1, ''.'', 4);
  RETURN (a-128) * 16777216 + b * 65536 + c * 256 + d;
END;
' LANGUAGE plpgsql IMMUTABLE;

create or replace function int2ip(int) returns text as '
DECLARE
  a int;
  b int;
  c int;
  d int;
BEGIN
   a := (($1 >> 24) & 255) # 128;
   b := ($1 >> 16) & 255;
   c := ($1 >> 8) & 255;
   d := $1 & 255;
  RETURN to_char(a, ''FM999'') || ''.'' || to_char(b, ''FM999'') || ''.'' ||
         to_char(c, ''FM999'') || ''.'' || to_char(d, ''FM999'');
END;
' LANGUAGE plpgsql IMMUTABLE;

There's probably a neater way to do it via the inet (or ip4) data
types, but these functions should be easier to tweak to use bigint.

Cheers,
  Steve
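For comparison, here are the same conversions without the signed-integer offset, as one might do after widening the column to bigint so the unsigned 32-bit ulogd values fit directly. A Python sketch (function names are mine, not from the thread):

```python
# Unsigned 32-bit integer <-> dotted-quad text: the arithmetic the
# plpgsql functions perform, minus the -2^31 offset trick.
def int2ip(n: int) -> str:
    return ".".join(str((n >> shift) & 255) for shift in (24, 16, 8, 0))

def ip2int(ip: str) -> int:
    a, b, c, d = (int(part) for part in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(int2ip(3232235874))  # the sample value from the thread: 192.168.1.98
```

The shift-and-mask form also avoids the repeated divide/truncate dance of the Perl script above.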



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] SELECT question

2007-08-17 Thread Michael Glaesemann


On Aug 17, 2007, at 7:27 , Michelle Konzack wrote:


*
* Do not Cc: me, because I am on THIS list, if I write here.*


You might want to consider changing your mailing list subscription  
settings to eliminatecc, e.g., send email to

[EMAIL PROTECTED] (not the list address!) with body

set pgsql-general eliminatecc

This should prevent the mailing list from sending you a second copy.


I have a very huge table of references from Websites (html, pics, ...)
where the elements are stored on a fileserver using sha384.

Now the indextable hold all filenames and download dates but now I  
like

to get a snapshot of a particular day.

How must I create the SELECT statement to get ALL files valid on a
particular day?

Note:  There can be every time a new index.html for example but images
   change only once a month...

   So I need all elements valid on the particular day, which means
   I need to select the LAST version of the elements...


I think what you want is something like:

SELECT DISTINCT ON (website_reference) website_reference,  
download_date, file_path

FROM indextable
WHERE download_date <= ? -- whatever date you're interested in
ORDER BY website_reference, download_date DESC;

This should return the most recent website_reference and its  
download_date that's earlier than the download_date specified in the  
WHERE clause.


DISTINCT ON is a (very helpful) PostgreSQL extension. You can get  
similar results using a subquery;


SELECT website_reference, download_date, file_path
FROM indextable
NATURAL JOIN (
SELECT website_reference, max(download_date) as download_date
FROM indextable
WHERE download_date <= ?
GROUP BY website_reference
) most_recent_versions;

This may return more than one row per website_reference if the  
website_reference has more than on file_path for a particular  
download_date.


Does this help? If not, could you give a bit more of a concrete example?

(Is is just me or have there been a lot of queries that can be solved  
using DISTINCT ON recently?)


Michael Glaesemann
grzm seespotcode net
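The subquery form ports to engines without DISTINCT ON. A runnable Python/sqlite3 sketch with invented sample data (`ref` abbreviates website_reference):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE indextable (ref TEXT, download_date TEXT, path TEXT)")
conn.executemany("INSERT INTO indextable VALUES (?, ?, ?)", [
    ("index.html", "2007-08-01", "v1"),
    ("index.html", "2007-08-15", "v2"),   # newer than the snapshot date
    ("logo.png",   "2007-07-01", "img1"),
])

# Latest version of every element as of the snapshot date.
rows = conn.execute("""
    SELECT i.ref, i.download_date, i.path
    FROM indextable AS i
    JOIN (SELECT ref, max(download_date) AS download_date
          FROM indextable
          WHERE download_date <= ?
          GROUP BY ref) AS latest
      ON i.ref = latest.ref AND i.download_date = latest.download_date
    ORDER BY i.ref
""", ("2007-08-10",)).fetchall()
print(rows)
```

For the 2007-08-10 snapshot this yields the 2007-08-01 index.html and the July logo.png: each element's last version on or before the chosen day, as the question asked.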



PGP.sig
Description: This is a digitally signed message part


Re: [GENERAL] Enterprise Wide Deployment

2007-08-17 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Scott Marlowe wrote:
 On 8/14/07, john_sm [EMAIL PROTECTED] wrote:
 Hey guys, for an enterprise wide deployment, what will you suggest and why
 among - Red Hat Linux, Suse Linux and Ubuntu Linux, also, do you think, we
 can negotiate the support pricing down?

I would suggest the following:

If you have real in house expertise:

CentOS or Debian

If you do not:

Redhat or Ubuntu Dapper.

Sincerely,

Joshua D. Drake


- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGxd1VATb/zqfZUUQRAi13AJ9c06AAfO7vGsThkZ/q2FIVvl5yVgCggK1L
saQH2nBItoQdvpXmrJTIss8=
=1ied
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Non-superuser creating a flat file

2007-08-17 Thread John DeSoi


On Aug 13, 2007, at 10:07 AM, Terri Reid wrote:

I have data that is being updated in a table that I need to export  
to a flat file via a database trigger on insert or update. The user  
performing the update will not be a superuser. I’ve tried to use  
COPY TO, but that doesn’t work for non-superusers. Is there some  
other functionality that can write out flat files? I’m looking for  
something similar to the Oracle UTL file functionality, as ideally  
I’d want to append the data to an existing file and be able to  
manipulate the data before I wrote it out.


Maybe it could work if you create your function using the SECURITY  
DEFINER option?


http://www.postgresql.org/docs/8.2/interactive/sql-createfunction.html



John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] [RESEND] Transaction auto-abort causes grief with Spring Framework

2007-08-17 Thread Webb Sprague
 Isn't the 'try' statement rather similar to a 'savepoint' command? I
 realize it would be difficult to override the behaviour of try {...}
 catch (...) {...}, but it shouldn't be too hard to wrap it somehow for
 exceptions in database code.

Yes, but I believe the OP was getting two levels of his application
mixed up:  he was doing something that caused a rollback in the
*database*, then hoping to recover in a catch block in the
*application* without terminating the aborted transaction in the
database.  Or so I gather.

You are right in another post about the purpose and design of
transactions, and don't use the discussion here as a model, though
drivers seem to often do weird stuff with transactions behind your
back.  Psycopg (python) does an implicit begin, so you must commit,
which then starts another begin automatically.  I think you can set  a
handle to do autocommit, but I never do. This seems best because it
forces you to handle transactions explicitly, but I can imagine other
(bad) approaches, and Spring may use them (though I think the
difficulty is that MS-SQL is sloppy, not Spring, and that the OP is
still getting used to TX's and MS-SQL covered up some things that
shouldn't have been covered).

W
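The per-statement recovery the thread keeps circling can be made explicit with savepoints. A sketch using Python's stdlib sqlite3 (table names invented; against PostgreSQL via JDBC or psycopg the SAVEPOINT/ROLLBACK TO statements are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction by hand
conn.execute("CREATE TABLE drives (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE audit (msg TEXT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO drives VALUES (1)")      # work that must survive
conn.execute("SAVEPOINT before_risky")
try:
    conn.execute("INSERT INTO drives VALUES (1)")  # duplicate key: fails
except sqlite3.IntegrityError:
    # Undo only the failed statement, not the whole transaction,
    # then record the failure in the same transaction.
    conn.execute("ROLLBACK TO before_risky")
    conn.execute("INSERT INTO audit VALUES ('insert failed; marked instead')")
conn.execute("RELEASE before_risky")
conn.execute("COMMIT")                             # earlier work still commits
```

This is the try/catch style from MSSQL made explicit: the savepoint marks where the "try" begins, and ROLLBACK TO is the database-side half of the "catch".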

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] FInding corrupt values in UTF-8 tables (regexp question, I think)

2007-08-17 Thread Michael Glaesemann
[Please reply to the list so that others may benefit from and  
participate in the discussion.]


On Aug 17, 2007, at 12:50 , Phoenix Kiula wrote:



On 18/08/07, Michael Glaesemann [EMAIL PROTECTED] wrote:



On Aug 17, 2007, at 10:58 , Phoenix Kiula wrote:


What would be the SQL to find data of this nature? My column can  
only
have alphanumeric data, and the only symbols allowed are "-" and  
"_",

so I tried this regexp query:

select id, t_code
from traders
where t_code ~ '[^A-Za-z1-9\-]'



If you're including "-" in a range as a character, doesn't it have to
go first?
Try this:

WHERE t_code ~ $re$[^-A-Za-z1-9_]$re$






Thanks, yes, this is sweet!

If I include this into a check constraint on the table, would that be
very resource intensive for INSERTs and UPDATEs?



Maybe. I don't know. What's "very"? Measure, change, and measure again.  
Premature optimization and all that. :) First I make it correct.


Michael Glaesemann
grzm seespotcode net




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] FInding corrupt values in UTF-8 tables (regexp question, I think)

2007-08-17 Thread Phoenix Kiula
On 18/08/07, Michael Glaesemann [EMAIL PROTECTED] wrote:
 [Please reply to the list so that others may benefit from and
 participate in the discussion.]

  If you're including - in a range as a character, doesn't it have to
  go first?
  Try this:
 
  WHERE t_code ~ $re$[^-A-Za-z1-9_]$re$
 
 
 
 
 
  Thanks, yes, this is sweet!
 
  If I include this into a check constraint on the table, would that be
  very resource intensive for INSERTs and UPDATEs?
 

 Maybe. I don't know. What's "very"? Measure, change, and measure again.
 Premature optimization and all that. :) First I make it correct.



Yes, I meant to reply all. Thanks much for your help. I'll play around
with the check constraint.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] I get a error msg when i try to connect to the DB

2007-08-17 Thread RAJARAM JAYARAMAN
Hi

Sorry I forgot to give the pg_hba.conf detail.

Error message -

== removing existing temp installation ==
== creating temporary installation ==
== initializing database system ==
== starting postmaster ==
running on port 65432 with pid 8227
== creating database "regression" ==
createdb: could not connect to database template1: FATAL:  missing or erroneous pg_hba.conf file
HINT:  See server log for details.
pg_regress: createdb failed

Present pg_hba.conf entries -

# TYPE  DATABASE  USER  IP-ADDRESS       IP-MASK          METHOD
local   all       all                                     trust
host    all       all   ???              255.255.255.255  trust
host    all       all   0.0.0.0          0.0.0.0          trust
# IPv4-style local connections:
host    all       all   127.0.0.1        255.255.255.255  trust
# IPv6-style local connections:
host    all       all   ::1              :::::::          trust

I'm stuck here and could not proceed any further. Can you please help?

Regards
Rajaram J



Re: [GENERAL] Enterprise Wide Deployment

2007-08-17 Thread Douglas McNaught
Joshua D. Drake [EMAIL PROTECTED] writes:

 Scott Marlowe wrote:
 On 8/14/07, john_sm [EMAIL PROTECTED] wrote:
 Hey guys, for an enterprise wide deployment, what will you suggest and why
 among - Red Hat Linux, Suse Linux and Ubuntu Linux, also, do you think, we
 can negotiate the support pricing down?

 I would suggest the following:

 If you have real in house expertise:

 CentOS or Debian

You can get support from HP for Debian AFAIK.

-Doug

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Writing most code in Stored Procedures

2007-08-17 Thread Guy Rouillier

Steve Manes wrote:
I'm fairly hardcore about keeping as much business logic as I can in the 
database.  In fact, I only do SELECTs from the application, and usually 
via Views.  All inserts, updates and deletes are via procs.

...
And, yes, it's faster.  Particularly if business logic decisions have to 
be made in context with a transaction.


I have a thread I started ages ago over on the PERFORM list that I'm 
sadly just now being able to provide some insight on.  I'll be replying 
on that thread in more detail, but the short of it turns out to be that 
at least in this one application, using stored procs for inserts is 
slowing down the app considerably.  The app does high volume inserts and 
updates, about 16 million rows a day.  By switching from stored procs to 
inline inserts, elapsed time dropped from 2595 seconds to 991 seconds 
for a test run.


So the moral of the story is that, as anyone who has worked 
professionally for a number of years knows, no magic bullets exist. 
General guidelines can be helpful, but each scenario must be 
individually investigated.


--
Guy Rouillier

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] I get a error msg when i try to connect to the DB

2007-08-17 Thread Andrej Ricnik-Bay
On 8/18/07, Rajaram J [EMAIL PROTECTED] wrote:
 Hi

 I tred this option but still i get the same message. Any more options that i
 can try.
And after that change you restarted the postmaster?


 Regards
 Rajaram J
Cheers,
Andrej

-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Enterprise Wide Deployment

2007-08-17 Thread Andrej Ricnik-Bay
On 8/14/07, john_sm [EMAIL PROTECTED] wrote:
 Hey guys, for an enterprise wide deployment, what will you
 suggest and why among - Red Hat Linux, Suse Linux and
 Ubuntu Linux, also, do you think, we can negotiate the
 support pricing down?
For what it's worth:  my personal experiences with RH support
were shocking, to say the least, and I can't fathom why
anyone would want to pay for it.

If you have in-house linux expertise, choose whatever they're
familiar with.  If you don't - find a local company that can give
you support and use what they're familiar with.  Just my 2 cents.


Cheers,
Andrej

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Writing most code in Stored Procedures

2007-08-17 Thread Kenneth Downs

Guy Rouillier wrote:

Steve Manes wrote:
I'm fairly hardcore about keeping as much business logic as I can in 
the database.  In fact, I only do SELECTs from the application, and 
usually via Views.  All inserts, updates and deletes are via procs.

...
And, yes, it's faster.  Particularly if business logic decisions have 
to be made in context with a transaction.


I have a thread I started ages ago over on the PERFORM list that I'm 
sadly just now being able to provide some insight on.  I'll be 
replying on that thread in more detail, but the short of it turns out 
to be that at least in this one application, using stored procs for 
inserts is slowing down the app considerably.  The app does high 
volume inserts and updates, about 16 million rows a day.  By switching 
from stored procs to inline inserts, elapsed time dropped from 2595 
seconds to 991 seconds for a test run.


So the moral of the story is that, as anyone who has worked 
professionally for a number of years knows, no magic bullets exist. 
General guidelines can be helpful, but each scenario must be 
individually investigated.




I would suggest separating the strategy of server-side biz rules from 
the implementation method of stored procedures.  You can also implement 
biz rules as triggers and stick with inline inserts, updates and 
deletes, which is what we do in my shop.


That being said, the enforcement of the biz rules has to be done 
somewhere, and no matter where it is done it will involve disk reads and 
validations.  It always *seemed* to me that putting the code on a table 
trigger involves the least possible overhead for doing these things. It 
provides complete real-time enforcement within the transaction with no 
network overhead, and has the simplest possible interface: plain old SQL.


--
Kenneth Downs
Secure Data Software, Inc.
www.secdat.comwww.andromeda-project.org
631-689-7200   Fax: 631-689-0527
cell: 631-379-0010


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

