[GENERAL] How to check if an array is empty

2008-09-11 Thread Tobias Anstett
Hi,

 

in my special case I'd like to check if an XML[] is empty. Because postgres
doesn't implement any equality operators for XML, this is not a 'simple' task.

 

My current solution is:

 

SELECT * FROM sometable WHERE XMLSERIALIZE(CONTENT ((XPATH('/a:bar',
somexmlcolumn, ARRAY[ARRAY['a', 'http://foo']]))[1]) AS text) <> ''

 

But there must be a better way - maybe I missed something important - I also
tried queries like the following:

 

SELECT * FROM sometable WHERE (XPATH('/a:bar', somexmlcolumn,
ARRAY[ARRAY['a', 'http://foo']])) <> ('{}'::XML[])
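One hedged alternative sketch: instead of comparing the array to anything, test its bounds. array_upper() returns NULL for an empty array, so no XML equality operator is needed (table, column, and namespace map are taken from the example above):

```sql
-- Hedged sketch: array_upper() is NULL for an empty array, so the
-- XPath result can be tested without any XML equality operator.
SELECT *
FROM sometable
WHERE array_upper(
          XPATH('/a:bar', somexmlcolumn, ARRAY[ARRAY['a', 'http://foo']]),
          1
      ) IS NOT NULL;   -- i.e. the XPath matched at least one node
```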

 

Any ideas and feedback are welcome.

 

Cheers, Tobias 



Re: [GENERAL] Stuck query

2008-09-11 Thread Tommy Gildseth

Erik Jones wrote:


On Sep 10, 2008, at 5:57 AM, Tommy Gildseth wrote:


Tom Lane wrote:

Tommy Gildseth [EMAIL PROTECTED] writes:

Richard Huxton wrote:


For what it's worth, I've run into a situation similar to this with a 
client a couple of times in the last week or two (I can't say identical as I 
don't know all of the details about the client end of your connection).  
Using the client port # you can use lsof in addition to netstat (lsof -i 
tcp:49004) to track down the client process.  In our case, the client 
process was a connection made via an ssh tunnel and was sitting in 
FIN_WAIT2 status.  Killing the client process individually made 
everything go away nicely without any kind of extra downtime necessary.
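For reference, the commands being described look like this (the port number is illustrative; take the real one from pg_stat_activity or netstat):

```shell
# Which process owns that client port? (port 49004 is illustrative)
lsof -i tcp:49004
# Cross-check the socket state (FIN_WAIT2 etc.); flags vary by platform
netstat -an | grep 49004
```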



Thanks, I'll keep that in mind if/when it happens again.


--
Tommy Gildseth

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] declare column update expression

2008-09-11 Thread Chris Velevitch
In 7.4, how do I declare that a column in a table is to be
automatically set to the value of some fixed expression whenever a row
is updated?

Eg column last_modified is always set to current_timestamp



Chris
--
Chris Velevitch
Manager - Adobe Platform Users Group, Sydney
m: 0415 469 095
www.apugs.org.au

Adobe Platform Users Group, Sydney
September meeting: It's Going To Be Brilliant
Date: Mon 29th September 6pm for 6:30 start
Details and RSVP on http://apugs2008september.eventbrite.com



Re: [GENERAL] postgres user account on OSX

2008-09-11 Thread Dave Page
On Thu, Sep 11, 2008 at 3:53 AM, Shane Ambler [EMAIL PROTECTED] wrote:
 Tom Lane wrote:

 Darren Weber [EMAIL PROTECTED] writes:

 There is a postgres user account on my OSX system.  I'm not clear
 about how it was created.  I've installed a binary version of 8.3 in
 /Library/PostgreSQL/8.3/ and built another version from source into
 /usr/local/pgsql/.  When I login as root and then 'su - postgres' it
 takes me to the postgres account and the user directory is at
 /opt/local/var/db/postgresql83/.

 Can someone explain how this user account was created?

 I'm guessing that binary package created it for you --- check its docs,
 or look at the install script if you can.


 Quite sure that the EDB binary installer sets /Library/PostgreSQL8 as the
 home path, matching the install location.

/Library/PostgreSQL/8.3 by default, but yes, that is essentially correct.

 The /opt/local/. home path would indicate that you used macports or
 similar at some stage and the install from there would have created it.

 As the postgres user existed the EDB installer would have left it as is.

Correct.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] No error when column doesn't exist

2008-09-11 Thread Dean Rasheed

 Hmm.  It's a feature, but maybe a dangerous one.  The expression is
 being treated as text(foo), which is intentional in order to allow
 use of functions as if they were virtual columns.  However, then it
 decides that what you've got there is a cast request.  There wasn't
 any ability to cast composite types to text before 8.3, so this fails
 in the expected way in 8.2 and before; but in 8.3 the cast
 interpretation succeeds, and away we go.
 

Thanks for the explanation. I see what's going on now.

 foo.char and foo.varchar have similarly unexpected behavior; I think
 that's probably the end of it, though, since those are the only types
 that CoerceViaIO will take as targets.
 

... and also any user defined domains based on those, which is
what I actually had. I was unlucky enough that the row text matched
the regexp on my domain, so my typo went unnoticed for a while ;-(

 Maybe we could/should restrict things so that the syntax continues to
 fail, but I can't think of any restrictions that don't seem like warts.
 What's worse, they might break stuff that used to work.
 
   regards, tom lane

OK, I can live with that. At least I know what to look out for now!

Cheers, Dean



Re: [GENERAL] declare column update expression

2008-09-11 Thread Pavel Stehule
Hello,

2008/9/11 Chris Velevitch [EMAIL PROTECTED]:
 In 7.4, how do I declare that a column in a table is to be
 automatically set to the value of some fixed expression whenever a row
 is updated?

 Eg column last_modified is always set to current_timestamp


you should use a trigger
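A minimal sketch of such a trigger (the table name is illustrative; the quoted function body is the 7.4-compatible form, since dollar quoting only arrived in 8.0):

```sql
-- Trigger function: overwrite last_modified on every UPDATE,
-- regardless of what value the client supplied.
CREATE FUNCTION set_last_modified() RETURNS trigger AS '
BEGIN
    NEW.last_modified := current_timestamp;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER sometable_set_last_modified
    BEFORE UPDATE ON sometable
    FOR EACH ROW EXECUTE PROCEDURE set_last_modified();
```

The BEFORE UPDATE timing matters: the assignment to NEW must happen before the row version is written.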

regards
Pavel Stehule



 Chris
 --
 Chris Velevitch
 Manager - Adobe Platform Users Group, Sydney
 m: 0415 469 095
 www.apugs.org.au

 Adobe Platform Users Group, Sydney
 September meeting: It's Going To Be Brilliant
 Date: Mon 29th September 6pm for 6:30 start
 Details and RSVP on http://apugs2008september.eventbrite.com



Re: [GENERAL] plpgsql return select from multiple tables

2008-09-11 Thread Artis Caune
On Wed, Sep 10, 2008 at 5:43 PM, Filip Rembiałkowski
[EMAIL PROTECTED] wrote:
 nothing wrong here but this can also be rewritten to pure SQL function
 (can be few percent faster and optimizable by planner)

 CREATE OR REPLACE FUNCTION get_user_data( INT )
 RETURNS SETOF RECORD AS $$
 SELECT
(SELECT email FROM emails WHERE id = $1) as email,
(SELECT backend FROM backends WHERE id = $1) as backend
 $$ LANGUAGE 'sql' STABLE STRICT SECURITY DEFINER;

We need some logic in the selects (because applications like postfix can
only do simple queries):
- select the email
- if not found, then return empty
- if email.type is 1, then return foo
- if email.type is 2, then return bar

'IF FOUND THEN' and 'IF variable = X THEN' are only available in plpgsql, I think.
I didn't know that SQL functions can also be defined with SECURITY
DEFINER. We use them so that query_user can only select from the function and
cannot see the underlying tables. I'll check SQL functions.


 one question, why SETOF? this is supposed to always return one row, right?
 you could create a TYPE and return this. queries would be a bit simpler:

 SELECT * FROM get_user_data('${id}');

Yes, it should return only one row. I cannot use RETURN QUERY without SETOF.
Or should I create my_type, select into a my_type variable, and return
that variable?
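A sketch of the composite-type variant being discussed; the type name, the empty-result behavior, and the branching are illustrative assumptions based on the requirements mentioned earlier in the thread:

```sql
CREATE TYPE user_data AS (email varchar, backend varchar);

CREATE OR REPLACE FUNCTION get_user_data(int) RETURNS user_data AS $$
DECLARE
    result user_data;
BEGIN
    SELECT email INTO result.email FROM emails WHERE id = $1;
    IF NOT FOUND THEN
        RETURN NULL;   -- empty result for simple clients like postfix
    END IF;
    SELECT backend INTO result.backend FROM backends WHERE id = $1;
    RETURN result;
END;
$$ LANGUAGE plpgsql STABLE SECURITY DEFINER;

-- Callers no longer need a column definition list:
-- SELECT * FROM get_user_data(42);
```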





-- 
regards,
Artis Caune

. CCNA
|
' didii FreeBSD



Re: [GENERAL] plpgsql return select from multiple tables

2008-09-11 Thread Artis Caune
On Wed, Sep 10, 2008 at 5:56 PM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
 Ah, right, you only forgot to mention that other 99% of the
 requirements.
:)

 What's wrong with your first example?

It works, but those type casts TRUE::BOOLEAN AS forwards and selects
with AS (email VARCHAR, backend VARCHAR) are confusing me.





-- 
regards,
Artis Caune

. CCNA
|
' didii FreeBSD



Re: [GENERAL] [pgadmin-support] PostgreSQL

2008-09-11 Thread Tino Wildenhain

Mickey Shekdar wrote:

Hello,
 
We are switching our web booking engine vendor. The present vendor is 
using Microsoft SQL and the new vendor uses PostgreSQL.
 
I have three questions and will appreciate your comments:
 
[1] Will PostgreSQL perform better on Linux or Windows OS


Performance is better on Linux, not least because PostgreSQL has been
developed and optimized there much longer than it has been natively
available for Windows. System management is also a lot easier on
Unix-like systems.


[2] What is the best RAID configuration/number of dives you recommend


You should ask the vendor of your web booking engine; it depends
greatly on the nature of the transactions. Also have a look in the
archives of the pgsql-general list, which is also much more appropriate
than this list.


[3] Is there a hosting company offering fully monitored/managed PostgreSQL


Yes, plenty. Please check the webpage of postgresql.org

Cheers
Tino Wildenhain




Re: [GENERAL] xml queries date format

2008-09-11 Thread Peter Eisentraut

Tom Lane wrote:

Jef Peeraer [EMAIL PROTECTED] writes:

i am using the xml add-ons, but the date output format seems to be wrong :


I think the conversion to xml intentionally always uses ISO date format,
because that's required by some spec somewhere.


Yes, it follows XML Schema.  Which is why the output format is even 
slightly different from the SQL-mandated ISO format.





Re: [GENERAL] xml queries date format

2008-09-11 Thread Jef Peeraer


On Thu, 11 Sep 2008, Peter Eisentraut wrote:

 Tom Lane wrote:
  Jef Peeraer [EMAIL PROTECTED] writes:
   i am using the xml add-ons, but the date output format seems to be wrong :
  
  I think the conversion to xml intentionally always uses ISO date format,
  because that's required by some spec somewhere.
 
 Yes, it follows XML Schema.  Which is why the output format is even slightly
 different from the SQL-mandated ISO format.
I understand, but that makes it very difficult to change the date format 
afterwards. A simple flag to indicate no date conversion would be 
helpful.
  
 


Re: [GENERAL] external query VS user function

2008-09-11 Thread Pavel Stehule
Hello

2008/9/11 Joao Ferreira gmail [EMAIL PROTECTED]:
 Hello all,

 I'm trying to optimize the execution of a query which deletes a big
 amount of records based on time

 I need to remove from 100,000 to 1,000,000 records from my table once a
 day, and I'd like to make that removal as fast as possible. This is the
 idea:

 DELETE FROM tt WHERE time < $1;


 Would it be considerably faster if I declare that query inside a user
 function, let's say function_delete(integer), and invoque it instead

 SELECT function_delete($max_time);



using a function has no positive effect there; the evaluation is the same.

regards
Pavel Stehule

 Would this second approach be faster ? I imagine there could be some
 internal mechanism that would allow pg to have that query pre-optimized
 somehow ?

 thanks for the feedback.

 Joao

 [[[ I've been looking at Partitioning, but it seems to be a quite
 intrusive setup if you already have your system up and running]]]





Re: [GENERAL] xml queries date format

2008-09-11 Thread Pavel Stehule
2008/9/11 Jef Peeraer [EMAIL PROTECTED]:


 On Thu, 11 Sep 2008, Peter Eisentraut wrote:

 Tom Lane wrote:
  Jef Peeraer [EMAIL PROTECTED] writes:
   i am using the xml add-ons, but the date output format seems to be wrong 
   :
 
  I think the conversion to xml intentionally always uses ISO date format,
  because that's required by some spec somewhere.

 Yes, it follows XML Schema.  Which is why the output format is even slightly
 different from the SQL-mandated ISO format.
 i understand, but that makes it very difficult to change the date format
 afterwards. i simple flag to indicate no date conversion would be
 helpfull
  

no, use explicit casting to varchar

-- xml formatting style
postgres=# select xmlforest(current_timestamp as date);
                   xmlforest
-----------------------------------------------
 <date>2008-09-11T12:21:44.600512+02:00</date>
(1 row)

postgres=# select xmlforest(current_timestamp::text as date);
                 xmlforest
--------------------------------------------
 <date>2008-09-11 12:22:25.180611+02</date>
(1 row)

postgres=# set datestyle to German ;
SET
postgres=# select xmlforest(current_timestamp::text as date);
                  xmlforest
----------------------------------------------
 <date>11.09.2008 12:22:32.947672 CEST</date>
(1 row)

regards
Pavel Stehule




Re: [GENERAL] xml queries date format

2008-09-11 Thread Peter Eisentraut

Jef Peeraer wrote:


On Thu, 11 Sep 2008, Peter Eisentraut wrote:


Tom Lane wrote:

Jef Peeraer [EMAIL PROTECTED] writes:

i am using the xml add-ons, but the date output format seems to be wrong :

I think the conversion to xml intentionally always uses ISO date format,
because that's required by some spec somewhere.

Yes, it follows XML Schema.  Which is why the output format is even slightly
different from the SQL-mandated ISO format.
i understand, but that makes it very difficult to change the date format 
afterwards. i simple flag to indicate no date conversion would be 
helpfull


Well, these table_to_xml etc. functions are heavily constrained by the 
SQL standard, XML Schema, and others.  They do what they are supposed to 
do.  You are free to design your own XML export format or apply 
postprocessing to the existing ones (XSLT?).  I don't think we should 
overload the existing functions with everyone's favorite but apparently 
completely nonstandard formatting variant flag.




Re: [GENERAL] external query VS user function

2008-09-11 Thread Peter Eisentraut

Joao Ferreira gmail wrote:

I need to remove from 100.000 to 1.000.000 records from my table once a
day, and I'dd like to make that removal as fast as possible. This is the
idea:

DELETE FROM tt WHERE time < $1;


Would it be considerably faster if I declare that query inside a user
function, let's say function_delete(integer), and invoque it instead

SELECT function_delete($max_time);


Would this second approach be faster ? I imagine there could be some
internal mechanism that would allow pg to have that query pre-optimized
somehow ?


No, the internal mechanisms will be pretty much the same (assuming you 
write the function LANGUAGE SQL).  The bottleneck in this command will 
most likely be the disk, so the conceivable pre-optimization 
mechanisms such as prepared statements or plan caching wouldn't really 
matter here in practice.  So just use whatever you like best.



[[[ I've been looking at Partitioning, but it seems to be a quite
intrusive setup if you already have your system up and running]]]


Partitioning is probably something you should look into if you can't get 
the statement to run quickly enough otherwise.  Or combine partitioning 
and TRUNCATE to get superfast deletes.
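A sketch of the partitioning-plus-TRUNCATE idea using 8.x inheritance partitioning; all names, the column type, and the ranges are illustrative assumptions:

```sql
-- Assumes tt has a timestamp column "time". The parent stays empty;
-- one child table holds each day's rows.
CREATE TABLE tt_20080910 (
    CHECK (time >= '2008-09-10' AND time < '2008-09-11')
) INHERITS (tt);

-- Inserts must be routed to the right child (rule, trigger, or the
-- application). Expiring a day is then instant and leaves no dead rows:
TRUNCATE TABLE tt_20080910;
-- or: DROP TABLE tt_20080910;
```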




Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Greg Smith [EMAIL PROTECTED] writes:

 On Tue, 9 Sep 2008, Artacus wrote:
 Can psql access environmental variables or command line params?

 $ cat test.sql
 select :TEST as input;
 $ psql -v TEST=16 -f test.sql
  input
 ---
 16
 (1 row)

Nice trick, but when I try the following variant:

  psql -v TEST=16 -c 'select :TEST as input'

I get

  ERROR:  syntax error at or near :
  LINE 1: select :TEST as input

This seems to be contrary to the psql manual page:

 These assignments are done during a very early stage of start-up...
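A workaround sketch: variable interpolation happens when psql reads a script, and -c appears to bypass that code path, so feed the statement on stdin instead (the value is illustrative):

```shell
# -c does not interpolate :variables, but a script read on stdin does
echo 'select :TEST as input;' | psql -v TEST=16
```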




Re: [GENERAL] No error when column doesn't exist

2008-09-11 Thread Tom Lane
Dean Rasheed [EMAIL PROTECTED] writes:
 foo.char and foo.varchar have similarly unexpected behavior; I think
 that's probably the end of it, though, since those are the only types
 that CoerceViaIO will take as targets.

 ... and also any user defined domains based on those, which is
 what I actually had.

Ouch.  That makes the scope for unexpected behavior wider than I thought.
Maybe we do need some restriction here?

The ideas I had involved not considering the cast interpretation when
the actual syntax is table.column and some-set-of-other-conditions.
While this is certainly possible to implement, any variant of it will
break the existing 100% equivalence of foo.bar and bar(foo); which
seems to me to be a nice principle, though I grant you won't find it
anywhere in the SQL standard.

The other-conditions are a bit up for grabs.  The narrowest restriction
that would serve the purpose is table variable is of composite type
and the cast would be a CoerceViaIO cast, but that definitely seems
like a wart.  However, cleaner-seeming restrictions like no casts on
composites at all could potentially break applications that worked
okay before 8.3.

Comments anyone?  Should we try to change this, or leave well enough
alone?

regards, tom lane



[GENERAL] about partitioning

2008-09-11 Thread Joao Ferreira gmail
Hello all,

my application is coming to a point on which 'partitioning' seems to be
the solution for many problems:

- query speed up
- data elimination speed up



I'd like to get a feel for it by talking to people who use
partitioning, in general:

- good, bad,
- hard to manage, easy to manage,
- processing over-head during INSERT/UPDATE,
- stability/compatibility of pg_dump and restore operations,
- how many partitions would be reasonable for read _and_ write  access
optimal speed;

thx
joao





Re: [GENERAL] initdb memory segment creation error

2008-09-11 Thread Tom Lane
questions [EMAIL PROTECTED] writes:
 I am getting this error with initdb while creating shared segment -
 memory segment exceeded available memory or swap space.  To reduce the
 request size (currently 1785856 bytes), reduce PostgreSQL's shared_buffers
 parameter (currently 50)) and/or its max_connections parameter (currently
 13).
 Total memory is 256 MB and memory available to user processes is 178 MB.
 It's not an issue with shared segment exceeding SHMMAX but value of SHMMAX
 is 67108864 and SHMALL is 8192. It is not exactly a Postgresql problem but
 what am I missing here?

Hm, what else is running on the box?  PG itself should fit in 256MB but
if you've got other stuff hogging memory, you might have little choice
but to buy more RAM.

(You should also double-check what SHMALL is measured in on your
platform; maybe that 8192 isn't pages but bytes ...)
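To double-check on Linux-style systems (paths are Linux-specific; on Linux SHMALL is counted in pages and SHMMAX in bytes, but other platforms differ):

```shell
cat /proc/sys/kernel/shmmax   # bytes
cat /proc/sys/kernel/shmall   # pages on Linux
getconf PAGE_SIZE             # shmall * page size = total bytes allowed
```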

regards, tom lane



Re: [GENERAL] abusing plpgsql array variables

2008-09-11 Thread Tom Lane
Artacus [EMAIL PROTECTED] writes:
 If I want to pass in a text[] argument to a plpgsql function, at what 
 array size am I asking for problems? 100? 10,000? 100,000?

 Yeah, like you I was pretty worried about how it would handle using 
 larger arrays. But I was surprised to find that it did a super job of 
 handling even large arrays.

Note that this depends quite a lot on what datatype the array elements
are.  Fixed-width types like int or float should be fast.
Variable-width types like text or numeric, not so much.

regards, tom lane



Re: [GENERAL] xml queries date format

2008-09-11 Thread Jef Peeraer


On Thu, 11 Sep 2008, Peter Eisentraut wrote:

 Jef Peeraer wrote:
  
  On Thu, 11 Sep 2008, Peter Eisentraut wrote:
  
   Tom Lane wrote:
Jef Peeraer [EMAIL PROTECTED] writes:
 i am using the xml add-ons, but the date output format seems to be
 wrong :
I think the conversion to xml intentionally always uses ISO date format,
because that's required by some spec somewhere.
   Yes, it follows XML Schema.  Which is why the output format is even
   slightly
   different from the SQL-mandated ISO format.
  i understand, but that makes it very difficult to change the date format
  afterwards. i simple flag to indicate no date conversion would be
  helpfull
 
 Well, these table_to_xml etc. functions are heavily constrained by the SQL
 standard, XML Schema, and others.  They do what they are supposed to do.  You
 are free to design your own XML export format or apply postprocessing to the
 existing ones (XSLT?).  I don't think we should overload the existing
 functions with everyone's favorite but apparently completely nonstandard
 formatting variant flag.
it would be a flag to indicate no conversion from the datestyle settings 
in the database... I think, from a user's perspective, table_to_xml is 
completely useless if you have to reformat everything afterwards

jef
  


Re: [GENERAL] xml queries date format

2008-09-11 Thread Pavel Stehule
2008/9/11 Jef Peeraer [EMAIL PROTECTED]:


 On Thu, 11 Sep 2008, Peter Eisentraut wrote:

 Jef Peeraer wrote:
 
  On Thu, 11 Sep 2008, Peter Eisentraut wrote:
 
   Tom Lane wrote:
Jef Peeraer [EMAIL PROTECTED] writes:
 i am using the xml add-ons, but the date output format seems to be
 wrong :
I think the conversion to xml intentionally always uses ISO date 
format,
because that's required by some spec somewhere.
   Yes, it follows XML Schema.  Which is why the output format is even
   slightly
   different from the SQL-mandated ISO format.
  i understand, but that makes it very difficult to change the date format
  afterwards. i simple flag to indicate no date conversion would be
  helpfull

 Well, these table_to_xml etc. functions are heavily constrained by the SQL
 standard, XML Schema, and others.  They do what they are supposed to do.  You
 are free to design your own XML export format or apply postprocessing to the
 existing ones (XSLT?).  I don't think we should overload the existing
 functions with everyone's favorite but apparently completely nonstandard
 formatting variant flag.
 it would be a flag to indicate no conversion from the datestyle settings
 in the database...i think, from a users perspective, the table_to_xml is
 completely useless, if you have to reformat everything afterwards

I am not sure - most people need to generate a valid XML file.
Sending invalid dates in XML is wrong.

Pavel


 jef
  


Re: [GENERAL] xml queries date format

2008-09-11 Thread Peter Eisentraut

Jef Peeraer wrote:
it would be a flag to indicate no conversion from the datestyle settings 
in the database...i think, from a users perspective, the table_to_xml is 
completely useless, if you have to reformat everything afterwards


Just write a function that does your formatting afterwards.  You can 
even name it table_to_xml.





[GENERAL] Healing a table after massive updates

2008-09-11 Thread Gauthier, Dave
Hi:

 

I have a job that loads a large table, but then has to update about
half the records for various reasons.  My perception of what happens on
update for a particular record is...

- a new record will be inserted with the updated value(s).

- The old record is marked as being obsolete.

- Not sure what happens to index elements that pointed to the original
(now obsolete) record. Is it updated to point directly at the newly
inserted record?  Or does it use the obsolete record as a link to the
newly inserted record?

 

My concern is that the resulting table is not in optimal shape for
queries.  I would like to get rid of the obsoleted records (vacuum I
believe) but also heal the table in terms of filling in the holes left
where those deleted records used to be (more records are gathered per
disk block read if the record density on disk is greater).  Is there a way
to do this?

 

Thanks



Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Gauthier, Dave
I might be able to answer my own question...

vacuum FULL (analyze is optional)

 

Correct?

 






Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Brad Nicholson
On Thu, 2008-09-11 at 07:01 -0700, Gauthier, Dave wrote:
 Hi:
 
  
 
 I have a job that loads a large table, but then has to “update” about
 half the records for various reasons.  My perception of what happens
 on update for a particular recors is...
 
 - a new record will be inserted with the updated value(s).
 
 - The old record is marked as being obselete.

 - Not sure what happens to index elements that pointed to the original
 (now obselete) record. Is it updated to point directly at the newly
 inserted record?  Or does it use the obselete record as a “link” to
 the newly inserted record?

Depends on the version of Postgres.  Prior to 8.3, the obsolete tuples
and index entries are dead.  In 8.3, the updates can be HOT updates,
which avoid leaving dead index entries behind.  8.3 might be a big help
for you; it could remove the need to vacuum this table entirely.

 My concern is that the resulting table is not in optimal shape for
 queries.  I would like to get rid of the obseleted records (vacuum I
 believe) but also “heal” the table in terms of filling in the holes
 left where those deleted records used to be (will gather more records
 per disk block read if record density on disk is greater).  Is there a
 way to do this?

Regular VACUUM is the correct operation to get rid of the dead tuples.

If you want to compact the table, you either need to use CLUSTER or
VACUUM FULL + REINDEX.
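A sketch of the two options (table and index names are illustrative):

```sql
-- Option 1: CLUSTER rewrites the table in index order and rebuilds its
-- indexes (8.3 syntax shown; older releases use CLUSTER idx ON tab).
CLUSTER sometable USING sometable_pkey;

-- Option 2: VACUUM FULL compacts the heap but can bloat the indexes,
-- so rebuild them afterwards.
VACUUM FULL sometable;
REINDEX TABLE sometable;
```

Both take strong locks on the table, so they belong in a maintenance window.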
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.




Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Alvaro Herrera
Harald Fuchs escribió:

 Nice trick, but when I try the following variant:
 
   psql -v TEST=16 -c 'select :TEST as input'
 
 I get
 
   ERROR:  syntax error at or near :
   LINE 1: select :TEST as input
 
 This seems to be contrary to the psql manual page:
 
  These assignments are done during a very early stage of start-up...

Seems like a bug in -c ...

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Alvaro Herrera
Brad Nicholson wrote:

 If you want to compact the the table, you either need to use CLUSTER or
 VACUUM FULL + REINDEX.

Actually those are all pretty slow.  If you can do a no-op ALTER TYPE
that rewrites the entire table, it is a lot faster.  Something like

ALTER TABLE tab ALTER COLUMN col TYPE integer;

Assume that column col on table tab already has type integer.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Bill Moran
In response to Gauthier, Dave [EMAIL PROTECTED]:

 I might be able to answer my own question...
 
 vacuum FULL (analyze is optional)

CLUSTER _may_ be a better choice, but carefully read the docs regarding
its drawbacks first.  You may want to do some benchmarks to see if it's
really needed before you commit to it as a scheduled operation.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023






Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Alvaro Herrera
Bill Moran wrote:
 In response to Gauthier, Dave [EMAIL PROTECTED]:
 
  I might be able to answer my own question...
  
  vacuum FULL (analyze is optional)
 
 CLUSTER _may_ be a better choice, but carefully read the docs regarding
 its drawbacks first.  You may want to do some benchmarks to see if it's
 really needed before you commit to it as a scheduled operation.

What drawbacks?

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Bill Moran
In response to Alvaro Herrera [EMAIL PROTECTED]:

 Bill Moran wrote:
  In response to Gauthier, Dave [EMAIL PROTECTED]:
  
   I might be able to answer my own question...
   
   vacuum FULL (analyze is optional)
  
  CLUSTER _may_ be a better choice, but carefully read the docs regarding
  its drawbacks first.  You may want to do some benchmarks to see if it's
  really needed before you commit to it as a scheduled operation.
 
 What drawbacks?

There's the whole "there will be two copies of the table on-disk" thing
that could be an issue if it's a large table.

Depending on the version of PG in use, there are warnings about tuple
visibility during CLUSTER.  It seems as if most of these have been
removed for 8.3.

And while this applies to VACUUM FULL as well, it might just be a bad
idea.  If the number of rows inserted isn't a significant increase of
the overall size of the table, he may find that overall performance is
better if he uses plain old VACUUM so that the FSM stays at a stable
size.

Some of this is dependent on PG version, which the OP neglected to mention.
Other stuff is dependent on what kind of maintenance window he has, which
was also not mentioned.  Other stuff is dependent on various
details of the actual process, which (looking back through the thread)
I may have misunderstood.  I took the post to mean that he was loading
additional data into a pre-existing table, which is a different scenario
than loading up a virgin table.

So, you know, standard disclaimer about YMMV, which it seems that _I_
worded poorly.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



Re: [GENERAL] declare column update expression

2008-09-11 Thread Chris Velevitch
On Thu, Sep 11, 2008 at 6:01 PM, Pavel Stehule [EMAIL PROTECTED] wrote:
 you should to use trigger

I've never used trigger before, it looks messy and error prone having
to write functions.

How is it that you can declare the default value of a column on insert
but not on update?


Chris
--
Chris Velevitch
Manager - Adobe Platform Users Group, Sydney
m: 0415 469 095
www.apugs.org.au

Adobe Platform Users Group, Sydney
September meeting: It's Going To Be Brilliant
Date: Mon 29th September 6pm for 6:30 start
Details and RSVP on http://apugs2008september.eventbrite.com



[GENERAL] European PGDay 2008 - registration open

2008-09-11 Thread Magnus Hagander
The European PGDay 2008 is now open for registration. We are working on
finalizing the English schedule, but we already have an impressive list
of speakers available at http://www.pgday.org/en/speakers. The schedule
will have at least one English talk at all times, as well as a number of
Italian ones. There will also be social activities after the conference
- stay tuned to http://www.pgday.org/en/ for details.

While the full schedule is being worked on, attendees can now register
their participation in the conference at https://register.pgday.org.
Although the conference will charge no entry fee, we do ask all
attendees to register at this site as soon as possible, to help us plan
the schedule to best suit your needs. It is also recommended that you
start looking at your travel arrangements - information about travel and
accommodation is available at http://www.pgday.org/en/.

We would also like to thank our sponsors who make it possible to make
this conference the best in Europe, while keeping it free:

Gold Sponsor - EnterpriseDB
Bronze Sponsors - 2ndQuadrant, Continuent, Dalibo, Sonologic and Sun
Microsystems

See http://www.pgday.org/en/sponsors for a full list of sponsors.

There are still openings in the sponsorship program for all interested
parties - see http://www.pgday.org/en/sponsors/campaign

-- 
The PGDay.eu organization team,
ITPUG and PostgreSQL Europe



Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

2008-09-11 Thread Tomasz Ostrowski
On 2008-09-10 16:46, Jack Orenstein wrote:

 Application requirement. We need to do something for each row retrieved
 from BIG and the something is expensive. We do the scan slowly (30 second
 sleep inside the loop) to amortize the cost.

Then do the processing in separate transactions like this (in pseudocode):

$last_id = -1;
do {
    begin transaction;
    $result = select * from bigtable
              where id > $last_id
              and processed = false
              order by id limit 1;
    if ( empty($result) ) {
        rollback;
        break;
    }
    do_something_expensive_with($result[0]);
    update bigtable set processed = true where id = $result[0][id];
    $last_id = $result[0][id];
    commit;
    sleep 30;
} while (true);

Always avoid long running transactions. This is recommended for any
transactional database.
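The pseudocode above can be sketched concretely in Python. This is an illustration only: it uses the standard library's sqlite3 module as a stand-in engine so it runs anywhere, but the shape -- one short transaction per row, so vacuum on the server is never blocked for long -- is the same with a PostgreSQL driver. Table and column names here are made up.

```python
import sqlite3

def process_in_short_transactions(conn, do_something_expensive):
    """Process unprocessed rows one per transaction, mirroring the
    pseudocode above.  sqlite3 is only a stand-in; with PostgreSQL the
    same loop works through any DB-API driver."""
    last_id = -1
    while True:
        cur = conn.cursor()
        cur.execute(
            "SELECT id, payload FROM bigtable "
            "WHERE id > ? AND processed = 0 "
            "ORDER BY id LIMIT 1", (last_id,))
        row = cur.fetchone()
        if row is None:
            conn.rollback()   # nothing left to do; end cleanly
            break
        row_id, payload = row
        do_something_expensive(payload)
        cur.execute("UPDATE bigtable SET processed = 1 WHERE id = ?",
                    (row_id,))
        conn.commit()         # transaction open only for this one row
        last_id = row_id
        # the real job would sleep 30 seconds here

# demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bigtable (id INTEGER PRIMARY KEY,"
             " payload TEXT, processed INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO bigtable (payload) VALUES (?)",
                 [("a",), ("b",), ("c",)])
conn.commit()
seen = []
process_in_short_transactions(conn, seen.append)
print(seen)
```

Each pass commits before the expensive work of the next row begins, so no snapshot is held across the whole scan.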

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh



Re: [GENERAL] No error when column doesn't exist

2008-09-11 Thread Peter Eisentraut

Tom Lane wrote:

The ideas I had involved not considering the cast interpretation when
the actual syntax is table.column and some-set-of-other-conditions.
While this is certainly possible to implement, any variant of it will
break the existing 100% equivalence of foo.bar and bar(foo); which
seems to me to be a nice principle, though I grant you won't find it
anywhere in the SQL standard.


I think if we say that functions can be used as table attributes, and 
types can be used as (cast) functions, and tables are types, then we are 
simply stuck with the current behavior.  Individually, these all make 
sense, so you can't break that chain without some really complicated warts.




Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Peter Eisentraut

Harald Fuchs wrote:

In article [EMAIL PROTECTED],
Greg Smith [EMAIL PROTECTED] writes:


On Tue, 9 Sep 2008, Artacus wrote:

Can psql access environmental variables or command line params?



$ cat test.sql
select :TEST as input;
$ psql -v TEST=16 -f test.sql
 input
---
16
(1 row)


Nice trick, but when I try the following variant:

  psql -v TEST=16 -c 'select :TEST as input'

I get

  ERROR:  syntax error at or near ":"
  LINE 1: select :TEST as input

This seems to be contrary to the psql manual page:

 These assignments are done during a very early stage of start-up...




psql manual page:

   -c command

   --command command
  Specifies that psql is to execute one command string, command,
  and then exit. This is useful in shell scripts.

  command must be either a command string that is completely
  parsable by the server (i.e., it  contains no psql specific
  features), or a single backslash command. Thus you cannot mix
  SQL and psql meta-commands with this option.




Re: [GENERAL] declare column update expression

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 1:35 AM, Chris Velevitch
[EMAIL PROTECTED] wrote:
 In 7.4, how do I declare that a column in a table is to be
 automatically set to the value of some fixed expression whenever a row
 is updated?

 Eg column last_modified is always set to current_timestamp

A trigger as Pavel said.  writing them in plpgsql seems a bit hard at
first, but it's a simple language and it's pretty easy to write stuff
like this in.

Here's a simple example of last modified trigger using plpgsql from way back:

-- FUNCTION --

CREATE FUNCTION modtime () RETURNS opaque AS '
BEGIN
    new.lm := ''now'';
    RETURN new;
END;
' LANGUAGE 'plpgsql';

-- TABLE --

CREATE TABLE dtest (
id int primary key,
fluff text,
lm timestamp without time zone
);


--TRIGGER --

CREATE TRIGGER dtest
  BEFORE UPDATE or INSERT ON dtest FOR EACH ROW EXECUTE PROCEDURE
modtime(lm);

-- SQL TESTS --

INSERT INTO dtest (id, fluff) VALUES (1,'this is a test');
INSERT INTO dtest (id, fluff) VALUES (2,'this is another test');
SELECT * FROM dtest;
  1 | this is a test   | 2003-04-02 10:33:12.577089
  2 | this is another test | 2003-04-02 10:33:18.591148
UPDATE dtest SET id=3 WHERE id=1;
  3 | this is a test | 2003-04-02 10:34:52.219963  [1]
UPDATE dtest SET fluff='now is the time' WHERE id=2;
SELECT * FROM dtest WHERE id=2;
  2 | now is the time | 2003-04-02 10:38:06.259443 [2]
UPDATE dtest SET lm='2003-04-02 08:30:00' WHERE id=3;
SELECT * FROM dtest WHERE id=3;
  3 | this is a test | 2003-04-02 10:36:15.45687 [3]

[1] The timestamp has changed for this record when we changed the id field.
[2] The timestamp also changes for the fluff field.
[3] We tried to set lm, but the trigger on that field in dtest
intercepted the change and forced it


 Chris
 --
 Chris Velevitch
 Manager - Adobe Platform Users Group, Sydney
 m: 0415 469 095
 www.apugs.org.au

 Adobe Platform Users Group, Sydney
 September meeting: It's Going To Be Brilliant
 Date: Mon 29th September 6pm for 6:30 start
 Details and RSVP on http://apugs2008september.eventbrite.com



Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Alvaro Herrera
Peter Eisentraut wrote:

 psql manual page:

-c command

--command command
   Specifies that psql is to execute one command string, command,
   and then exit. This is useful in shell scripts.

   command must be either a command string that is completely
   parsable by the server (i.e., it  contains no psql specific
   features), or a single backslash command. Thus you cannot mix
   SQL and psql meta-commands with this option.

Doesn't say about variable expansion ...  And it seems to be in a
different realm, because the point is that the command is going to have
a single destination (either \-processing or sending it to the server).

Is psql being just lazy here and avoiding parsing the command?

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] declare column update expression

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 1:35 AM, Chris Velevitch
[EMAIL PROTECTED] wrote:
 In 7.4, how do I declare that a column in a table is to be
 automatically set to the value of some fixed expression whenever a row
 is updated?

Oh yeah, if you can upgrade from 7.4 that would be a good idea, it's
getting a bit old and the newer versions of pgsql like 8.3 are
literally many times faster at most things.  We just upgraded from 8.1
to 8.3 and resolved a lot of performance issues, I can't imagine how
slow it would be running 7.4 nowadays.



Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Kevin Hunter
At 7:13am -0400 on Thu, 11 Sep 2008, Harald Fuchs wrote:
 Nice trick, but when I try the following variant:
 
   psql -v TEST=16 -c 'select :TEST as input'
 
 I get [a syntax error]

 This seems to be contrary to the psql manual page:

Nope.  Take a look at the -c option.  Specifically "Thus you cannot mix
SQL and psql meta-commands with this option."

You might try shell interpretation:

$ TEST=16; psql -c "select $TEST as \"input1\";"
$ TEST=16; echo "select $TEST as \"input1\";" | psql

Kevin



Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

2008-09-11 Thread Tomasz Ostrowski
On 2008-09-11 17:21, Jack Orenstein wrote:

 Then do the processing in separate transactions like this (in pseudocode):
 The id > last_id trick doesn't work for me -- I don't have an index that
 would support it efficiently.
 
 Turning on autocommit seems to work, I'm just not clear on the reason why.

Not knowing would bite you some time.

Please provide some (pseudo-)code on what you do. Do you mark rows as
processed? Do you save output of processing to a database? IMHO without
it it is hard to solve a mystery but I'll try below.

 I played around with a JDBC test program, and so far cannot see how
 the autocommit mode causes variations in what is seen by the
 scan. The behavior I've observed is consistent with the SERIALIZABLE
 isolation level, but 1) I thought the default was READ COMMITTED

When you do:
result = query("select something from sometable")
then all rows of a result will be cached by a client program. To see
effects of serialization modes you have to issue another query in the
same transaction or use a cursor.

Check memory usage of your client program - you'll see that it needs a
lot of memory for query results.

 2) why does the 
 accumulation of row versions have anything to do with autocommit mode (as 
 opposed to isolation level) on a connection used for the scan?

I think after caching a result of a query you start processing your
rows. When you finish processing your first row you update your database
to save results. In autocommit mode a transaction in which you do this
update is automatically committed and ended. When autocommit is turned
off a transaction is not ended so from now on vacuum is not working
until you finish processing all rows.

Regards
Tometzky

PS. Please keep a CC to the list.
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh



Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

2008-09-11 Thread Jack Orenstein

Tomasz Ostrowski wrote:

On 2008-09-11 17:21, Jack Orenstein wrote:


Then do the processing in separate transactions like this (in pseudocode):
The id > last_id trick doesn't work for me -- I don't have an index that would
support it efficiently.


Turning on autocommit seems to work, I'm just not clear on the reason why.


Not knowing would bite you some time.

Please provide some (pseudo-)code on what you do. Do you mark rows as
processed? Do you save output of processing to a database? IMHO without
it it is hard to solve a mystery but I'll try below.


No, it's really as simple as what I said in earlier email. The scan just
walks through BIG very slowly. On another connection, we're inserting/updating
the same table, and in each transaction also updating TINY.




I played around with a JDBC test program, and so far cannot see how
the autocommit mode causes variations in what is seen by the
scan. The behavior I've observed is consistent with the SERIALIZABLE
isolation level, but 1) I thought the default was READ COMMITTED


When you do:
result = query("select something from sometable")
then all rows of a result will be cached by a client program. 


I am very sure this is not happening. Maybe some rows are being cached 
(specifying fetch size), but certainly not all of them. It used to, with older
drivers (7.4?), but I've been using 8.1 drivers (at least) for a long time.
Maybe some result set options you're using cause such memory usage?


Jack



[GENERAL] keep alive losing connections

2008-09-11 Thread johnf
Hi,
I have read several of the postings on the list and I'm guessing I have a 
router issue because I get disconnected from the database after some idle 
time.  I'm connecting remotely to a postgres 8.3.1 on openSUSE 11.  My 
question is how can I determine what the real cause of dropping the 
connection.  Is it my router or the firewall on the remote server, or 
something else?
-- 
John Fabiani



Re: [GENERAL] keep alive losing connections

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 10:07 AM, johnf [EMAIL PROTECTED] wrote:
 Hi,
 I have read several of the posting on the list and I'm guessing I have a
 router issue because I get disconnected from the database after some idle
 time.  I'm connecting remotely to a postgres 8.3.1 on openSUSE 11.  My
 question is how can I determine what the real cause of dropping the
 connection.  Is it my router or the firewall on the remote server, or
 something else?

Hard to say really without running some kind of network analyzer like
wireshark (I think that's the new name) on both ends and watching for
RST packets.

But, you can usually overcome this problem by setting a lower
tcp_keepalivetime, something like 900 (15 minutes) or 300 (5 minutes)
will usually do the trick, and has the added bonus of harvesting
connections left behind by processes that didn't properly disconnect
(crashed, lost network connection) more often than once every 2 hours.
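For reference, the knobs involved look roughly like this. This is a sketch -- exact names and availability depend on the OS and the PostgreSQL version; the server-side settings exist in recent releases and fall back to the OS defaults when set to 0:

```
# Linux sysctl (system-wide, seconds) -- the setting referred to above:
net.ipv4.tcp_keepalive_time = 300     # start probing idle connections after 5 min
net.ipv4.tcp_keepalive_intvl = 30     # seconds between probes
net.ipv4.tcp_keepalive_probes = 5     # drop after this many unanswered probes

# Or per-connection in postgresql.conf (recent releases; 0 = use OS default):
tcp_keepalives_idle = 300
tcp_keepalives_interval = 30
tcp_keepalives_count = 5
```

Either way, the server notices a dead peer within minutes instead of the typical two-hour OS default.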



[GENERAL] Windows ODBC Driver

2008-09-11 Thread Greg Lindstrom
Hello,

I would like to connect to Postgres from Python running on a Windows box.  I
need the ODBC driver to create a windows ODBC datasource.  I've been looking
for two days and have found lots of dead links, but no drivers.  Can someone
please help me locate a driver so I can connect?

Thanks,
--greg


Re: [GENERAL] Windows ODBC Driver

2008-09-11 Thread Stephen Frost
Greg,

* Greg Lindstrom ([EMAIL PROTECTED]) wrote:
 I would like to connect to Postgres from Python running on a Windows box.  I
 need the ODBC driver to create a windows ODBC datasource.  I've been looking
 for two days and have found lots of dead links, but no drivers.  Can someone
 please help me locate a driver so I can connect?

I would guess this is what you're looking for?:

http://www.postgresql.org/ftp/odbc/versions/

Enjoy,

Stephen




Re: [GENERAL] Windows ODBC Driver

2008-09-11 Thread Dave Page
On Thu, Sep 11, 2008 at 5:13 PM, Greg Lindstrom [EMAIL PROTECTED] wrote:
 Hello,

 I would like to connect to Postgres from Python running on a Windows box.  I
 need the ODBC driver to create a windows ODBC datasource.  I've been looking
 for two days and have found lots of dead links, but no drivers.  Can someone
 please help me locate a driver so I can connect?

http://www.postgresql.org/ftp/odbc/versions/


-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 8:56 AM, Bill Moran
[EMAIL PROTECTED] wrote:
 In response to Alvaro Herrera [EMAIL PROTECTED]:

 Bill Moran wrote:
  In response to Gauthier, Dave [EMAIL PROTECTED]:
 
   I might be able to answer my own question...
  
   vacuum FULL (analyze is optional)
 
  CLUSTER _may_ be a better choice, but carefully read the docs regarding
  its drawbacks first.  You may want to do some benchmarks to see if it's
  really needed before you commit to it as a scheduled operation.

 What drawbacks?

 There's the whole "there will be two copies of the table on-disk" thing
 that could be an issue if it's a large table.

I've also found cluster to be pretty slow, even on 8.3.  On a server
that hits 30-40Megs a second write speed for random access during
pgbench, it's writing out at 1 to 2 megabytes a second when it runs,
and takes the better part of a day on our biggest table.  vacuumdb -fz
+ reindexdb ran in about 6 hours which means we could fit it into our
maintenance window.  vacuum moves a lot more data per second than
cluster.



Re: [GENERAL] keep alive losing connections

2008-09-11 Thread johnf
On Thursday 11 September 2008 09:13:14 am Scott Marlowe wrote:
 On Thu, Sep 11, 2008 at 10:07 AM, johnf [EMAIL PROTECTED] wrote:
  Hi,
  I have read several of the posting on the list and I'm guessing I have a
  router issue because I get disconnected from the database after some idle
  time.  I'm connecting remotely to a postgres 8.3.1 on openSUSE 11.  My
  question is how can I determine what the real cause of dropping the
  connection.  Is it my router or the firewall on the remote server, or
  something else?

 Hard to say really without running some kind of network analyzer like
 wireshark (I think that's the new name) on both ends and watching for
 RST packets.

 But, you can usually overcome this problem by setting a lower
 tcp_keepalivetime, something like 900 (15 minutes) or 300 (5 minutes)
 will usually do the trick, and has the added bonus of harvesting
 connections left behind by processes that didn't properly disconnect
 (crashed, lost network connection) more often than once every 2 hours.

In my case the program is doing a long file transfer (nothing to do with the 
database) and when it returns I discover the connection has been closed.  So 
I was really hoping to find an easy way of testing where the problem is 
happening.  Like I said I think it has something to do with the router 
(wireless).  But it is hard to tell what the cause is from the information 
provided by the list.  Of course I understand the list is about postgres and 
not routers.  But the problem is a related subject I think.  Sounds like I'm 
stuck without a solution for the moment.  Thanks for the help.

-- 
John Fabiani



[GENERAL] index on id and created_at

2008-09-11 Thread Marcus Engene

Hi,

If I have a table like...

CREATE TABLE apa
(
objectid   SERIAL PRIMARY KEY NOT NULL
,created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
,somedata   text
)
WITHOUT OIDS;

...where if rowX has a higher objectid than rowY, it is implied that rowX 
has a later created_at than rowY.


select
   objectid
from
   apa
where
   created_at > now() - interval '1 day'
order by
   objectid desc;

In this select, it would have been nice to dodge the full table scan 
without adding an index to created_at. Is this possible somehow?


Best regards,
Marcus




Re: [GENERAL] keep alive losing connections

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 10:24 AM, johnf [EMAIL PROTECTED] wrote:
 On Thursday 11 September 2008 09:13:14 am Scott Marlowe wrote:
 On Thu, Sep 11, 2008 at 10:07 AM, johnf [EMAIL PROTECTED] wrote:
  Hi,
  I have read several of the posting on the list and I'm guessing I have a
  router issue because I get disconnected from the database after some idle
  time.  I'm connecting remotely to a postgres 8.3.1 on openSUSE 11.  My
  question is how can I determine what the real cause of dropping the
  connection.  Is it my router or the firewall on the remote server, or
  something else?

 Hard to say really without running some kind of network analyzer like
 wireshark (I think that's the new name) on both ends and watching for
 RST packets.

 But, you can usually overcome this problem by setting a lower
 tcp_keepalivetime, something like 900 (15 minutes) or 300 (5 minutes)
 will usually do the trick, and has the added bonus of harvesting
 connections left behind by processes that didn't properly disconnect
 (crashed, lost network connection) more often than once every 2 hours.

 In my case the program is doing a long file transfer (nothing to do with the
 database) and when it returns I discover the connection has been closed.  So
 I was really hoping to find an easy way of testing where the problem is
 happening.  Like I said I think it has something to do with the router
 (wireless).  But it is hard to tell what the cause is from the information
 provided by the list.  Of course I understand the list is about postgres and
 not routers.  But the problem is a related subject I think.  Sounds like I'm
 stuck without a solution for the moment.  Thanks for the help.

My experience with wireless routers has been that the only ones I
trust are running one of the open source packages, like dd-wrt
openwrt, etc...  The factory firmware on almost all the others is just
crap designed to get the cheapest product out the door with the
biggest margin with no concern for quality.



Re: [GENERAL] Windows ODBC Driver

2008-09-11 Thread Alan Scott
I installed PostGres Adv Server locally from enterprisedb.com and had
everything needed to create an ODBC connection for our data modeling
tools...

On Thu, Sep 11, 2008 at 12:13 PM, Greg Lindstrom [EMAIL PROTECTED]wrote:

 Hello,

 I would like to connect to Postgres from Python running on a Windows box.
 I need the ODBC driver to create a windows ODBC datasource.  I've been
 looking for two days and have found lots of dead links, but no drivers.  Can
 someone please help me locate a driver so I can connect?

 Thanks,
 --greg



[GENERAL] connection timeouts and killing users

2008-09-11 Thread Gauthier, Dave
Hi:

 

Here's the problem...

 

I have a read-only DB  that gets reloaded from scratch every night.
This takes several hours and I don't want any late night users to have
to wait for this process to complete, so I have 2 DBs.  The first DB is
the one the users access.  Call it main_db.  I load a second DB which
has an identical architecture at night.  Call it standby_db.  When the
load finishes, I rename main_db to tmp, then rename standby_db to
main_db, then rename tmp to standby_db.  So, the users should have
access to a main_db all the time (except for a second when the renames
happen).  And standby_db serves as a full backup which I can use
should I need it.

 

Here's the problem...

 

Sometimes the renames fail because people are still attached to either
main_db or standby_db.  The error messages indicate this is the
problem anyway.  Some of those users (most of them) are probably fast
asleep at home and forgot to exit the interactive session that was
connected to the DB.  

 

Q: Is there a way I can set a timeout where, if a user is inactive for
say an hour, they get disconnected?

Q: Is there a way to kill all active users without having to cycle the
DB server with something like pg_ctl stop -m fast -D ... ?

Q: (the best option)... Is there a way I can leave those users attached
to their DB regardless of the fact that it's name changed while they
were attached?

 

Thanks in ADvance for any help.

 

 

 

 



Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Roderick A. Anderson

Kevin Hunter wrote:

At 7:13am -0400 on Thu, 11 Sep 2008, Harald Fuchs wrote:

Nice trick, but when I try the following variant:

  psql -v TEST=16 -c 'select :TEST as input'

I get [a syntax error]



This seems to be contrary to the psql manual page:


Nope.  Take a look at the -c option.  Specifically "Thus you cannot mix
SQL and psql meta-commands with this option."

You might try shell interpretation:

$ TEST=16; psql -c "select $TEST as \"input1\";"
$ TEST=16; echo "select $TEST as \"input1\";" | psql


Whatever happened to pgbash?  I see the last update was Feb 2003 but 
that was for Pg v7.3.



Rod
--


Kevin






Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Adrian Klaver
-- Original message --
From: Roderick A. Anderson [EMAIL PROTECTED]
 Kevin Hunter wrote:
  At 7:13am -0400 on Thu, 11 Sep 2008, Harald Fuchs wrote:
  Nice trick, but when I try the following variant:
 
psql -v TEST=16 -c 'select :TEST as input'
 
  I get [a syntax error]
  
  This seems to be contrary to the psql manual page:
  
  Nope.  Take a look at the -c option.  Specifically "Thus you cannot mix
  SQL and psql meta-commands with this option."
  
  You might try shell interpretation:
  
  $ TEST=16; psql -c "select $TEST as \"input1\";"
  $ TEST=16; echo "select $TEST as \"input1\";" | psql
 
 Whatever happened to pgbash?  I see the last update was Feb 2003 but 
 that was for Pg v7.3.
 
 
 Rod
 -- 
  
  Kevin
  
 
 

See also:
http://www.edlsystems.com/shellsql/
http://pgfoundry.org/projects/plsh/
--
Adrian Klaver
[EMAIL PROTECTED]

 




Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Alan Hodgson
On Thursday 11 September 2008, Gauthier, Dave [EMAIL PROTECTED] 
wrote:
 I have a job that loads a large table, but then has to update about
 half the records for various reasons.  My perception of what happens on
 update for a particular record is...

 - a new record will be inserted with the updated value(s).

 - The old record is marked as being obsolete.


What you might consider doing is loading the data into a temp table, 
updating it there, then copying that data into the final destination. 
Depending on the indexes involved, you might even find this to be faster.
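A minimal sketch of that load-then-fix-then-copy approach; the table and column names are invented for illustration, and COPY ... CSV assumes 8.0 or later:

```sql
-- Load into a temp table, fix the rows there, then move the result
-- into the destination in one pass; the final table gets no dead tuples.
CREATE TEMP TABLE staging (LIKE final_table);
COPY staging FROM '/path/to/load.csv' WITH CSV;
UPDATE staging SET amount = amount * 100 WHERE needs_rescale;
INSERT INTO final_table SELECT * FROM staging;
```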

-- 
Alan



Re: [GENERAL] Healing a table after massive updates

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 11:15 AM, Alan Hodgson [EMAIL PROTECTED] wrote:
 On Thursday 11 September 2008, Gauthier, Dave [EMAIL PROTECTED]
 wrote:
 I have a job that loads a large table, but then has to update about
 half the records for various reasons.  My perception of what happens on
 update for a particular record is...

 - a new record will be inserted with the updated value(s).

 - The old record is marked as being obsolete.


 What you might consider doing is loading the data into a temp table,
 updating it there, then copying that data into the final destination.
 Depending on the indexes involved, you might even find this to be faster.

Especially if you can drop then recreate them on the real table before
reimporting them to it.



Re: [GENERAL] index on id and created_at

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 10:08 AM, Marcus Engene [EMAIL PROTECTED] wrote:
 select
   objectid
 from
   apa
 where
   created_at > now() - interval '1 day'
 order by
   objectid desc;

 In this select, it would have been nice to dodge the full table scan without
 adding an index to created_at. Is this possible somehow?

Not really.  It's a choice of one or the other.



Re: [GENERAL] connection timeouts and killing users

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 10:42 AM, Gauthier, Dave
[EMAIL PROTECTED] wrote:
 Hi:



 Here's the problem...



 I have a read-only DB  that gets reloaded from scratch every night.  This
 takes several hours and I don't want any late night users to have to wait
 for this process to complete, so I have 2 DBs.  The first DB is the one the
 users access.  Call it main_db.  I load a second DB which has an identical
 architecture at night.  Call it standby_db.  When the load finishes, I
 rename main_db to tmp, then rename standby_db to main_db, then
 rename tmp to standby_db.  So, the users should have access to a
 main_db all the time (except for a second when the renames happen).  And
 standby_db serves as a full backup which I can use should I need it.



 Here's the problem...



 Sometimes the renames fail because people are still attached to either
 main_db or standby_db.  The error messages indicate this is the problem
 anyway.  Some of those users (most of them) are probably fast asleep at home
 and forgot to exit the interactive session that was connected to the DB.

 Q: Is there a way I can set a timeout where, if a user is inactive for say
 an hour, they get disconnected?

Not that I know of.

 Q Is there a way to kill all active users without having to cycle the DB
 server with something like pg_ctl stop -m fast -D ... ?

Yes,  issue a kill on the pid from the command line as either postgres
or root.  note I didn't say kill -9 there.

 Q: (the best option)... Is there a way I can leave those users attached to
 their DB regardless of the fact that it's name changed while they were
 attached?

I don't think so.  What might work best is to have two pg_hba.conf
files, and link to each one.  so one is pg_hba.conf.lockout and one is
pg_hba.conf.normal, let's say.  lockout is set to only answer to the
postgres user.  Switch the pg_hba.conf files, and do a pg_ctl
stop;pg_ctl start or equivalent (/etc/init.d/postgresql stop / start)
and then do your processing.  switch them back and restart pgsql
again.
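The two-file pg_hba.conf idea can be sketched as below. The file contents, $PGDATA, and the restart commands are assumptions; the pg_ctl lines are commented out since they require a real cluster:

```shell
workdir=$(mktemp -d) && cd "$workdir"
# Two variants of the auth file: normal access vs. postgres-only.
printf 'host  all  all  127.0.0.1/32  md5\n' > pg_hba.conf.normal
printf 'local all  postgres          peer\n' > pg_hba.conf.lockout
ln -sf pg_hba.conf.lockout pg_hba.conf    # lock ordinary users out
# pg_ctl restart -D "$PGDATA"             # make the lockout take effect
# ...rename main_db / standby_db here...
ln -sf pg_hba.conf.normal pg_hba.conf     # restore normal access
# pg_ctl restart -D "$PGDATA"
```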



[GENERAL] Psql command for rowcount

2008-09-11 Thread Markova, Nina
Hi,

Is there a psql or other command that I can use to list tables and their
rows? All I found is this:
http://archives.postgresql.org/pgsql-hackers/2004-09/msg00876.php

The describe (\d) command (or \dt+) doesn't show them (I  expected
tuples count to be  there too) - I can only see name, type, owner.

 Schema |   Name| Type  |  Owner   
+---+---+--
 public | a0factor  | table | postgres   <- tuples count
 public | a0factor2 | table | postgres
 public | actionlog | table | postgres
 public | deployment_access | table | postgres
 public | fircoef   | table | postgres
 public | firfilt   | table | postgres
 public | instrument| table | postgres
 public | netmap| table | postgres
 public | ops_initials  | table | postgres
 public | opslog| table | postgres
 public | opslog2actionlog  | table | postgres
 public | participation | table | postgres
 public | pazdesc   | table | postgres
 public | pazvals   | table | postgres
 public | snetsta   | table | postgres
(15 rows)

Thanks in advance,
Nina


Re: [GENERAL] index on id and created_at

2008-09-11 Thread hubert depesz lubaczewski
On Thu, Sep 11, 2008 at 06:08:15PM +0200, Marcus Engene wrote:
 In this select, it would have been nice to dodge the full table scan  
 without adding an index to created_at. Is this possible somehow?

yes.
1. drop condition on created_at
2. run the query as cursor
3. fetch rows from cursor until you will get row that has unsatisfactory
created_at.
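A sketch of that cursor-based variant (assumes the existing index on objectid; the fetch size is arbitrary):

```sql
-- Walk the objectid index from the top and stop client-side as soon
-- as a fetched row's created_at falls outside the time window.
BEGIN;
DECLARE recent_c CURSOR FOR
  SELECT objectid, created_at FROM apa ORDER BY objectid DESC;
FETCH 100 FROM recent_c;   -- repeat; stop when created_at is too old
CLOSE recent_c;
COMMIT;
```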

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: [EMAIL PROTECTED] / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] Psql command for rowcount

2008-09-11 Thread David Wilson
On Thu, Sep 11, 2008 at 3:19 PM, Markova, Nina [EMAIL PROTECTED] wrote:
 Is there a psql or other command that I can use to list tables and their
 rows? All I found is this:
 http://archives.postgresql.org/pgsql-hackers/2004-09/msg00876.php

select tablename,reltuples from pg_class inner join pg_tables on
tablename=relname where tablename !~* 'pg_*' and tablename !~*
'sql_*';

Remember that the reltuples count is an *estimate* and won't be 100%
accurate - only a count(*) will get you that.
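A hedged variant of that catalog query, filtering on the schema instead of matching table names by regex (reltuples is only as fresh as the last VACUUM/ANALYZE):

```sql
-- Estimated row counts for ordinary tables in the public schema.
SELECT c.relname, c.reltuples::bigint AS estimated_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
ORDER BY c.relname;
```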

-- 
- David T. Wilson
[EMAIL PROTECTED]



Re: [GENERAL] connection timeouts and killing users

2008-09-11 Thread Gauthier, Dave
Yes,  issue a kill on the pid from the command line as either postgres
or root.  note I didn't say kill -9 there.

How do I get the pids? Is there something specific I should look for in
the executable name I can see in ps?

 Will I break any remote server processes which are handling remote
attaches if I do this?


-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 11, 2008 3:03 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] connection timeouts and killing users

On Thu, Sep 11, 2008 at 10:42 AM, Gauthier, Dave
[EMAIL PROTECTED] wrote:
 Hi:



 Here's the problem...



 I have a read-only DB  that gets reloaded from scratch every night.
This
 takes several hours and I don't want any late night users to have to
wait
 for this process to complete, so I have 2 DBs.  The first DB is the
one the
 users access.  Call it main_db.  I load a second DB which has an
identical
 architecture at night.  Call it standby_db.  When the load finishes,
I
 rename main_db to tmp, then rename standby_db to main_db, then
 rename tmp to standby_db.  So, the users should have access to a
 main_db all the time (except for a second when the renames happen).
And
 standby_db serves as a full backup which I can use should I need it.



 Here's the problem...



 Sometimes the renames fail because people are still attached to either
 main_db or standby_db.  The error messages indicate this is the
problem
 anyway.  Someof those users (most of them) are probably fast asleep at
home
 and forgot to exit the interactive session that was connected to the
DB.

 Q: Is there a way I can set a timeout where, if a user is inactive for
say
 an hour, they get disconnected?

Not that I know of.

 Q Is there a way to kill all active users without having to cycle
the DB
 server with something like pg_ctl stop -m fast -D ... ?

Yes,  issue a kill on the pid from the command line as either postgres
or root.  note I didn't say kill -9 there.

 Q: (the best option)... Is there a way I can leave those users
attached to
 their DB regardless of the fact that it's name changed while they were
 attached?

I don't think so.  What might work best is to have two pg_hba.conf
files, and link to each one.  so one is pg_hba.conf.lockout and one is
pg_hba.conf.normal, let's say.  lockout is set to only answer to the
postgres user.  Switch the pg_hba.conf files, and do a pg_ctl
stop;pg_ctl start or equivalent (/etc/init.d/postgresql stop / start)
and then do your processing.  switch them back and restart pgsql
again.


Re: [GENERAL] Windows ODBC Driver

2008-09-11 Thread Dann Corbit
Commercial Windows & Unix based PostgreSQL ODBC drivers:

http://www.connx.com

http://www.openlinksw.com 

 

For the Mac:

http://www.actualtechnologies.com/product_opensourcedatabases.php

 

Free:

http://pgfoundry.org/softwaremap/trove_list.php?form_cat=310

http://www.postgresql.org/ftp/odbc/versions/

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Greg Lindstrom
Sent: Thursday, September 11, 2008 9:14 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Windows ODBC Driver

 

Hello,

I would like to connect to Postgres from Python running on a Windows
box.  I need the ODBC driver to create a windows ODBC datasource.  I've
been looking for two days and have found lots of dead links, but no
drivers.  Can someone please help me locate a driver so I can connect?

Thanks,
--greg



Re: [GENERAL] connection timeouts and killing users

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 2:27 PM, Gauthier, Dave [EMAIL PROTECTED] wrote:

 How do I get the pids? Is there something specific I should look for in the
 executable name I can see in ps?

You can either use a combination of ps and grep:

ps ax|grep postgres|grep dbname

or use the pg_stat_activity table;

  Will I break any remote server processes which are handling remote
 attaches if I do this?

Yes, they would lose their connection.  It's a choice you don't have
anymore if you're renaming databases underneath them.
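The pg_stat_activity route can be sketched like this; note the column names are version-dependent (procpid/current_query on 8.x, renamed pid/query in 9.2):

```sql
-- List backends connected to the database you want to rename,
-- then kill those pids from the shell with plain kill (not -9).
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE datname = 'main_db';
```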



Re: [GENERAL] Psql command for rowcount

2008-09-11 Thread Chander Ganesan

David Wilson wrote:

On Thu, Sep 11, 2008 at 3:19 PM, Markova, Nina [EMAIL PROTECTED] wrote:
  

Is there a psql or other command that I can use to list tables and their
rows? All I found is this:
http://archives.postgresql.org/pgsql-hackers/2004-09/msg00876.php



select tablename,reltuples from pg_class inner join pg_tables on
tablename=relname where tablename !~* 'pg_*' and tablename !~*
'sql_*';

Remember that the reltuples count is an *estimate* and won't be 100%
accurate - only a count(*) will get you that.
  
And it may not be accurate at all unless you or autovacuum has done a 
recent ANALYZE ...  Also keep in mind that in PostgreSQL, a count(*) 
will actually do a table scan, and could be time (and I/O) consuming if 
you are looking at a large table.


--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
919-463-0999/877-258-8987
http://www.otg-nc.com
Ask me about Expert PostgreSQL and PostGIS training - delivered worldwide.



Re: [GENERAL] Windows ODBC Driver

2008-09-11 Thread Bill Todd

Stephen Frost wrote:

Greg,

* Greg Lindstrom ([EMAIL PROTECTED]) wrote:
  

I would like to connect to Postgres from Python running on a Windows box.  I
need the ODBC driver to create a windows ODBC datasource.  I've been looking
for two days and have found lots of dead links, but no drivers.  Can someone
please help me locate a driver so I can connect?



I would guess this is what you're looking for?:

http://www.postgresql.org/ftp/odbc/versions/

Enjoy,

Stephen
  
FWIW I cannot get the ODBC driver to work correctly with ADO and the OLE 
DB provider for ODBC. It sees TEXT fields as VARCHAR instead of 
LONGVARCHAR. I do not know if the problem is at the ODBC level or the 
ADO level but test carefully if you are going to use TEXT fields.


Bill


Re: [GENERAL] PostgreSQL TPC-H test result?

2008-09-11 Thread Ron Mayer

Tom Lane wrote:
interval '1' year.  


...is SQL spec syntax, but it's not fully implemented in Postgres...

Or someone could try to make it work, but given that no one has taken
the slightest interest since Tom Lockhart left the project, I wouldn't
hold my breath waiting for that.


I have interest. For 5 years I've been maintaining a patch for a client
that allows the input of ISO-8601 intervals (like 'P1YT1M') rather than
the nonstandard shorthand ('1Y1M') that postgresql supports[1].

I'd be interested in working on this. Especially if supporting SQL
standard interval syntax could improve the chances of getting my
ISO-8601-interval-syntax replacing nonstandard-postgres-shorthand-intervals
patch accepted again, I'd be quite happy work on it.

Tom in 2003 said my code looked cleaner than the current code[2], and
the patch was accepted[3] for a while before being rejected - I believe
because Peter said he'd like to see the SQL standard intervals first.
I see it's still a TODO, though.


the grammar supports it but the info doesn't get propagated to
interval_in, and interval_in wouldn't know what to do even if it did
have the information that there was a YEAR qualifier after the literal.


Any hints on how best to propagate the needed info from the grammar?
Or should it be obvious to me from reading the code?

[1] http://archives.postgresql.org/pgsql-patches/2003-09/msg00119.php
[2] http://archives.postgresql.org/pgsql-patches/2003-09/msg00121.php
[3] http://archives.postgresql.org/pgsql-patches/2003-12/msg00253.php

  Ron Mayer
  (formerly [EMAIL PROTECTED] who
  posted those ISO-8601 interval patches)




Re: [GENERAL] initdb memory segment creation error

2008-09-11 Thread questions
I have few other applications running on my machine but they all don't use
more than 60MB. I checked and SHMALL is in pages. I also reduced
shared_buffers and max_connections but that didn't help. Also I did ipcs
after I got the error and following is what I got -

T   ID       KEY   MODE          OWNER   GROUP   NATTCH   SEGSZ      CPID    LPID    ATIME      DTIME      CTIME
m   131072     2   --rw-------   pgsql   pgsql   0        6242304    67055   67055   14:13:34   14:13:34   14:13:34
m   131073     3   --rw-------   pgsql   pgsql   0        5234688    67057   67057   14:14:02   14:14:02   14:14:02
m   131074     4   --rw-------   pgsql   pgsql   0        4235264    67059   67059   14:14:30   14:14:30   14:14:30
m   131075     5   --rw-------   pgsql   pgsql   0        3203072    67061   67061   14:14:57   14:14:57   14:14:57
m   131076     6   --rw-------   pgsql   pgsql   0        2203648    67063   67063   14:15:25   14:15:25   14:15:25
m   131077     7   --rw-------   pgsql   pgsql   0        1687552    67072   67072   14:15:54   14:15:54   14:15:54
m   131078     8   --rw-------   pgsql   pgsql   0        36478976   67074   67074   14:16:22   14:16:22   14:16:22
m   131079     9   --rw-------   pgsql   pgsql   0        9592832    67086   67086   14:19:10   14:19:10   14:19:10
m   131080    10   --rw-------   pgsql   pgsql   0        1687552    67137   67137   14:23:50   14:23:50   14:23:50
m   131119     1   --rw-------   pgsql   pgsql   0        11321344   67053   67053   14:13:07   14:13:07   14:13:07

Postgres did create, attach, and detach shared segments but didn't delete them.
I think these processes were forked by the postmaster, but why were so many
created? And why were those segments not removed?

Thanks,
Fahad

On Thu, Sep 11, 2008 at 5:20 AM, Tom Lane [EMAIL PROTECTED] wrote:

 questions [EMAIL PROTECTED] writes:
  I am getting this error with initdb while creating shared segment -
  memory segment exceeded available memory or swap space.  To reduce the
  request size (currently 1785856 bytes), reduce PostgreSQL's
 shared_buffers
  parameter (currently 50)) and/or its max_connections parameter (currently
  13).
  Total memory is 256 MB and memory available to user processes is 178 MB.
  It's not an issue with shared segment exceeding SHMMAX but value of
 SHMMAX
  is 67108864 and SHMALL is 8192. It is not exactly a Postgresql problem
 but
  what am I missing here?

 Hm, what else is running on the box?  PG itself should fit in 256MB but
 if you've got other stuff hogging memory, you might have little choice
 but to buy more RAM.

 (You should also double-check what SHMALL is measured in on your
 platform; maybe that 8192 isn't pages but bytes ...)

regards, tom lane



Re: [GENERAL] PostgreSQL TPC-H test result?

2008-09-11 Thread Ron Mayer

Ron Mayer wrote:

Tom Lane wrote:

Or someone could try to make it work, but given that no one has taken
the slightest interest since Tom Lockhart left the project, I wouldn't
hold my breath waiting for that.


I have interest. For 5 years I've been maintaining a patch for a client


Doh.  Now that I catch up on emails I see Tom has a patch
in a different thread.  I'll follow up there...


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Windows ODBC Driver

2008-09-11 Thread Stephen Frost
* Bill Todd ([EMAIL PROTECTED]) wrote:
 FWIW I cannot get the ODBC driver to work correctly with ADO and the OLE  
 DB provider for ODBC. It sees TEXT fields as VARCHAR instead of  
 LONGVARCHAR. I do not know if the problem is at the ODBC level or the  
 ADO level but test carefully if you are going to use TEXT fields.

There's an option in the ODBC configuration settings to flip that back
and forth, I believe...  'Text as LongVarChar'.

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] initdb memory segment creation error

2008-09-11 Thread Tom Lane
questions [EMAIL PROTECTED] writes:
 I have few other applications running on my machine but they all don't use
 more than 60MB. I checked and SHMALL is in pages. I also reduced
 shared_buffers and max_connections but that didn't help. Also I did ipcs
 after I got the error and following is what I got -
 [ lots of pre-existing segments ]

Well, there's your problem.  You need to clean those up.  See ipcrm,
or it might be easier just to reboot the machine.

 Postgres did create, attach, detach shared segments but did'nt delete them.

So it would seem.  What Postgres version is this, on what platform
*exactly* (a kernel version number would be good)?

regards, tom lane



Re: [GENERAL] external query VS user function

2008-09-11 Thread Merlin Moncure
On Thu, Sep 11, 2008 at 5:38 AM, Joao Ferreira gmail
[EMAIL PROTECTED] wrote:
 Hello all,

 I'm trying to optimize the execution of a query which deletes a big
 amount of records based on time

 I need to remove from 100.000 to 1.000.000 records from my table once a
 day, and I'd like to make that removal as fast as possible. This is the
 idea:

 DELETE FROM tt WHERE time  $1;


 Would it be considerably faster if I declare that query inside a user
 function, let's say function_delete(integer), and invoke it instead

 SELECT function_delete($max_time);


 Would this second approach be faster ? I imagine there could be some
 internal mechanism that would allow pg to have that query pre-optimized
 somehow ?

This scenario might be a good candidate for a partitioning/rotation
strategy.  You might want to read up on this in the docs...but the
basic idea is that the database presents a bunch of small tables with
identical structure as a single table to the app...and when it's time
to dump some records you instead issue 'drop table'.

There's some setup work to do and some tradeoffs in terms of how you
write queries that touch the table but it's usually a good strategy
for tables that basically log data, grow quickly, and have to be
rotated.
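For PostgreSQL of that era this is done with table inheritance plus constraint exclusion; a hedged sketch with invented names and date ranges:

```sql
-- Parent stays empty; each child holds one day of log rows.
CREATE TABLE tt (id int, time timestamptz);
CREATE TABLE tt_20080911
  (CHECK (time >= '2008-09-11' AND time < '2008-09-12'))
  INHERITS (tt);
-- With constraint_exclusion = on, queries against tt skip children
-- whose CHECK constraint excludes the requested time range.
-- The daily purge then becomes a cheap catalog operation:
DROP TABLE tt_20080911;
```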

merlin



Re: [GENERAL] connection timeouts and killing users

2008-09-11 Thread Gauthier, Dave
OK, killing the remote users is fine.  Just want to make sure I'm not
killing some sort of shared remote server process(es) that would prevent
future remotes from connecting.

-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 11, 2008 5:35 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] connection timeouts and killing users

On Thu, Sep 11, 2008 at 2:27 PM, Gauthier, Dave
[EMAIL PROTECTED] wrote:

 How do I get the pids? Is there something specific I should look for
in the
 executable name I can see in ps?

You can either use a combination of ps and grep:

ps ax|grep postgres|grep dbname

or use the pg_stat_activity table;

  Will I break any remote server processes which are handling remote
 attaches if I do this?

Yes, they would lose their connection.  It's a choice you don't have
anymore if you're renaming databases underneath them.



Re: [GENERAL] declare column update expression

2008-09-11 Thread Artacus



How is it that you can declare the default value of a column on insert
but not on update?


You can do this without triggers (at least on 8.3).

UPDATE foo
SET field1  = 2,
   field2 = default
WHERE field3 = 22

I just tested it and it will set the value back to the default. The 
caveat here is that it won't FORCE the value like it would with a 
trigger. So while the trigger would happen automatically, using this 
approach, you'd have to remember to also update that field any time you 
did an update.


Artacus




Re: [GENERAL] declare column update expression

2008-09-11 Thread Chris Velevitch
On Fri, Sep 12, 2008 at 12:50 PM, Artacus [EMAIL PROTECTED] wrote:
 You can do this without triggers (at least on 8.3).

 UPDATE foo
 SET field1  = 2,
   field2 = default
 WHERE field3 = 22

That means I have to then go through all my code and make sure I set
the fields value. If I forget to modify one statement, things will
break. That's why I'm looking for a declarative way of doing and let
database handle it like it handles setting the default value of a
column on insert.



Chris
--
Chris Velevitch
Manager - Adobe Platform Users Group, Sydney
m: 0415 469 095
www.apugs.org.au

Adobe Platform Users Group, Sydney
September meeting: It's Going To Be Brilliant
Date: Mon 29th September 6pm for 6:30 start
Details and RSVP on http://apugs2008september.eventbrite.com



Re: [GENERAL] declare column update expression

2008-09-11 Thread Chris Velevitch
On Fri, Sep 12, 2008 at 1:53 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
 Here's a simple example of last modified trigger using plpgsql from way back:

 -- FUNCTION --

 CREATE FUNCTION modtime () RETURNS opaque AS '
BEGIN
new.lm :=''now'';
RETURN new;
END;
 ' LANGUAGE 'plpgsql';

This doesn't work in 7.4. It doesn't like 'opaque', whatever that is. It
doesn't like language plpgsql. I'm using a shared hosted database, so
I'm probably not allowed to run createlang. And it complains about 'new'.
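For reference, on 8.0 and later the same idea is usually written with RETURNS trigger, dollar quoting, and an explicit CREATE TRIGGER; the table and column names below are assumptions:

```sql
-- Keep last_modified current on every insert/update of the row.
CREATE FUNCTION modtime() RETURNS trigger AS $$
BEGIN
  NEW.last_modified := current_timestamp;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_last_modified
  BEFORE INSERT OR UPDATE ON sometable
  FOR EACH ROW EXECUTE PROCEDURE modtime();
```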



Chris
--
Chris Velevitch
Manager - Adobe Platform Users Group, Sydney
m: 0415 469 095
www.apugs.org.au

Adobe Platform Users Group, Sydney
September meeting: It's Going To Be Brilliant
Date: Mon 29th September 6pm for 6:30 start
Details and RSVP on http://apugs2008september.eventbrite.com



Re: [GENERAL] PostgreSQL TPC-H test result?

2008-09-11 Thread Jignesh K. Shah
Moving this thread to Performance alias as it might make more sense for 
folks searching on this topic:




Greg Smith wrote:

On Tue, 9 Sep 2008, Amber wrote:

I read something from 
http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html 
saying that PostgreSQL can't give the correct result of the some 
TPC-H queries


Jignesh Shah at Sun ran into that same problem.  It's mentioned 
briefly in his presentation at 
http://blogs.sun.com/jkshah/entry/postgresql_east_2008_talk_postgresql 
on pages 26 and 27.  5 of the 22 reference TPC-H queries (4, 5, 6, 10, 
14) returned zero rows immediately for his tests.  Looks like the 
MonetDB crew is saying it does that on queries 4,5,6,10,12,14,15 and 
that 20 takes too long to run to generate a result.  Maybe 12/15/20 
were fixed by changes in 8.3, or perhaps there were subtle errors 
there that Jignesh didn't catch--it's not like he did a formal 
submission run, was just kicking the tires.  I suspect the difference 
on 20 was that his hardware and tuning was much better, so it probably 
did execute fast enough.


I redid a quick test with the same workload on one of my systems with SF 
10 which is about 10GB

(I hope it comes out properly displayed)

          Jignesh          From Monet (8.3T/8.2.9)

Q     Time PG8.3.3      Time PG8.2.9       Ratio

1       429.01             510              0.84
2         3.65              54              0.07
3        33.49             798              0.04
4         6.53 Empty        35 (E)          0.19
5         8.45 Empty         5.5 (E)        1.54
6        32.84 Empty       172 (E)          0.19
7       477.95             439              1.09
8        58.55             251              0.23
9       781.96            2240              0.35
10        9.03 Empty         6.1 (E)        1.48
11        3.57 Empty        25              0.14
12       56.11 Empty       179 (E)          0.31
13       61.01             140              0.44
14       30.69 Empty       169 (E)          0.18
15       32.81 Empty       168 (E)          0.2
16       23.98             115              0.21
17      Did not finish    Did not finish
18       58.93             882              0.07
19       71.55             218              0.33
20      Did not finish    Did not finish
21      550.51             477              1.15
22        6.21            Did not finish




All times are in seconds (sub-second precision where available)
Ratio > 1 means 8.3.3 is slower and < 1 means 8.3.3 is faster

My take on the results:

* I had to tweak the statement of Q1 in order to execute it.
 (TPC-H kit does not directly support POSTGRESQL statements)

* Timings with 8.3.3 and a bit of tuning give much better times overall.
 This was expected (some queries finish in 7% of the time that
 MonetDB reported. Of the queries that worked, only Q7 & Q21 seem to
 have regressed)

* However the empty-rows result is occurring consistently
 (In fact Q11 also returned empty for me while it worked in their test)
 Queries: 4,5,6,10,11,12,14,15
 (ACTION ITEM: I will start separate threads for each of those queries in
  HACKERS alias to figure out the problem since it looks like Functional
  problem to me and should be interesting to hackers alias)

* Two queries, 17 and 20, look like they will not finish (I let Q17 run
 for 18 hrs and yet it had not completed. As for Q20, I killed it as it
 was approaching an hour.)
 (ACTION ITEM: Not sure whether debugging for these queries will go in 
hackers or

  perform alias but I will start a thread on them too.)

* Looks like a bit of tuning is required for Q1, Q7, Q9, Q21 to improve their
 overall time. Especially understanding if PostgreSQL is missing a more 
efficient

 plan for them.
 (ACTION ITEM: I will start separate threads on performance alias to 
dig into

  those queries)


I hope to start separate threads for each queries so we can track them 
easier. I hope to provide explain analyze outputs for each one of them 
and let's see if there are any problems.


Feedback welcome on what you want to see for each threads.

Regards,
Jignesh


--
Jignesh Shah   http://blogs.sun.com/jkshah  
Sun Microsystems,Inc   http://sun.com/postgresql




Re: [GENERAL] declare column update expression

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 9:10 PM, Chris Velevitch
[EMAIL PROTECTED] wrote:
 On Fri, Sep 12, 2008 at 1:53 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
 Here's a simple example of last modified trigger using plpgsql from way back:

 -- FUNCTION --

 CREATE FUNCTION modtime () RETURNS opaque AS '
BEGIN
new.lm :=''now'';
RETURN new;
END;
 ' LANGUAGE 'plpgsql';

 This does work in 7.4. It doesn't like 'opaque', whatever that is. It
 doesn't like language plpgsql. I'm using a shared hosted database, so
 I'm probably not allowed to createlang. And it complains about 'new'.

Do you have a superuser account?  Pretty sure you gotta have that to
create lang.  OTOH, plpgsql is a safe language once installed, so
you should be able to ask your hosting provider to install it.  Can't
hurt to ask.



Re: [GENERAL] declare column update expression

2008-09-11 Thread Scott Marlowe
On Thu, Sep 11, 2008 at 8:50 PM, Artacus [EMAIL PROTECTED] wrote:

 How is it that you can declare the default value of a column on insert
 but not on update?

 You can do this without triggers (at least on 8.3).

 UPDATE foo
 SET field1  = 2,
   field2 = default
 WHERE field3 = 22

 I just tested it and it will set the value back to the default. The caveat
 here is that it won't FORCE the value like it would with a trigger. So while
 the trigger would happen automatically, using this approach, you'd have to
 remember to also update that field any time you did an update.

Right, but now you've moved the complexity of timestamping updates
into the application layer, where it has to be propagated to all
update queries. Miss one and it won't get updated.

i wonder if you could do it with a rule?



Re: [GENERAL] declare column update expression

2008-09-11 Thread Artacus

 That means I have to then go through all my code and make sure I set
 the fields value. If I forget to modify one statement, things will
 break.

Right, that's why the right answer for what you want to do is to have a 
trigger. I was just giving you an alternative since you didn't like the 
trigger answer.




Do you have a superuser account?  Pretty sure you gotta have that to
create lang.  OTOH, plpgsql is a safe language once installed, so
you should be able to ask your hosting provider to install it.  Can't
hurt to ask.


For plpgsql I don't believe it actually installs anything. The docs 
make it sound more like it activates it. So as long as you own your 
database you should be able to


createlang plpgsql




Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Artacus

$ TEST=16; psql -c "select $TEST as \"input1\";"
$ TEST=16; echo "select $TEST as \"input1\";" | psql


Yep that works. My coworker also suggested using a here-document (<<EOF)
to simulate a psql script.

