Re: [GENERAL] PostgreSQL process architecture question.

2008-09-10 Thread Scott Marlowe
On Tue, Sep 9, 2008 at 11:17 PM, 小波 顾 [EMAIL PROTECTED] wrote:
 That's it, we have 4 CPUs, each of which has 4 cores, that is we have 16
 cores in total, but we have only 4  to 8 concurrent users, who regularly run
 complex queries. That is we can't use all our CPU resources in such a
 situation to speed up response time.

Unless you have either a small data set or a very powerful RAID array,
most of the time you won't be CPU bound anyway.  But it would be nice to
see some work come out to parallelize some of the work done in the
back end.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql scripting tutorials

2008-09-10 Thread Artacus

Artacus wrote:
I'd like to learn a little more about writing psql scripts. Does anyone 
know of any resources outside of the manual?


Ok then. Does anyone have any tips or best practices for scripting psql? 



I'll probably write some bash scripts to pull csv files over then script 
psql to do a COPY into an import schema then run a pl/pgsql procedure, 
er function, to do all ETL.


Who else is doing something like this? Can psql access environment 
variables or command line params? Or do I have to have my bash script 
write a psql script every time?


Thanks



Re: [GENERAL] How to upload data to postgres

2008-09-10 Thread Artacus

Markova, Nina wrote:

Hi again,

I need to load data from an Ingres database to a Postgres database.  What's 
the easiest way?


Thanks,
Nina


Easiest way would be to export to CSV and import using COPY. Slickest 
way would be to use something like dblink.
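For the CSV route, a minimal sketch (the staging table, columns, and file 
paths here are invented for illustration, not Nina's actual schema):

```sql
-- Invented example schema; adjust to the real Ingres table definition.
CREATE TABLE site_import (
    sta     varchar(5),
    ondate  varchar(8),
    lat     double precision
);

-- Server-side COPY: the file must be readable by the PostgreSQL server.
COPY site_import FROM '/tmp/site.csv' WITH CSV;

-- Client-side alternative from within psql (file read on the client):
-- \copy site_import FROM 'site.csv' WITH CSV
```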





Re: [GENERAL] Server installation problem using freebsd ports

2008-09-10 Thread Artis Caune
On Tue, Sep 9, 2008 at 9:21 AM, Mohd Fahadullah [EMAIL PROTECTED] wrote:
 Hi,

 This might be a very small problem but I am stuck. When I try to install
 postgresql server 8.3.3 on freebsd using ports, I am getting -
  postgresql-server-8.3.3 cannot install: unknown PostgreSQL version: 83
 I was able to install the client. This has something to do with pgsql_ver.
 What's wrong here?

 Thanks

Hi,

update your ports tree (cvsup/portsnap) and try again.
If that doesn't work, show us your /etc/make.conf


p.s. this is a FreeBSD-specific question and should go to freebsd-questions@




-- 
regards,
Artis Caune

. CCNA
|
' didii FreeBSD



Re: [GENERAL] Various intermittent bugs/instability - how to debug?

2008-09-10 Thread Mark Cave-Ayland

Frederik Ramm wrote:

Dear PostgreSQL community,


   I hope you can help me with a problem I'm having - I'm stuck and 
don't know how to debug this further.


I have a rather large nightly process that imports a lot of data from 
the OpenStreetMap project into a PostGIS database, then proceeds to do 
all sorts of things - creating spatial indexes, computing bounding 
boxes, doing simplification of geometries, that kind of stuff. The whole 
job usually takes about five hours.


I'm running this on a Quad-Core Linux (Ubuntu, PostgreSQL 8.3) machine 
with 8 GB RAM.


Every other night, the process aborts with some strange error message, 
and never at the same position:


ERROR:  invalid page header in block 166406 of relation node_tags

ERROR:  could not open segment 2 of relation 1663/24253056/24253895 
(target block 1421295656): No such file or directory


ERROR:  Unknown geometry type: 10

When I continue the process after the failure, it will usually work.

I know you all think "hardware problem" now. Of course this was my first 
guess as well. I ran a memory test for a night, no results; I downgraded 
to failsafe defaults for all BIOS timings, again no change. Ran 
cpuburn and all sorts of other things to grill the hardware - nothing.


Then I bought an entirely new machine; similar setup, but using a 
Gigabyte instead of Asus mainboard, different chipset, slightly faster 
Quad-Core processor, and again 8 GB RAM and Ubuntu Hardy with 
PostgreSQL 8.3 and matching PostGIS.


Believe it or not, this machine shows the *same* problems. It is not 
100% reproducible, sometimes the job works fully, but every other day it 
just breaks down with one of the funny messages like above. No memtest 
errors here either.


Both machines are consumer quality, i.e. normal Intel processors and 
not the server (Xeon) stock.


I am at a loss - how can I proceed? This looks like a hardware problem 
alright, but such similar problems on two such different machines? Is there 
something wrong with Intel's Quad-Core CPUs?


What could I do to have a better chance of reproducing the error and 
ultimately identifying the component responsible? Is there some kind of 
PostgreSQL load test, something like cpuburn for PostgreSQL?


Have there been other reports of intermittent problems like mine, and 
does anybody have any blind guesses...?


Thanks
Frederik



Hi Frederik,

We did find a memory clobber in the PostGIS ANALYZE routine a while 
back, but the fix hasn't yet made it into a release.


If you are building from source, please can you try applying the patch 
here: http://code.google.com/p/postgis/issues/detail?id=43 and report 
back whether it helps or not?



ATB,

Mark.

--
Mark Cave-Ayland
Sirius Corporation - The Open Source Experts
http://www.siriusit.co.uk
T: +44 870 608 0063



Re: [GENERAL] PostgreSQL TPC-H test result?

2008-09-10 Thread Simon Riggs

On Tue, 2008-09-09 at 16:26 -0400, Tom Lane wrote:

 That's probably not good because it *looks* like we support the syntax,
 but in fact produce non-spec-compliant results.  I think it might be
 better if we threw an error.

Definitely. If we accept SQL Standard syntax like this but then don't do
what we should, it is clearly an ERROR. Our reputation will be damaged
if we don't, since people will think that we are blasé about standards
compliance and about query correctness. Please let's move swiftly to plug
this hole, as if it were a data loss bug (it is, if it causes wrong
answers to queries for unsuspecting users).

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




[GENERAL] Stuck query

2008-09-10 Thread Tommy Gildseth
I have a query on a database that has been running for nearly 24 hours 
at the moment. The query itself doesn't seem like it should take very 
long to run, so it seems like there's something else going on here.


The output from pg_stat_activity looks like this:
SELECT procpid, waiting, current_query, query_start, backend_start FROM 
pg_stat_activity WHERE current_query <> '<IDLE>' AND usename <> 'postgres';
 procpid | waiting |  current_query  | 
 query_start  | backend_start

-+-+-+---+---
   17504 | f   | SELECT cam.netboxid,| 
2008-09-09 13:44:01.035885+02 | 2008-09-09 13:43:58.613948+02

   :ifindex,
   :arp.ip,
   :REPLACE(mac::text, ':', '') AS portname,
   :cam.start_time,
   :cam.end_time,
   :vlan
   : FROM cam
   : JOIN netbox USING (netboxid)
   : JOIN arp USING (mac)
   : JOIN prefix ON (arp.prefixid = prefix.prefixid)
   : JOIN vlan USING (vlanid)
   : WHERE cam.end_time='infinity'
   :   AND arp.end_time='infinity'
   :   AND vlan IS NOT NULL
   :

SELECT pg_cancel_backend(17504) has no effect, and neither does kill 17504 
from the shell.


I tried strace -p17504, and this gave me just the following output:
sendto(7, \7\0\0\0\003771\0\0\0\00224\0\0\0\017127.120.213.18..., 968, 
0, NULL, 0 unfinished ...



Does anyone have any further troubleshooting suggestions that I can try, 
to figure out why this query has crashed?


The pg version is 8.2.9 on RHEL4


--
Tommy Gildseth



[GENERAL] pg_start_backup() takes too long

2008-09-10 Thread Ivan Zolotukhin
Hello,

What is the reason for

select pg_start_backup('label');

taking 10 minutes on a not-so-loaded system, even right after a manual checkpoint?



[GENERAL] ms windows: 40% space taken double by ../pg_tblspc folder?

2008-09-10 Thread Willy-Bas Loos
Hi,

Tablespaces are implemented as some sort of a hard link on windows, so that
the data seem to be in the program files, but they're not.
When I look at my C drive with spacemonger though, it tells me that the data
in the tablespace IS in the program files folder - it is fooled by the
hard link.
That's no big problem, but it seems that Windows itself is fooled too. The
space left on my C drive is not correct; I'm missing some space.
It's not quite as much as the size of the tablespace: 4 GiB of space is
missing on the C drive, while the tablespace is 10 GiB.
-  Missing means: when I calculate the sum of the file sizes, except the
ones in C:\ProgramFiles\PostgreSQL\8.3\data\pg_tblspc\, there should be 24
GiB free, but DIR and all the dialogs say that there are only 20 GiB free.
If the whole space for ../pg_tblspc were taken double, there would be only
14 GiB free.
-  The command used was C:\> dir /a /s /-c > c.txt
-  There are minor differences between dir *, dir *.* and dir without an
asterisk (surprisingly *.* gets the most)
-  The difference is not explained by the size on disk vs filesize (DIR
gives the filesize, but the difference is less than 1 GiB)
-  The filesize sum in the properties dialog of Program files does NOT
include the tablespace files
What happens when the size of the tablespace surpasses the space that is
left on my C drive? Windows might find that the disk is full (even though it
isn't), and writing to it would be impossible.

Anyone know more about this?

Thx,

WBL
-- 
Patriotism is the conviction that your country is superior to all others
because you were born in it. - George Bernard Shaw


Re: [GENERAL] Stuck query

2008-09-10 Thread Tommy Gildseth

Richard Huxton wrote:

Tommy Gildseth wrote:

SELECT pg_cancel_backend(17504) has no effect, neither does kill 17504
from the shell.


Strange.


I tried strace -p17504, and this gave me just the following output:
sendto(7, \7\0\0\0\003771\0\0\0\00224\0\0\0\017127.120.213.18..., 968,
0, NULL, 0 unfinished ...


Looks like part of your query results being sent. Is it hung in that one
system-call?



Yes, I left it there for about ~1 hour, and that was all that ever came.





Does anyone have any further troubleshooting suggestions that I can do,
to figure out why this query have crashed?


Is the client locked/crashed too?
If it's connected over a network, is the connection still there?



We stopped the client application to see if that would make any 
difference, but the connection and the query on the server side are still 
there.


A strace of the client application before it was shut down simply showed:

strace -p6721
Process 6721 attached - interrupt to quit
accept(18,  unfinished ...

so, not very helpful.





The pg version is 8.2.9 on RHEL4


Fairly standard setup. I've seen various problems reported by selinux
oddities, but nothing quite like this.


We don't use selinux.


--
Tommy Gildseth



Re: [GENERAL] ms windows: 40% space taken double by ../pg_tblspc folder?

2008-09-10 Thread Magnus Hagander
Willy-Bas Loos wrote:
 Hi,
 
 Tablespaces are implemented as some sort of a hard link on windows, so
 that the data seem to be in the program files, but they're not.

No, they are implemented using softlinks, AKA NTFS Junctions.


 When i look at my C drive with spacemonger though, it tells me that the
 data in the tablespace IS in the program files folder - it is fooled
 by the hard link.

In that case, it's a bug in that product. It's not uncommon - using
junctions isn't very common on windows. But one core system example is
that the SYSVOL stuff uses junctions.


 That's no  big problem, but it seems that Windows itself is fooled too.
 The space left on my C drive is not correct, i'm missing some space.
 It's not quite as much as the size of the tablespace. 4 GiB of space is
 missing on the C drive, the tablespace is 10 GiB.
 -  Missing means: when i calculate the sum of the file sizes, except
 the ones in C:\ProgramFiles\PostgreSQL\8.3\data\pg_tblspc\ , there
 should be 24 GiB Free, but DIR and all the dialogs say that there are
 only 20 GiB free. If the whole space for ../pg_tblspc were taken double,
 there would be only 14 GiB free.
 -  The command used was C:\> dir /a /s /-c > c.txt
 -  There are minor differences between dir *, dir *.* and dir without an
 asterisk (surprisingly *.* gets the most)
 -  The difference is not explained by the size on disk vs filesize
 (DIR gives the filesize, but the difference is less than 1 GiB)
 -  The filesize sum in the properties dialog of Program files does NOT
 include the tablespace files

It could be the fact that the size of files that are currently being
modified doesn't show up properly.

 What happens when the size of the tablespace surpasses the space that is
 left on my C drive? Windows might find that the disc is full (even
 though it isn't), and writing to it would be impossible.

If the tablespace is on a different drive, it's completely unaffected by
the size of the actual drive C. The Windows APIs that PostgreSQL uses
certainly know about this, and won't be fooled.

//Magnus




Re: [GENERAL] Stuck query

2008-09-10 Thread Tom Lane
Tommy Gildseth [EMAIL PROTECTED] writes:
 Richard Huxton wrote:
 Looks like part of your query results being sent. Is it hung in that one
 system-call?

 Yes, I left it there for about ~1 hour, and that was all that ever came.

Seems like you have got a network issue.  What does netstat show for the
status of that connection?

I don't think that a query cancel will blow PG off the send; you'd
probably have to resort to kill -9 on that process (with a consequent
restart of other sessions).  It's odd that the kernel hasn't given up
on the connection yet ...

regards, tom lane



Re: [GENERAL] psql scripting tutorials

2008-09-10 Thread John DeSoi


On Sep 10, 2008, at 2:46 AM, Artacus wrote:

Who else is doing something like this? Can psql access environment  
variables or command line params? Or do I have to have my bash  
script write a psql script every time?



The psql \! command can execute shell commands. You can also use  
backquotes (``), e.g.


= \echo `date`
Wed Sep 10 08:07:19 EDT 2008
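Another option for the command-line-params part of the question: psql
accepts variables with -v and interpolates them as :name inside a script.
A sketch, assuming a hypothetical load.sql and database name:

```shell
# Hypothetical wrapper: pass a bash value into a psql script via -v.
# Inside load.sql, write e.g.:  SELECT count(*) FROM :tablename;
psql -d mydb -v tablename=import_sites -f load.sql
```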


John DeSoi, Ph.D.







Re: [GENERAL] Stuck query

2008-09-10 Thread Tommy Gildseth

Tom Lane wrote:

Tommy Gildseth [EMAIL PROTECTED] writes:

Richard Huxton wrote:

Looks like part of your query results being sent. Is it hung in that one
system-call?



Yes, I left it there for about ~1 hour, and that was all that ever came.


Seems like you have got a network issue.  What does netstat show for the
status of that connection?

I don't think that a query cancel will blow PG off the send; you'd
probably have to resort to kill -9 on that process (with a consequent
restart of other sessions).  It's odd that the kernel hasn't given up
on the connection yet ...



Netstat showed:

netstat -a --tcp -p | grep 49004
tcp  0  44660 dbserver:postgres clientserver:49004  ESTABLISHED 
17504/postgres: nav


I went back to the server the client was running on to double check, and 
it seems the client process hadn't been killed off when the application 
was restarted.


We've got some scheduled downtime tomorrow, so I think I'll just leave 
it till then, since it's not causing any problems as far as I can tell.



--
Tommy Gildseth



Re: [GENERAL] Server installation problem using freebsd ports

2008-09-10 Thread Bill Moran
In response to Artis Caune [EMAIL PROTECTED]:

 On Tue, Sep 9, 2008 at 9:21 AM, Mohd Fahadullah [EMAIL PROTECTED] wrote:
  Hi,
 
  This might be a very small problem but I am stuck. When I try to install
  postgresql server 8.3.3 on freebsd using ports, I am getting -
   postgresql-server-8.3.3 cannot install: unknown PostgreSQL version: 83
  I was able to install client. This has something to do with pgsql_ver.
  What's wrong here?
 
  Thanks
 
 Hi,
 
 update your ports tree (cvsup/portsnap) and try again.
 If not working, show your /etc/make.conf
 
 
 p.s. this is FreeBSD specific question and should go to freebsd-questions@

Or freebsd-ports, but either way it's working fine for me here on
FreeBSD 6 with PG 8.3 port from a recent ports tree.  It's likely
something out of sync or otherwise hosed in your ports or your
make.conf.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



[GENERAL] You need to rebuild PostgreSQL using --with-libxml.

2008-09-10 Thread Ricardo Antonio Yepez Jimenez
 

Hi, 

How do I compile PostgreSQL 8.3.2 to support SQL/XML on Red Hat 5.1
Enterprise Edition? I need to know the steps to configure it; does
anyone have a manual?

Thank you.

 





[GENERAL] Autocommit, isolation level, and vacuum behavior

2008-09-10 Thread Jack Orenstein
I'm trying to understand the effect of autocommit on vacuum behavior (postgres 
8.3, if it matters). Let's suppose you have two tables, BIG and TINY in a 
database accessed through JDBC.  BIG has lots of rows. There are inserts, 
updates, and every so often there is a scan of the entire table. The scan is 
slow, e.g. one row every 30 seconds. TINY has one row, which contains summary 
information from BIG. It is updated every time BIG is inserted into or updated. 
BIG is vacuumed weekly, and TINY is vacuumed every 1000 updates.


What I'm observing is that as my test program runs, transactions (insert/update 
BIG; update TINY) get slower and slower, and the file storing the TINY table 
gets very big. I'm guessing that the long-running scan of BIG forces versions of 
the one row in TINY to accumulate (just in case the TINY table is viewed, the 
connection has to have the correct view). As these accumulate, each update to 
TINY takes more and more time, and everything slows down.


I wrote a little JDBC test program to test this theory.  Long scans (with the 30 
second sleep) and with autocommit = false produces the problem described. 
Shorter scans (e.g. no sleep between rows of the BIG scan) produce better 
results. Also, if the scan is done on a connection with autocommit = true, 
everything works fine -- no slowdown, and no bloat of the TINY file.


Am I on the right track -- does autocommit = false for the BIG scan force 
versions of TINY to accumulate? I played around with a JDBC test program, and so 
far cannot see how the autocommit mode causes variations in what is seen by the 
scan. The behavior I've observed is consistent with the SERIALIZABLE isolation 
level, but 1) I thought the default was READ COMMITTED, and 2) why does the 
accumulation of row versions have anything to do with autocommit mode (as 
opposed to isolation level) on a connection used for the scan?


Jack Orenstein



Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

2008-09-10 Thread Martijn van Oosterhout
On Wed, Sep 10, 2008 at 09:45:04AM -0400, Jack Orenstein wrote:
 Am I on the right track -- does autocommit = false for the BIG scan force 
 versions of TINY to accumulate? I played around with a JDBC test program, 
 and so far cannot see how the autocommit mode causes variations in what is 
 seen by the scan. The behavior I've observed is consistent with the 
 SERIALIZABLE isolation level, but 1) I thought the default was READ 
 COMMITTED, and 2) why does the accumulation of row versions have anything 
 to do with autocommit mode (as opposed to isolation level) on a connection 
 used for the scan?

Vacuum can only clean up stuff older than the oldest open transaction.
So if you have a transaction which is open for hours, then stuff created
since then can't be vacuumed. The solution is: don't do that.

What I don't understand from your description is why your scan is slow
and how the autocommit relates to this. Postgresql only cares about
when you start and commit transactions, and I can't tell from your
description when exactly that happens.

Rule of thumb: don't hold transactions open unnecessarily long.
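As a quick check for such long-open transactions, something like this
against pg_stat_activity can help (xact_start is available in 8.3; the
one-hour threshold is arbitrary):

```sql
-- List backends whose transaction has been open for over an hour.
SELECT procpid, usename, xact_start, current_query
  FROM pg_stat_activity
 WHERE xact_start < now() - interval '1 hour'
 ORDER BY xact_start;
```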

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Please line up in a tree and maintain the heap invariant while 
 boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] PostgreSQL TPC-H test result?

2008-09-10 Thread Andrew Sullivan
On Tue, Sep 09, 2008 at 05:42:50PM -0400, Greg Smith wrote:

 While some of the MonetDB bashing in this thread was unwarranted, 

What bashing?  I didn't see any bashing of them.  

A
-- 
Andrew Sullivan
[EMAIL PROTECTED]
+1 503 667 4564 x104
http://www.commandprompt.com/



Re: [GENERAL] FW: How to upload data to postgres

2008-09-10 Thread Markova, Nina
 
Thanks Adrian.

I have read the Postgres 'copy' docs - the problem is that Postgres doesn't
understand the Ingres format. This is, I think, where the failure comes from.
If I don't find a tool, I'll have to write scripts to convert the data to
something Postgres understands.

In the Ingres data file, each varchar field is preceded by its real size:

48070   820010601   82003021041.890
-80.811   0.000 1U
3A16819871030   047.471 -70.006
0.015 1R  0 

In the example above: 
 3A16 - means for a varchar(5) field there are only 3 characters, i.e. A16
 48070 - means for a varchar(5) field there are only 4 characters, i.e. 8070
819871030 - 8 characters, i.e. 19871030

 When I created the same table in Postgres, inserted some test data, and
later copied it to a file, this is how it looks:

A16 1987103047.471  -70.006 0.015   R
KLNO1980102847.473  -70.006 0.016   R
MLNO198010281999010147.413  -70.006 0.016   R 
   
   Column|  Type  |   Modifiers

-++-
---
 sta | character varying(5)   | not null
 ondate  | character varying(8)   | not null
 offdate | character varying(8)   | not null
 lat | double precision   | not null
 lon | double precision   | not null
 elev| double precision   | not null default 0
 regist_code | character(1)   | not null default ' '::bpchar


Nina

-Original Message-
From: Adrian Klaver [mailto:[EMAIL PROTECTED] 
Sent: September 9, 2008 22:43
To: pgsql-general@postgresql.org
Cc: Markova, Nina
Subject: Re: [GENERAL] FW: How to upload data to postgres

On Tuesday 09 September 2008 1:54:12 pm Markova, Nina wrote:
 So far I tried;

 1)  I have copied data from Ingres in ASCII (using Ingres copydb 
 command).
 2)  created a  table in a Postgres database
 3)  tried loading data into the Postgres table - encountered problems.

 For 1) (the Ingres part)
 =
 Ingres used the following copy commands:

 copy site(
 sta= varchar(0)tab,
 ondate= varchar(0)tab,
 offdate= varchar(0)tab,
 lat= c0tab,
 lon= c0tab,
 elev= c0tab,
 regist_code= varchar(0)tab,
 vault_cond= varchar(0)tab,
 geology= varchar(0)tab,
 comment= varchar(0)tab,
 initials= varchar(0)tab,
 lddate= c0nl,
 nl= d0nl)
 into '/tmp/site.dba'

 Normally Ingres will use this command to copy data from a file:
 copy site(
 sta= varchar(0)tab,
 ondate= varchar(0)tab,
 offdate= varchar(0)tab,
 lat= c0tab,
 lon= c0tab,
 elev= c0tab,
 regist_code= varchar(0)tab,
 vault_cond= varchar(0)tab,
 geology= varchar(0)tab,
 comment= varchar(0)tab,
 initials= varchar(0)tab,
 lddate= c0nl,
 nl= d0nl)
 from '/vm04-0/home/postgres/test/site.dba'

 For 3)
 =
 - I got an error when I tried to copy with an Ingres-like copy command.
 - Then I tried to copy with simple 'copy site from 
 '/vm04-0/home/postgres/test/site-c.dba' - ERROR:  value too long for 
 type character varying(5)

The ERROR explains it. The value you are bringing over from the Ingres
database is too long for a varchar(5) field.

Instead of rehashing the documentation I will point you to the relevant
section that pertains to Postgres COPY:
http://www.postgresql.org/docs/8.3/interactive/sql-copy.html


 - I had no luck either when I used binary copying - postgres complained 
 about the signature:
 copy site from  '/vm04-0/home/postgres/test/site.dba'   with binary

 ERROR:  COPY file signature not recognized

 
 I have couple of questions as well.
 
 Q1: is there an equivalent of copydb in postgres (in Ingres copydb 
 creates copy statements for all database tables in a single file)

See pg_dump:
http://www.postgresql.org/docs/8.3/interactive/app-pgdump.html

 Q2:  how to say in postgres that a field has no default values (in 
 Ingres 'not default' is used - and this produced an error in postgres 
 CREATE TABLE command)

The CREATE TABLE only takes a DEFAULT clause. If you want no default,
don't specify anything:

 lat float not null,

Since you specified NOT NULL you will have to specify some value on
INSERT.


 Create table site (
 sta varchar(5) not null,
 ondate varchar(8) not null,
 offdate varchar(8) not null,
 lat float not null not default, -
 lon float not null not default
 )

 Q3:  How to specify storage structure of a table (again in Ingres 
 'modify' statement is used to specify btree, isam or hash structure). 
 In the Postgres documentation I only saw  how to create an index with 
 a specific structure.

As far as I know this cannot be done in Postgres. The 

[GENERAL] plpgsql return select from multiple tables

2008-09-10 Thread Artis Caune
Hi,

What is the correct way of writing a plpgsql function which needs to return
columns from multiple tables?

e.g.:
SELECT email FROM emails WHERE id = 1
SELECT backend FROM backends WHERE id = 1

I need the plpgsql function to return both email and backend in one row, like:
SELECT email, backend FROM ...


I do like this:

CREATE OR REPLACE FUNCTION get_user_data( INT )
RETURNS SETOF RECORD AS $$
DECLARE
v_email RECORD;
v_backend RECORD;
BEGIN
SELECT email
  INTO v_email
  FROM emails
 WHERE id = $1;

SELECT backend
  INTO v_backend
  FROM backends
 WHERE id = $1;

RETURN QUERY SELECT v_email AS email,
v_backend AS backend;
END;
$$ LANGUAGE 'plpgsql' SECURITY DEFINER;


and then doing selects:
SELECT * FROM get_user_data('${id}') AS (email VARCHAR, backend VARCHAR)


Is this approach okay if there will be a lot of these queries?
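A sketch of an alternative using OUT parameters (available since 8.1),
which avoids the AS (email VARCHAR, backend VARCHAR) column list at every
call site; the table and column names follow the example above:

```sql
CREATE OR REPLACE FUNCTION get_user_data(p_id int,
                                         OUT email varchar,
                                         OUT backend varchar) AS $$
BEGIN
    SELECT e.email   INTO email   FROM emails   e WHERE e.id = p_id;
    SELECT b.backend INTO backend FROM backends b WHERE b.id = p_id;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

-- Called simply as:
-- SELECT * FROM get_user_data(1);
```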




-- 
regards,
Artis Caune

. CCNA
|
' didii FreeBSD



Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

2008-09-10 Thread Alvaro Herrera
Martijn van Oosterhout wrote:

 Vacuum can only clean up stuff older than the oldest open transaction.
 So if you have a transaction which is open for hours then stuff made
 since then it can't be vacuumed. The solution is: don't do that.

Actually it's worse than that: older than the oldest transaction that
was active at the time when the current oldest transaction created its
snapshot.

As for autocommit, my guess is that the driver is doing COMMIT; BEGIN.
This should not cause much of a problem in 8.3 compared to previous
releases, because the transaction gets its Xid at the time the first
write command is run (previously it was grabbed when the
transaction started).  Also, I thought recent versions of the JDBC
driver did not issue the BEGIN right after COMMIT, so I'm surprised that
there's any visible difference at all.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] You need to rebuild PostgreSQL using --with-libxml.

2008-09-10 Thread Filip Rembiałkowski
2008/9/10 Ricardo Antonio Yepez Jimenez [EMAIL PROTECTED]:


 Hi,

 As I compile postgresql 8.3.2 to support sql / xml, Red hat 5.1 enterprise
 edition, I need to know the steps to comfigurarlo, if someone owns a manual.

nothing fancy; just install libxml2 (on Debian I needed libxml2-dev;
adjust this for RH 5.1) and reconfigure the postgres sources with
--with-libxml.


http://www.postgresql.org/docs/8.3/static/install-procedure.html
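The rebuild itself is the usual source-install sequence; a sketch (on
RHEL the headers typically come from the libxml2-devel package):

```shell
# From the unpacked postgresql-8.3.x source tree:
./configure --with-libxml
make
make install
```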



-- 
Filip Rembiałkowski



Re: [GENERAL] plpgsql return select from multiple tables

2008-09-10 Thread Alvaro Herrera
Artis Caune escribió:
 Hi,
 
 What is the correct way of writing plpgsql function which needs return
 columns from multiple tables?
 
 e.x.:
 SELECT email FROM emails WHERE id = 1
 SELECT backend FROM backends WHERE id = 1
 
 I need plpgsql function return both email and backend in one line, like:
 SELECT email, backend FROM ...

Hmm, maybe

select email, backend from emails, backends where emails.id = 1 and
backends.id = 1;
?

You don't need a plpgsql function for this ...

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] plpgsql return select from multiple tables

2008-09-10 Thread Artis Caune
On Wed, Sep 10, 2008 at 5:26 PM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
 Hmm, maybe

 select email, backend from emails, backends where email.id = 1 and
 backend.id = 1;
 ?

 You don't need a plpgsql function for this ...

Oops, forgot to mention that this function is not so simple and uses
some plpgsql features.
Here is one of them:
http://dpaste.com/hold/77192/




-- 
regards,
Artis Caune

. CCNA
|
' didii FreeBSD



Re: [GENERAL] FW: How to upload data to postgres

2008-09-10 Thread Adrian Klaver
On Wednesday 10 September 2008 7:14:50 am Markova, Nina wrote:
 Thanks Adrian.

 I have read the Postgres 'copy' documentation - the problem is that Postgres
 doesn't understand the Ingres format. This, I think, is where the failure
 comes from. If I don't find a tool, I have to write scripts to convert data
 to something Postgres understands.

 In the Ingres data file, each varchar field is preceded by its real size:

 48070   820010601   82003021041.890
 -80.811   0.000 1U
 3A16819871030   047.471 -70.006
 0.015 1R  0

 In the example above:
  3A16 - means for the varchar(5) field there are only 3 characters, i.e. A16
  48070 - means for the varchar(5) field there are only 4 characters, i.e. 8070
  819871030 - 8 characters, i.e. 19871030

That would be the problem. The COPY from Postgres does not understand the 
metadata associated with the field data and would try to insert the complete 
string. I can see three options:
1) As has been suggested in another post, export the Ingres data as
data-only CSV, i.e. 'A16' not '3A16'
2) Your suggestion of cleaning up data via a script.
3) Create a holding table in Postgres that has varchar fields (varchar with no 
length specified), import into it, and then do your data cleanup before moving 
over to the final table.
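For option 2, a minimal Python sketch of such a cleanup script (an assumption-laden illustration: fields are tab-delimited and each varchar value carries a decimal length prefix as in the samples above; the function names are mine, not from the thread):

```python
def strip_ingres_prefix(field):
    """Strip the decimal length prefix Ingres writes before a varchar value.

    E.g. '3A16' -> 'A16', '48070' -> '8070', '819871030' -> '19871030'.
    Returns the field unchanged when no prefix matches (numeric columns).
    """
    for k in range(1, len(field)):
        prefix = field[:k]
        if prefix.isdigit() and len(field) - k == int(prefix):
            return field[k:]
    return field


def clean_line(line, varchar_cols):
    """Strip prefixes from the given (0-based) varchar columns of a
    tab-delimited line; other columns pass through untouched."""
    fields = line.rstrip("\n").split("\t")
    return "\t".join(
        strip_ingres_prefix(f) if i in varchar_cols else f
        for i, f in enumerate(fields)
    )
```

Run over the export file line by line and feed the result to COPY ... FROM with tab delimiters. The length-match heuristic could misfire on ambiguous values, so it is restricted to the known varchar columns.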


  When I created the same table in Postgres, inserted some test data, and
 later copied it to a file, this is what it looks like:

 A16 1987103047.471  -70.006 0.015   R
 KLNO1980102847.473  -70.006 0.016   R
 MLNO198010281999010147.413  -70.006 0.016   R

    Column    |          Type          |          Modifiers
 -------------+------------------------+------------------------------
  sta | character varying(5)   | not null
  ondate  | character varying(8)   | not null
  offdate | character varying(8)   | not null
  lat | double precision   | not null
  lon | double precision   | not null
  elev| double precision   | not null default 0
  regist_code | character(1)   | not null default ' '::bpchar


 Nina






-- 
Adrian Klaver
[EMAIL PROTECTED]

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plpgsql return select from multiple tables

2008-09-10 Thread Filip Rembiałkowski
2008/9/10 Artis Caune [EMAIL PROTECTED]:
 Hi,

 What is the correct way of writing plpgsql function which needs return
 columns from multiple tables?

 e.x.:
 SELECT email FROM emails WHERE id = 1
 SELECT backend FROM backends WHERE id = 1

 I need plpgsql function return both email and backend in one line, like:
 SELECT email, backend FROM ...


In principle, you don't need a procedural language for this:

SELECT
(SELECT email FROM emails WHERE id = 1) as email,
(SELECT backend FROM backends WHERE id = 1) as backend;



 I do like this:

 CREATE OR REPLACE FUNCTION get_user_data( INT )
 RETURNS SETOF RECORD AS $$
 DECLARE
v_email RECORD;
v_backend RECORD;
 BEGIN
SELECT email
  INTO v_email
  FROM emails
 WHERE id = $1;

SELECT backend
  INTO v_backend
  FROM backends
 WHERE id = $1;

RETURN QUERY SELECT v_email AS email,
v_backend AS backend;
 END;
 $$ LANGUAGE 'plpgsql' SECURITY DEFINER;

Nothing wrong here, but this can also be rewritten as a pure SQL function
(which can be a few percent faster and optimizable by the planner):

CREATE OR REPLACE FUNCTION get_user_data( INT )
RETURNS SETOF RECORD AS $$
SELECT
(SELECT email FROM emails WHERE id = $1) as email,
(SELECT backend FROM backends WHERE id = $1) as backend
$$ LANGUAGE 'sql' STABLE STRICT SECURITY DEFINER;


One question: why SETOF? This is supposed to always return one row,
right?
you could create a TYPE and return this. queries would be a bit simpler:

SELECT * FROM get_user_data('${id}');


Finally, I am *almost* sure (maybe someone will correct me) that if
you encapsulate this in a function, you will always have some
performance penalty, because
SELECT email FROM get_user_data('${id}');
will always scan the backends table, even if it's not needed.

for such usage, VIEWs are nicer.

create view user_data as
select u.id, e.email, b.backend
from users u [left?] join emails e on e.id=u.id [left?] join backends
b on b.id = u.id;

and

select * from user_data where id=1;






 and then doing selects:
 SELECT * FROM get_user_data('${id}') AS (email VARCHAR, backend VARCHAR)


 Is it okay, there will be a lot of those queries?




 --
 regards,
 Artis Caune

 . CCNA
 |
 ' didii FreeBSD

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Filip Rembiałkowski

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

2008-09-10 Thread Jack Orenstein

Martijn van Oosterhout wrote:

On Wed, Sep 10, 2008 at 09:45:04AM -0400, Jack Orenstein wrote:
Am I on the right track -- does autocommit = false for the BIG scan force 
versions of TINY to accumulate? I played around with a JDBC test program, 
and so far cannot see how the autocommit mode causes variations in what is 
seen by the scan. The behavior I've observed is consistent with the 
SERIALIZABLE isolation level, but 1) I thought the default was READ 
COMMITTED, and 2) why does the accumulation of row versions have anything 
to do with autocommit mode (as opposed to isolation level) on a connection 
used for the scan?


Vacuum can only clean up stuff older than the oldest open transaction.
So if you have a transaction which stays open for hours, anything created
since it began can't be vacuumed. The solution is: don't do that.

What I don't understand from your description is why your scan is slow


Application requirement. We need to do something for each row retrieved from BIG 
and the something is expensive. We do the scan slowly (30 second sleep inside 
the loop) to amortize the cost.



and how the autocommit relates to this. Postgresql only cares about
when you start and commit transactions, and I can't get from your
description when exactly that happens.


If the slow scan is done with autocommit = true, then the transactions updating 
BIG and TINY run with no degradation in performance (as long as TINY is vacuumed 
frequently).


If the slow scan is done with autocommit = false, then the transactions updating 
BIG and TINY get slower and slower and the TINY table's file bloats.


I guess the question is this: What are the transaction boundaries for a scan 
done with autocommit = false? (The connection has autocommit = false, and the 
connection is used for nothing but the scan.)


Jack

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 64-bit Compile Failure on Solaris 10 with OpenSSL

2008-09-10 Thread Zdenek Kotala

Randal T. Rioux wrote:

On Tue, September 9, 2008 5:25 am, Zdenek Kotala wrote:

Randal T. Rioux wrote:

I've battled this for a while. I'm finally breaking down and asking for
help.

If your answer to this is why 64-bit then don't answer. You wouldn't
understand. Same if you say why don't you use packages.

Here is my scenerio:

 - Sun 420R x450Mhz UltraSPARC-II / 4GB RAM
 - Solaris 10 05/08
 - OpenSSL 0.9.8h
 - PostgreSQL 8.3.3
 - GCC 3.4.6
 - GNU Make 3.81

Three questions (yeah, you forbade asking, but ...)


grumble grumble grumble...


1) Why 64

64-bit code on SPARC is slower, because SPARC uses 4-byte instructions and
processing 64-bit data needs more instructions. It is good only if you
need more than 4GB of shared memory. When you use the Sun Studio compiler
with best optimization, a 64-bit application has a 1%-5% performance
degradation.


A. Many databases use more than 4GB shared memory.


Of course, but you mentioned that you have only 4GB RAM.


B. Re: SunStudio - that's why I'm using GCC.


I don't understand you there. Sun Studio generates better code on SPARC, 
and it is faster than code produced with GCC.



2) Why you don't use package

You can use the Solaris packages, which are integrated and optimized for
Solaris.


Which are bloated with stuff I don't need and missing stuff I do. Not to
mention terribly outdated.


Could you be more specific? If there is something you are missing, or 
something wrong in the Solaris packages, let me know. Maybe I can improve it.



3)  Why you don't use build-in libssl?

The integrated libssl is not just a copy of the original OpenSSL. It has a
lot of improvements, and it uses a crypto hardware accelerator if you have
one (for example Niagara 2).


But it is 32-bit.


No, you have a 64-bit version in /usr/sfw/lib/64.

Zdenek



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plpgsql return select from multiple tables

2008-09-10 Thread Alvaro Herrera
Artis Caune wrote:

 Oops, I forgot to mention that this function is not so simple and uses
 some plpgsql features.

Ah, right, you only forgot to mention the other 99% of the
requirements.

What's wrong with your first example?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's size of your PostgreSQL Database?

2008-09-10 Thread Amber
 8. We have a master and a replica.  We have plans to move to a
 cluster/grid Soon(TM).  It's not an emergency and Postgres can easily
 handle and scale to a 3TB database on reasonable hardware ($30k).
 

I'd like to know your progress in choosing a cluster/grid solution; we are 
also looking for an appropriate one. 
Following are the major criteria for our ideal solution.

1. Some kind of MPP.
2. No single point of failure.
3. Convenient and multiple access interfaces.

And following are the solutions we have examined:

1. Slony-I: Not an MPP solution, and it uses triggers to detect changes, which 
degrades performance.
2. pgpool-II: Some kind of MPP, but join operations can't be done on multiple 
machines in parallel, that is, it can't scale out well.
3. Sequoia: The same problem as pgpool-II, and the major access interface is 
JDBC.
 
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's size of your PostgreSQL Database?

2008-09-10 Thread Amber
 Yahoo has a 2PB Postgres single instance Postgres database (modified
 engine), but the biggest pure Pg single instance I've heard of is 4TB.
 The 4TB database has the additional interesting property in that they've
 done none of the standard scalable architecture changes (such as
 partitioning, etc).  To me, this is really a shining example that even
 naive Postgres databases can scale to as much hardware as you're willing
 to throw at them.  Of course, clever solutions will get you much more
 bang for your hardware buck.

Can you share some details of the particular design of the 4TB db? It sounds 
very interesting :) 
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's size of your PostgreSQL Database?

2008-09-10 Thread Joshua Drake
On Wed, 10 Sep 2008 23:17:40 +0800
Amber [EMAIL PROTECTED] wrote:
 
 1. Some kind of MPP.
 2. No single point of failure.
 3. Convenient and multiple access interfaces.
 
 And following are the solutions we have examined:

http://www.greenplum.com/

Joshua D. Drake


-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FW: How to upload data to postgres

2008-09-10 Thread Markova, Nina
 
I also plan to try to export data in XML format (from Ingres) and import
it to Postgres.

I didn't find any utility for importing XML data into Postgres. Or am I just
looking at the wrong document?
I run Postgres 8.2.4.

Thanks,
Nina

-Original Message-
From: Adrian Klaver [mailto:[EMAIL PROTECTED] 
Sent: September 10, 2008 10:39
To: pgsql-general@postgresql.org
Cc: Markova, Nina; [EMAIL PROTECTED]
Subject: Re: [GENERAL] FW: How to upload data to postgres

On Wednesday 10 September 2008 7:14:50 am Markova, Nina wrote:
 Thanks Adrian.

 I have read the Postgres 'copy' documentation - the problem is that Postgres
 doesn't understand the Ingres format. This, I think, is where the failure
 comes from. If I don't find a tool, I have to write scripts to convert data
 to something Postgres understands.

 In the Ingres data file, each varchar field is preceded by its real size:

 48070   820010601   82003021041.890
 -80.811   0.000 1U
 3A16819871030   047.471 -70.006
 0.015 1R  0

 In the example above:
  3A16 - means for the varchar(5) field there are only 3 characters, i.e. A16
  48070 - means for the varchar(5) field there are only 4 characters, i.e. 8070
  819871030 - 8 characters, i.e. 19871030

That would be the problem. The COPY from Postgres does not understand
the metadata associated with the field data and would try to insert the
complete string. I can see three options:
1) As has been suggested in another post, export the Ingres data
as data-only CSV, i.e. 'A16' not '3A16'
2) Your suggestion of cleaning up data via a script.
3) Create a holding table in Postgres that has varchar fields (varchar
with no length specified), import into it, and then do your data cleanup
before moving over to the final table.


  When I created the same table in Postgres, inserted some test data,
 and later copied it to a file, this is what it looks like:

 A16 1987103047.471  -70.006 0.015   R
 KLNO1980102847.473  -70.006 0.016   R
 MLNO198010281999010147.413  -70.006 0.016   R

    Column    |          Type          |          Modifiers
 -------------+------------------------+------------------------------
  sta | character varying(5)   | not null
  ondate  | character varying(8)   | not null
  offdate | character varying(8)   | not null
  lat | double precision   | not null
  lon | double precision   | not null
  elev| double precision   | not null default 0
  regist_code | character(1)   | not null default ' '::bpchar


 Nina






--
Adrian Klaver
[EMAIL PROTECTED]

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's size of your PostgreSQL Database?

2008-09-10 Thread Amber
Yes, we know both Greenplum and Netezza are PostgreSQL-based MPP solutions, 
but they are commercial packages.
I'd like to know whether there are open source ones, and I would suggest the 
PostgreSQL team start an MPP version of PostgreSQL.

--
From: Joshua Drake [EMAIL PROTECTED]
Sent: Wednesday, September 10, 2008 11:27 PM
To: Amber [EMAIL PROTECTED]
Cc: Mark Roberts [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: Re: [GENERAL] What's size of your PostgreSQL Database?

 On Wed, 10 Sep 2008 23:17:40 +0800
 Amber [EMAIL PROTECTED] wrote:
 
 1. Some kind of MPP.
 2. No single point of failure.
 3. Convenient and multiple access interfaces.
 
 And following are the solutions we have examined:
 
 http://www.greenplum.com/
 
 Joshua D. Drake
 
 
 -- 
 The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
 PostgreSQL Community Conference: http://www.postgresqlconference.org/
 United States PostgreSQL Association: http://www.postgresql.us/
 Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
 
 
 
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 64-bit Compile Failure on Solaris 10 with OpenSSL

2008-09-10 Thread Randal T. Rioux
On Wed, September 10, 2008 10:54 am, Zdenek Kotala wrote:
 Three questions (yeah, you forbade asking, but ...)

 grumble grumble grumble...

 1) Why 64

 64-bit code on SPARC is slower, because SPARC uses 4-byte instructions
 and processing 64-bit data needs more instructions. It is good only if
 you need more than 4GB of shared memory. When you use the Sun Studio
 compiler with best optimization, a 64-bit application has a 1%-5%
 performance degradation.

 A. Many databases use more than 4GB shared memory.

 Of course, but you mentioned that you have only 4GB RAM.

This is my test machine :-)

 B. Re: SunStudio - that's why I'm using GCC.

 I don't understand you there. Sun Studio generates better code on SPARC,
 and it is faster than code produced with GCC.

I read your statement too fast. Nevermind my response to that.

 2) Why you don't use package

 You can use the Solaris packages, which are integrated and optimized
 for Solaris.

 Which are bloated with stuff I don't need and missing stuff I do. Not
 to mention terribly outdated.

 Could you be more specific? If there is something you are missing, or
 something wrong in the Solaris packages, let me know. Maybe I can improve it.

I just don't like the Solaris package system in general. It is, dare I
say, worse than RPM. But this is a PostgreSQL list, so I'll save the rant!

 3)  Why you don't use build-in libssl?

 The integrated libssl is not just a copy of the original OpenSSL. It has
 a lot of improvements, and it uses a crypto hardware accelerator if you
 have one (for example Niagara 2).

 But it is 32-bit.

 No, you have a 64-bit version in /usr/sfw/lib/64.

I did not know that! I need to check it out later. Thanks for the tip.

Randy



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] You need to rebuild PostgreSQL using --with-libxml.

2008-09-10 Thread Devrim GÜNDÜZ
On Wed, 2008-09-10 at 08:57 -0430, Ricardo Antonio Yepez Jimenez wrote:
 As I compile postgresql 8.3.2 

If this is not a typo, please use 8.3.3.

 to support SQL/XML, Red Hat 5.1
 enterprise edition, I need to know the steps to configure it

Why don't you use precompiled packages for RHEL + PostgreSQL 8.3.3,
which include xml support?

http://yum.pgsqlrpms.org

Regards
-- 
Devrim GÜNDÜZ, RHCE
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
   http://www.gunduz.org


signature.asc
Description: This is a digitally signed message part


Re: [GENERAL] Stuck query

2008-09-10 Thread Erik Jones


On Sep 10, 2008, at 5:57 AM, Tommy Gildseth wrote:


Tom Lane wrote:

Tommy Gildseth [EMAIL PROTECTED] writes:

Richard Huxton wrote:
Looks like part of your query results being sent. Is it hung in that one
system-call?
Yes, I left it there for about ~1 hour, and that was all that ever came.
Seems like you have got a network issue.  What does netstat show for the
status of that connection?
I don't think that a query cancel will blow PG off the send; you'd
probably have to resort to kill -9 on that process (with a consequent
restart of other sessions).  It's odd that the kernel hasn't given up
on the connection yet ...



Netstat showed:

netstat -a --tcp -p | grep 49004
tcp  0  44660 dbserver:postgres clientserver:49004  ESTABLISHED  
17504/postgres: nav


I went back to the server the client was running on to double check,  
and it seems the client process hadn't been killed off when the  
application was restarted.


We've got some scheduled downtime tomorrow, so I think I'll just  
leave it till then, since it's not causing any problems as far as I  
can tell.


For what it's worth, I've run into a situation similar to this with a  
client a couple of times in the last week or two (I can't say identical as  
I don't know all of the details about the client end of your  
connection).  Using the client port # you can use lsof in addition to  
netstat (lsof -i tcp:49004) to track down the client process.  In our  
case, the client process was a connection made via an ssh tunnel and  
was sitting in FIN_WAIT2 status.  Killing the client process  
individually made everything go away nicely without any kind of extra  
downtime necessary.


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
(415) 963-4410 x 260
Location: US/Pacific
IRC: mage2k




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FW: How to upload data to postgres

2008-09-10 Thread Adrian Klaver

--- Original message --
From: Markova, Nina [EMAIL PROTECTED]
  
 I also plan to try to export data in XML format (from Ingres) and import
 it to Postgres.
 
 I didn't find any utility for importing XML data into Postgres. Or just
 looking at the wrong document?
 I run Postgres 8.2.4
 
 Thanks,
 Nina
 

I don't know how to do that. Back to your original problem, below is your copy 
statement from Ingres:

copy site(
sta= varchar(0)tab,

offdate= varchar(0)tab,
lat= c0tab,
lon= c0tab,
elev= c0tab,
regist_code= varchar(0)tab,
vault_cond= varchar(0)tab,
geology= varchar(0)tab,
comment= varchar(0)tab,
initials= varchar(0)tab,
lddate= c0nl,
nl= d0nl)
into '/tmp/site.dba' 

According to the information below, you should be able to change the varchar 
datatypes to char(0)tab and eliminate the length specifier. It seems this can 
be done for all data types and will produce a file with string representations 
of the data. The downside is that the strings are padded to the width of the 
column.

http://docs.ingres.com/sqlref/ColumnFormats#o1232
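If the char(0)tab route is used, the padding mentioned above is easy to strip afterwards; a minimal Python sketch (assuming tab-delimited output; the function name is illustrative, not from the thread):

```python
def trim_padded_fields(line):
    """Remove the column-width padding from one tab-delimited export line."""
    return "\t".join(field.strip() for field in line.rstrip("\n").split("\t"))


# e.g. "A16  \t19871030\t47.471   \n" becomes "A16\t19871030\t47.471"
trimmed = trim_padded_fields("A16  \t19871030\t47.471   \n")
```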

 ---
 --
 Adrian Klaver
 [EMAIL PROTECTED]


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's size of your PostgreSQL Database?

2008-09-10 Thread Joshua Drake
On Wed, 10 Sep 2008 23:33:44 +0800
Amber [EMAIL PROTECTED] wrote:

 Yes, we know both Greenplum and Netezza are  PostgreSQL based MPP
 solutions, but they are commercial packages. I'd like to know are
 there open source ones, and I would suggest the PostgreSQL Team to
 start a MPP version of PostgreSQL.

To my knowledge there are no open source MPP versions of PostgreSQL,
further AFAIK MPP is not on the OSS PostgreSQL roadmap.

Joshua D. Drake


-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 64-bit Compile Failure on Solaris 10 with OpenSSL

2008-09-10 Thread Andy Colson

Randal T. Rioux wrote:


Found a kludgy fix!

cp /usr/local/lib/sparcv9/libgcc_s.so.1 /usr/sfw/lib/sparcv9/

Now, both OpenSSL and PostgreSQL work great. In 64-bit mode.

If anyone has a less hack-ish solution, please share.

Thanks!
Randy


Not sure if this'll make it to the list or not, I'm not currently 
registered, so apologies...


The linker and the loader use two different things.  The linker uses the 
LD_LIBRARY_PATH env variable.  The loader does not.  (And never, ever type 
'ldconfig' on Solaris because it'll really mess things up.)


use 'crle -u -l /usr/local/lib/sparcv9/'

to add the path to the loaders search path.


more here:

http://blogs.sun.com/rie/entry/changing_search_paths_with_crle


-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] No error when column doesn't exist

2008-09-10 Thread Dean Rasheed

I've just spent a couple of hours tracking down a bug which turned out
to be a typo in my code. What surprises me is that the SQL in question
didn't immediately produce an error. Here's a simplified example:

CREATE TABLE foo(a int, b int);
INSERT INTO foo VALUES(1,2);
SELECT foo.text FROM foo;

I expected that to generate an error: column foo.text does not exist.
Instead it treats foo.text as foo.*::text AS text:

SELECT foo.text FROM foo;
 text  
---
 (1,2)
(1 row)

If foo actually does have a column called text, this works as expected,
selecting just that column.

Is this a feature or a bug?

Dean

_
Make a mini you and download it into Windows Live Messenger
http://clk.atdmt.com/UKM/go/111354029/direct/01/
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can interval take a value from a field?

2008-09-10 Thread Jeff Ross

Jeff Davis wrote:

On Tue, 2008-09-09 at 17:03 -0600, Jeff Ross wrote:

select
 ts_date as Transcript Date,
 ts_expiration_date as Current Expiration Date,
 expiration_value as Expiration Interval
 from transcript, training_expiration_value where
 ts_training_id = trgexpd_trg_id and
 ts_training_id in (select cda_training_number from cdas) and
 ts_expiration_date != ts_date + interval 'expiration_value';

and I'm getting the following error:

ERROR:  invalid input syntax for type interval: expiration_value


This error is saying that it is trying to convert the string
'expiration_value' to an interval.

What you really want it to convert the string value held inside a
variable named expiration_value to an interval.

For that, you need to do expiration_value::interval

Regards,
Jeff Davis



Thank you!  That, combined with the subselect wrapper trick I learned 
the last time I visited the list, hat in hand, worked wonderfully.


Jeff

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FW: How to upload data to postgres

2008-09-10 Thread Markova, Nina
 
Adrian,

The trick seems to work. Thanks!

Nina
-Original Message-
From: Adrian Klaver [mailto:[EMAIL PROTECTED] 
Sent: September 10, 2008 11:58
To: Markova, Nina
Cc: pgsql-general@postgresql.org; [EMAIL PROTECTED]
Subject: RE: [GENERAL] FW: How to upload data to postgres


--- Original message --
From: Markova, Nina [EMAIL PROTECTED]
  
 I also plan to try to export data in XML format (from Ingres) and 
 import it to Postgres.
 
 I didn't find any utility for importing XML data into Postgres. Or 
 just looking at the wrong document?
 I run Postgres 8.2.4
 
 Thanks,
 Nina
 

I don't know how to do that. Back to your original problem, below is
your copy statement from Ingres:

copy site(
sta= varchar(0)tab,

offdate= varchar(0)tab,
lat= c0tab,
lon= c0tab,
elev= c0tab,
regist_code= varchar(0)tab,
vault_cond= varchar(0)tab,
geology= varchar(0)tab,
comment= varchar(0)tab,
initials= varchar(0)tab,
lddate= c0nl,
nl= d0nl)
into '/tmp/site.dba' 

According to the information below, you should be able to change the
varchar datatypes to char(0)tab and eliminate the length specifier.
It seems this can be done for all data types and will produce a file with
string representations of the data. The downside is that the strings are
padded to the width of the column.

http://docs.ingres.com/sqlref/ColumnFormats#o1232

 ---
 --
 Adrian Klaver
 [EMAIL PROTECTED]


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No error when column doesn't exist

2008-09-10 Thread Tom Lane
Dean Rasheed [EMAIL PROTECTED] writes:
 CREATE TABLE foo(a int, b int);
 INSERT INTO foo VALUES(1,2);
 SELECT foo.text FROM foo;

 I expected that to generate an error: column foo.text does not exist.
 Instead it treats foo.text as foo.*::text AS text:
 Is this a feature or a bug?

Hmm.  It's a feature, but maybe a dangerous one.  The expression is
being treated as text(foo), which is intentional in order to allow
use of functions as if they were virtual columns.  However, then it
decides that what you've got there is a cast request.  There wasn't
any ability to cast composite types to text before 8.3, so this fails
in the expected way in 8.2 and before; but in 8.3 the cast
interpretation succeeds, and away we go.

foo.char and foo.varchar have similarly unexpected behavior; I think
that's probably the end of it, though, since those are the only types
that CoerceViaIO will take as targets.

Maybe we could/should restrict things so that the syntax continues to
fail, but I can't think of any restrictions that don't seem like warts.
What's worse, they might break stuff that used to work.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 64-bit Compile Failure on Solaris 10 with OpenSSL

2008-09-10 Thread Zdenek Kotala

Randal T. Rioux wrote:

On Wed, September 10, 2008 10:54 am, Zdenek Kotala wrote:




I just don't like the Solaris package system in general. It is, dare I
say, worse than RPM. But this is a PostgreSQL list, so I'll save the rant!


The community Solaris package on the PostgreSQL download website is only a tarball.

Zdenek


--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 64-bit Compile Failure on Solaris 10 with OpenSSL

2008-09-10 Thread Zdenek Kotala

Randal T. Rioux wrote:

On Mon, September 8, 2008 9:38 am, Randal T. Rioux wrote:



Found a kludgy fix!

cp /usr/local/lib/sparcv9/libgcc_s.so.1 /usr/sfw/lib/sparcv9/

Now, both OpenSSL and PostgreSQL work great. In 64-bit mode.

If anyone has a less hack-ish solution, please share.


Try looking at the original Makefile for Solaris for inspiration:

http://src.opensolaris.org/source/xref/sfw/usr/src/cmd/postgres/postgresql-8.3/

http://src.opensolaris.org/source/xref/sfw/usr/src/cmd/postgres/postgresql-8.3/Makefile.sfw


Zdenek

--
Zdenek Kotala  Sun Microsystems
Prague, Czech Republic http://sun.com/postgresql


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql scripting tutorials

2008-09-10 Thread Scott Marlowe
On Wed, Sep 10, 2008 at 12:46 AM, Artacus [EMAIL PROTECTED] wrote:
 Artacus wrote:

 I'd like to learn a little more about writing psql scripts does anyone
 know of any resources outside of the manual?

 Ok then. Does anyone have any tips or best practices for scripting psql?

 I'll probably write some bash scripts to pull csv files over then script
 psql to do a COPY into an import schema then run a pl/pgsql procedure, er
 function, to do all ETL.

This sounds a lot like what I did in my last job using bash for most
things, and php for the more complicated stuff.  Wrote a simple oracle
to pgsql table replicator in php that worked pretty well.
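On the earlier question about parameters: psql can take variables on the command line with -v, and the script can reference them as :var, so the wrapper need not rewrite the script each time. A hedged Python sketch of building such an invocation (the database, file, and variable names are hypothetical; the command is constructed but not executed here, since that needs a live psql):

```python
import os

csv_path = os.environ.get("CSV_PATH", "/tmp/import.csv")

# Pass the value pre-quoted so load.sql can reference it simply as
# :import_file, e.g.:  COPY import.raw FROM :import_file;
cmd = [
    "psql", "-d", "etl",
    "-v", "import_file='%s'" % csv_path,
    "-f", "load.sql",
]
# subprocess.run(cmd, check=True)  # uncomment where psql and load.sql exist
```

The same -v mechanism works from bash, so the bash script does not have to generate a fresh psql script on every run.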

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] abusing plpgsql array variables

2008-09-10 Thread Ben
If I want to pass in a text[] argument to a plpgsql function, at what 
array size am I asking for problems? 100? 10,000? 100,000?


What severity of problems might I encounter? Bad performance? Postgres 
refusing to run my query? A crashed backend?
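As a client-side illustration of what such an argument looks like on the wire, here is a hypothetical Python helper formatting a list as a text[] literal (drivers such as psycopg2 adapt Python lists to arrays automatically and are usually the better route):

```python
def to_pg_text_array(items):
    """Format a list of strings as a PostgreSQL text[] literal.

    Illustrative only; it escapes backslashes and double quotes,
    which is the minimum needed for quoted array elements.
    """
    def quote(s):
        return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'
    return "{" + ",".join(quote(s) for s in items) + "}"


literal = to_pg_text_array(["a", 'say "hi"'])  # {"a","say \"hi\""}
```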


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] RV: You need to rebuild PostgreSQL using --with-libxml.

2008-09-10 Thread Ricardo Antonio Yepez Jimenez
 

Hi, 

How do I compile PostgreSQL 8.3.2 to support SQL/XML on Red Hat 5.1
Enterprise Edition? I need to know the steps to configure it, if someone
has a manual.

Thank you.



Re: [GENERAL] RV: You need to rebuild PostgreSQL using --with-libxml.

2008-09-10 Thread Scott Marlowe
On Wed, Sep 10, 2008 at 2:04 PM, Ricardo Antonio Yepez Jimenez
[EMAIL PROTECTED] wrote:


 Hi,

 As I compile postgresql 8.3.2 to support sql / xml, Red hat 5.1 enterprise
 edition, I need to know the steps to configure, if someone owns a manual.

xml support is built in now isn't it?  And please tell me you are NOT
running 8.3.2 for real, and if you are update to 8.3.3 immediately, if
not sooner.



Re: [GENERAL] RV: You need to rebuild PostgreSQL using --with-libxml.

2008-09-10 Thread Scott Marlowe
Please configure your MTA to either just drop email from people you
don't know or let it through.  Sending off an error message saying my
email's rejected for policy is kinda rude on a public mailing list.
If everyone did it my email box would double in size.

On Wed, Sep 10, 2008 at 2:04 PM, Ricardo Antonio Yepez Jimenez
[EMAIL PROTECTED] wrote:


 Hi,

 As I compile postgresql 8.3.2 to support sql / xml, Red hat 5.1 enterprise
 edition, I need to know the steps to configure, if someone owns a manual.

 Thank you.






Re: [ADMIN] [GENERAL] FW: How to upload data to postgres

2008-09-10 Thread Ben Kim

From: Markova, Nina [EMAIL PROTECTED]

I also plan to try to export data in XML format (from Ingres) and import
it to Postgres.

I didn't find any utility for importing XML data into Postgres.


You can possibly use perl's XML::Xpath for XML import (DBIx::XML_RDB for 
export), assuming the table is already created in postgresql.


You just need to find the right node names.


HTH

Ben



Re: [GENERAL] PostgreSQL process architecture question.

2008-09-10 Thread Chris Browne
[EMAIL PROTECTED] (Amber) writes:
We know PostgreSQL uses one dedicated server process to serve one
  client connection, what we want to know is whether PostgreSQL use
  multiple threads inside agents processes to take advantage of
  multiple CPUs. In our site we have only a few concurrent
  connections, so what occurs inside  agent process is very
  important to us.

No, PostgreSQL does not attempt to make any use of threading at this
time.  The FAQ describes this quite nicely:

http://wiki.postgresql.org/wiki/Developer_FAQ#Why_don.27t_you_use_threads.2C_raw_devices.2C_async-I.2FO.2C_.3Cinsert_your_favorite_wizz-bang_feature_here.3E.3F

"Why don't you use threads, raw devices, async-I/O, <insert your favorite
wizz-bang feature here>?"

There is always a temptation to use the newest operating system features as 
soon as they arrive. We resist that temptation.

First, we support 15+ operating systems, so any new feature has to be
well established before we will consider it. Second, most new
wizz-bang features don't provide dramatic improvements. Third, they
usually have some downside, such as decreased reliability or
additional code required. Therefore, we don't rush to use new features
but rather wait for the feature to be established, then ask for
testing to show that a measurable improvement is possible.

As an example, threads are not currently used in the backend code because:

* Historically, threads were unsupported and buggy.
* An error in one backend can corrupt other backends.
* Speed improvements using threads are small compared to the remaining 
backend startup time.
* The backend code would be more complex. 

So, we are not ignorant of new features. It is just that we are
cautious about their adoption. The TODO list often contains links to
discussions showing our reasoning in these areas.
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://cbbrowne.com/info/oses.html
Given  recent  events in  Florida,  the  tourism  board in  Texas  has
developed a new  advertising campaign based on the  slogan "Ya'll come
to Texas, where we ain't shot a tourist in a car since November 1963."



[GENERAL] pgdump problem or question?

2008-09-10 Thread Bayless Kirtley
Early Friday morning a bad record caused me to reload the Thursday night pgdump 
backup. I performed a pgdump first to study later. In the backup I found 
several incomplete transactions all done at the end of the day. Investigating 
later I found the original bad record from the Friday dump and fixed it then 
found that those incomplete transactions were complete in that version.

The client shuts the system down nightly by first closing the application then 
performing a pgdump on the database before shutting it down then shuts down the 
computer (Windows XP PRO, PostgreSQL 8.3.3). The pgdump would have taken place 
at least a couple of minutes after the application shutdown.

I thought pgdump was suitable for a live backup of the database. This would 
seem to be wrong if it dumps partial transactions. Do I understand correctly? 
If so, is there something else I need to do before the dump? Any help will be 
greatly appreciated. This does shake my confidence in my backup procedures.

Thanks,
Bayless


Re: [GENERAL] PostgreSQL process architecture question.

2008-09-10 Thread Reece Hart
On Wed, 2008-09-10 at 00:02 -0600, Scott Marlowe wrote:

 Unless you have either a small data set or a very powerful RAID array,
 most the time you won't be CPU bound anyway.  But it would be nice to
 see some work come out to parallelize some of the work done in the
 back end.


I would have agreed with this several years ago, but many folks now buy
enough RAM to reduce the impact of IO. We're routinely CPU-bound on
small queries, and even on some large ones, on a 32GB / 16-core Opteron
box that serves a ~200GB database (on disk tables+indexes).

Does anyone know of research/references on query optimizers that include
parallelization as part of the cost estimate? I can envision how
PostgreSQL might parallelize a query plan that was optimized with an
assumption of one core. However, I wonder whether cpu and io costs are
sufficient for efficient parallel query optimization -- presumably
contention for memory (for parallel sorts, say) becomes critical.

-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0


Re: [GENERAL] pgdump problem or question?

2008-09-10 Thread Scott Marlowe
On Wed, Sep 10, 2008 at 3:31 PM, Bayless Kirtley [EMAIL PROTECTED] wrote:
 Early Friday morning a bad record caused me to reload the Thursday night
 pgdump backup. I performed a pgdump first to study later. In the backup I
 found several incomplete transactions all done at the end of the day.
 Investigating later I found the original bad record from the Friday dump and
 fixed it then found that those incomplete transactions were complete in that
 version.

 The client shuts the system down nightly by first closing the application
 then performing a pgdump on the database before shutting it down then shuts
 down the computer (Windows XP PRO, PostgreSQL 8.3.3). The pgdump would have
 taken place at least a couple of minutes after the application shutdown.

 I thought pgdump was suitable for a live backup of the database. This would
 seem to be wrong if it dumps partial transactions. Do I understand
 correctly? If so, is there something else I need to do before the dump? Any
 help will be greatly appreciated. This does shake my confidence in my backup
 procedures.

If the transaction was like this:

begin;
insert...
update..
delete...
commit;

then pg_dump will NOT get part of that transaction, it will either get
it all or none of it.

In fact, there's no need to turn off the application to get a coherent
backup as long as the transactions are in fact REALLY transactions
like above.



[GENERAL] postgres user account on OSX

2008-09-10 Thread Darren Weber
There is a postgres user account on my OSX system.  I'm not clear
about how it was created.  I've installed a binary version of 8.3 in
/Library/PostgreSQL/8.3/ and built another version from source into
/usr/local/pgsql/.  When I login as root and then 'su - postgres' it
takes me to the postgres account and the user directory is at
/opt/local/var/db/postgresql83/.

Can someone explain how this user account was created?

I'm trying to start the server that I built from source but it will
not create a logfile, ie:

elegans:~ postgres$ /usr/local/pgsql/bin/pg_ctl -D
/usr/local/pgsql/data -l logfile start
server starting
sh: logfile: Permission denied
elegans:~ postgres$
elegans:~ postgres$ nohup /usr/local/pgsql/bin/postgres -D
/usr/local/pgsql/data </dev/null >server.log 2>&1 &
[1] 28696
elegans:~ postgres$ -sh: server.log: Permission denied
elegans:~ postgres$
elegans:~ postgres$ pwd
/opt/local/var/db/postgresql83
elegans:~ postgres$
elegans:~ postgres$ ls -al ..
total 0
drwxr-xr-x  4 root  admin  136 Aug 28 12:05 .
drwxr-xr-x  8 root  admin  272 Sep  9 14:49 ..
drwxr-xr-x  3 root  admin  102 Aug 28 12:05 postgresql83
drwxr-xr-x  3 root  admin  102 Aug 26 13:06 smb


Should I remove this user somehow and replace it with a standard user
(using the system admin GUI)?

Thanks, Darren



Re: [GENERAL] postgres user account on OSX

2008-09-10 Thread Tom Lane
Darren Weber [EMAIL PROTECTED] writes:
 There is a postgres user account on my OSX system.  I'm not clear
 about how it was created.  I've installed a binary version of 8.3 in
 /Library/PostgreSQL/8.3/ and built another version from source into
 /usr/local/pgsql/.  When I login as root and then 'su - postgres' it
 takes me to the postgres account and the user directory is at
 /opt/local/var/db/postgresql83/.

 Can someone explain how this user account was created?

I'm guessing that binary package created it for you --- check its docs,
or look at the install script if you can.

Apple includes a version of postgres in Remote Desktop, but last
I heard it was a lot older than 8.3 (7.3.x in fact :-(), and in any case
that's not where they put the database.  So I don't think that's what's
causing this.

 I'm trying to start the server that I built from source but it will
 not create a logfile, ie:

Well, that's cause you're trying to create the logfile in a root-owned
directory that you don't have write permission on.  Put the logfile
someplace you do have privileges for.  Or change the ownership of the
directory --- it definitely seems pretty weird/broken to give an account
a home directory it doesn't own.

 Should I remove this user somehow and replace it with a standard user
 (using the system admin GUI)?

You could do that too if you liked.  Postgres isn't picky about
what account it runs under, so long as it isn't root.  You just need to
be sure the ownership of the database files/directories matches.

regards, tom lane



Re: [GENERAL] abusing plpgsql array variables

2008-09-10 Thread Merlin Moncure
On Wed, Sep 10, 2008 at 4:03 PM, Ben [EMAIL PROTECTED] wrote:
 If I want to pass in a text[] argument to a plpgsql function, at what array
 size am I asking for problems? 100? 10,000? 100,000?

 What severity of problems might I encounter? Bad performance? Postgres
 refusing to run my query? A crashed backend?

see for yourself:

postgres=# select length(array_to_string(array(select
generate_series(1,100000)), '|'));
 length
--------
 588894
(1 row)

Time: 107.283 ms

(array operations can get inefficient if large, but it's mainly CPU-bound stuff).

merlin



Re: [GENERAL] Server installation problem using freebsd ports

2008-09-10 Thread Mohd Fahadullah
Thanks, it was a freebsd ports problem. Sorry for sending it on this list.

On Wed, Sep 10, 2008 at 6:01 AM, Bill Moran
[EMAIL PROTECTED] wrote:

 In response to Artis Caune [EMAIL PROTECTED]:

  On Tue, Sep 9, 2008 at 9:21 AM, Mohd Fahadullah [EMAIL PROTECTED]
 wrote:
   Hi,
  
   This might be a very small problem but I am stuck. When I try to
 install
   postgresql server 8.3.3 on freebsd using ports, I am getting -
postgresql-server-8.3.3 cannot install: unknown PostgreSQL version:
 83
   I was able to install client. This has something to do with pgsql_ver.
   What's wrong here?
  
   Thanks
 
  Hi,
 
  update your ports tree (cvsup/portsnap) and try again.
  If not working, show your /etc/make.conf
 
 
  p.s. this is FreeBSD specific question and should go to
 freebsd-questions@

 Or freebsd-ports, but either way it's working fine for me here on
 FreeBSD 6 with PG 8.3 port from a recent ports tree.  It's likely
 something out of sync or otherwise hosed in your ports or your
 make.conf.

 --
 Bill Moran
 Collaborative Fusion Inc.
 http://people.collaborativefusion.com/~wmoran/

 [EMAIL PROTECTED]
 Phone: 412-422-3463x4023



Re: [GENERAL] postgres user account on OSX

2008-09-10 Thread Shane Ambler

Tom Lane wrote:

Darren Weber [EMAIL PROTECTED] writes:

There is a postgres user account on my OSX system.  I'm not clear
about how it was created.  I've installed a binary version of 8.3 in
/Library/PostgreSQL/8.3/ and built another version from source into
/usr/local/pgsql/.  When I login as root and then 'su - postgres' it
takes me to the postgres account and the user directory is at
/opt/local/var/db/postgresql83/.



Can someone explain how this user account was created?


I'm guessing that binary package created it for you --- check its docs,
or look at the install script if you can.



Quite sure that the EDB binary installer sets /Library/PostgreSQL8 as 
the home path, matching the install location.


The /opt/local/. home path would indicate that you used macports or 
similar at some stage and the install from there would have created it.


As the postgres user existed the EDB installer would have left it as is.

If you want a GUI to alter the home location of the existing user 
account run NetInfo Manager which is in /Applications/Utilities



--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz



[GENERAL] initdb memory segment creation error

2008-09-10 Thread questions
I am getting this error from initdb while it creates the shared memory
segment: "memory segment exceeded available memory or swap space. To
reduce the request size (currently 1785856 bytes), reduce PostgreSQL's
shared_buffers parameter (currently 50) and/or its max_connections
parameter (currently 13)."
Total memory is 256 MB, and memory available to user processes is 178 MB.
It's not an issue with the shared segment exceeding SHMMAX: SHMMAX is
67108864 and SHMALL is 8192. It is not exactly a PostgreSQL problem, but
what am I missing here?

Thanks,

Fahad
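On Linux the System V limits come in pairs: SHMMAX caps a single segment in
bytes, while SHMALL caps the system-wide total in pages, so a small SHMALL can
bite even when SHMMAX looks roomy. A hedged sysctl fragment (values are
illustrative only, not tuned for this 256 MB box):

```
# /etc/sysctl.conf fragment -- raise SysV shared memory limits (illustrative values)
kernel.shmmax = 134217728   # largest single segment, in bytes (128 MB)
kernel.shmall = 32768       # system-wide total, in pages (128 MB at 4 kB pages)
# apply without reboot: sysctl -p
```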


[GENERAL] Trigger for insert/update of BLOB's ?

2008-09-10 Thread kevin kempter

Hi List;

Can I create an insert/update trigger on a table that contains
lo_*-style BLOBs?


Thanks in advance




Re: [GENERAL] postgres user account on OSX

2008-09-10 Thread kevin kempter
On Linux if you install postgres via RPM's and the postgres user  
account does not exist then the RPM install creates it for you and  
sets the home dir to the root for the postgres binaries (i.e. /var/lib/ 
pgsql)


Maybe the same thing happens on a Mac install ?


On Sep 10, 2008, at 5:14 PM, Darren Weber wrote:


There is a postgres user account on my OSX system.  I'm not clear
about how it was created.  I've installed a binary version of 8.3 in
/Library/PostgreSQL/8.3/ and built another version from source into
/usr/local/pgsql/.  When I login as root and then 'su - postgres' it
takes me to the postgres account and the user directory is at
/opt/local/var/db/postgresql83/.

Can someone explain how this user account was created?

I'm trying to start the server that I built from source but it will
not create a logfile, ie:

elegans:~ postgres$ /usr/local/pgsql/bin/pg_ctl -D
/usr/local/pgsql/data -l logfile start
server starting
sh: logfile: Permission denied
elegans:~ postgres$
elegans:~ postgres$ nohup /usr/local/pgsql/bin/postgres -D
/usr/local/pgsql/data </dev/null >server.log 2>&1 &
[1] 28696
elegans:~ postgres$ -sh: server.log: Permission denied
elegans:~ postgres$
elegans:~ postgres$ pwd
/opt/local/var/db/postgresql83
elegans:~ postgres$
elegans:~ postgres$ ls -al ..
total 0
drwxr-xr-x  4 root  admin  136 Aug 28 12:05 .
drwxr-xr-x  8 root  admin  272 Sep  9 14:49 ..
drwxr-xr-x  3 root  admin  102 Aug 28 12:05 postgresql83
drwxr-xr-x  3 root  admin  102 Aug 26 13:06 smb


Should I remove this user somehow and replace it with a standard user
(using the system admin GUI)?

Thanks, Darren







Re: [GENERAL] psql scripting tutorials

2008-09-10 Thread Artacus

This sounds a lot like what I did in my last job using bash for most
things, and php for the more complicated stuff.  Wrote a simple oracle
to pgsql table replicator in php that worked pretty well.


Well, we do this stuff all the time with Oracle and sql*plus. And I've
heard people here say how much better psql is than sql*plus. So I
figured surely someone would have figured out that it's a really slick
way of automating imports and scheduling them with cron... and then
blogged about it. But it looks like I'm on my own.





[GENERAL] using a GUI front end to postgres

2008-09-10 Thread Darren Weber
What's the best open-source front-end for rapid GUI query and report
generation using postgres?

Is it possible to use MS access as a front-end to postgres for rapid
prototyping?  Can that be done through ODBC?



Re: [GENERAL] psql scripting tutorials

2008-09-10 Thread Scott Marlowe
On Wed, Sep 10, 2008 at 10:14 PM, Artacus [EMAIL PROTECTED] wrote:
 This sounds a lot like what I did in my last job using bash for most
 things, and php for the more complicated stuff.  Wrote a simple oracle
 to pgsql table replicator in php that worked pretty well.

 Well, we do this stuff all the time with Oracle and sql*plus. And I've heard
 people here say how much better psql is than sql*plus. So I figured
 surely someone would have figured out that it's a really slick way of
 automating imports and scheduling them with cron... and then blogged about
 it. But it looks like I'm on my own.

Well, sadly I didn't keep a copy of the scripts when I left.  I did
something in bash that was basically to run a query, and process each
line as it came out of psql to detect system problems.  I had a stats
db with all the events the app generated aggregated by total time /
req / minute and stuffed into the db.  5 to 10 app servers making 60
to 200 inserts a minute each.  Not a lot of data each minute, but it
added up.  Then I had a table with each request type and a max average
and max absolute threshold, and ran a single query to find which
rows were over their maxes and generated alerts. I used the line-by-line
reading techniques you can find from googling, then used read (I
think it was read) to split the line up into parts to stuff into vars
and do math.  I need to make something like that again anyway, I'll
post it when it works.



Re: [GENERAL] using a GUI front end to postgres

2008-09-10 Thread Artacus



What's the best open-source front-end for rapid GUI query and report
generation using postgres?

Is it possible to use MS access as a front-end to postgres for rapid
prototyping?  Can that be done through ODBC?



This question was asked about a week ago. I don't recall all of the 
answers but I bookmarked

Gedafe: http://isg.ee.ethz.ch/tools/gedafe/index.en.html
and DaDaBIK: http://www.dadabik.org/



Re: [GENERAL] abusing plpgsql array variables

2008-09-10 Thread Artacus


If I want to pass in a text[] argument to a plpgsql function, at what 
array size am I asking for problems? 100? 10,000? 100,000?


What severity of problems might I encounter? Bad performance? Postgres 
refusing to run my query? A crashed backend?


Yeah, like you I was pretty worried about how it would handle using 
larger arrays. But I was surprised to find that it did a super job of 
handling even large arrays.


One warning though. If you are going to filter a table based on values 
in a large array, don't do something like:


WHERE foo = ANY (some_large_array)

Instead, explode it using a set-returning function and join it like a table:

JOIN explode(some_large_array) e ON ...
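Spelled out, the two forms look like this (table and column names are
illustrative; explode() stands in for whatever set-returning function you use
to emit one row per element, a role the built-in unnest() fills from 8.4 on):

```sql
-- slow for big arrays: the planner can't do much with = ANY
SELECT * FROM foo_table WHERE foo = ANY (some_large_array);

-- usually better: treat the array as a relation and join against it
SELECT t.*
FROM foo_table t
JOIN explode(some_large_array) AS e(val) ON t.foo = e.val;
```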



Re: [GENERAL] No error when column doesn't exist

2008-09-10 Thread Artacus

I expected that to generate an error: column foo.text does not exist.
Instead it treats foo.text as foo.*::text AS text:
Is this a feature or a bug?


Hmm.  It's a feature, but maybe a dangerous one.  The expression is
being treated as text(foo), which is intentional in order to allow
use of functions as if they were virtual columns.  However, then it
decides that what you've got there is a cast request.  There wasn't
any ability to cast composite types to text before 8.3, so this fails
in the expected way in 8.2 and before; but in 8.3 the cast
interpretation succeeds, and away we go.

foo.char and foo.varchar have similarly unexpected behavior; I think
that's probably the end of it, though, since those are the only types
that CoerceViaIO will take as targets.

Maybe we could/should restrict things so that the syntax continues to
fail, but I can't think of any restrictions that don't seem like warts.
What's worse, they might break stuff that used to work.


I like that functionality and think the behavior is as it should be. 
Best practice is to not name columns with reserved words. So maybe we 
could update the manual with another reason not to use SQL reserved 
words as column names.





Re: [GENERAL] psql scripting tutorials

2008-09-10 Thread Greg Smith

On Tue, 9 Sep 2008, Artacus wrote:


Can psql access environmental variables or command line params?


$ cat test.sql
select :TEST as input;
$ psql -v TEST=16 -f test.sql
 input
-------
    16
(1 row)

You can find out more about what you can do with variable substitution at 
http://www.postgresql.org/docs/8.3/static/app-psql.html There are some 
reserved IDs, some of which can be useful in return for the fact that you 
have to avoid their names.


Another handy trick in this area is to just have your shell script write a 
small file with \set command or similar generated code containing 
parameters or setup stuff, and then have a larger main script 
include that with \i
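As a sketch, the generated include file and the stable main script could look
like this (file names and variable names are hypothetical):

```sql
-- params.sql, rewritten by the shell wrapper before each run
\set load_date '''2008-09-10'''
\set batch_id 42

-- main.sql, the part that never changes
\i params.sql
SELECT :batch_id AS batch, :load_date::date AS day;
```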


As for environment variables, if it's just things like COPY you want to 
automate this works:


$ F=/x/y
$ psql -c "COPY x from '$F'"

Other approaches:

1) Use the untrusted PL/PerlU to just handle the whole operation
2) Write something in a more mainstream programming language that you can 
hook into the database.
3) Use PL/sh to call your scripts instead to generate what you need: 
http://plsh.projects.postgresql.org/


If the main goal is to automate COPY, though, those will probably just 
slow you down.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
