[GENERAL] vacuum full taking much longer than dump + restore?

2006-03-20 Thread Dragan Matic
We have a few larger tables (~3.5 million rows largest, ~1 million rows 
smallest) on our production database which had at least one column 
defined as char(nn) (nn being larger than or equal to 60).
I did an ALTER TABLE where I changed the definition of those columns to 
varchar(nn), and after that did an 'UPDATE some_table SET 
column_name = RTRIM(column_name)'.
Since the update effectively doubled the number of row versions, I decided to do a vacuum 
full. After some 10 hours I had to stop it since it was Monday morning, 
and vacuum was blocking the queries. After that I did a pg_dump and a 
restore and it was finished in about an hour and a half, with additional 
15-20 minutes of vacuum analyze. I'm guessing that this 
dump-restore-analyze has done effectively the same thing that vacuum 
full was supposed to do. How is it possible that vacuum full was so 
slow, are there some configuration parameters that might be 
misconfigured? I am using fedora core 4 with pre-built 8.1.3 rpms. 
Server has 2 gb of ram.
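
For reference, a minimal sketch of the conversion described above (table and
column names are placeholders, not the poster's actual statements):

ALTER TABLE some_table ALTER COLUMN some_col TYPE varchar(60);
UPDATE some_table SET some_col = rtrim(some_col);
-- the UPDATE leaves a dead version of every row behind, which is what
-- VACUUM FULL (or a dump/restore) then has to clean up
VACUUM FULL some_table;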


Tnx in advance

Dragan Matic


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Difference between add column and add column with default

2006-03-20 Thread Guido Neitzer

Hi.

What is the technical difference between adding a column to a table
and then applying a SET value = ... to all rows, versus adding a column
with a default value = ...?


I have seen that the first approach duplicates all rows; I had to vacuum and
reindex the whole table. Okay so far, I expected this. But this
wasn't necessary with the second option; nevertheless, fetching some
rows showed that the value of the new column was my default value.
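
In other words, the comparison is between these two approaches (a sketch with
placeholder names, not the actual table involved):

-- approach 1: add the column, then backfill every row
ALTER TABLE t ADD COLUMN c int4;
UPDATE t SET c = 0;

-- approach 2: add the column with a default in a single statement
ALTER TABLE t ADD COLUMN c int4 DEFAULT 0;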


So, I'm curious: what happens (or doesn't happen) here?

Thx for the explanation.

cug



Re: [GENERAL] vacuum full taking much longer than dump + restore?

2006-03-20 Thread Martijn van Oosterhout
On Mon, Mar 20, 2006 at 10:50:42AM +0100, Dragan Matic wrote:
 We have a few larger tables (~3,5 million rows largest, ~ 1 million rows 
 smallest) on our production database which had at least one column 
 defined as char(nn) (nn being larger or equal to 60).

snip

 15-20 minutes of vacuum analyze. I'm guessing that this 
 dump-restore-analyze has done effectively the same thing what vacuum 
 full was supposed to do. How is it possible that vacuum full was so 
 slow, are there some configuration parameters that might be 
 misconfigured? I am using fedora core 4 with pre-built 8.1.3 rpms. 
 Server has 2 gb of ram.

How many indexes did you have on that table? Once vacuum has decided to
clear out an old tuple, it needs to remove it from the indexes. Depending
on the number and size of indexes, this can be quite expensive, so
dropping the indexes first, vacuuming and recreating them may be faster.

Incidentally, some changes have been made to vacuum recently to make this
a bit better, but what you have is kind of the worst-case scenario. It
is known that sometimes clustering a table is faster than vacuuming it.

I think maintenance_work_mem has a significant impact on vacuum,
especially on how many passes it needs to make. Upping that should help.
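
A rough sketch of that approach (index and table names are placeholders, and
the maintenance_work_mem value is only an example -- size it to your RAM):

SET maintenance_work_mem = 262144;   -- value is in kB here, i.e. about 256 MB
DROP INDEX some_table_col_idx;
VACUUM FULL some_table;
CREATE INDEX some_table_col_idx ON some_table (col);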

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Slow trigger on identical DB but different machine

2006-03-20 Thread Etienne Labuschagne

Hi Tom,

That was one of the first things I tried, to no avail . . .

Regards
Etienne

Tom Lane wrote:

Etienne Labuschagne [EMAIL PROTECTED] writes:
  

The strange thing is that both databases are EXACTLY the same with
EXACTLY the same SQL query executed (I restore both DBs from the same
backup file to ensure that everything is the same - data and objects).



Have you vacuumed and analyzed in both DBs?  This sounds like either out-
of-date stats or different configuration settings in the two.

regards, tom lane

  



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Difference between add column and add column with default

2006-03-20 Thread Martijn van Oosterhout
On Mon, Mar 20, 2006 at 10:52:36AM +0100, Guido Neitzer wrote:
 Hi.
 
 What is the technical difference between adding a column to a table  
 and then apply a set value = ... to all columns and adding a column  
 with a default value = ...?

What version are you using:

# alter table a add column b int4 default 0;
ERROR:  adding columns with defaults is not implemented

The latter doesn't work in a single step. The former does indeed
duplicate all the rows.

 I have seen that the first duplicates all rows, I had to vacuum and  
 reindex the whole table. Okay so far, I have expected this. But this  
 wasn't necessary with the second option, nevertheless, fetching some  
 rows showed, that the value of the new column was my default value.

The latter only affects newly inserted rows; changing the default does
not affect any existing rows. If it does, please provide examples.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Difference between add column and add column with default

2006-03-20 Thread Guido Neitzer

On 20.03.2006, at 11:41 Uhr, Martijn van Oosterhout wrote:


What is the technical difference between adding a column to a table
and then apply a set value = ... to all columns and adding a column
with a default value = ...?


What version are you using:

# alter table a add column b int4 default 0;
ERROR:  adding columns with defaults is not implemented


DB=# show server_version;
 server_version
----------------
 8.1.3
(1 row)

# alter table a add column b int4 default 0;

works just fine.


The latter doesn't work in a single step. The former does indeed
duplicate all the rows.


It works here.


The latter only affects newly inserted rows, changing the default does
not affect any existing rows. If it does, please provide examples.


Nope it doesn't. If I add the column with a default constraint, all  
rows have the default value.


Example:

DB=# create table test (id int4, a int4);
CREATE TABLE
DB=# insert into test values (1, 1);
INSERT 0 1
DB=# insert into test values (2, 2);
INSERT 0 1
DB=# insert into test values (3, 3);
INSERT 0 1
DB=# select * from test;
 id | a
----+---
  1 | 1
  2 | 2
  3 | 3
(3 rows)

DB=# alter table test add column b int4 default 0;
ALTER TABLE
DB=# select * from test;
 id | a | b
----+---+---
  1 | 1 | 0
  2 | 2 | 0
  3 | 3 | 0
(3 rows)

DB=# alter table test add column c int4 default 17;
ALTER TABLE
DB=# select * from test;
 id | a | b | c
----+---+---+----
  1 | 1 | 0 | 17
  2 | 2 | 0 | 17
  3 | 3 | 0 | 17
(3 rows)


cug

--
PharmaLine, Essen, GERMANY
Software and Database Development






Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-20 Thread Qingqing Zhou

Alex bahdushka [EMAIL PROTECTED] wrote

 After doing some more digging, it looks like that server was missing
 the appropriate Kpostgresql symlink in /etc/rc0.d/.  So upon shutdown
 (shutdown -h now)...  my guess is it got a sigterm (you know where it
 says Sending all processes a TERM signal or whatever), then it (init)
 waited 5 seconds or whatever the timeout is and sent a sigkill.

 If postgresql took longer to shutdown than that timeout and so was
 then given a sigkill and then server turned off Could that do it?


I don't believe this explanation, actually. According to the startup
message, the error heap_update_redo: no block most likely happens
when PostgreSQL tries to read a block that should exist but finds that the file
is not long enough to contain it. How could a SIGKILL truncate a data
file like that?

Regards,
Qingqing



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Encountering NULLS in plpgsql

2006-03-20 Thread Richard Huxton

Randy Yates wrote:

dev@archonet.com (Richard Huxton) writes:


Randy Yates wrote:

I wrote a simple pl to compute running sums, but when it
encountered a null on a float4 value it caused pgadminIII to
crash (exited abruptly). Is this intended behavior?

No, but we'll need more information to figure out what is going on.



Sure - see below.


What language did you use for the procedural code? Can you show us
that code?



outrow.fBookBalance := outrow.fBookBalance + inprow.fAmount;


I'm assuming that fAmount is the column that can be null.
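
As an aside, adding a NULL to a running sum makes the whole sum NULL. A
minimal guard, sketched against the line quoted above (not the poster's
actual code), would be:

outrow.fBookBalance := outrow.fBookBalance + coalesce(inprow.fAmount, 0);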


CREATE OR REPLACE VIEW vewChecking AS SELECT * FROM fcnCheckingRow();


If you just do SELECT * FROM vewChecking in psql I assume it all works OK?


What version of postgreSQL?


PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC
i386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)


There should be a more recent RPM available for you  - 8.0.7 is the latest.


What version of pgAdmin III?


1.4.1 (under FC4/i386)


Was there any error message?


Not that I saw.


Hmm - seems to work OK for me with pgAdmin III (v 1.4.0) installed via 
deb-src-file on Ubuntu. At least viewing data on the view seems to work.



Is there anything in the logs?


Here's pgsql/data/pg_log/postgresql-Thu.log, when the error would've
happened:



LOG:  unexpected EOF on client connection
LOG:  unexpected EOF on client connection


That certainly looks like it's the client application that's the problem.

Version 1.4.2 of pgadmin is out, but I don't think RPMs are available
yet for FC4. Worth checking the change-log and mailing lists for pgadmin 
though:

  http://www.pgadmin.org/development/changelog.php

I'd see if anyone has FC4 RPMs of the new version and see if that solves 
your problem.


HTH
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] OpenSuse10.0 and postgresql

2006-03-20 Thread Richard Huxton

Hrishikesh Deshmukh wrote:

The README doesn't talk about what next to do after installation from rpms,
there is no file which talks about in the /usr/share/doc/packages/postgresql
. It only says look at INSTALL file but there is no install with the rpms!!


There should be a complete copy of the HTML manuals available too. They 
are online here: http://www.postgresql.org/docs/


Might be worth making sure everything is installed - check you're not 
missing a package:  rpm -qa | grep postg


A filename you can search for is plpgsql.html - that's probably only 
installed by PG.

  locate plpgsql.html
  find /usr/share -name 'plpgsql.html'

HTH
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Wal -long transaction

2006-03-20 Thread Ron Mayer

Greg Stark wrote:


Well it's worse than that. If you have long-running transactions that would
cause rollback-segment-overflow in Oracle then the equivalent price in
Postgres would be table bloat *regardless* of how frequently you vacuum.


Isn't that a bit pessimistic?  In tables which mostly grow (as opposed
to deletes and updates) and where most inserts succeed (instead of
rolling back), I would have expected postgresql not to bloat
tables no matter how long my transactions last.

And it's been a while; but I thought transactions like that could
overflow rollback segments in that other database.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Urgent !!! Please Help Me

2006-03-20 Thread Michelle Konzack
Am 2006-03-13 23:58:40, schrieb r irussel:
 Hello Every body:

   I have implemented psql version 7.4.2 on Debian linux version 3.2

Where does this PostgreSQL version come from?

And there is NO version 3.2 of Debian GNU/Linux.
Only 3.0 (Woody), 3.1 (Sarge) and maybe 4.0 (Etch).

You should upgrade to at least Sarge 3.1 and PostgreSQL 7.4.7-6sarge1.

Greetings
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917   ICQ #328449886
50, rue de Soultz   MSM LinuxMichi
0033/3/88452356    67100 Strasbourg/France   IRC #Debian (irc.icq.com)


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Updating in multiple tables

2006-03-20 Thread Luuk Jansen
I have a quick newbie question:

Is it possible to update multiple tables at the same time?
I am working with PHP scripts and will use the following example to
explain what I want to do:

I have a generic_user table, which forms an INNER JOIN with the
logon_user table (joining on id).

generic_user:
- id
- name
- password

logon_user:
- id
- last_logon
- nickname

As I load all the fields at once into an array and want to update them
the same way, I am looking for something like:

UPDATE generic_user INNER JOIN logon_user USING (id) SET name='test',
nickname='test2' WHERE id = 1;

Is there anybody who can explain how to do this?

Thanks in advance,
Regards,
Luuk


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Updating in multiple tables

2006-03-20 Thread Tino Wildenhain

Luuk Jansen schrieb:

I have a quick newbee question:

Is is possible to update in multiple tables at the same time.
I am working with PHP scripts and will use the following example to
explain what I want to do:

I have a generic_user tables, which forms a INNER JOIN with the
logon_user table (with a join using id).

generic_user:
- id
- name
- passowrd

logon_user:
- id
- last_logon
- nickname

As I load all the fields at once in an array, and want to update the
same, so just something like 


UPDATE generic_user INNER JOIN logon_user USING (id) SET name='test',
nickname='test2' WHERE id = 1;

Is there anybody who can explain how to do this?


You either update both tables in subsequent update statements
(in one exec call or inside your transaction)
or create an updateable view.
(Just like a regular view and then add a rule for update - maybe
using a call to a stored function)
or just use the stored function directly.
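
A minimal sketch of both options against the example tables above (the rule
body is illustrative only, not a drop-in solution):

-- option 1: two updates inside one transaction
BEGIN;
UPDATE generic_user SET name = 'test' WHERE id = 1;
UPDATE logon_user SET nickname = 'test2' WHERE id = 1;
COMMIT;

-- option 2: an updatable view with a rewrite rule
CREATE VIEW user_view AS
  SELECT g.id, g.name, l.nickname
  FROM generic_user g JOIN logon_user l USING (id);

CREATE RULE user_view_upd AS ON UPDATE TO user_view DO INSTEAD (
  UPDATE generic_user SET name = NEW.name WHERE id = OLD.id;
  UPDATE logon_user SET nickname = NEW.nickname WHERE id = OLD.id;
);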

HTH
Tino

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] question about postgresql time intervals

2006-03-20 Thread Linda
On Friday 17 March 2006 10:20 pm, Michael Glaesemann wrote:
 I've been having some email problems, so my apologies if this is a  
 duplicate.
 
 On Mar 16, 2006, at 22:49 , Linda wrote:
 
 
  Thanks for your reply.  I guess you missed the original email.  I  
  have an
  application that is retrieving uptime (an integer number of  
  seconds since
  reboot) and recasting it as varchar and then interval type.
 
 
 Rather than perform this cast, you might want to make your own  
 function to handle this. Here are a couple (one in PL/pgSQL, the  
 other SQL). You should be able to use these functions in any relatively  
 modern PostgreSQL installation.
 
 (I find the x_int * interval some_int construct a bit cleaner than  
 forcing a cast as well.)
 
 create or replace function secs_to_interval(integer)
 returns interval
 strict
 immutable
 language plpgsql as '
 declare
  secs alias for $1;
  secs_per_day constant integer default 86400;
 begin
  return secs / secs_per_day * interval ''1 day'' + secs %  
 secs_per_day * interval ''1 second'';
 end;
 ';
 
 create or replace function secs_to_interval_sql(integer) returns  
 interval
 strict
 immutable
 language sql as '
 select $1 / 86400 * interval ''1 day'' + $1 % 86400 * interval ''1  
 second'';
 ';
 
 test=# select secs_to_interval(1824459), secs_to_interval_sql(1824459);
  secs_to_interval | secs_to_interval_sql
 ------------------+----------------------
  21 days 02:47:39 | 21 days 02:47:39
 (1 row)
 
 test=# select secs_to_interval(86400), secs_to_interval_sql(86400);
  secs_to_interval | secs_to_interval_sql
 ------------------+----------------------
  1 day            | 1 day
 (1 row)
 
 test=# select secs_to_interval(302), secs_to_interval_sql(302);
  secs_to_interval | secs_to_interval_sql
 ------------------+----------------------
  00:05:02         | 00:05:02
 (1 row)
 
 test=# select secs_to_interval(1824459 * 2), secs_to_interval_sql(1824459 * 2);
  secs_to_interval | secs_to_interval_sql
 ------------------+----------------------
  42 days 05:35:18 | 42 days 05:35:18
 (1 row)
 
 Hope this helps.
 
 Michael Glaesemann
 grzm myrealbox com
 
 
 

Hi, Michael

Thanks for the suggestion!  This approach will work on both older and newer 
versions of PostgreSQL.

Thanks,
Linda

-- 
Linda Gray
Unitrends Corporation
803.454.0300 ext. 241

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Updating in multiple tables

2006-03-20 Thread Luuk Jansen
Thanks for the quick reply,

Using a view and rules seems to be the way to do it as it saves me from
rewriting all classes in my application that save data.

Is there any simple way to create a rule that on the update of the view
'detects' which field belongs to which table, and updates them
accordingly? 
The application now just passes the fields of the joined tables to the
query without any info on which table each came from (as it is the same
array returned from the select function, which only gives the field
names).

I assume one might have to use functions for it, but I have never used them.

Luuk

On Mon, 2006-03-20 at 15:34 +0100, Tino Wildenhain wrote:
 Luuk Jansen schrieb:
  I have a quick newbee question:
  
  Is is possible to update in multiple tables at the same time.
  I am working with PHP scripts and will use the following example to
  explain what I want to do:
  
  I have a generic_user tables, which forms a INNER JOIN with the
  logon_user table (with a join using id).
  
  generic_user:
  - id
  - name
  - passowrd
  
  logon_user:
  - id
  - last_logon
  - nickname
  
  As I load all the fields at once in an array, and want to update the
  same, so just something like 
  
  UPDATE generic_user INNER JOIN logon_user USING (id) SET name='test',
  nickname='test2' WHERE id = 1;
  
  Is there anybody who can explain how to do this?
 
 You either update both tables in subsequent update statements
 (in one exec call or inside your transaction)
 or create an updateable view.
 (Just like aregular view and then add a rule for update - maybe
 using a call to a stored function)
 or just use the stored function directly.
 
 HTH
 Tino
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Double checking my logic?

2006-03-20 Thread Ted Byers



I have to deal with the following situation.

I get a daily feed of data for several tables. This 
data is retrieved from the supplier by ftp in the form of CSV 
textfiles. I know I can load the files using "load data". The 
relational structure between the files and tables is simple: one file contains 
data for a lookup table. But the data in the lookup table is dynamic, 
potentially changing from one day to the next (though in practice, the rate of 
change is likely to be very slow). I will also need to maintain a history 
in such a way that any and all changes in the data can be fully 
audited. This means that, for auditing purposes, I will need to be able to 
extract valid time for each item in the lookup table. The only likely 
change in the data will be that occasionally items will be added or removed 
from the lookup table. None of the data already present in the database 
will be editable (data validation happens before I get the feed). The only 
analysis of which I have been told is done daily, as the feed is received. 
At present, no-one looks at earlier data (although I expect that will change as 
I create a better structure for the database to support audits).

I am trying to create a flexible design so that 
refactoring will be simple if and when the assumptions or practices need to be 
changed.

I know how to handle all this using a brute force 
approach, but I expect that approach will be painfully slow. So here is 
what I am planning.

1) create a suite of tables corresponding to the files in 
the feed, with the addition of a date/time stamp containing the date and time on 
which the data being processed was received.
2) create a suite of temporary tables corresponding to the 
tables created in step 1 (but without the date/time stamp)
3) load the data into the temporary tables
4) analyse the data while it is in the temporary tables, 
storing the result of the analysis in new tables
5) copy the data into the permanent tables, and add the 
date and time stamp for the data (this date/time stamp is not present in the 
files retrieved).
6) free the temporary tables
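
A minimal sketch of steps 2, 3 and 5 above for one hypothetical feed table
(file path, table and column names are placeholders):

CREATE TEMPORARY TABLE staging_products (
    product_id   integer,
    product_name varchar
);
COPY staging_products FROM '/path/to/products.csv' WITH CSV;           -- step 3
INSERT INTO products_history (product_id, product_name, received_at)   -- step 5
SELECT product_id, product_name, now()
FROM staging_products;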

Now, this second-last step is brute force, adequate 
for all but one of the tables: the look up table. If I stick with the 
brute force approach, the lookup table will waste a significant amount of 
space. This won't be much initially, but it is guaranteed to get worse as 
time passes and I'd expect the lookup performance to degrade as the amount of 
data in the lookup table increases.

Each record in the lookup table represents a product, and 
both the name and the product will have valid time intervals that may not be 
related. The name may change because the product has been renamed for 
whatever reason (and we don't care why), or the name may disappear altogether 
because the product has been discontinued. We can distinguish the two 
cases because each product has an ID that remains valid while the product 
exists, and the ID won't be in the data at all if the product is 
discontinued. 

I am considering creating an additional table just to 
lookup product names, but with two date and time stamps. The first would 
represent the first time the product name appears in the data and the last 
would represent the last time the product name is present in the 
data. The first of these would be edited only once, and that is on the 
first day for which we have data. I am torn between updating the last of 
these every day, until the name disappears, or leave it null until the name 
disappears. Leaving it null would save on space, but updating it with the 
current date and time should save time since it would not be necessary to 
execute a complex conditional on every product ever included in the 
database. If we update it only for those items in today's data, those that 
were terminated before today will not have their date/time stamp updated, so the 
two fields will always represent the time interval for which the name is valid. 
A similar logic applies to the product ID.

Have I overlooked anything that is obvious to you? 
Any gotchas I should be aware of?

What opportunities for improving performance do you 
see?

The ultimate objective is to have this fully automated 
from a shell script that is configured by our sysop to execute at a specific 
time every day Monday through Friday. Can all of the steps I describe 
above be implemented using ANSI standard SQL, and thus put into a stored 
procedure, or should I look at doing some of it in a Perl script or java 
application? I suppose I will have to have at least a basic Java 
application, or perl script, if only to connect to the database and invoke any 
functions I have created to do this.

Any suggestions would be appreciated.

Thanks,

Ted

R.E. (Ted) Byers, Ph.D., Ed.D.
R & D Decision Support Solutions
http://www.randddecisionsupportsolutions.com/


Re: [GENERAL] PANIC: heap_update_redo: no block

2006-03-20 Thread Alex bahdushka
On 3/20/06, Qingqing Zhou [EMAIL PROTECTED] wrote:

 Alex bahdushka [EMAIL PROTECTED] wrote
 
  After doing some more digging, it looks like that server was missing
  the appropriate Kpostgresql symlink in /etc/rc0.d/.  So upon shutdown
  (shutdown -h now)...  my guess is it got a sigterm (you know where it
  says Sending all processes a TERM signal or whatever), then it (init)
  waited 5 seconds or whatever the timeout is and sent a sigkill.
 
  If postgresql took longer to shutdown than that timeout and so was
  then given a sigkill and then server turned off Could that do it?
 

 I don't believe in this explaination actually. According the startup
 message, the error heap_update_redo: no block could most possibly happen
 when PostgreSQL tried to read an existing block but found that the file
 length is not long enough to have it. How could a SIGKILL truncate a data
 file like that?


Hrm... well I obviously have restored the database by now (using
pg_resetxlog; pg_dump; initdb; pg_restore).  However, I did make a
backup of the broken directory before I created the new database.  If
anyone has anything they would like me to try to help track
down this possible bug, I would be more than glad to do it.

Since it sounds like it's something wrong with the xlog, here are the
contents of the dir... I'm not sure how useful this is, but here it is
anyway.

pg_xlog# du -ak
16404   ./000000010000000D00000022
16404   ./000000010000000D0000001E
16404   ./000000010000000D00000019
16404   ./000000010000000D0000001A
16404   ./000000010000000D0000001D
16404   ./000000010000000D0000001C
16404   ./000000010000000D00000020
16404   ./000000010000000D00000021
16404   ./000000010000000D0000001B
4   ./archive_status
16404   ./000000010000000D00000023
16404   ./000000010000000D0000001F
16404   ./000000010000000D00000018
196856  .

They are all the same size, so it does not look like a truncated
file... Or am I just misinterpreting the error message, and it's one of
the files elsewhere?

The file system is ext3 and it fscked fine, and nothing is in the
lost+found dir.  As far as I know the computer was allowed to sync
the buffers to disk before the reboot (the plug was not pulled or
anything).

Any Ideas?

Otherwise it sounds like I'll just have to chalk this one up to the
gods, and hope it's fixed in 8.1.4.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] db sever seems to be dropping connections

2006-03-20 Thread Rushabh Doshi
Rushabh Doshi wrote:
 I'm facing an issue with PostgreSQL .. The server seems to drop the
 connection to my script after a couple of hours (sometimes, not always).
 
 DBD::Pg::db selectrow_array failed: could not receive data from server:
 
 
 :server closed the connection unexpectedly.

Do you get a better error in the postgresql logs?

--- Unfortunately there's nothing in the postgresql logs.

 And in another scenario it failed with the message:
 DBD::Pg::db selectrow_array failed: could not receive data from server:
 Connection timed out

Sounds more like a network issue. How is the script connecting to
postgres? using a socket? Through tcpip?

--- Using tcpip. I've tried to re-establish connection in my script in case
of invalid handles or network issues. But this seems to be happening quite
often 


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] OpenSuse10.0 and postgresql

2006-03-20 Thread Brent Wood


On Sat, 18 Mar 2006, Adrian Klaver wrote:

 On Saturday 18 March 2006 09:15 am, Hrishikesh Deshmukh wrote:
  Hi All,
 
  Can somebody please tell me where i can find rpms for postgesql on OpenSuse
  10.0?
 
  Sincerely,
  Hrishi
 Try this site and enter postgresql. I found rpm's for OpenSuSE .
 http://rpm.pbone.net/index.php3


I haven't found the SUSE 10 rpms to be very current, so I have been compiling
from source. It has been a pretty straightforward exercise (but not as
easy as installing from RPM :-)

I do hope to learn how to build RPMs and start providing Postgres/PostGIS
related GIS/mapping packages, but for someone with my technical skills, it
might take a while :-)

If anyone here is interested in mentoring me, all help gratefully
accepted!


Cheers,

  Brent Wood

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Connecting

2006-03-20 Thread Ben Trewern



I'd try zeoslib (http://forum.zeoslib.net.ms/ or 
http://sourceforge.net/projects/zeoslib/) instead 
of ODBC. The 6.1.5 version (with patches) works with Delphi 4 and has 
always worked well for me.

Regards,

Ben

  "Bob Pawley" [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]...
  I'm appealing for help from this list as the ODBC list 
  has few people and no answers.
  
  I have built a database in PostgreSQL version 8.0. I 
  want to connect it to Delphi version 4.
  
  I have attempted to connect using the PostgreSQL ODBC 
  driver, with no success.
  
  Is what I am attempting to do possible, considering that 
  the PostgreSQL version is a lot newer than Delphi?
  
  If it is possible, can someone point me to a tutorial 
  that can guide me through the steps? I am new to interspecies 
  connections.
  
  Is there a better method of making this 
  connection?
  
  Bob Pawley
  
  


Re: [GENERAL] SSL or other libraries for Windows-to-Linux PostgreSQL connection?

2006-03-20 Thread Bart Golda
For future generations: it is solved. Do whatever
http://developer.postgresql.org/docs/postgres/ssl-tcp.html says, then
turn on ssl in postgresql.conf and restart PostgreSQL.
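
The relevant postgresql.conf change is just the following line (assuming the
server key and certificate are already in the data directory, as the linked
page describes):

ssl = on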

Regards,
Bart Golda


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Wal -long transaction

2006-03-20 Thread Ian Harding

 And it's been a while; but I thought transactions like that could
 overflow rollback segments in that other database.


ORA-01555:  snapshot too old: rollback segment number string with name
string too small
Cause:  Rollback records needed by a reader for consistent read are
overwritten by other writers.
Action: If in Automatic Undo Management mode, increase the setting of
UNDO_RETENTION. Otherwise, use larger rollback segments.

In 10g you can do

ALTER TABLESPACE UNDO_TS2 RETENTION GUARANTEE;

which will automagically grow the undo tablespace until you run out of
disk space or the transaction ends.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Licensing of .DLL files

2006-03-20 Thread Vladimira Nitrova

Hello,

We'd like to distribute the following .DLL files within our software so 
as our users don't have to install these files separately:
comerr32.dll, krb5_32.dll, libeay32.dll, libintl-2.dll, libiconv-2.dll 
and ssleay32.dll.


However, the problem is that we don't know who we shall contact to get 
the permission or learn about the conditions for distribution.


We know about BSD license to the libpq.dll file, however does it include 
also the above mentioned libraries? Are the above mentioned files 
covered by the PostgreSQL copyright as well?


Could you please advise? Any tips will be greatly appreciated.

Thanks very much.

Kind regards,

Vladimira Nitrova
--
CHARONWARE s.r.o.
Ulehlova 267/5, 70030 Ostrava, Czech Republic
http://www.casestudio.com - Data modeling tools
E-mail: [EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Licensing of .DLL files

2006-03-20 Thread Dave Page



On 20/3/06 13:17, Vladimira Nitrova [EMAIL PROTECTED] wrote:

 Hello,
 
 We'd like to distribute the following .DLL files within our software so
 as our users don't have to install these files separately:
 comerr32.dll, krb5_32.dll, libeay32.dll, libintl-2.dll, libiconv-2.dll
 and ssleay32.dll.
 
 However, the problem is that we don't know who we shall contact to get
 the permission or learn about the conditions for distribution.
 
 We know about BSD license to the libpq.dll file, however does it include
 also the above mentioned libraries? Are the above mentioned files
 covered by the PostgreSQL copyright as well?
 
 Could you please advise? Any tips will be greatly appreciated.

Unfortunately they are not covered by the BSD licence.

comerr32.dll & krb5_32.dll are part of MIT Kerberos, released under the MIT
licence IIRC - http://web.mit.edu/kerberos/dist/index.html

libintl-2.dll & libiconv-2.dll are from GNU Gettext, released under the LGPL
licence (IIRC) - http://www.gnu.org/software/gettext/

libeay32.dll and ssleay32.dll are from OpenSSL, released under the OpenSSL
licence (again, IIRC) - http://www.openssl.org.

All these components are technically optional though - if you don't need one
or more, you can compile your own build of PostgreSQL without any one of
them - see the --with-openssl, --enable-nls and --with-kerberos configure
options.

Regards, Dave.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Licensing of .DLL files

2006-03-20 Thread Josh Rovero

None of these licenses prohibit distribution, AFAIK.

You may have to offer source, etc.  We normally just
bundle all source tar files for similar components that
we distribute with our code.

Dave Page wrote:


On 20/3/06 13:17, Vladimira Nitrova [EMAIL PROTECTED] wrote:

  

Hello,

We'd like to distribute the following .DLL files within our software so
as our users don't have to install these files separately:
comerr32.dll, krb5_32.dll, libeay32.dll, libintl-2.dll, libiconv-2.dll
and ssleay32.dll.

However, the problem is that we don't know who we shall contact to get
the permission or learn about the conditions for distribution.

We know about BSD license to the libpq.dll file, however does it include
also the above mentioned libraries? Are the above mentioned files
covered by the PostgreSQL copyright as well?

Could you please advise? Any tips will be greatly appreciated.



Unfortunately they are not covered by the BSD licence

comerr32.dll  krb5_32.dll are part of MIT Kerberos, released under the MIT
licence iirc - http://web.mit.edu/kerberos/dist/index.html

libintl-2.dll  libiconv-2.dll are from GNU Gettext, released under the LGPL
licence (iirc) - http://www.gnu.org/software/gettext/

libeay32.dll and ssleay32.dll are from OpenSSL, released under the OpenSSL
licence (again, IIRC) - http://www.openssl.org.

All these components are technically optional though - if you don't need one
or more, you can compile your own build of PostgreSQL without any one of
them - see the --with-openssl, --enable-nls and --with-kerberos configure
options.

Regards, Dave.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org
  




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Licensing of .DLL files

2006-03-20 Thread Magnus Hagander
 None of these licenses prohibit distribution, AFAIK.
 
 You may have to offer source, etc.  We normally just bundle 
 all source tar files for similar components that we 
 distribute with our code.

If you do that, there's definitely no problem. We wouldn't bundle anything
that doesn't uphold *that* requirement.

The GPL is the worst case, because it may have effects on your
software. PostGIS is the only GPL part of pginstaller. If you're
worried, don't distribute it. Or talk to the PostGIS people about it :-)

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Double checking my logic?

2006-03-20 Thread Adrian Klaver
On Monday 20 March 2006 08:25 am, Ted Byers wrote:
 I have to deal with the following situation.

 I get a daily feed of data for several tables.  This data is retrieved from
 the supplier by ftp in the form of CSV text files.  I know I can load the
 files using load data.  The relational structure between the files and
 tables is simple: one file contains data for a lookup table.  But the data
 in the lookup table is dynamic, potentially changing from one day to the
 next (though in practice, the rate of change is likely to be very slow).  I
 will also need to maintain a history in such a way that any and all changes
 in the data can be fully audited.  This means that, for auditing purposes,
 I will need to be able to extract valid time for each item in the lookup
 table.  The only likely change in the data will be that occassionally items
 will be added or removed from the lookup table.  None of the data already
 present in the database will be editable (data validation happens before I
 get the feed).  The only analysis of which I have been told is done daily,
 as the feed is received.  At present, no-one looks at earlier data
 (although I expect that will change as I create a better structure for the
 database to support audits).

 I am trying to create a flexible design so that refactoring will be simple
 if and when the assumptions or practices need to be changed.

 I know how to handle all this using a brute force approach, but I expect
 that approach will be painfully slow.  So here is what I am planning.

 1) create a suite of tables corresponding to the files in the feed, with
 the addition of a date/time stamp containing the date and time on which the
 data being processed was received. 2) create a suite of temporary tables
 corresponding to the tables created in step 1 (but without the date/time
 stamp) 3) load the data into the temporary tables
 4) analyse the data while it is in the temporary tables, storing the result
 of the analysis in new tables 5) copy the data into the permanent tables,
 and add the date and time stamp for the data (this date/time stamp is not
 present in the files retrieved). 6) free the temporary tables

 Now, this second last step is brute force, adequate for all but one of the
 tables: the look up table.  If I stick with the brute force approach, the
 lookup table will waste a significant amount of space.  This won't be much
 initially, but it is guaranteed to get worse as time passes and I'd expect
 the lookup performance to degrade as the amount of data in the lookup table
 increases.

 Each record in the lookup table represents a product, and both the name and
 the product will have valid time intervals that may not be related.  The
 name may change because the product has been renamed for whatever reason
 (and we don't care why), or the name may disappear altogether because the
 product has been discontinued.  We can distinguish the two cases because
 each product has an ID that remains valid while the product exists, and the
 ID won't be in the data at all if the product is discontinued.

 I am considering creating an additional table just to lookup product names,
 but with two date and time stamps.  The first would represent  the first
 time the product name appears in the data and the last would represent the
 last time the product name is present in the data.  The first of these
 would be edited only once, and that is on the first day for which we have
 data.  I am torn between updating the last of these every day, until the
 name disappears, or leave it null until the name disappears.  leaving it
 null would save on space, but updating it with the current data and time
 should save time since it would not be necessary to execute a complex
 conditional on every product ever included in the database.  If we update
 it only for those items in today's data, those that were terminated before
 today will not have their date/time stamp updated, so the two fields will
 always represent the time interval for which the name is valid. A similar
 logic applies to the product ID.

 Have I overlooked anything that is obvious to you?  Any gotchas I should be
 aware of?

 What opportunities for improving performance do you see?

 The ultimate objective is to have this fully automated from a shell script
 that is configured by our sysop to execute at a specific time every day
 Monday through Friday.  Can all of the steps I describe above be
 implemented using ANSI standard  SQL, and thus put into a stored procedure,
 or should I look at doing some of it in a Perl script or java application? 
 I suppose I will have to have at least a basic Java application, or perl
 script, if only to connect to the database and invoke any functions I have
 created to do this.

 Any suggestions would be appreciated.

 Thanks,

 Ted

 R.E. (Ted) Byers, Ph.D., Ed.D.
 R  D Decision Support Solutions
 http://www.randddecisionsupportsolutions.com/

I faced a similar problem where I was trying to keep track of changes to a
FoxPro database that I only had indirect access to.  My solution followed
your proposal to a degree. I imported the new data on a daily basis to
holding tables. I then ran a series of functions to compare the data in the
holding tables to the data in my 'real' tables. The differences
(added, deleted, changed) were written to audit tables with a timestamp. The
'real' tables were TRUNCATED and the new data transferred into them and then
the holding tables were cleaned out. This way my 'real' tables only
contained the minimum data necessary. The audit tables grew but were not
queried as much as the 'real' tables, so the apparent speed of the lookup
process stayed relatively stable.

Re: [GENERAL] Double checking my logic?

2006-03-20 Thread John D. Burger

Adrian Klaver wrote:

I faced a similar problem where I was trying to keep track of changes to a
FoxPro database that I only had indirect access to.  My solution followed
your proposal to a degree. I imported the new data on a daily basis to
holding tables. I then ran a series of functions to compare the data in the
holding tables to the data in my 'real' tables. The differences
(added, deleted, changed) were written to audit tables with a timestamp. The
'real' tables were TRUNCATED and the new data transferred into them and then
the holding tables were cleaned out. This way my 'real' tables only
contained the minimum data necessary. The audit tables grew but were not
queried as much as the 'real' tables, so the apparent speed of the lookup
process stayed relatively stable.


I do something similar, but because I do not require precise update 
timestamps on each row, my setup has the following wrinkle:  Instead of 
the audit tables having a timestamp column, I have a separate 
updateSessions table, with start and end timestamps.  My audit tables 
then just have a foreign key into this sessions table.  A minor 
advantage of this is that the session ID (possibly) takes up less space 
than a full timestamp.  A more important advantage, from my point of 
view, is that the session table has columns for who is running the 
update, the filename on which the update is based, an MD5 digest of the 
update, etc.  My update scripts fill these in, as well as a general 
comment string that they take as a parameter.
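
A rough sketch of that layout (all names and column choices here are
hypothetical, not the poster's actual schema):

CREATE TABLE update_sessions (
    session_id   serial PRIMARY KEY,
    started_at   timestamp NOT NULL,
    ended_at     timestamp,
    run_by       text,
    source_file  text,
    source_md5   text,
    comment      text
);

CREATE TABLE products_audit (
    product_id   integer,
    product_name text,
    change_type  char(1),   -- e.g. 'A'dded, 'D'eleted, 'C'hanged
    session_id   integer REFERENCES update_sessions
);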


- John D. Burger
  MITRE


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] question about pg_hba.conf

2006-03-20 Thread Tony Caduto
Does anyone know if the server would have problems reading pg_hba.conf 
if the number of spaces between fields were changed, or if tabs were added?



thanks,

Tony

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] question about pg_hba.conf

2006-03-20 Thread mike
Never been a problem for me.
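
For what it's worth, pg_hba.conf fields are separated by any run of spaces
and/or tabs, so the two lines below (a made-up example entry) are parsed
identically:

host    all   all   127.0.0.1/32        md5
host all all 127.0.0.1/32 md5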

On Mon, 2006-03-20 at 15:29 -0600, Tony Caduto wrote:
 Does anyone know if the server would have problems reading pg_hba.conf 
 if the number of spaces where changed between fields or tabs where added?
 
 
 thanks,
 
 Tony
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] How I can get the real data type result instead of integer data type?

2006-03-20 Thread Sigurdur Gunnlaugsson
On Mon, 2006-03-20 at 21:10, Wei Wei wrote:
 In a query, there is something like 
 
 order by count(id)/age
 
 where both id and age are the integer data type. 
 
 From a query result, I believe the operation count(id)/age yields a integer. 
 I need it in real data type. After searching the online document, I haven't 
 found any related information. Can someone help me out on this problem, 
 please.
 
 Thanks,
 
 w

Try:

order by count(id)/age::float

regards,
Sig.Gunn


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] db sever seems to be dropping connections

2006-03-20 Thread Tom Lane
Chris [EMAIL PROTECTED] writes:
 Rushabh Doshi wrote:
 I'm facing an issue with PostgreSQL .. The server seems to drop the
 connection to my script after a couple of hours (sometimes, not always).

 Sounds more like a network issue.

Yeah --- in particular, it sounds like a connection timeout imposed by
a router or firewall.  A lot of NAT-capable routers will drop idle TCP
connections after a certain period of inactivity (typically an hour or
so ... if you're lucky, the router will let you adjust the timeout).
There is no connection timeout built into Postgres itself, so you should
be looking for network-related limitations.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ambuild parameters

2006-03-20 Thread Tom Lane
[EMAIL PROTECTED] writes:
 Thanks for answering, it was a good guess, I really didn't mark it,
 but unfortunately it didn't solve my problem. It still falls down,
 when I try to access the argument.
 But it seems strange to me, that converting to Relation is OK:

 Relation index_rel = (Relation) PG_GETARG_POINTER(1);

 and also that comparing to NULL is OK:

 if (index_rel == NULL).

Neither of those prove a thing (except that you don't have a null
pointer).  I'd still guess that you don't have the V1 parameter marking
correct, and so what the function thinks it's picking up is garbage
because the backend is not passing the parameters the way the function
expects.

You might try using gdb to see exactly what parameter values the
function thinks it's getting, or print them out to the log before you
use them.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Difference between add column and add column with default

2006-03-20 Thread Tom Lane
Guido Neitzer [EMAIL PROTECTED] writes:
 What is the technical difference between adding a column to a table  
 and then apply a set value = ... to all columns and adding a column  
 with a default value = ...?

ADD COLUMN DEFAULT ... is implemented via a full-table rewrite,
so you end up with a version of the table that has no dead space.
Unfortunately this requires an exclusive table lock while the rewrite
happens, so you lock out other processes from the table for a
considerably longer period of time than the UPDATE approach.  IIRC it's
also not completely MVCC-safe --- committed-dead rows will get removed
even if there are old open transactions that should still see those rows
as current.  Bottom line: there's no free lunch.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How I can get the real data type result instead of integer data type?

2006-03-20 Thread Bruno Wolff III
On Mon, Mar 20, 2006 at 13:10:51 -0800,
  Wei Wei [EMAIL PROTECTED] wrote:
 In a query, there is something like 
 
 order by count(id)/age
 
 where both id and age are the integer data type. 
 
 From a query result, I believe the operation count(id)/age yields a integer. 
 I need it in real data type. After searching the online document, I haven't 
 found any related information. Can someone help me out on this problem, 
 please.

You can cast the expressions. Something like:
order by count(id)::float/age::float

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] What's a good default encoding?

2006-03-20 Thread Vivek Khera


On Mar 16, 2006, at 3:36 AM, Martijn van Oosterhout wrote:


Umm, you should choose an encoding supported by your platform and the
locales you use. For example, UTF-8 is a bad choice on *BSD because
there is no collation support for UTF-8 on those platforms. On
Linux/Glibc UTF-8 is well supported but you need to make sure the


Shouldn't postgres be providing the collating routines for UTF8  
anyhow?  How else can we guarantee identical behavior across platforms?



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] What's a good default encoding?

2006-03-20 Thread Peter Eisentraut
Vivek Khera wrote:
 Shouldn't postgres be providing the collating routines for UTF8
 anyhow?

Start typing ...

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] What's a good default encoding?

2006-03-20 Thread Vivek Khera


On Mar 20, 2006, at 6:04 PM, Peter Eisentraut wrote:


Vivek Khera wrote:

Shouldn't postgres be providing the collating routines for UTF8
anyhow?


Start typing ...


So, if I use a UTF8-encoded DB on FreeBSD, will all hell break loose  
or what?  Will things not compare correctly?  Where does the  
code to do the collating come from, then?



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] What's a good default encoding?

2006-03-20 Thread Tom Lane
Vivek Khera [EMAIL PROTECTED] writes:
 Shouldn't postgres be providing the collating routines for UTF8  
 anyhow?  How else can we guarantee identical behavior across platforms?

We don't make any such guarantee.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] About when we should setup index?

2006-03-20 Thread Francisco Reyes

Emi Lu writes:

One more thing to consider. If you have a column with lots of repeated 
values and a handful of selective values, you could use a partial index.


http://www.postgresql.org/docs/8.0/interactive/indexes-partial.html

For example imagine you have an accounts table like
Accounts
account_id  integer
namevarchar
special_custboolean

Where special_cust marks customers that subscribe to some premium and 
expensive service... but there are very few customers that do... you could do 
an index like:


CREATE INDEX accounts_special_cust_idx ON accounts (special_cust)
   WHERE special_cust;

In that case, if you wanted to see a list of premium accounts, that index 
should bring those records back quickly.
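
For example, a query like this (sketch) is the kind that partial index is
built for:

SELECT account_id, name FROM accounts WHERE special_cust;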


See the link above for examples and links to a couple of papers explaining 
why/when you want to use a partial index, in particular the paper "The 
Case for Partial Indexes", pages 3 and up.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Long running transactions

2006-03-20 Thread Wayne Schroeder
I have a quite heavily accessed database that is vacuumed nightly.
Attached to that database are processes, that at times, due to the way
they are coded, stay idle in transaction for long periods of time
(client lib implementation issues a BEGIN).  I've been reading up on
XIDs and the like and I'm trying to determine if this is going to cause
problems.

The reason I ask is one of these processes blocked a slony replication
set from doing its initial copy with a warning about an old transaction
id.  It made me wonder if long running transactions could some how hurt
things in other areas.  Obviously the slony copy is not an issue for the
postgres list -- I am more interested in the effects of long running
transactions in general.

Wayne

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Long running transactions

2006-03-20 Thread Tom Lane
Wayne Schroeder [EMAIL PROTECTED] writes:
 I have a quite heavily accessed database that is vacuumed nightly.
 Attached to that database are processes, that at times, due to the way
 they are coded, stay idle in transaction for long periods of time
 (client lib implementation issues a BEGIN).  I've been reading up on
 XIDs and the like and I'm trying to determine if this is going to cause
 problems.

Yup, it is; you should try to fix those apps to not send BEGIN until
they are ready to do some work.  Idle-in-transaction processes will tend
to prevent VACUUM from removing dead rows, for instance, because it
can't be sure that those processes shouldn't be able to see recently-dead
rows.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] xml output.

2006-03-20 Thread Murali K. Maddali

Hello Guys,

I am trying to export the data out of the database directly to an xml 
file. Is there a way that I could do this in PostgreSQL? I am looking 
for a function or something like that. Does PostgreSQL support sqlxml?


Thank you,
Murali.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] xml output.

2006-03-20 Thread Chris

Murali K. Maddali wrote:

Hello Guys,

I am trying to export the data out of the database directly to an xml 
file. Is there a way that I could do this in PostgreSQL. I am looking 
for a function or something like that. Does PostgreSQL support sqlxml.


Don't think so. A small script (perl, python, ruby, whatever) should be 
able to convert it easily enough though.


--
Postgresql  php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] xml output.

2006-03-20 Thread Murali K. Maddali

Can I use pl/pgsql or pl/tcl to write a function to do that?

Murali.

Chris wrote:

Murali K. Maddali wrote:

Hello Guys,

I am trying to export the data out of the database directly to an xml 
file. Is there a way that I could do this in PostgreSQL. I am looking 
for a function or something like that. Does PostgreSQL support sqlxml.


Don't think so. A small script (perl, python, ruby, whatever) should 
be able to convert it easily enough though.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] xml output.

2006-03-20 Thread Chris

Murali K. Maddali wrote:

Can I use pl/pgsql or pl/tcl to write function to do that.


Can't see why not (I'm sure others will chime in if it's not possible or 
not a good idea), but that's not going to help for a full database dump. 
For a set of results it should work OK, though. It depends what you need 
the xml for.
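
If the target format is simple enough, one option (a hedged sketch; table and
column names are placeholders for whatever the real schema is) is to build the
XML as text right in the query and capture psql's output to a file:

-- in psql, \o /tmp/products.xml redirects query output to a file; \o alone turns it off
SELECT '<product id="' || product_id::text || '"><name>'
       || product_name || '</name></product>'
FROM products;

Note this does no XML escaping, so a small script remains the safer route for
anything non-trivial.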




Chris wrote:


Murali K. Maddali wrote:


Hello Guys,

I am trying to export the data out of the database directly to an xml 
file. Is there a way that I could do this in PostgreSQL. I am looking 
for a function or something like that. Does PostgreSQL support sqlxml.



Don't think so. A small script (perl, python, ruby, whatever) should 
be able to convert it easily enough though.





--
Postgresql  php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] xml output.

2006-03-20 Thread Murali K. Maddali
I was trying to write out data from a table to an xml file, so that I 
could export this file to an external application from another 
vendor. I have to export the data based on a predefined schema. I am able 
to do this through .NET using C#, but I was wondering if there is a way 
I can do this from the database itself.


I would appreciate it if you could point me towards some examples or links.

Thank you,
Murali.

Chris wrote:

Murali K. Maddali wrote:

Can I use pl/pgsql or pl/tcl to write function to do that.


Can't see why not (I'm sure others will chime in if it's not possible 
or not a good idea) but that's not going to help for a full database 
dump. Doing a set of results it should work ok though. Depends what 
you need the xml for.




Chris wrote:


Murali K. Maddali wrote:


Hello Guys,

I am trying to export the data out of the database directly to an 
xml file. Is there a way that I could do this in PostgreSQL. I am 
looking for a function or something like that. Does PostgreSQL 
support sqlxml.



Don't think so. A small script (perl, python, ruby, whatever) should 
be able to convert it easily enough though.







---(end of broadcast)---
TIP 6: explain analyze is your friend