Re: [GENERAL] SELECT statement takes 10 minutes to answer

2006-06-21 Thread A. Kretschmer
On 19.06.2006, at 5:26:54 -0700, Mustafa Korkmaz wrote the following:
> Hi,
> 
> I am using PostgreSQL 7.4.5 on a Linux machine, and have the problem
> that an easy "SELECT field1, field2, field3 FROM a_table"-statement

Hey, read my answer to your question on the newsgroup, MID
<[EMAIL PROTECTED]> ;-)


Andreas
-- 
Andreas Kretschmer    (contact: see header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] BackUp

2006-06-21 Thread Chris

Daniel wrote:

Hi All..
  Is there a way to do auto database back up in "PostgreSQL"?
Please Reply..


There are different ways; the most common is to do a pg_dump / 
pg_dumpall once a night via cron.


http://www.postgresql.org/docs/8.1/static/backup.html
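
A minimal sketch of that cron approach (schedule, paths and user are
illustrative; it assumes the postgres user can authenticate
non-interactively, e.g. via ~/.pgpass or ident):

    0 2 * * * pg_dumpall -U postgres > /var/backups/pgsql/dumpall_$(date +\%Y\%m\%d).sql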

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] sql question; checks if data already exists before inserted

2006-06-21 Thread nuno
hi, there. i'm trying to write a SQL statement which does the following
things.

1. checks if data already exists in the database
2. if not, insert data into database otherwise skip.

for example, i'd like to insert a student called 'Michael Jordan' whose
ID is 'JORDANMICHAEL' only if the id, 'JORDANMICHAEL' does not already
exist in the database. anyway, my query looks like...

insert into student (studentid, fname, lname)
select 'JORDANMICHAEL', 'Michale', 'Jordan' from student where
studentid not in
(select studentid from student);

however, this does not seem to work. it does not insert data even if it
does not exist in the database. hmm!

any clue?

thanks.
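
For reference: the subselect above takes its candidate rows from the very
table it is checking, so the NOT IN predicate is false for every row and
nothing is ever inserted. A form that should behave as intended (a sketch;
not race-proof under concurrent inserts):

    insert into student (studentid, fname, lname)
    select 'JORDANMICHAEL', 'Michael', 'Jordan'
    where not exists (select 1 from student where studentid = 'JORDANMICHAEL');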


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] join on next row

2006-06-21 Thread Aaron Evans


sorry to nitpick, but I think that to get this query to do exactly
what you want you'll need to add ordering over EventTime on your
sub-selects, to ensure that you get the next event and not just some
later event on the given day.
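
For the first subselect, that would look something like this (a sketch,
using the same O alias as the quoted query below):

    (select EventTime
     from   Events
     where  Employee  = O.Employee
       and  EventDate = O.EventDate
       and  EventTime > O.EventTime
     order by EventTime
     limit  1) as e_time_1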


-ae

On Jun 20, 2006, at 11:12 AM, Gurjeet Singh wrote:


Gurjeet Singh wrote:
> It would have been quite easy if done using Oracle's 'lateral view'
> feature. But I think it is achievable in standard SQL too, using
> subqueries in the select-clause.
>
> Try something like this:
>
> select
>     Employee, EventDate,
>     EventTime as e1_time,
>     EventType as e1_type,
>     (select
>         EventTime
>      from
>         Events
>      where Employee  = O.Employee
>        and EventDate = O.EventDate
>        and EventTime > O.EventTime
>      limit 1
>     ) as e_time_1,
>     (select
>         EventType
>      from
>         Events
>      where Employee  = O.Employee
>        and EventDate = O.EventDate
>        and EventTime > O.EventTime
>      limit 1
>     )
> from
>     Events O
>
> Hope it helps...
>
> Regards,
> Gurjeet.






Re: [GENERAL] Exporting data from view

2006-06-21 Thread Brent Wood


>
> I read recently about the efforts underway to COPY from a view,
> however I was wondering what the current best-practices are for being
> able to copy out of a view and import that data into an actual table
> elsewhere.  I am currently doing psql -c "SELECT ..." and the using a
> bit of perl to transform that into something copyable (i.e. CSV), but
> is there a way to directly export the data in an easily importable
> form?
>

psql -A -t -c "select * from <view>";


You can use -F to set the delimiter
-o to specify an output file name (or just redirect stdout)

etc.

Try  man psql


To redirect it into a table,

"insert into table 
 select  ;"

between databases/systems


psql -d <db> -p <port> -A -t -c "select * from <view>" | psql ... "copy <table>
from stdin...;"

can work, as the psql extract can be written to generate the same format
as copy from.
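
For example, a comma-delimited extract straight to a file (database and
view names are placeholders):

    psql -A -t -F ',' -d mydb -c "select * from my_view" -o my_view.csv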


Brent Wood

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] OLEDB Provider for Postgres

2006-06-21 Thread amishsami
Hi all

I desperately needs OLEDB Provider for Postgres. Can any one help me?

Thanks and Regards
A.M.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Bitwise OR in a group statement

2006-06-21 Thread Greg Gazanian
I was wondering if anyone happens to know whether there is an Aggregate 
function available in Postgres that can do an OR on a column of the bit varying 
type. For example I would like to do something as follows:

bitstring
---------
1110
0100

SELECT bitwise_or(bitstring) FROM table;

Resulting in:

1110
Any thoughts? Thanks.
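
For what it's worth, newer releases (8.0 and up, if I remember correctly)
ship built-in bit_and()/bit_or() aggregates; rolling your own is also
straightforward. A sketch, assuming every bitstring has the same length
(the | operator requires that) and using "my_table" as a placeholder:

    CREATE FUNCTION varbit_or(bit varying, bit varying) RETURNS bit varying
        AS 'SELECT $1 | $2;' LANGUAGE sql IMMUTABLE STRICT;

    CREATE AGGREGATE bitwise_or (
        basetype = bit varying,
        sfunc    = varbit_or,
        stype    = bit varying
    );

    SELECT bitwise_or(bitstring) FROM my_table;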

- Greg Gazanian
Network Systems Analyst
Technology and Information Services
Arcadia Unified School District


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Help from Havana

2006-06-21 Thread Giraldo Leon
Hello,

I want to know how to print a database schema with the information of tables, fields, keys, etc.

Thanks

Giraldo
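
One common approach (a sketch; "mydb" is a placeholder) is pg_dump's
schema-only mode, or psql's \d commands:

    pg_dump --schema-only mydb > mydb_schema.sql
    psql -c '\d' mydb

\d tablename then shows a table's columns, indexes and keys.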

[GENERAL] Form builder?

2006-06-21 Thread webb . sprague
I don't want to revisit or be redundant... but is there a quick and
dirty and cross-platform system for developing user input forms for
Postgres?  Ideally, I am interested in something such that you can give
it ("it" being something like a Python function) a table name,
resulting in a magically appearing input form.  It should be able to
deal with getting the format more or less correct for a datatype, not
displaying system columns, etc, using the system tables to get the
necessary info.

I thought first about hacking xdialog, but they don't have multiple
field forms.  I have looked at wx, but real gui programming is complex 
and not general.  Right now I have an ugly hack that writes out a text
file with colon separated lines for each row, with a defaults option so
that you can chain together a set of One-Many forms (eg you enter a
person, then you can cycle through with library books each referencing
that person in a for loop).

I would rather trade ease of use for functionality, if it can get it
90% correct just by the table name and the defaults.  And I refuse to
use XML.  And I want pure Linux.  If I have to develop the darn thing,
of course I am happy to share.

I want to develop an anthropological fieldwork database, but the form
based data entry is important for ease of use.

Cheers.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] VACUUM hanging on idle system

2006-06-21 Thread Clarence
I have a completely idle postgresql system (all backends "idle", none
"in transaction"); every time I attempt to vacuum a particular table,
it hangs after a while. Here is the output of vacuumdb:

INFO:  vacuuming "public.ledgerdetail"
INFO:  index "ledgerdetail_pkey" now contains 11574842 row versions in 33032 pages
DETAIL:  1 index row versions were removed.
42 index pages have been deleted, 42 are currently reusable.
CPU 1.23s/1.57u sec elapsed 12.13 sec.

The table has another index besides that one. In the system error log
there is this line:
   could not write block 32756 of relation 1663/134097/385649401: No space left on device
The referenced block is in the table's index file. The disk has plenty
of space.

I've seen that semaphore operations can give an "out of space" error. I
wrote a program to dump the state of the semaphores, and ran it with the
system idle and with the vacuum hanging. There are about 25 semaphore
sets in the system, but only one changes. Here is the before and after:

(semaphores 0-16; rows are pid, value, ncnt, zcnt)

Set 131091 (before):
pid:   47542 47395 47376 47375 47374 47373 47372 47371 84426 47370 47541 45292 46849 46810 46809 46808   530
value:     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0   537
ncnt:      0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0
zcnt:      0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0

Set 131091 (after, while the vacuum hangs):
pid:   47542 47395 47376 47375 47374 47373 47372 47371 84426 47370 47541 45381 46849 46810 46809 46808   530
value:     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0   537
ncnt:      0     0     0     0     0     0     0     0     0     0     0     1     0     0     0     0     0
zcnt:      0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0

The pid of the backend doing the vacuum is 45381, and it seems to be
waiting on semaphore #11. (I have the complete dump if it's of interest.)

Someone tried to drop the index and got this:
ERROR:  FlushRelationBuffers("idx_ld1", 0): block 32756 is referenced (private 0, global 1)

I'm going to restart postgresql and see if any possible semaphore problem
goes away.


Any ideas about what the problem is here?

Thanks.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] XML Support

2006-06-21 Thread Essam Mansour
Hi Everybody,

In my research project, I need a DBMS that supports XML storage and retrieval,
and provides ECA rule support. Currently, I am using Oracle 10g. However, I am
interested in using an open-source DB.

I am looking for a document addressing the XML support in the current
PostgreSQL release (8.1.4). I would like to know:
 - whether the PostgreSQL DB could be used as an XML repository or not?
 - whether the PostgreSQL DB supports XQuery or SQL/XML or not?
 - if yes, could I write triggers combined with SQL/XML statements or XQuery
   statements?

-- 
Regards,
Essam Mansour


Re: [GENERAL] Why my cursor construction is so slow?

2006-06-21 Thread Szymic1

> CREATE OR REPLACE FUNCTION alias(
>v_mask alias.mask%TYPE,
>) RETURNS INT8 AS

Sorry, my mistake. It should be:

CREATE TABLE alias (
   alias_id   BIGSERIAL PRIMARY KEY,
   mask       VARCHAR(20) NOT NULL DEFAULT ''
);


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Why my cursor construction is so slow?

2006-06-21 Thread [EMAIL PROTECTED]
Hi
I have following table:

CREATE OR REPLACE FUNCTION alias(
   v_mask alias.mask%TYPE,
   ) RETURNS INT8 AS
with index:
CREATE INDEX alias_mask_ind ON alias(mask);

and this table has about 1 million rows.

In DB procedure I execute:

LOOP
    <>
    OPEN cursor1 FOR SELECT * FROM alias WHERE mask >= alias_out
        ORDER BY mask;
    i := 0;
    LOOP
        i := i + 1;
        FETCH cursor1 INTO alias_row;
        EXIT WHEN i = 10;
    END LOOP;
    CLOSE cursor1;
    EXIT WHEN end_number = 1;
END LOOP;

Such a construction is very slow, but when I modify the SQL to:

OPEN cursor1 FOR SELECT * FROM alias WHERE mask >= alias_out
    ORDER BY mask LIMIT 100;

it works very fast. That is strange to me, because I have understood so far
that when a cursor is opened the select is executed, but Postgres does not
fetch all rows; the cursor is only positioned on the first row, and the next
row is read when you execute FETCH. But this example shows something
different.

Can somebody clarify what is wrong with my example? I need the select
without the LIMIT 100 part.

Regards
Michal Szymanski
http://blog.szymanskich.net


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] SELECT statement takes 10 minutes to answer

2006-06-21 Thread Mustafa Korkmaz
Hi,

I am using PostgreSQL 7.4.5 on a Linux machine, and have the problem
that an easy "SELECT field1, field2, field3 FROM a_table" statement
takes 10 minutes to give an answer. The table has 750,000 data rows. I
also ran a VACUUM a_table before the statement, but it doesn't help at
all. The table has an index on a timestamp field.
What could be the problem, or is it normal that PostgreSQL can't handle
this much data with an acceptable response time?
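
For diagnosis, something like the following may help (a sketch): EXPLAIN
ANALYZE shows where the time actually goes, and a plain VACUUM does not
shrink a table that is already badly bloated, while VACUUM FULL does:

    EXPLAIN ANALYZE SELECT field1, field2, field3 FROM a_table;
    VACUUM FULL VERBOSE a_table;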

Regards,
M.Korkmaz


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Interface Guidance and Opinions Needed

2006-06-21 Thread Chris Golden
Hello to all,

This is my first time posting to this forum and I am very new to PostgreSQL.
I am very excited about using it.  I have set up a database and just need a
point in the right direction on interfacing.  I have an orders and messages
database.  One of the providers I would like to interface with has sent me an
XML spec to make HTTPS posts to.  I am just wondering what would be the
easiest thing to use to implement the two.  I would like to scan my database
for new orders and then do an HTTPS post to the provider.  I would also like
to periodically check for new orders/messages from the provider via an HTTPS
post and bring them into my database (a constant exchange of information).
I am wondering if something like PHP would be a universal solution, or
something similar?

Sorry if the information I have given is basic.  I have been doing research
on the internet trying to figure out which direction to go, and starting off
is a little overwhelming.

Thanks in advance for any information

Chris Golden

[GENERAL] BackUp

2006-06-21 Thread Daniel
Hi All..
  Is there a way to do auto database back up in "PostgreSQL"?
Please Reply..
Thanks,
Daniel


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Unable to start pg service

2006-06-21 Thread Vladimir Leban
Hello,

I'm currently on version 7.3.

I know this is an old version and it would be a good idea to migrate.
Before doing that, I would like to make a dump of my base. That's why I
must get this service running at any cost.

Thanks for your help !! :)

2006-06-19 10:31:55 LOG:  database system shutdown was interrupted at 2006-06-19 10:23:06
2006-06-19 10:31:55 LOG:  checkpoint record is at 0/17B1FC0
2006-06-19 10:31:55 LOG:  redo record is at 0/17B1FC0; undo record is at 0/0; shutdown TRUE
2006-06-19 10:31:55 LOG:  next transaction id: 105375; next oid: 28662
2006-06-19 10:31:55 LOG:  database system was not properly shut down; automatic recovery in progress
2006-06-19 10:31:55 LOG:  ReadRecord: invalid magic number  in log file 0, segment 1, offset 8069120
2006-06-19 10:31:55 LOG:  redo is not required
2006-06-19 10:31:57 PANIC:  XLogWrite: write request 0/17B2000 is past end of log 0/17B2000
2006-06-19 10:31:57 DEBUG:  reaping dead processes
2006-06-19 10:31:57 LOG:  startup process (pid 2304) was terminated by signal 6
2006-06-19 10:31:57 LOG:  aborting startup due to startup process failure
2006-06-19 10:31:57 DEBUG:  proc_exit(1)
2006-06-19 10:31:57 DEBUG:  shmem_exit(1)
IpcMemoryDetach: shmdt(0xd0) failed: Invalid argument
2006-06-19 10:31:57 DEBUG:  exit(1)


[GENERAL] getting comment about constraint

2006-06-21 Thread canfieldsteve
How do you retrieve a comment on a constraint?  For example, consider
the following table and comment:

 create table people (
    person_id   serial        primary key,
    uid         varchar(25)   not null,
    constraint uid_alphanumeric check (uid ~ '^[a-z0-9_]+$')
 );

 comment on constraint uid_alphanumeric on people
   is 'UID may only contain letters, numerals, and underscores';

That code creates a constraint within the "people" table named
"uid_alphanumeric".  Furthermore, it creates a comment on that
constraint.

Given the name of the table and constraint, how would you retrieve the
comment?  "Chapter 9. Functions and Operators" says that
obj_description(object_oid, catalog_name) returns a comment given the
object's OID and class name.  How do I get the OID for the constraint?
It doesn't seem to be stored in pg_constraint.
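
The oid is there, but as a hidden system column, so it only appears when
selected explicitly. A sketch of the lookup:

    SELECT obj_description(c.oid, 'pg_constraint')
      FROM pg_constraint c
      JOIN pg_class t ON t.oid = c.conrelid
     WHERE t.relname = 'people'
       AND c.conname = 'uid_alphanumeric';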


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Changing array subscripting to zero-offset

2006-06-21 Thread Erin Sheldon

Hello everyone -

Array columns are, by default, 1-offset in their
subscripting.  Since I usually am calling postgres
from a language with zero-offset, I would prefer
that postgres conform to that.  The online
documentation hints that this may be configurable
but I haven't been able to find how this is done.
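
As far as I know there is no global setting for this, but an array value
can carry its own lower bound; a per-value sketch:

    SELECT ('[0:2]={10,20,30}'::int[])[0];   -- returns 10; the bounds travel with the value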

Any help is greatly appreciated,
Erin Sheldon

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] join on next row

2006-06-21 Thread Sim Zacks

Harald,
That's brilliant.
Sim

Harald Fuchs wrote:

In article <[EMAIL PROTECTED]>,
Sim Zacks <[EMAIL PROTECTED]> writes:


I want my query resultset to be
Employee,EventDate(1),EventTime(1),EventType(1),EventTime(2),EventType(2)
Where Event(2) is the first event of the employee that took place
after the other event.



Example
EventID  Employee  EventDate  EventTime  EventType
1        John      6/15/2006  7:00       A
2        Frank     6/15/2006  7:15       B
3        Frank     6/15/2006  7:17       C
4        John      6/15/2006  7:20       C
5        Frank     6/15/2006  7:25       D
6        John      6/16/2006  7:00       A
7        John      6/16/2006  8:30       R



Expected Results
John, 6/15/2006, 7:00, A, 7:20, C
Frank, 6/15/2006, 7:15, B, 7:17, C
Frank, 6/15/2006, 7:17, C, 7:25, D
John, 6/16/2006, 7:00, A, 8:30, R



To get this result set it would have to be an inner join on employee
and date where the second event time is greater than the first. But I
don't want all of the records with a greater time, just the first
event after.


You can filter the others out by an OUTER JOIN:

  SELECT e1.Employee, e1.EventDate,
 e1.EventTime, e1.EventType,
 e2.EventTime, e2.EventType
  FROM events e1
  JOIN events e2 ON e2.Employee = e1.Employee
AND e2.EventDate = e1.EventDate
AND e2.EventTime > e1.EventTime
  LEFT JOIN events e3 ON e3.Employee = e1.Employee
 AND e3.EventDate = e1.EventDate
 AND e3.EventTime > e1.EventTime
 AND e3.EventTime < e2.EventTime
  WHERE e3.EventID IS NULL
  ORDER BY e1.EventDate, e1.EventTime


---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Missing domain socket after reboot.

2006-06-21 Thread Bill Moseley
After a reboot today Postgresql 8.1 came back up and started
accepting connections over TCP but the unix socket file was missing.

This is on Debian Stable, and I can't imagine what might have removed
the file.

Running psql I get:

$ psql test
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket 
"/var/run/postgresql/.s.PGSQL.5432"?

Yep, missing:


$ ls -la /var/run/postgresql
total 8
drwxrwsr-x   2 postgres postgres 4096 2006-06-21 17:03 .
drwxr-xr-x  16 root root 4096 2006-06-21 21:10 ..

Config looks ok:

/etc/postgresql/8.1/main$ fgrep unix_socket_dir postgresql.conf 
unix_socket_directory = '/var/run/postgresql'

Startup option:

$ ps ux -u postgres | grep unix_socket
postgres  1512  0.0  0.3  17564  3476 ?S17:02   0:00 
/usr/lib/postgresql/8.1/bin/postmaster -D /var/lib/postgresql/8.1/main -c 
unix_socket_directory=/var/run/postgresql -c 
config_file=/etc/postgresql/8.1/main/postgresql.conf -c 
hba_file=/etc/postgresql/8.1/main/pg_hba.conf -c 
ident_file=/etc/postgresql/8.1/main/pg_ident.conf

Hum.  lsof knows about the file.

$ lsof -p 1512 | grep /var/run
postmaste 1512 postgres4u  unix 0xf78b5980   1631 
/var/run/postgresql/.s.PGSQL.5432


Any ideas what happened to the socket?


I had to stop and start the postmaster to get the socket back.



-- 
Bill Moseley
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread John Tregea

Hi Tim,

Thanks for the advice, it saves me continuing to dig in the help files 
and my reference books any longer. I don't know how much help I could be 
in adding features but I am glad to participate in any way I can in the 
community. I will follow your link to the TODO pages.


Thanks again.

Regards

John

Tim Allen wrote:

John Tregea wrote:

Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At 
the time I perform the INSERT command I need to retrieve the value of 
the serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea


It's not supported now, however it has been discussed several times, 
and there is a TODO entry for it at


http://www.postgresql.org/docs/faqs.TODO.html

using syntax along the lines of INSERT ... RETURNING ...

Search for the word "returning" in the todo list and you'll find the 
entry. Your options include waiting for someone to make it happen (no 
telling how long that will be), or helping to make it happen (for 
which we would all thank you :-) ). In the meantime you'll have to 
work around it, as you suggested.


Tim



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread Tim Allen

John Tregea wrote:

Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At the 
time I perform the INSERT command I need to retrieve the value of the 
serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea


It's not supported now, however it has been discussed several times, and 
there is a TODO entry for it at


http://www.postgresql.org/docs/faqs.TODO.html

using syntax along the lines of INSERT ... RETURNING ...

Search for the word "returning" in the todo list and you'll find the 
entry. Your options include waiting for someone to make it happen (no 
telling how long that will be), or helping to make it happen (for which 
we would all thank you :-) ). In the meantime you'll have to work around 
it, as you suggested.
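
A common workaround is currval() on the column's sequence; it is
session-local, so it stays correct under concurrent inserts. A sketch
(table and sequence names are placeholders for yours):

    INSERT INTO mytable (fname) VALUES ('John');
    SELECT currval('mytable_serial_id_seq');
    -- or, on 8.0 and later, let the server resolve the sequence name:
    SELECT currval(pg_get_serial_sequence('mytable', 'serial_id'));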


Tim

--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread John Tregea
Sorry, I just realised this should have gone to the SQL list... (Bloody 
Newbies) :-[


John Tregea wrote:

Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At 
the time I perform the INSERT command I need to retrieve the value of 
the serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread John Tregea

Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At the 
time I perform the INSERT command I need to retrieve the value of the 
serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-21 Thread Qingqing Zhou


On Wed, 21 Jun 2006, Relyea, Mike wrote:

> ExecutorState: 550339936 total in 123 blocks; 195003544 free (740135
> chunks); 355336392 used
> HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80
> chunks); 290485792 used
> TIDBitmap: 2088960 total in 8 blocks; 924720 free (27 chunks); 1164240
> used
> HashBatchContext: 41877664 total in 11 blocks; 4845648 free (15 chunks);
> 37032016 used
> TIDBitmap: 2088960 total in 8 blocks; 1012120 free (27 chunks); 1076840
> used

The same problem. ExecutorState uses much more memory than we expect --
but I'm not sure where it is coming from :-(

Regards,
Qingqing

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 04:12:08PM +0200, Alban Hertroys wrote:
> Martijn van Oosterhout wrote:
> >ISTM that what would really work well is some kind of "Merge Sort" node
> >that would work by having multiple subnodes which are already sorted
> >and merging them into one sorted list.
> 
> Would... So this isn't available yet?

Not AFAIK.

> >It would push the ORDER BY down to the subqueries and then merge the
> >results. If the subqueries can be read efficiently sorted (via an index
> >for example) then you would get very quick output, especially if you
> >have a LIMIT clause.
> 
> I just realized that OFFSET kind of complicates the problem.
> 
> If PostgreSQL would handle this (for inheritance as well, I hope), it'd 
> need to keep track of how many records came from which tables to set the 
> offsets in the subqueries appropriately, which of course depends on the 
> previous query... Well, I said it complicates things...

OFFSET is not a problem at all. It's just code for "throw away the first N
rows". Once you have the above node type, the executor would simply
throw away some rows, whichever table they came from.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] SQL query question

2006-06-21 Thread Gurjeet Singh

   Another way is to use correlated subqueries (referencing the outer
query's columns inside a subquery; hope this feature is supported):

select  *
from    FileVersionHistory H1
where   modificationDate = ( select  max(modificationDate)
                             from    FileVersionHistory H2
                             where   H2.filename = H1.filename
                           );

   And if you suspect that some different versions of a file might
have the same date, then you should add DISTINCT to 'select *', else
you'll get duplicates in the result.

Regards,
Gurjeet.


On 6/18/06, Bruno Wolff III <[EMAIL PROTECTED]> wrote:

On Sat, Jun 17, 2006 at 16:50:59 -0700,
  Kevin Jenkins <[EMAIL PROTECTED]> wrote:
>  For example
>
> filename  date      revision
> file1     10/05/06  1
> file1     10/05/07  2
> file2     10/05/08  1
>
> I want to do a query that will return the greatest date for each
> unique filename

If the revisions for a filename are guaranteed to be ordered by date, then
another alternative for you would be:

SELECT filename, max(modificationDate), max(revision)
  FROM FileVersionHistory
  GROUP BY filename
;

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] MS-SQL2PostgreSQL

2006-06-21 Thread Chris

Hrishikesh Deshmukh wrote:

Dear All,

I have an MS-SQL script, i would like to run the script in 
postgresql. Is there an MS-SQL2PostgreSQL converter available or does one 
have to go troubleshooting line by line, changing data types etc., one line 
at a time??!!!


Something on this page might help you:

http://www.postgresql.org/docs/techdocs.3

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Fwd: [GENERAL] performance tips please

2006-06-21 Thread Hugo
> I assume the task takes longer than 5 seconds to complete?
> Are you running the same OS? Can pgsql distribute the load on both
> Xeon processors? Is pgsql custom compiled for a specific architecture
> (Pentium III, for example)? How do you measure the response time? Are
> there other apps involved?
>
> t.n.a.

The DELL PIII box runs FC4; the config is the default one.
The DELL Xeon runs SUSE 10.0; the config has some changes to improve autovacuum, that's all.
I just downloaded the source and followed the typical install instructions on both machines.
The application takes the start and end time of the query; mainly it is a reporting application running on a WinXP client.

thanks
Hugo




Re: [GENERAL] performance tips please

2006-06-21 Thread Tomi NA

On 6/21/06, Hugo <[EMAIL PROTECTED]> wrote:

Hi, I am testing my application and DB (postgres 8.1.4 ) on a :
DELL-Power Edge 1800 with 2 Xeon 3.2ghz,  2 Gb RAM and 2 SCSI 149 Gb
each.
The trouble is that the same application and DB (postgres 8.0.4) runs on a:
 DELL pentium 3 with 526MB of RAM and an IDE 20 GB,

and comparing the performance of both, I get only 20-30 seconds faster
responses on the new server. Where can I start looking to find out why
the second server is performing this way?

any advice is very appreciated,

thanks in advance

Hugo



I assume the task takes longer than 5 seconds to complete?
Are you running the same OS? Can pgsql distribute the load on both
Xeon processors? Is pgsql custom compiled for a specific architecture
(Pentium III, for example)? How do you measure the response time? Are
there other apps involved?

t.n.a.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] performance tips please

2006-06-21 Thread Hugo
Hi, I am testing my application and DB (postgres 8.1.4) on a:
DELL PowerEdge 1800 with 2 Xeon 3.2ghz, 2 Gb RAM and 2 SCSI 149 Gb each.
The trouble is that the same application and DB (postgres 8.0.4) runs on a:
DELL pentium 3 with 526MB of RAM and an IDE 20 GB,
and comparing the performance of both, I get only 20-30 seconds faster responses on the new server. Where can I start looking to find out why the second server is performing this way?

any advice is very appreciated,

thanks in advance

Hugo


[GENERAL] MS-SQL2PostgreSQL

2006-06-21 Thread Hrishikesh Deshmukh
Dear All,

I have an MS-SQL script, i would like to run the script in postgresql. Is there an MS-SQL2PostgreSQL converter available or does one have to go troubleshooting line by line, changing data types etc., one line at a time??!!!

Thanks,
Hrishi


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Tom Lane
"Jasbinder Bali" <[EMAIL PROTECTED]> writes:
> I've disabled my SELinux and now postgres is being able to access the shared
> library i.e test.so file.
> Don't know if thats the right way to do it or not.

It's not.  Almost certainly, SELinux is keying the rejection off the
fact that you have the .so file in the wrong place, ie, not a place that
postgres is supposed to be reading executables from.

Put it in $libdir and everything will be much better.  (You might also
need to run restorecon on it, not sure.)  "pg_config --pkglibdir" will
tell you where that is.
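
For example, a sketch of those steps (the file and function names are the
ones from this thread; restorecon only matters if SELinux is enabled):

    cp test_func.so "$(pg_config --pkglibdir)"/
    restorecon "$(pg_config --pkglibdir)"/test_func.so

and then, in psql:

    CREATE FUNCTION command(integer) RETURNS integer
        AS '$libdir/test_func', 'command'
        LANGUAGE C STRICT;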

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Jasbinder Bali
I've disabled my SELinux and now postgres is able to access the shared
library, i.e. the test.so file.
Don't know if that's the right way to do it or not.

PS: I'm using Fedora Core 2 OS

Thanks,
~Jas

On 6/21/06, Joe Conway <[EMAIL PROTECTED]> wrote:

Jasbinder Bali wrote:
> Now the error is different. It cries something on the permissions.
>
> ERROR:  could not load library "/usr/include/pgsql/server/test.so":
> /usr/include/pgsql/server/test.so: failed to map segment from shared
> object: Permission denied
>
> Can you comment on this?

What does
   ls -l /usr/include/pgsql/server/test.so
show?

Does the postgres user (or whomever postgres is running as) have the
ability to read the file?

Joe


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Jasbinder Bali
Well, as of now my postgres is running on a trusted connection that i've
specified in the pg_hba.conf file.

ls -l /usr/include/pgsql/server/test.so shows the following:

-rw-rw-rw- 1 root root 4620 Jun 21 12:00 /usr/include/pgsql/server/test.so

This means that the owner of this .so file is root and the group is root.

Thanks
~Jas

On 6/21/06, Joe Conway <[EMAIL PROTECTED]> wrote:

Jasbinder Bali wrote:
> Now the error is different. It cries something on the permissions.
>
> ERROR:  could not load library "/usr/include/pgsql/server/test.so":
> /usr/include/pgsql/server/test.so: failed to map segment from shared
> object: Permission denied
>
> Can you comment on this?

What does
   ls -l /usr/include/pgsql/server/test.so
show?

Does the postgres user (or whomever postgres is running as) have the
ability to read the file?

Joe



Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Joe Conway

Jasbinder Bali wrote:

Now the error is different. It cries something on the permissions.

ERROR:  could not load library "/usr/include/pgsql/server/test.so": 
/usr/include/pgsql/server/test.so: failed to map segment from shared 
object: Permission denied


Can you comment on this?


What does
  ls -l /usr/include/pgsql/server/test.so
show?

Does the postgres user (or whomever postgres is running as) have the 
ability to read the file?


Joe

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] multiple statement 'instead of' rule

2006-06-21 Thread Merlin Moncure

On 6/20/06, Tom Lane <[EMAIL PROTECTED]> wrote:

"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> create or replace rule insert_fsv as on insert to frequency_service_view
>   do instead
>   (
> insert into frequency_operation
>   select new.table_name, new.frequency, old.code where new.set =
> true and old.set = false;
> delete from frequency_operation
>   where table_name = old.table_name and frequency_operation.code =
> old.code and
> frequency_operation.frequency = new.frequency and new.set = false;
> update operation
>   set code = new.code where code = old.code and old.code != new.code;
>   );

What is frequency_service_view?  Is it by any chance dependent on
frequency_operation?  If so, your changes to frequency_operation will
affect the behavior of OLD references.


right, actually that was a typo, was supposed to be 'create or replace
rule insert_fov as on insert to frequency_operation_view'.  I was
considering that old/new are invalid which is fine, but the problem is
in some cases the third (and sometimes second query) never fires at
all with any arguments.  I confirmed this by inserting into a log
table in between the rule queries (they never fired either).

I can prepare a test case if you think it's worth it.

Merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Steve Atkins


On Jun 21, 2006, at 9:42 AM, Jasbinder Bali wrote:


Hi,
I raised this problem yesterday aswell. I'm badly stuck at this point.
The problem is as follows:

I have a C function that i want to use in my postgres function.
I adopt the following steps to do that.

--- compile the C file as follows
 gcc -shared -o test_func.so test_func.c
 test_func.c is the name of the C file

--- the name of the function that i want to use from this c file is  
called 'command'


--- Postgres function is written as follows:

CREATE FUNCTION command(integer) RETURNS integer
 AS 'usr/include/pgsql/server/test_func', 'command'
 LANGUAGE C STRICT;

when i try to run this function, always gives me the follwoing error:

ERROR:  could not access file "usr/include/pgsql/server/test_func":  
No such file or directory



I tried changin the permission of the file to 666 and even tried it  
with 755 but in vein.


I checked the log file but it just prints the above error and  
doesn't give me any more information.


I have no clue why is postgres not reading test_func object file.

Any kind of help would be appreciated



IIRC the path name is relative to... dynamic_library_path and pwd, first
as given, then with ".so" appended.

Unless you've set one of those to "/" then 'usr/include/pgsql/server/test_func'
is never going to resolve to where you want it to.

If you really want to keep it where it is, try using the correct  
absolute filename.
Better, though, would be to use ... AS '$libdir/test_func.so' ... and  
put the library

wherever "pg_config --pkglibdir" says - probably /usr/local/pgsql/lib.

Cheers,
  Steve


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Jasbinder Bali
Yes, that helped. I was missing that leading '/'.

Now the error is different. It cries something on the permissions.

ERROR:  could not load library "/usr/include/pgsql/server/test.so":
/usr/include/pgsql/server/test.so: failed to map segment from shared object:
Permission denied

Can you comment on this?

Thanks,
~Jas

On 6/21/06, Joe Conway <[EMAIL PROTECTED]> wrote:

Jasbinder Bali wrote:
> CREATE FUNCTION command(integer) RETURNS integer
>  AS 'usr/include/pgsql/server/test_func', 'command'
>  LANGUAGE C STRICT;
>
> when i try to run this function, always gives me the follwoing error:
>
> ERROR:  could not access file "usr/include/pgsql/server/test_func": No
> such file or directory

Should 'usr/include/pgsql/server/test_func' actually be
'/usr/include/pgsql/server/test_func'?

Note the leading '/'

HTH,

Joe


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Joe Conway

Jasbinder Bali wrote:

CREATE FUNCTION command(integer) RETURNS integer
 AS 'usr/include/pgsql/server/test_func', 'command'
 LANGUAGE C STRICT;

when i try to run this function, always gives me the follwoing error:

ERROR:  could not access file "usr/include/pgsql/server/test_func": No 
such file or directory


Should 'usr/include/pgsql/server/test_func' actually be 
'/usr/include/pgsql/server/test_func'?


Note the leading '/'

HTH,

Joe

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Jasbinder Bali
Hi,
I raised this problem yesterday as well. I'm badly stuck at this point.
The problem is as follows:

I have a C function that i want to use in my postgres function.
I adopt the following steps to do that.

--- compile the C file as follows
 gcc -shared -o test_func.so test_func.c
 test_func.c is the name of the C file

--- the name of the function that i want to use from this c file is called 'command'

--- Postgres function is written as follows:

CREATE FUNCTION command(integer) RETURNS integer
 AS 'usr/include/pgsql/server/test_func', 'command'
 LANGUAGE C STRICT;

when i try to run this function, it always gives me the following error:

ERROR:  could not access file "usr/include/pgsql/server/test_func": No such file or directory

I tried changing the permission of the file to 666 and even tried it with 755, but in vain.
I checked the log file but it just prints the above error and doesn't give me any more information.

I have no clue why postgres is not reading the test_func object file.

Any kind of help would be appreciated

Thanks,
~Jas


Re: [GENERAL] Dynamic loading of C functions

2006-06-21 Thread Jasbinder Bali
I've tried everything so that my .so file is recognized, but in vain.
Don't know what's going wrong.

~Jas

On 6/20/06, Tom Lane <[EMAIL PROTECTED]> wrote:

Bill Moran <[EMAIL PROTECTED]> writes:
> In response to "Jasbinder Bali" <[EMAIL PROTECTED]>:
>> I get the follwing error
>> ERROR:  could not access file "/usr/include/pgsql/server/test_func": No such
>> file or directory

> Check the permissions.  Can the Postgres user read the file?

The error is pretty clearly "file not found", not "no permissions".
One possibility is that the complaint is not about this file itself
but about some other shared library it depends on.  Try "ldd" or
local equivalent on the file to see if it shows any unresolved
references.

Also, you might try looking in the postmaster log to see if any
additional info appears there --- anything the dynamic linker spit out
to stderr is not going to appear on your terminal.

regards, tom lane


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Alban Hertroys

Martijn van Oosterhout wrote:

ISTM that what would really work well is some kind of "Merge Sort" node
that would work by having multiple subnodes which are already sorted
and merging them into one sorted list.


Would... So this isn't available yet?


The planner would use this whenever it saw a query of the form:

SELECT * FROM a
UNION ALL
SELECT * FROM b
ORDER BY c;

It would push the ORDER BY down to the subqueries and then merge the
results. If the subqueries can be read efficiently sorted (via an index
for example) then you would get very quick output, especially if you
have a LIMIT clause.


I just realized that OFFSET kind of complicates the problem.

If PostgreSQL would handle this (for inheritance as well, I hope), it'd 
need to keep track of how many records came from which tables to set the 
offsets in the subqueries appropriately, which of course depends on the 
previous query... Well, I said it complicates things...


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Adding foreign key constraints without integrity

2006-06-21 Thread Wes
On 6/20/06 8:17 PM, "Florian G. Pflug" <[EMAIL PROTECTED]> wrote:

> Maybe you could ask at some postgresql support companies how much effort it
> would
> be to add a "without check" flag to "alter table add constraint foreign key",
> and
> how much they'd charge for it...

Or if I get ambitious, dig into the code myself if I can figure out where to
start...

Wes



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] merge result sets

2006-06-21 Thread simon
On Mit, 2006-06-21 at 14:16 +0200, Martijn van Oosterhout wrote:
> On Wed, Jun 21, 2006 at 02:08:29PM +0200, simon wrote:
> > > The aggregate stuff should work. something like:
> > > 
> > > SET kategorie = (SELECT comma_aggregate(kategorie_bezeichnung) FROM ...)
> > > 
> > > should do the trick.
> > i just found 
> > 
> > CREATE FUNCTION comma_aggregate(text,text) RETURNS text AS ' SELECT CASE
> > WHEN $1 <> '''' THEN $1 || '', '' || $2 ELSE $2 END; ' LANGUAGE sql
> > IMMUTABLE STRICT; CREATE AGGREGATE comma (basetype=text,
> > sfunc=comma_aggregate, stype=text, initcond='' );
> > 
> > and this didn't work with the above mentioned query.
> > 
> > i actually never found any docu about how to write a custom function
> > which takes a whole result set no matter how many rows.
> 
> Then you haven't looked very hard:
> 
> http://www.postgresql.org/docs/current/interactive/sql-createaggregate.html
> 
> You created an aggregate called "comma" so that's how you should call
> it. This is really no different from the SQL standard min(), max() and
> sum() functions.
> 
> SET kategorie = (SELECT comma(kategorie_bezeichnung) FROM ...)
> 
> Have a nice day,

thanks very much. you're right i should have read the docu harder. but
now everything works fine.
you made my day.

simon

-- 
Simon Litwan   [EMAIL PROTECTED]
Wyona Inc.  -   Open Source Content Management   -   Apache Lenya
http://www.wyona.com  http://lenya.apache.org


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] GPL Licensed Files in 8.1.4

2006-06-21 Thread Jan Wieck

On 6/7/2006 4:34 PM, Bruce Momjian wrote:

Tom Lane wrote:

Andrew Sullivan <[EMAIL PROTECTED]> writes:
> At the same time, it strikes me that at least the userlock stuff, and
> maybe dbmirror as well, are candidates for pgfoundry rather than
> contrib/

We'd already agreed to move dbmirror to pgfoundry, but it just didn't
get done for 8.1.  I had not thought of pgfoundry as a reasonable
solution for userlock, but maybe that's the best thing to do with it.

A better idea would be to contact the module authors and get them to
relicense, but that might be hard.  Dal Zotto at least hasn't been
seen on these lists for a long time :-(


Here is the most recent feedback we have from Massimo:

 http://archives.postgresql.org/pgsql-hackers/2001-08/msg01001.php

 > Regarding the licencing of the code, I always release my code under GPL,
 > which is the licence I prefer, but my code in the backend is obviously
 > released under the original postgres licence. Since the module is loaded
 > dynamically and not linked into the backend I don't see a problem here.
 > If the licence becomes a problem I can easily change it, but I prefer the
 > GPL if possible.



Which means thus far he did not agree to the license change. Can we just 
move the stuff over to pgfoundry and be done with it?



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread H.J. Sanders


>>>
> >>> Is anybody over at the dev team considering what an onerous burden 
> >>> this is?  Is anyone considering doing away with it?

Just my 2 cents:

more and more databases have to run 24 * 7 , so something has to be done.

The last 15 years we also used Informix and we never, never had to unload/load
the database because of an upgrade.

Perhaps somebody knows how they do the trick?

Regards

Henk Sanders


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread Alban Hertroys

Kenneth Downs wrote:

Richard Huxton wrote:


Kenneth Downs wrote:

AFAIK it has always been the case that you should expect to have to 
dump out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden 
this is?  Is anyone considering doing away with it?


Is there any good reason not to invest in having a second database 
server? That way you could upgrade the slave server, switch that to be 
the master server and replicate the data (using Slony-I, for example) to 
the slave (formerly master) server.


It provides other benefits as well, like the ability to stay up during 
system maintenance, load balancing, etc.



Kind of gets to the heart of things, though, doesn't it.

It's the non-trivial stuff where we look to the machine to help us out.
As a user of PostgreSQL, I benefit from a lot of things.  I gain a total 
advantage of "X" units of time/money.  Then its time to upgrade and I 
have to give a lot of it back.  The more I use the package, the more 
non-trivial is my upgrade, and the more I give back.
Regardless of whether a package is commercial or free, it strikes me as 
counter to the very soul of programming to build in a burden that 
increases with the user's use of the program, threatening even to tip 
the balance altogether away from its use.  This seems to be the very 
kind of feature that you want to programmatically control precisely 
because it is non-trivial.


Which is why you have to use the pg_dump from the new version to dump 
your data, so it will be compatible with the new database on restore. 
That's a good example of this already being the case.


Your real burden isn't the (possible!) data incompatibility between 
major versions, but the fact that your data grows. The more data you 
have, the more time a dump/restore will take.


You could attempt to just upgrade and hope your data can be interpreted 
by a newer major version (you should dump first, of course). You'll want 
to have some kind of checksums over your data to check if everything 
went well.
This method can't be expected to always work, that'd be near impossible 
to guarantee. There'll be changes to data structures (for the better), 
for example. I suppose the developers could give some estimate about 
your chances...


As mentioned, with a replicated setup your trouble should be minimal.

P.S. We don't use replication as of yet, but we probably will soon.
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 08:10:20AM -0400, Kenneth Downs wrote:
> Regardless of whether a package is commercial or free, it strikes me as 
> counter to the very soul of programming to build in a burden that 
> increases with the user's use of the program, threatening even to tip 
> the balance altogether away from its use.  This seems to be the very 
> kind of feature that you want to programmatically control precisely 
> because it is non-trivial.

That doesn't change the fact that it's a really hard problem. In-place
upgrades would require lots of safety checks because otherwise you
might end up with a cluster that's not readable by any version.

OTOH, you have something like slony which you can use to upgrade to
newer versions without any downtime at all. With a solution like that
working right now, why would people spend effort on making in-place
upgrades work?

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] merge result sets

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 02:08:29PM +0200, simon wrote:
> > The aggregate stuff should work. something like:
> > 
> > SET kategorie = (SELECT comma_aggregate(kategorie_bezeichnung) FROM ...)
> > 
> > should do the trick.
> i just found 
> 
> CREATE FUNCTION comma_aggregate(text,text) RETURNS text AS ' SELECT CASE
> WHEN $1 <> '''' THEN $1 || '', '' || $2 ELSE $2 END; ' LANGUAGE sql
> IMMUTABLE STRICT; CREATE AGGREGATE comma (basetype=text,
> sfunc=comma_aggregate, stype=text, initcond='' );
> 
> and this didn't work with the above mentioned query.
> 
> i actually never found any docu about how to write a custom function
> which takes a whole result set no matter how many rows.

Then you haven't looked very hard:

http://www.postgresql.org/docs/current/interactive/sql-createaggregate.html

You created an aggregate called "comma" so that's how you should call
it. This is really no different from the SQL standard min(), max() and
sum() functions.

SET kategorie = (SELECT comma(kategorie_bezeichnung) FROM ...)

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread Kenneth Downs

Richard Huxton wrote:


Kenneth Downs wrote:

AFAIK it has always been the case that you should expect to have to 
dump out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden 
this is?  Is anyone considering doing away with it?



Far from trivial. 


Kind of gets to the heart of things, though, doesn't it.

It's the non-trivial stuff where we look to the machine to help us out. 

As a user of PostgreSQL, I benefit from a lot of things.  I gain a total 
advantage of "X" units of time/money.  Then its time to upgrade and I 
have to give a lot of it back.  The more I use the package, the more 
non-trivial is my upgrade, and the more I give back. 

Regardless of whether a package is commercial or free, it strikes me as 
counter to the very soul of programming to build in a burden that 
increases with the user's use of the program, threatening even to tip 
the balance altogether away from its use.  This seems to be the very 
kind of feature that you want to programmatically control precisely 
because it is non-trivial.



You have changes in on-disk formats and actual functionality between 
major version numbers. For instance - what would you do to deal with 
the recent changes in unicode validation?






---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] merge result sets

2006-06-21 Thread simon
On Mit, 2006-06-21 at 12:34 +0200, Martijn van Oosterhout wrote:
> On Wed, Jun 21, 2006 at 12:23:44PM +0200, simon wrote:
> > or in other words, i just would like to know how to rewrite 
> > 
> > SET kategorie = array_to_string ((SELECT ARRAY (SELECT
> > kategorie_bezeichnung
> > 
> > so it works in psql7.3 as well.
> 
> The aggregate stuff should work. something like:
> 
> SET kategorie = (SELECT comma_aggregate(kategorie_bezeichnung) FROM ...)
> 
> should do the trick.
i just found 

CREATE FUNCTION comma_aggregate(text,text) RETURNS text AS ' SELECT CASE
WHEN $1 <> '''' THEN $1 || '', '' || $2 ELSE $2 END; ' LANGUAGE sql
IMMUTABLE STRICT;
CREATE AGGREGATE comma (basetype=text,
sfunc=comma_aggregate, stype=text, initcond='' );

and this didn't work with the above mentioned query.

i actually never found any docu about how to write a custom function
which takes a whole result set no matter how many rows.

thanks anyway
simon



> 
> Have a nice day,
-- 
Simon Litwan   [EMAIL PROTECTED]
Wyona Inc.  -   Open Source Content Management   -   Apache Lenya
http://www.wyona.com  http://lenya.apache.org


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Alban Hertroys

Alban Hertroys wrote:

Jim Nasby wrote:
Probably a better bet would be going to 8.1 and using constraint  
elimination.



Maybe you mean constraint exclusion?

If so, is that going to help excluding partitions (basically the same 
thing, it seems) from a query based on an ORDER BY and a LIMIT?


Say we take the query I posted:
"SELECT * FROM mm_posrel ORDER BY number DESC LIMIT 25;"
and the knowledge that this table is inherited by two other tables, with 
number being unique across them (though PostgreSQL probably doesn't know 
about this).
Can constraint exclusion determine that the last 25 number values do not 
occur in some of the tables?


I did some experiments on my PostgreSQL 8.1 server at home (gotta love 
UNIX & SSH), with the following setup:

   Table "public.object"
 Column |  Type   |Modifiers 


+-+-
 number | integer | not null default nextval('object_number_seq'::regclass)
 title  | text| not null
Indexes:
"object_pkey" PRIMARY KEY, btree (number)

   Table "public.content"
 Column  |  Type   |Modifiers 


-+-+-
 number  | integer | not null default 
nextval('object_number_seq'::regclass)

 title   | text| not null
 summary | text| not null
 body| text| not null
Inherits: object

  Table "public.menu_item"
 Column |  Type   |Modifiers 


+-+-
 number | integer | not null default nextval('object_number_seq'::regclass)
 title  | text| not null
 pos| integer | not null default 1
Inherits: object

I inserted a few records into "object" (30, IIRC) and did:

 SET constraint_exclusion=on;
 explain analyze select number, title from object order by number desc limit 10;

                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=131.34..131.37 rows=10 width=36) (actual time=0.335..0.358 rows=10 loops=1)
   ->  Sort  (cost=131.34..135.67 rows=1730 width=36) (actual time=0.331..0.338 rows=10 loops=1)
         Sort Key: public."object".number
         ->  Result  (cost=0.00..38.30 rows=1730 width=36) (actual time=0.097..0.248 rows=30 loops=1)
               ->  Append  (cost=0.00..38.30 rows=1730 width=36) (actual time=0.091..0.184 rows=30 loops=1)
                     ->  Seq Scan on "object"  (cost=0.00..1.30 rows=30 width=12) (actual time=0.090..0.129 rows=30 loops=1)
                     ->  Seq Scan on menu_item "object"  (cost=0.00..21.00 rows=1100 width=36) (actual time=0.001..0.001 rows=0 loops=1)
                     ->  Seq Scan on content "object"  (cost=0.00..16.00 rows=600 width=36) (actual time=0.001..0.001 rows=0 loops=1)
 Total runtime: 0.446 ms
(9 rows)

As you can see, it still scans the empty tables menu_item and content. 
So I'm afraid this is no solution to our problem... :(
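
For completeness: in 8.1 constraint exclusion only prunes child tables
against WHERE clauses that contradict a CHECK constraint; ORDER BY and
LIMIT alone are not enough. A sketch of the mechanism (the number ranges
below are invented, purely to illustrate):

  -- hypothetical CHECK constraints on the child tables
  ALTER TABLE content   ADD CONSTRAINT content_number_chk
        CHECK (number >= 10000 AND number < 20000);
  ALTER TABLE menu_item ADD CONSTRAINT menu_item_number_chk
        CHECK (number >= 20000);

  -- with constraint_exclusion=on, this only scans the parent table,
  -- because the WHERE clause contradicts both CHECK constraints
  SELECT number, title FROM object
   WHERE number < 10000
   ORDER BY number DESC LIMIT 10;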


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] merge result sets

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 12:23:44PM +0200, simon wrote:
> or in other words, i just would like to know how to rewrite 
> 
> SET kategorie = array_to_string ((SELECT ARRAY (SELECT
> kategorie_bezeichnung
> 
> so it works in psql7.3 as well.

The aggregate stuff should work. something like:

SET kategorie = (SELECT comma_aggregate(kategorie_bezeichnung) FROM ...)

should do the trick.
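
For what it's worth, a minimal sketch of the full statement (the
"kategorien" table and the join column are made-up names, not taken from
your schema):

  -- hypothetical source table and join column, shown only to illustrate the call
  UPDATE studienmodul_summary
     SET kategorie = (SELECT comma(k.kategorie_bezeichnung)
                        FROM kategorien k
                       WHERE k.studienmodul_id = studienmodul_summary.studienmodul_id);

The aggregate receives every row the subselect produces and folds them
into one comma-separated string, so no array support is needed on 7.3.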

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] merge result sets

2006-06-21 Thread simon
On Mit, 2006-06-21 at 00:09 +0200, simon wrote:
> On Die, 2006-06-20 at 15:34 -0500, Bruno Wolff III wrote:
> > On Tue, Jun 20, 2006 at 12:06:24 +0200,
> >   simon <[EMAIL PROTECTED]> wrote:
> > > hi all
> > > 
> > > i'm using postgres 7.3
> > > 
> > > my problem is i want to build a helper table:
> > > 
> > > UPDATE studienmodul_summary 
> > >SET kategorie = (SELECT kategorie_bezeichnung
> > > 
> > > if the SELECT gives back just one result row, everything is easy. my
> > > problem is, there is sometimes more than one result row. 
> > > 
> > > is there a way to concatenate all result rows and insert them in one
> > > field?
> > 
> > Yes. You can have the subselect call a custom aggregate function that does
> > this for you. Sample code for doing this has been posted on the list
> > multiple times and you should be able to find it in the archives.
> 
> thanks for this hint, i didn't know about custom aggregate functions.
> i found comma_aggregate(text,text) and similar examples.
> unfortunately i didn't find something like comma_aggregate(SELECT...). 
> 
> is it possible to write an aggregate function that takes any number of
> result rows and makes one long string out of them?
> 
> it would be great if someone has done something like this before and is
> willing to share.
> but hints on where to find documentation and/or howtos about writing
> custom aggregate functions are also very welcome.
> 
> simon
> > 
or in other words, i just would like to know how to rewrite 

SET kategorie = array_to_string ((SELECT ARRAY (SELECT
kategorie_bezeichnung

so it works in psql7.3 as well.

simon


-- 
Simon Litwan   [EMAIL PROTECTED]
Wyona Inc.  -   Open Source Content Management   -   Apache Lenya
http://www.wyona.com  http://lenya.apache.org


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 11:56:27AM +0200, Alban Hertroys wrote:
> Say we take the query I posted:
>   "SELECT * FROM mm_posrel ORDER BY number DESC LIMIT 25;"
> and the knowledge that this table is inherited by two other tables, with 
> number being unique across them (though PostgreSQL probably doesn't know 
> about this).
> Can constraint exclusion determine that the last 25 number values do not 
> occur in some of the tables?

ISTM that what would really work well is some kind of "Merge Sort" node
that would work by having multiple subnodes which are already sorted
and merging them into one sorted list.

The planner would use this whenever it saw a query of the form:

SELECT * FROM a
UNION ALL
SELECT * FROM b
ORDER BY c;

It would push the ORDER BY down to the subqueries and then merge the
results. If the subqueries can be read efficiently sorted (via an index
for example) then you would get very quick output, especially if you
have a LIMIT clause.
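
Until such a node exists, the pushdown can be simulated by hand; a rough
sketch using the tables from the example above (assuming an index on c
in both a and b):

  SELECT * FROM (
      (SELECT * FROM a ORDER BY c LIMIT 25)
      UNION ALL
      (SELECT * FROM b ORDER BY c LIMIT 25)
  ) AS merged
  ORDER BY c
  LIMIT 25;

Each branch can produce its 25 rows straight from its index, so the outer
sort only has to order 50 rows instead of the whole union.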

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] DocBook 4.2 detecting at configure time

2006-06-21 Thread Oleg Golovanov

Of course I had already tried that, with the command:
SGML_CATALOG_FILES=/usr/local/share/sgml/docbook/4.2/docbook.cat 
./configure --prefix=/usr/local/pgsql --enable-depend --enable-nls 
--enable-integer-datetimes --with-openssl --with-pam 
--enable-thread-safety --with-includes=/usr/local/include 
--with-libraries=/usr/local/lib --with-perl --with-python --with-tcl 
--with-tclconfig=/usr/local/lib/tcl8.4 2>&1 > sci-pgsql.log


And got the following

configure:22300: checking for DocBook V4.2
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:308:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-amsa.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:312:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-amsb.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:316:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-amsc.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:320:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-amsn.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:324:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-amso.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:328:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-amsr.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:332:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-box.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:336:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-cyr1.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:340:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-cyr2.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:344:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-dia.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:348:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-grk1.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:352:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-grk2.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:356:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-grk3.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:360:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-grk4.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:364:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-lat1.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:368:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-lat2.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:372:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-num.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:376:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-pub.gml" (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:380:0:E: cannot open "/usr/local/share/sgml/docbook/4.2/iso-tech.gml" (No such file or directory)
configure:22329: result: no

Any other suggestions?

Gratefully yours

Oleg Golovanov
Equant LLC

Peter Eisentraut wrote:

Am Dienstag, 20. Juni 2006 09:29 schrieb Oleg Golovanov:
  

I have changed configure command. Currently I issue command:
SGML_CATALOG_FILES=/usr/local/share/xml/docbook/4.2/docbook.cat



Replace that by /usr/local/share/sgml/docbook/4.2/docbook.cat


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Alban Hertroys

Jim Nasby wrote:

On Jun 19, 2006, at 7:00 AM, Alban Hertroys wrote:


Now all we need to do is getting MMBase to do its queries like this :P



Probably a better bet would be going to 8.1 and using constraint  
elimination.


I searched the documentation, google and wikipedia for "constraint 
elimination", but couldn't find anything more specific than a reference 
to an O'Reilly conference about the subject.


Maybe you mean constraint exclusion?

If so, is that going to help excluding partitions (basically the same 
thing, it seems) from a query based on an ORDER BY and a LIMIT?


Say we take the query I posted:
"SELECT * FROM mm_posrel ORDER BY number DESC LIMIT 25;"
and the knowledge that this table is inherited by two other tables, with 
number being unique across them (though PostgreSQL probably doesn't know 
about this).
Can constraint exclusion determine that the last 25 number values do not 
occur in some of the tables?


This looks like an interesting solution; it could save us quite a bit of 
work if we manage to use it...


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] DocBook 4.2 detecting at configure time

2006-06-21 Thread Peter Eisentraut
Am Dienstag, 20. Juni 2006 09:29 schrieb Oleg Golovanov:
> I have changed configure command. Currently I issue command:
> SGML_CATALOG_FILES=/usr/local/share/xml/docbook/4.2/docbook.cat

Replace that by /usr/local/share/sgml/docbook/4.2/docbook.cat.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Computing transitive closure of a table

2006-06-21 Thread Gurjeet Singh

I have not been able to download the document for the last day and a
half... Can someone please forward a copy to me if you have one?

Thanks,
Gurjeet.

On 6/20/06, Chris Smith <[EMAIL PROTECTED]> wrote:

Thanks for everyone's suggestions.  I found the following, which at least
seems to meet my needs temporarily.

http://citeseer.ist.psu.edu/dong99maintaining.html

Should it turn out that this is not feasible to implement via triggers in
PostgreSQL, I may be back with more questions and seek out a route that
involves modifying the database or other such things.

--
Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Changing encoding of a database

2006-06-21 Thread Marco Bizzarri

Hi Tomi.

Thanks for your answer, I was not aware of such a tool.

The next question at this point is (of course): what is the problem if
I have blobs? Should I recode them as well?

Regards
Marco

On 6/20/06, Tomi NA <[EMAIL PROTECTED]> wrote:

On 6/19/06, Marco Bizzarri <[EMAIL PROTECTED]> wrote:
> Hi all.
>
> We've PostgreSQL database, with SQL_ASCII or LATIN1 encoding. We would
> like to migrate them to UNICODE. Is there some contributed/available
> script, or this is something we should do at hand?
>
> Regards
> Marco

If you don't have blobs in your database, dump it to insert
statements, use the recode tool to recode your data, create a new
database based on UTF8 and load the data.

t.n.a.




--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] unsubscribe

2006-06-21 Thread Jad Madi

unsubscribe

--
Jad madi
-
Blog: http://jadmadi.net/
-
Web standards Planet
http://W3planet.net/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] [ppa][PATCHES] Argument handling improvements

2006-06-21 Thread John Jawed

Below are links for the patch and binary (image) files for argument
handling improvements in ppa. You can view the original concept @
http://jawed.name/pgsql_soc. It provides support for both JS and
JS-deficient users.

I tried to keep it within the coding guidelines set forth.

I appreciate and look forward to any sort of feedback; as this is a
SoC project, suggestions are welcome.

http://jawed.name/pgsql_soc/jsargs.patch
http://jawed.name/pgsql_soc/jsargs.tar.gz

Regards,
John

---(end of broadcast)---
TIP 6: explain analyze is your friend