[GENERAL] [ppa][PATCHES] Argument handling improvements

2006-06-21 Thread John Jawed

Below are links for the patch and binary (image) files for argument
handling improvements in ppa. You can view the original concept @
http://jawed.name/pgsql_soc. It provides support for both JS and
JS-deficient users.

I tried to keep it within the coding guidelines set forth.

I appreciate and look forward to any sort of feedback; as this is a
SoC project, suggestions are welcome.

http://jawed.name/pgsql_soc/jsargs.patch
http://jawed.name/pgsql_soc/jsargs.tar.gz

Regards,
John

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Changing encoding of a database

2006-06-21 Thread Marco Bizzarri

Hi Tomi.

Thanks for your answer, I was not aware of such a tool.

The next question at this point is (of course): what is the problem if
I have blobs? Should I recode them as well?

Regards
Marco

On 6/20/06, Tomi NA [EMAIL PROTECTED] wrote:

On 6/19/06, Marco Bizzarri [EMAIL PROTECTED] wrote:
 Hi all.

 We have PostgreSQL databases with SQL_ASCII or LATIN1 encoding. We would
 like to migrate them to UNICODE. Is there some contributed/available
 script, or is this something we should do by hand?

 Regards
 Marco

If you don't have blobs in your database, dump it to insert
statements, use the recode tool to recode your data, create a new
database based on UTF8 and load the data.

t.n.a.




--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Computing transitive closure of a table

2006-06-21 Thread Gurjeet Singh

I have not been able to download the document for the last day and a
half... Can someone please forward a copy to me if you have one?

Thanks,
Gurjeet.

On 6/20/06, Chris Smith [EMAIL PROTECTED] wrote:

Thanks for everyone's suggestions.  I found the following, which at least
seems to meet my needs temporarily.

http://citeseer.ist.psu.edu/dong99maintaining.html

Should it turn out that this is not feasible to implement via triggers in
PostgreSQL, I may be back with more questions and seek out a route that
involves modifying the database or other such things.

--
Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] DocBook 4.2 detecting at configure time

2006-06-21 Thread Peter Eisentraut
On Tuesday, 20 June 2006 09:29, Oleg Golovanov wrote:
 I have changed the configure command. Currently I issue:
 SGML_CATALOG_FILES=/usr/local/share/xml/docbook/4.2/docbook.cat

Replace that by /usr/local/share/sgml/docbook/4.2/docbook.cat.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Alban Hertroys

Jim Nasby wrote:

On Jun 19, 2006, at 7:00 AM, Alban Hertroys wrote:


Now all we need to do is get MMBase to do its queries like this :P



Probably a better bet would be going to 8.1 and using constraint  
elimination.


I searched the documentation, google and wikipedia for constraint 
elimination, but couldn't find anything more specific than a reference 
to an O'Reilly conference about the subject.


Maybe you mean constraint exclusion?

If so, is that going to help excluding partitions (basically the same 
thing, it seems) from a query based on an ORDER BY and a LIMIT?


Say we take the query I posted:
SELECT * FROM mm_posrel ORDER BY number DESC LIMIT 25;
and the knowledge that this table is inherited by two other tables, with 
number being unique across them (though PostgreSQL probably doesn't know 
about this).
Can constraint exclusion determine that the last 25 number values do not 
occur in some of the tables?


This looks like an interesting solution; it could save us quite a bit of 
work if we manage to use this...


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] DocBook 4.2 detecting at configure time

2006-06-21 Thread Oleg Golovanov

Of course I had already tried that - with the command:
SGML_CATALOG_FILES=/usr/local/share/sgml/docbook/4.2/docbook.cat 
./configure --prefix=/usr/local/pgsql --enable-depend --enable-nls 
--enable-integer-datetimes --with-openssl --with-pam 
--enable-thread-safety --with-includes=/usr/local/include 
--with-libraries=/usr/local/lib --with-perl --with-python --with-tcl 
--with-tclconfig=/usr/local/lib/tcl8.4 2>&1 > sci-pgsql.log


And got the following

configure:22300: checking for DocBook V4.2
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:308:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-amsa.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:312:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-amsb.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:316:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-amsc.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:320:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-amsn.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:324:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-amso.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:328:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-amsr.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:332:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-box.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:336:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-cyr1.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:340:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-cyr2.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:344:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-dia.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:348:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-grk1.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:352:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-grk2.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:356:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-grk3.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:360:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-grk4.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:364:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-lat1.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:368:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-lat2.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:372:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-num.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:376:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-pub.gml (No such file or directory)
onsgmls:/usr/local/share/sgml/docbook/4.2/dbcentx.mod:380:0:E: cannot open /usr/local/share/sgml/docbook/4.2/iso-tech.gml (No such file or directory)
configure:22329: result: no

Any other suggestions?

Gratefully yours

Oleg Golovanov
Equant LLC

Peter Eisentraut wrote:

On Tuesday, 20 June 2006 09:29, Oleg Golovanov wrote:
 I have changed the configure command. Currently I issue:
 SGML_CATALOG_FILES=/usr/local/share/xml/docbook/4.2/docbook.cat

Replace that by /usr/local/share/sgml/docbook/4.2/docbook.cat


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 11:56:27AM +0200, Alban Hertroys wrote:
 Say we take the query I posted:
   SELECT * FROM mm_posrel ORDER BY number DESC LIMIT 25;
 and the knowledge that this table is inherited by two other tables, with 
 number being unique across them (though PostgreSQL probably doesn't know 
 about this).
 Can constraint exclusion determine that the last 25 number values do not 
 occur in some of the tables?

ISTM that what would really work well is some kind of Merge Sort node
that would work by having multiple subnodes which are already sorted
and merging them into one sorted list.

The planner would use this whenever it saw a query of the form:

SELECT * FROM a
UNION ALL
SELECT * FROM b
ORDER BY c;

It would push the ORDER BY down to the subqueries and then merge the
results. If the subqueries can be read efficiently sorted (via an index
for example) then you would get very quick output, especially if you
have a LIMIT clause.
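
For illustration, a hand-written version of that rewrite against the tables
a and b and the sort column c from the example above (a sketch of the manual
workaround, not the proposed planner node); assuming c is indexed in both
tables, each branch can produce its top rows cheaply and the outer sort only
has to order at most 50 rows:

SELECT * FROM (
    (SELECT * FROM a ORDER BY c LIMIT 25)
    UNION ALL
    (SELECT * FROM b ORDER BY c LIMIT 25)
) AS merged                       -- merged is just a required subquery alias
ORDER BY c
LIMIT 25;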

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] merge result sets

2006-06-21 Thread simon
On Mit, 2006-06-21 at 00:09 +0200, simon wrote:
 On Die, 2006-06-20 at 15:34 -0500, Bruno Wolff III wrote:
  On Tue, Jun 20, 2006 at 12:06:24 +0200,
simon [EMAIL PROTECTED] wrote:
   hi all
   
   i'm using postgres 7.3
   
   my problem is i want to build a helper table:
   
   UPDATE studienmodul_summary 
  SET kategorie = (SELECT kategorie_bezeichnung
   
   if the SELECT gives back just one result row, everything is easy. my
   problem is, there is sometimes more than one result row. 
   
   is there a way to concatenate all result rows and insert them in one
   field?
  
  Yes. You can have the subselect call a custom aggregate function that does
  this for you. Sample code for doing this has been posted on the list
  multiple times and you should be able to find it in the archives.
 
 thanks for this hint, i didn't know about the custom aggregate function.
 i found comma_aggregate(text,text) and similar examples.
 unfortunately i didn't find something like comma_aggregate(SELECT...). 
 
 is it possible to write an aggregate function that takes any number of
 result rows and makes one long string out of them?
 
 it would be great if someone has done something like this before and is
 willing to share.
 but hints on where to find docs and/or howtos about writing
 custom aggregate functions are also very welcome.
 
 simon
  
or in other words, i just would like to know how to rewrite 

SET kategorie = array_to_string ((SELECT ARRAY (SELECT
kategorie_bezeichnung

so it works in psql7.3 as well.

simon


-- 
Simon Litwan   [EMAIL PROTECTED]
Wyona Inc.  -   Open Source Content Management   -   Apache Lenya
http://www.wyona.com  http://lenya.apache.org


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] merge result sets

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 12:23:44PM +0200, simon wrote:
 or in other words, i just would like to know how to rewrite 
 
 SET kategorie = array_to_string ((SELECT ARRAY (SELECT
 kategorie_bezeichnung
 
 so it works in psql7.3 as well.

The aggregate stuff should work. something like:

SET kategorie = (SELECT comma_aggregate(kategorie_bezeichnung) FROM ...)

should do the trick.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Alban Hertroys

Alban Hertroys wrote:

Jim Nasby wrote:
Probably a better bet would be going to 8.1 and using constraint  
elimination.



Maybe you mean constraint exclusion?

If so, is that going to help excluding partitions (basically the same 
thing, it seems) from a query based on an ORDER BY and a LIMIT?


Say we take the query I posted:
SELECT * FROM mm_posrel ORDER BY number DESC LIMIT 25;
and the knowledge that this table is inherited by two other tables, with 
number being unique across them (though PostgreSQL probably doesn't know 
about this).
Can constraint exclusion determine that the last 25 number values do not 
occur in some of the tables?


I did some experiments on my PostgreSQL 8.1 server at home (gotta love 
UNIX & SSH), with the following setup:

                     Table "public.object"
 Column |  Type   |                        Modifiers
--------+---------+----------------------------------------------------------
 number | integer | not null default nextval('object_number_seq'::regclass)
 title  | text    | not null
Indexes:
    "object_pkey" PRIMARY KEY, btree (number)

                     Table "public.content"
 Column  |  Type   |                        Modifiers
---------+---------+----------------------------------------------------------
 number  | integer | not null default nextval('object_number_seq'::regclass)
 title   | text    | not null
 summary | text    | not null
 body    | text    | not null
Inherits: object

                     Table "public.menu_item"
 Column |  Type   |                        Modifiers
--------+---------+----------------------------------------------------------
 number | integer | not null default nextval('object_number_seq'::regclass)
 title  | text    | not null
 pos    | integer | not null default 1
Inherits: object

I inserted a few records into object (30, IIRC) and did:

 SET constraint_exclusion=on;
 explain analyze select number, title from object order by number desc 
limit 10;
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=131.34..131.37 rows=10 width=36) (actual time=0.335..0.358 rows=10 loops=1)
   ->  Sort  (cost=131.34..135.67 rows=1730 width=36) (actual time=0.331..0.338 rows=10 loops=1)
         Sort Key: public.object.number
         ->  Result  (cost=0.00..38.30 rows=1730 width=36) (actual time=0.097..0.248 rows=30 loops=1)
               ->  Append  (cost=0.00..38.30 rows=1730 width=36) (actual time=0.091..0.184 rows=30 loops=1)
                     ->  Seq Scan on object  (cost=0.00..1.30 rows=30 width=12) (actual time=0.090..0.129 rows=30 loops=1)
                     ->  Seq Scan on menu_item object  (cost=0.00..21.00 rows=1100 width=36) (actual time=0.001..0.001 rows=0 loops=1)
                     ->  Seq Scan on content object  (cost=0.00..16.00 rows=600 width=36) (actual time=0.001..0.001 rows=0 loops=1)
 Total runtime: 0.446 ms
(9 rows)

As you can see, it still scans the empty tables menu_item and content. 
So I'm afraid this is no solution to our problem... :(
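
For reference, a minimal sketch of the case that 8.1's constraint exclusion
does handle: it can skip a child only when a constant WHERE predicate
contradicts a CHECK constraint on that child, which is why the ORDER BY /
LIMIT query above (with no WHERE clause) still scans every child. The CHECK
constraints and the number ranges below are hypothetical, not part of the
schema above:

SET constraint_exclusion = on;

-- hypothetical partitioning constraints, for illustration only
ALTER TABLE menu_item ADD CONSTRAINT menu_item_number_chk CHECK (number < 1000);
ALTER TABLE content   ADD CONSTRAINT content_number_chk   CHECK (number >= 1000);

-- the predicate contradicts menu_item's CHECK, so that child can be skipped
EXPLAIN SELECT number, title FROM object WHERE number >= 1000;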


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] merge result sets

2006-06-21 Thread simon
On Mit, 2006-06-21 at 12:34 +0200, Martijn van Oosterhout wrote:
 On Wed, Jun 21, 2006 at 12:23:44PM +0200, simon wrote:
  or in other words, i just would like to know how to rewrite 
  
  SET kategorie = array_to_string ((SELECT ARRAY (SELECT
  kategorie_bezeichnung
  
  so it works in psql7.3 as well.
 
 The aggregate stuff should work. something like:
 
 SET kategorie = (SELECT comma_aggregate(kategorie_bezeichnung) FROM ...)
 
 should do the trick.
i just found 

CREATE FUNCTION comma_aggregate(text,text) RETURNS text AS ' SELECT CASE
WHEN $1 <> '''' THEN $1 || '', '' || $2 ELSE $2 END; ' LANGUAGE sql
IMMUTABLE STRICT; CREATE AGGREGATE comma (basetype=text,
sfunc=comma_aggregate, stype=text, initcond='' );

and this didn't work with the above mentioned query.

i actually never found any docs about how to write a custom function
which takes a whole result set no matter how many rows.

thanks anyway
simon



 
 Have a nice day,
-- 
Simon Litwan   [EMAIL PROTECTED]
Wyona Inc.  -   Open Source Content Management   -   Apache Lenya
http://www.wyona.com  http://lenya.apache.org


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread Kenneth Downs

Richard Huxton wrote:


Kenneth Downs wrote:

AFAIK it has always been the case that you should expect to have to 
dump out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden 
this is?  Is anyone considering doing away with it?



Far from trivial. 


Kind of gets to the heart of things, though, doesn't it?

It's the non-trivial stuff where we look to the machine to help us out. 

As a user of PostgreSQL, I benefit from a lot of things.  I gain a total 
advantage of X units of time/money.  Then it's time to upgrade and I 
have to give a lot of it back.  The more I use the package, the more 
non-trivial my upgrade is, and the more I give back. 

Regardless of whether a package is commercial or free, it strikes me as 
counter to the very soul of programming to build in a burden that 
increases with the user's use of the program, threatening even to tip 
the balance altogether away from its use.  This seems to be the very 
kind of feature that you want to programmatically control precisely 
because it is non-trivial.



You have changes in on-disk formats and actual functionality between 
major version numbers. For instance - what would you do to deal with 
the recent changes in unicode validation?




begin:vcard
fn:Kenneth  Downs
n:Downs;Kenneth 
email;internet:[EMAIL PROTECTED]
tel;work:631-689-7200
tel;fax:631-689-0527
tel;cell:631-379-0010
x-mozilla-html:FALSE
version:2.1
end:vcard


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] merge result sets

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 02:08:29PM +0200, simon wrote:
  The aggregate stuff should work. something like:
  
  SET kategorie = (SELECT comma_aggregate(kategorie_bezeichnung) FROM ...)
  
  should do the trick.
 i just found 
 
 CREATE FUNCTION comma_aggregate(text,text) RETURNS text AS ' SELECT CASE
 WHEN $1 <> '''' THEN $1 || '', '' || $2 ELSE $2 END; ' LANGUAGE sql
 IMMUTABLE STRICT; CREATE AGGREGATE comma (basetype=text,
 sfunc=comma_aggregate, stype=text, initcond='' );
 
 and this didn't work with the above mentioned query.
 
 i actually never found any docs about how to write a custom function
 which takes a whole result set no matter how many rows.

Then you haven't looked very hard:

http://www.postgresql.org/docs/current/interactive/sql-createaggregate.html

You created an aggregate called comma so that's how you should call
it. This is really no different from the SQL standard min(), max() and
sum() functions.

SET kategorie = (SELECT comma(kategorie_bezeichnung) FROM ...)
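
Putting the pieces of this thread together, a minimal sketch: the function
body is the commonly posted comma_aggregate, while the kategorie lookup table
and its modul_id join column are hypothetical stand-ins, not simon's actual
schema:

CREATE FUNCTION comma_aggregate(text, text) RETURNS text AS '
  SELECT CASE WHEN $1 <> '''' THEN $1 || '', '' || $2 ELSE $2 END;
' LANGUAGE sql IMMUTABLE STRICT;

CREATE AGGREGATE comma (basetype = text, sfunc = comma_aggregate,
                        stype = text, initcond = '');

-- hypothetical source table and join column, for illustration only
UPDATE studienmodul_summary
   SET kategorie = (SELECT comma(k.kategorie_bezeichnung)
                      FROM kategorie k
                     WHERE k.modul_id = studienmodul_summary.modul_id);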

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 08:10:20AM -0400, Kenneth Downs wrote:
 Regardless of whether a package is commercial or free, it strikes me as 
 counter to the very soul of programming to build in a burden that 
 increases with the user's use of the program, threatening even to tip 
 the balance altogether away from its use.  This seems to be the very 
 kind of feature that you want to programmatically control precisely 
 because it is non-trivial.

That doesn't change the fact that it's a really hard problem. In-place
upgrades would require lots of safety checks because otherwise you
might end up with a cluster that's not readable by any version.

OTOH, you have something like slony which you can use to upgrade to
newer versions without any downtime at all. With a solution like that
working right now, why would people spend effort on making in-place
upgrades work?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] minimizing downtime when upgrading

2006-06-21 Thread H.J. Sanders



  Is anybody over at the dev team considering what an onerous burden 
  this is?  Is anyone considering doing away with it?

Just my 2 cents:

more and more databases have to run 24 * 7, so something has to be done.

For the last 15 years we have also used Informix, and we never, never had to
unload/reload the database because of an upgrade.

Perhaps somebody knows how they do the trick?

Regards

Henk Sanders


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] GPL Licensed Files in 8.1.4

2006-06-21 Thread Jan Wieck

On 6/7/2006 4:34 PM, Bruce Momjian wrote:

Tom Lane wrote:

Andrew Sullivan [EMAIL PROTECTED] writes:
 At the same time, it strikes me that at least the userlock stuff, and
 maybe dbmirror as well, are candidates for pgfoundry rather than
 contrib/

We'd already agreed to move dbmirror to pgfoundry, but it just didn't
get done for 8.1.  I had not thought of pgfoundry as a reasonable
solution for userlock, but maybe that's the best thing to do with it.

A better idea would be to contact the module authors and get them to
relicense, but that might be hard.  Dal Zotto at least hasn't been
seen on these lists for a long time :-(


Here is the most recent feedback we have from Massimo:

 http://archives.postgresql.org/pgsql-hackers/2001-08/msg01001.php

  Regarding the licencing of the code, I always release my code under GPL,
  which is the licence I prefer, but my code in the backend is obviously
  released under the original postgres licence. Since the module is loaded
  dynamically and not linked into the backend I don't see a problem here.
  If the licence becomes a problem I can easily change it, but I prefer the
  GPL if possible.



Which means thus far he did not agree to the license change. Can we just 
move the stuff over to pgfoundry and be done with it?



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] merge result sets

2006-06-21 Thread simon
On Mit, 2006-06-21 at 14:16 +0200, Martijn van Oosterhout wrote:
 On Wed, Jun 21, 2006 at 02:08:29PM +0200, simon wrote:
   The aggregate stuff should work. something like:
   
   SET kategorie = (SELECT comma_aggregate(kategorie_bezeichnung) FROM ...)
   
   should do the trick.
  i just found 
  
  CREATE FUNCTION comma_aggregate(text,text) RETURNS text AS ' SELECT CASE
  WHEN $1 <> '''' THEN $1 || '', '' || $2 ELSE $2 END; ' LANGUAGE sql
  IMMUTABLE STRICT; CREATE AGGREGATE comma (basetype=text,
  sfunc=comma_aggregate, stype=text, initcond='' );
  
  and this didn't work with the above mentioned query.
  
  i actually never found any docs about how to write a custom function
  which takes a whole result set no matter how many rows.
 
 Then you havn't looked very hard:
 
 http://www.postgresql.org/docs/current/interactive/sql-createaggregate.html
 
 You created an aggregate called comma so that's how you should call
 it. This is really no different from the SQL standard min(), max() and
 sum() functions.
 
 SET kategorie = (SELECT comma(kategorie_bezeichnung) FROM ...)
 
 Have a nice day,

thanks very much. you're right, i should have read the docs harder. but
now everything works fine.
you made my day.

simon

-- 
Simon Litwan   [EMAIL PROTECTED]
Wyona Inc.  -   Open Source Content Management   -   Apache Lenya
http://www.wyona.com  http://lenya.apache.org


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Adding foreign key constraints without integrity

2006-06-21 Thread Wes
On 6/20/06 8:17 PM, Florian G. Pflug [EMAIL PROTECTED] wrote:

 Maybe you could ask at some postgresql support companies how much effort it
 would be to add a without check flag to alter table add constraint foreign
 key, and how much they'd charge for it...

Or if I get ambitious, dig into the code myself if I can figure out where to
start...

Wes



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Alban Hertroys

Martijn van Oosterhout wrote:

ISTM that what would really work well is some kind of Merge Sort node
that would work by having multiple subnodes which are already sorted
and merging them into one sorted list.


Would... So this isn't available yet?


The planner would use this whenever it saw a query of the form:

SELECT * FROM a
UNION ALL
SELECT * FROM b
ORDER BY c;

It would push the ORDER BY down to the subqueries and then merge the
results. If the subqueries can be read efficiently sorted (via an index
for example) then you would get very quick output, especially if you
have a LIMIT clause.


I just realized that OFFSET kind of complicates the problem.

If PostgreSQL were to handle this (for inheritance as well, I hope), it'd 
need to keep track of how many records came from which tables to set the 
offsets in the subqueries appropriately, which of course depends on the 
previous query... Well, I said it complicates things...


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Dynamic loading of C functions

2006-06-21 Thread Jasbinder Bali
I've tried everything so that my .so file is recognized, but in vain.
Don't know what's going wrong.

~Jas

On 6/20/06, Tom Lane [EMAIL PROTECTED] wrote:

Bill Moran [EMAIL PROTECTED] writes:
 In response to Jasbinder Bali [EMAIL PROTECTED]:
 I get the following error
 ERROR: could not access file /usr/include/pgsql/server/test_func: No such
 file or directory

 Check the permissions. Can the Postgres user read the file?

The error is pretty clearly file not found, not no permissions.
One possibility is that the complaint is not about this file itself
but about some other shared library it depends on. Try ldd or
local equivalent on the file to see if it shows any unresolved
references. Also, you might try looking in the postmaster log to see if
any additional info appears there --- anything the dynamic linker spit
out to stderr is not going to appear on your terminal.

regards, tom lane


[GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Jasbinder Bali
Hi,
I raised this problem yesterday as well. I'm badly stuck at this point.
The problem is as follows:

I have a C function that I want to use in my postgres function.
I adopt the following steps to do that:

--- compile the C file as follows:
    gcc -shared -o test_func.so test_func.c
    (test_func.c is the name of the C file)

--- the name of the function that I want to use from this C file is called 'command'

--- the Postgres function is written as follows:

CREATE FUNCTION command(integer) RETURNS integer
  AS 'usr/include/pgsql/server/test_func', 'command'
  LANGUAGE C STRICT;

When I try to run this function, it always gives me the following error:

ERROR: could not access file usr/include/pgsql/server/test_func: No such file or directory

I tried changing the permissions of the file to 666 and even tried it with 755, but in vain.
I checked the log file but it just prints the above error and doesn't give me any more information.
I have no clue why postgres is not reading the test_func object file.

Any kind of help would be appreciated.

Thanks,
~Jas


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Joe Conway

Jasbinder Bali wrote:

CREATE FUNCTION command(integer) RETURNS integer
 AS 'usr/include/pgsql/server/test_func', 'command'
 LANGUAGE C STRICT;

when i try to run this function, it always gives me the following error:

ERROR:  could not access file usr/include/pgsql/server/test_func: No 
such file or directory


Should 'usr/include/pgsql/server/test_func' actually be 
'/usr/include/pgsql/server/test_func'?


Note the leading '/'

HTH,

Joe

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Jasbinder Bali
Yes, that helped. I was missing that leading '/'.

Now the error is different. It cries something on the permissions.

ERROR: could not load library /usr/include/pgsql/server/test.so:
/usr/include/pgsql/server/test.so: failed to map segment from shared
object: Permission denied

Can you comment on this?

Thanks,
~Jas

On 6/21/06, Joe Conway [EMAIL PROTECTED] wrote:

Jasbinder Bali wrote:
 CREATE FUNCTION command(integer) RETURNS integer
   AS 'usr/include/pgsql/server/test_func', 'command'
   LANGUAGE C STRICT;

 when i try to run this function, it always gives me the following error:

 ERROR: could not access file usr/include/pgsql/server/test_func: No
 such file or directory

Should 'usr/include/pgsql/server/test_func' actually be
'/usr/include/pgsql/server/test_func'?

Note the leading '/'.

HTH,
Joe


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Steve Atkins


On Jun 21, 2006, at 9:42 AM, Jasbinder Bali wrote:


Hi,
I raised this problem yesterday as well. I'm badly stuck at this point.
The problem is as follows:

I have a C function that i want to use in my postgres function.
I adopt the following steps to do that.

--- compile the C file as follows
 gcc -shared -o test_func.so test_func.c
 test_func.c is the name of the C file

--- the name of the function that i want to use from this c file is  
called 'command'


--- Postgres function is written as follows:

CREATE FUNCTION command(integer) RETURNS integer
 AS 'usr/include/pgsql/server/test_func', 'command'
 LANGUAGE C STRICT;

when i try to run this function, it always gives me the following error:

ERROR:  could not access file usr/include/pgsql/server/test_func:  
No such file or directory



I tried changing the permissions of the file to 666 and even tried it  
with 755, but in vain.


I checked the log file but it just prints the above error and  
doesn't give me any more information.


I have no clue why postgres is not reading the test_func object file.

Any kind of help would be appreciated



IIRC the path name is relative to... dynamic_library_path and pwd, first
as given, then with .so appended.

Unless you've set one of those to / then 'usr/include/pgsql/server/test_func'
is never going to resolve to where you want it to.

If you really want to keep it where it is, try using the correct
absolute filename.
Better, though, would be to use ... AS '$libdir/test_func.so' ... and
put the library wherever pg_config --pkglibdir says - probably
/usr/local/pgsql/lib.
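
A minimal sketch of what that ends up looking like, reusing the function name
from this thread and assuming test_func.so has already been copied into the
directory reported by pg_config --pkglibdir:

-- assumes test_func.so now lives in the pg_config --pkglibdir directory
CREATE FUNCTION command(integer) RETURNS integer
    AS '$libdir/test_func', 'command'
    LANGUAGE C STRICT;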

Cheers,
  Steve


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] multiple statement 'instead of' rule

2006-06-21 Thread Merlin Moncure

On 6/20/06, Tom Lane [EMAIL PROTECTED] wrote:

Merlin Moncure [EMAIL PROTECTED] writes:
 create or replace rule insert_fsv as on insert to frequency_service_view
   do instead
   (
 insert into frequency_operation
   select new.table_name, new.frequency, old.code where new.set =
 true and old.set = false;
 delete from frequency_operation
   where table_name = old.table_name and frequency_operation.code =
 old.code and
 frequency_operation.frequency = new.frequency and new.set = false;
 update operation
   set code = new.code where code = old.code and old.code != new.code;
   );

What is frequency_service_view?  Is it by any chance dependent on
frequency_operation?  If so, your changes to frequency_operation will
affect the behavior of OLD references.


right, actually that was a typo, was supposed to be 'create or replace
rule insert_fov as on insert to frequency_operation_view'.  I was
considering that old/new are invalid which is fine, but the problem is
in some cases the third (and sometimes the second) query never fires at
all with any arguments.  I confirmed this by inserting into a log
table in between the rule queries (they never fired either).

I can prepare a test case if you think it's worth it.

Merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Joe Conway

Jasbinder Bali wrote:

Now the error is different. It cries something on the permissions.

ERROR:  could not load library /usr/include/pgsql/server/test.so: 
/usr/include/pgsql/server/test.so: failed to map segment from shared 
object: Permission denied


Can you comment on this?


What does
  ls -l /usr/include/pgsql/server/test.so
show?

Does the postgres user (or whomever postgres is running as) have the 
ability to read the file?


Joe

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Jasbinder Bali
Well, as of now my postgres is running on a trusted connection that I've
specified in the pg_hba.conf file.

ls -l /usr/include/pgsql/server/test.so shows the following

-rw-rw-rw- 1 root root 4620 Jun 21 12:00 /usr/include/pgsql/server/test.so

This means that the owner of this .so file is root and group is root.

Thanks
~Jas

On 6/21/06, Joe Conway [EMAIL PROTECTED] wrote:

Jasbinder Bali wrote:
 Now the error is different. It cries something on the permissions.
 ERROR: could not load library /usr/include/pgsql/server/test.so:
 /usr/include/pgsql/server/test.so: failed to map segment from shared
 object: Permission denied
 Can you comment on this?

What does
  ls -l /usr/include/pgsql/server/test.so
show?

Does the postgres user (or whomever postgres is running as) have the
ability to read the file?

Joe



Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Jasbinder Bali
I've disabled my SELinux and now postgres is able to access the shared
library, i.e. the test.so file.
Don't know if that's the right way to do it or not.
PS: I'm using Fedora Core 2 OS.

Thanks,
~Jas

On 6/21/06, Joe Conway [EMAIL PROTECTED] wrote:

Jasbinder Bali wrote:
 Now the error is different. It cries something on the permissions.
 ERROR: could not load library /usr/include/pgsql/server/test.so:
 /usr/include/pgsql/server/test.so: failed to map segment from shared
 object: Permission denied
 Can you comment on this?

What does
  ls -l /usr/include/pgsql/server/test.so
show?

Does the postgres user (or whomever postgres is running as) have the
ability to read the file?

Joe


Re: [GENERAL] Dynamic loading of C functions: Badly stuck

2006-06-21 Thread Tom Lane
Jasbinder Bali [EMAIL PROTECTED] writes:
 I've disabled my SELinux and now postgres is being able to access the shared
 library i.e test.so file.
 Don't know if thats the right way to do it or not.

It's not.  Almost certainly, SELinux is keying the rejection off the
fact that you have the .so file in the wrong place, ie, not a place that
postgres is supposed to be reading executables from.

Put it in $libdir and everything will be much better.  (You might also
need to run restorecon on it, not sure.)  pg_config --pkglibdir will
tell you where that is.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] MS-SQL2PostgreSQL

2006-06-21 Thread Hrishikesh Deshmukh
Dear All,

I have an MS-SQL script that I would like to run in PostgreSQL. Is there an
MS-SQL2PostgreSQL converter available, or does one have to go troubleshooting
line by line, changing data types etc., one line at a time?

Thanks,
Hrishi


[GENERAL] performance tips please

2006-06-21 Thread Hugo
Hi, I am testing my application and DB (postgres 8.1.4) on a DELL PowerEdge
1800 with 2 Xeon 3.2 GHz CPUs, 2 GB RAM and 2 SCSI disks of 149 GB each.
The trouble is that the same application and DB (postgres 8.0.4) runs on a
DELL Pentium 3 with 526 MB of RAM and a 20 GB IDE disk, and comparing the
performance of both, I get only 20-30 seconds faster responses on the new
server. Where can I start looking to find out why the second server is
performing this way?

Any advice is very appreciated.

Thanks in advance,
Hugo


Re: [GENERAL] performance tips please

2006-06-21 Thread Tomi NA

On 6/21/06, Hugo [EMAIL PROTECTED] wrote:

Hi, I am testing my application and DB (postgres 8.1.4) on a
DELL PowerEdge 1800 with 2 Xeon 3.2 GHz CPUs, 2 GB RAM and 2 SCSI disks of
149 GB each.
The trouble is that the same application and DB (postgres 8.0.4) runs on a
DELL Pentium 3 with 526 MB of RAM and a 20 GB IDE disk,

and comparing the performance of both, I get only 20-30 seconds faster
responses on the new server. Where can I start looking to find out why
the second server is performing this way?

any advice is very appreciated,

thanks in advance

Hugo



I assume the task takes longer than 5 seconds to complete?
Are you running the same OS? Can pgsql distribute the load on both
Xeon processors? Is pgsql custom compiled for a specific architecture
(Pentium III, for example)? How do you measure the response time? Are
there other apps involved?

t.n.a.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Fwd: [GENERAL] performance tips please

2006-06-21 Thread Hugo
 I assume the task takes longer than 5 seconds to complete?
 Are you running the same OS? Can pgsql distribute the load on both
 Xeon processors? Is pgsql custom compiled for a specific architecture
 (Pentium III, for example)? How do you measure the response time? Are
 there other apps involved?

 t.n.a.

The DELL PIII box runs FC4; the config is the default one.
The DELL Xeon runs SUSE 10.0; the config has some changes to improve
autovacuum, that's all.
I just downloaded the source and followed the typical install instructions
on both machines.
The application takes the start and end time of the query; mainly it is a
reporting application running on a WinXP client.

thanks
Hugo




Re: [GENERAL] MS-SQL2PostgreSQL

2006-06-21 Thread Chris

Hrishikesh Deshmukh wrote:

Dear All,

I have an MS-SQL script that I would like to run in 
postgresql. Is there an MS-SQL2PostgreSQL converter available, or does one 
have to go troubleshooting line by line, changing data types etc., one line 
at a time?


Something on this page might help you:

http://www.postgresql.org/docs/techdocs.3

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] SQL query question

2006-06-21 Thread Gurjeet Singh

   Another way is to use correlated subqueries (referencing the outer
query's columns inside a subquery; hope this feature is supported):

select  *
fromFileVersionHistory H1
where   modificationDate = ( select  max(modificationDate)
fromFileVersionHistory H2
where   H2.filename = H1.filename
  );

   And if you suspect that some different versions of a file might
have the same date, then you should add DISTINCT to 'select *', or else
you'll get duplicates in the result.

Regards,
Gurjeet.


On 6/18/06, Bruno Wolff III [EMAIL PROTECTED] wrote:

On Sat, Jun 17, 2006 at 16:50:59 -0700,
  Kevin Jenkins [EMAIL PROTECTED] wrote:
  For example

 filename    date        revision
 file1       10/05/06    1
 file1       10/05/07    2
 file2       10/05/08    1

 I want to do a query that will return the greatest date for each
 unique filename

If the revisions for a filename are guaranteed to be ordered by date, then
another alternative for you would be:

SELECT filename, max(modificationDate), max(revision)
  FROM FileVersionHistory
  GROUP BY filename
;

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] A slow query - Help please?

2006-06-21 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 04:12:08PM +0200, Alban Hertroys wrote:
 Martijn van Oosterhout wrote:
 ISTM that what would really work well is some kind of Merge Sort node
 that would work by having multiple subnodes which are already sorted
 and merging them into one sorted list.
 
 Would... So this isn't available yet?

Not AFAIK.

 It would push the ORDER BY down to the subqueries and then merge the
 results. If the subqueries can be read efficiently sorted (via an index
 for example) then you would get very quick output, especially if you
 have a LIMIT clause.
 
 I just realized that OFFSET kind of complicates the problem.
 
 If PostgreSQL would handle this (for inheritance as well, I hope), it'd 
 need to keep track of how many records came from which tables to set the 
 offsets in the subqueries appropriately, which of course depends on the 
 previous query... Well, I said it complicates things...

OFFSET is not a problem at all. It's just code for throw away the first N
rows. Once you have the above node type, the executor would simply
throw away some rows, whichever table they came from.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-21 Thread Qingqing Zhou


On Wed, 21 Jun 2006, Relyea, Mike wrote:

 ExecutorState: 550339936 total in 123 blocks; 195003544 free (740135
 chunks); 355336392 used
 HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80
 chunks); 290485792 used
 TIDBitmap: 2088960 total in 8 blocks; 924720 free (27 chunks); 1164240
 used
 HashBatchContext: 41877664 total in 11 blocks; 4845648 free (15 chunks);
 37032016 used
 TIDBitmap: 2088960 total in 8 blocks; 1012120 free (27 chunks); 1076840
 used

The same problem. ExecutorState uses much more memory than we expect --
but not sure where they are from :-(

Regards,
Qingqing

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread John Tregea

Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At the 
time I perform the INSERT command I need to retrieve the value of the 
serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread John Tregea
Sorry, I just realised this should have gone to the SQL list... (Bloody 
newbies) :-[


John Tregea wrote:

Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At 
the time I perform the INSERT command I need to retrieve the value of 
the serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread Tim Allen

John Tregea wrote:

Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At the 
time I perform the INSERT command I need to retrieve the value of the 
serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea


It's not supported now; however, it has been discussed several times, and 
there is a TODO entry for it at


http://www.postgresql.org/docs/faqs.TODO.html

using syntax along the lines of INSERT ... RETURNING ...

Search for the word returning in the todo list and you'll find the 
entry. Your options include waiting for someone to make it happen (no 
telling how long that will be), or helping to make it happen (for which 
we would all thank you :-) ). In the meantime you'll have to work around 
it, as you suggested.
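
Until INSERT ... RETURNING exists, a common workaround is to read the
sequence in the same session right after the INSERT; currval() is
session-local, so it is safe under concurrent inserts. The table below is
hypothetical, and the sequence name just follows the usual
tablename_column_seq convention:

-- hypothetical table, for illustration only
CREATE TABLE docs (
    serial_id serial PRIMARY KEY,
    title     text NOT NULL
);

INSERT INTO docs (title) VALUES ('example');

-- returns the serial_id value generated by the INSERT above in this session
SELECT currval('docs_serial_id_seq');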


Tim

--
---
Tim Allen  [EMAIL PROTECTED]
Proximity Pty Ltd  http://www.proximity.com.au/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-21 Thread John Tregea

Hi Tim,

Thanks for the advice; it saves me from continuing to dig through the help 
files and my reference books. I don't know how much help I could be 
in adding features, but I am glad to participate in the community in any 
way I can. I will follow your link to the TODO pages.


Thanks again.

Regards

John

Tim Allen wrote:

John Tregea wrote:

Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At 
the time I perform the INSERT command I need to retrieve the value of 
the serial_id column from the newly created row.


Is it possible to have a specified column value returned after the 
INSERT (rather than the number of rows affected) ?


That would save me doing a SELECT statement after every INSERT.

Please excuse the terminology if it is not SQL'esque, but I hope you 
know what I am getting at.


Thanks in advance

John Tregea


It's not supported now, however it has been discussed several times, 
and there is a TODO entry for it at


http://www.postgresql.org/docs/faqs.TODO.html

using syntax along the lines of INSERT ... RETURNING ...

Search for the word returning in the todo list and you'll find the 
entry. Your options include waiting for someone to make it happen (no 
telling how long that will be), or helping to make it happen (for 
which we would all thank you :-) ). In the meantime you'll have to 
work around it, as you suggested.


Tim



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] Missing domain socket after reboot.

2006-06-21 Thread Bill Moseley
After a reboot today Postgresql 8.1 came back up and started
accepting connections over TCP but the unix socket file was missing.

This is on Debian Stable, and I can't imagine what might have removed
the file.

Running psql I get:

$ psql test
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket 
/var/run/postgresql/.s.PGSQL.5432?

Yep, missing:


$ ls -la /var/run/postgresql
total 8
drwxrwsr-x   2 postgres postgres 4096 2006-06-21 17:03 .
drwxr-xr-x  16 root root 4096 2006-06-21 21:10 ..

Config looks ok:

/etc/postgresql/8.1/main$ fgrep unix_socket_dir postgresql.conf 
unix_socket_directory = '/var/run/postgresql'

Startup option:

$ ps ux -u postgres | grep unix_socket
postgres  1512  0.0  0.3  17564  3476 ?S17:02   0:00 
/usr/lib/postgresql/8.1/bin/postmaster -D /var/lib/postgresql/8.1/main -c 
unix_socket_directory=/var/run/postgresql -c 
config_file=/etc/postgresql/8.1/main/postgresql.conf -c 
hba_file=/etc/postgresql/8.1/main/pg_hba.conf -c 
ident_file=/etc/postgresql/8.1/main/pg_ident.conf

Hum.  lsof knows about the file.

$ lsof -p 1512 | grep /var/run
postmaste 1512 postgres4u  unix 0xf78b5980   1631 
/var/run/postgresql/.s.PGSQL.5432


Any ideas what happened to the socket?


I had to stop and start the postmaster to get the socket back.



-- 
Bill Moseley
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] join on next row

2006-06-21 Thread Sim Zacks

Harald,
That's brilliant.
Sim

Harald Fuchs wrote:

In article [EMAIL PROTECTED],
Sim Zacks [EMAIL PROTECTED] writes:


I want my query resultset to be
Employee,EventDate(1),EventTime(1),EventType(1),EventTime(2),EventType(2)
Where Event(2) is the first event of the employee that took place
after the other event.



Example
EventID  Employee  EventDate   EventTime  EventType
1        John      6/15/2006   7:00       A
2        Frank     6/15/2006   7:15       B
3        Frank     6/15/2006   7:17       C
4        John      6/15/2006   7:20       C
5        Frank     6/15/2006   7:25       D
6        John      6/16/2006   7:00       A
7        John      6/16/2006   8:30       R



Expected Results
John, 6/15/2006, 7:00, A, 7:20, C
Frank, 6/15/2006, 7:15, B, 7:17, C
Frank, 6/15/2006, 7:17, C, 7:25, D
John, 6/16/2006, 7:00, A, 8:30, R



To get this result set it would have to be an inner join on employee
and date where the second event time is greater than the first. But I
don't want all of the records with a greater time, just the first
event after.


You can filter the others out by an OUTER JOIN:

  SELECT e1.Employee, e1.EventDate,
 e1.EventTime, e1.EventType,
 e2.EventTime, e2.EventType
  FROM events e1
  JOIN events e2 ON e2.Employee = e1.Employee
AND e2.EventDate = e1.EventDate
AND e2.EventTime > e1.EventTime
  LEFT JOIN events e3 ON e3.Employee = e1.Employee
 AND e3.EventDate = e1.EventDate
 AND e3.EventTime > e1.EventTime
 AND e3.EventTime < e2.EventTime
  WHERE e3.EventID IS NULL
  ORDER BY e1.EventDate, e1.EventTime


---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly