Re: [GENERAL] libpq.dll - how to check if field is nullable
I believe by now you would have found it by going through the PostgreSQL documentation. If not, you can use PQgetisnull(result, row_no, col_no).

Regards, Siva Kumar.K

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Pit M.
Sent: Monday, August 15, 2005 3:39 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] libpq.dll - how to check if field is nullable

Which function of libpq.dll should one use to determine if a field may contain NULL values?

---(end of broadcast)---
TIP 6: explain analyze is your friend
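For archive readers, a minimal (uncompiled) sketch of how PQgetisnull is typically used; the connection string, table, and column names are made up for illustration. Note that PQgetisnull reports whether a particular value in a result set is NULL, not whether the column is declared nullable:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Hypothetical connection parameters -- adjust for your setup. */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PGresult *res = PQexec(conn, "SELECT id, note FROM items");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        int row;
        for (row = 0; row < PQntuples(res); row++)
        {
            /* PQgetisnull returns 1 if the value in column 1 of this
             * row is SQL NULL, 0 otherwise. */
            if (PQgetisnull(res, row, 1))
                printf("row %d: note is NULL\n", row);
            else
                printf("row %d: note = %s\n", row, PQgetvalue(res, row, 1));
        }
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

If what is actually needed is whether the column is declared NOT NULL (nullability metadata rather than a per-value check), the usual approach is to query pg_attribute.attnotnull for the table instead.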
[GENERAL] How to limit database size
Hi, How can I limit the maximum disk space used by a PostgreSQL database, so that it does not use all of the disk space available on the system? Thanks in advance, Vishnu
[GENERAL] postgresql hosting
Someone asked a while back about hosting providers for PostgreSQL -- a recent post led to finding http://www.planetargon.com/hosting/
Re: [GENERAL] How to limit database size
On 23.08.2005 at 15:12:54 +0530, Vishnu wrote:
> Hi,
>
> How can I limit the maximum space used by a Postgres database? So that it
> should not use the total disk space available on the system.

You can use tools from the OS -- under Linux, for instance, disk quotas. You can also put the data directory on a separate partition.

Regards, Andreas
--
Andreas Kretschmer (contact: see header)
Heynitz: 035242/47212, D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
===Schollglas Unternehmensgruppe===
Re: [GENERAL] Consulta
Macario Mares wrote:
> I have tried to import data with the COPY command but have not been able to.
> I would be grateful for an example of how to import data, whether on Windows or Linux.

If you give more detail about why you can't, that would be better. From what you describe, the only thing I can suggest is that you have a read through the manual: http://www.postgresql.org/docs/8.0/interactive/sql-copy.html

Leonel
[GENERAL] ctid access is slow
Hello! Could anybody help me with this [artificial] query?

select ctid from aaa where ctid in (select ctid from aaa limit 10);

Here is the explained plan:

Nested Loop IN Join (cost=3.47..300325932.99 rows=10 width=6)
  Join Filter: ("outer".ctid = "inner".ctid)
  -> Seq Scan on aaa (cost=0.00..44457.98 rows=1250998 width=6)
  -> Materialize (cost=0.47..0.57 rows=10 width=6)
    -> Subquery Scan "IN_subquery" (cost=0.00..0.46 rows=10 width=6)
      -> Limit (cost=0.00..0.36 rows=10 width=6)
        -> Seq Scan on aaa (cost=0.00..44457.98 rows=1250998 width=6)

There are 1250998 records in aaa.

As you see it is pretty slow -- actually this thing is faster even if I use oid instead of ctid. The inner query works promptly, of course.

Any clue? The original idea was to collect ctids of records to delete and use this info in a DELETE statement (and something similar with UPDATE), but performance is absolutely unacceptable. Help is very much appreciated!

--
Best regards, Ilja Golshtein
Re: [GENERAL] ctid access is slow
On Tuesday 23 August 2005 08:39, Ilja Golshtein wrote:
> Hello!
>
> Could anybody help me with this [artificial] query
>
> select ctid from aaa where ctid in (select ctid from aaa limit 10);
>
> here is explained plan
>
> Nested Loop IN Join (cost=3.47..300325932.99 rows=10 width=6)
>   Join Filter: ("outer".ctid = "inner".ctid)
>   -> Seq Scan on aaa (cost=0.00..44457.98 rows=1250998 width=6)
>   -> Materialize (cost=0.47..0.57 rows=10 width=6)
>     -> Subquery Scan "IN_subquery" (cost=0.00..0.46 rows=10 width=6)
>       -> Limit (cost=0.00..0.36 rows=10 width=6)
>         -> Seq Scan on aaa (cost=0.00..44457.98 rows=1250998 width=6)
>
> There are 1250998 records in aaa.
>
> As you see it is pretty slow - actually this thing is faster
> even if I use oid instead of ctid.
> Inner query works promptly of course.
>
> Any clue?

I think using an indexed field would probably be faster for you, especially if you have a PK on the table. Barring that, make sure you have vacuumed/analyzed and send us explain analyze output.

--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Re: [GENERAL] ctid access is slow
On Tue, Aug 23, 2005 at 09:15:42AM -0400, Robert Treat wrote:
> On Tuesday 23 August 2005 08:39, Ilja Golshtein wrote:
> >
> > select ctid from aaa where ctid in (select ctid from aaa limit 10);
> >
> > Nested Loop IN Join (cost=3.47..300325932.99 rows=10 width=6)
> >   Join Filter: ("outer".ctid = "inner".ctid)
> >   -> Seq Scan on aaa (cost=0.00..44457.98 rows=1250998 width=6)
> >   -> Materialize (cost=0.47..0.57 rows=10 width=6)
> >     -> Subquery Scan "IN_subquery" (cost=0.00..0.46 rows=10 width=6)
> >       -> Limit (cost=0.00..0.36 rows=10 width=6)
> >         -> Seq Scan on aaa (cost=0.00..44457.98 rows=1250998 width=6)
> >
> > There are 1250998 records in aaa.
> >
> > As you see it is pretty slow - actually this thing is faster
> > even if I use oid instead of ctid.
> > Inner query works promptly of course.
> >
> > Any clue?
>
> I think using an indexed field would probably be faster for you, especially
> if you have a PK on the table. Barring that, make sure you have
> vacuumed/analyzed and send us explain analyze output.

Aside from that, ctid is of type tid, and its equality operator isn't hashable. Here's an example that shows the difference between ctid (not hashable) and oid (hashable) on a table with 10 rows:

EXPLAIN ANALYZE SELECT ctid FROM foo WHERE ctid IN (SELECT ctid FROM foo LIMIT 10);

                              QUERY PLAN
--------------------------------------------------------------------------
 Nested Loop IN Join (cost=0.27..24137.27 rows=10 width=6) (actual time=0.127..12729.741 rows=10 loops=1)
   Join Filter: ("outer".ctid = "inner".ctid)
   -> Seq Scan on foo (cost=0.00..1637.00 rows=10 width=6) (actual time=0.029..951.297 rows=10 loops=1)
   -> Materialize (cost=0.27..0.37 rows=10 width=6) (actual time=0.005..0.052 rows=10 loops=10)
     -> Subquery Scan "IN_subquery" (cost=0.00..0.26 rows=10 width=6) (actual time=0.037..0.318 rows=10 loops=1)
       -> Limit (cost=0.00..0.16 rows=10 width=6) (actual time=0.023..0.195 rows=10 loops=1)
         -> Seq Scan on foo (cost=0.00..1637.00 rows=10 width=6) (actual time=0.013..0.094 rows=10 loops=1)
 Total runtime: 12730.011 ms
(8 rows)

EXPLAIN ANALYZE SELECT oid FROM foo WHERE oid IN (SELECT oid FROM foo LIMIT 10);

                              QUERY PLAN
--------------------------------------------------------------------------
 Hash IN Join (cost=0.29..2137.39 rows=10 width=4) (actual time=0.574..1477.235 rows=10 loops=1)
   Hash Cond: ("outer".oid = "inner".oid)
   -> Seq Scan on foo (cost=0.00..1637.00 rows=10 width=4) (actual time=0.016..864.519 rows=10 loops=1)
   -> Hash (cost=0.26..0.26 rows=10 width=4) (actual time=0.412..0.412 rows=0 loops=1)
     -> Subquery Scan "IN_subquery" (cost=0.00..0.26 rows=10 width=4) (actual time=0.063..0.336 rows=10 loops=1)
       -> Limit (cost=0.00..0.16 rows=10 width=4) (actual time=0.048..0.218 rows=10 loops=1)
         -> Seq Scan on foo (cost=0.00..1637.00 rows=10 width=4) (actual time=0.035..0.118 rows=10 loops=1)
 Total runtime: 1477.508 ms
(8 rows)

--
Michael Fuhr
[GENERAL] Consulta
I have tried to import data with the COPY command but have not been able to. I would be grateful if you could give me an example of how to import data, whether on Windows or Linux.
Re: [GENERAL] ctid access is slow
Hello!

> On Tue, Aug 23, 2005 at 09:15:42AM -0400, Robert Treat wrote:
>> On Tuesday 23 August 2005 08:39, Ilja Golshtein wrote:
>>> select ctid from aaa where ctid in (select ctid from aaa limit 10);
>
> Aside from that, ctid is of type tid, and its equality operator
> isn't hashable.

That is the piece of knowledge I failed to deduce from exploring the query plans ;(. So I have no better solution than creating an indexed field of serial type, have I? The only thing I am curious about: is ctid good for anything from the user's point of view?

Thanks a lot.

--
Best regards, Ilja Golshtein
Re: [GENERAL] ORDER BY time consuming
You're also free to set sort_mem (7.4.x) or work_mem (8.0.x) on a per-session basis, so you could try experimenting with raising the value of those settings during sessions in which your query is running.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

On Aug 21, 2005, at 12:01 PM, Jim C. Nasby wrote:
> On Sun, Aug 21, 2005 at 12:04:01PM +0200, Ben-Nes Yonatan wrote:
>> Hi All, I got a table with about 4.5 million rows in it which is connected
>> to another table with about 60 million rows which are used as keywords for
>> searching. I succeeded in creating fast queries on the first table that find
>> a row in the first table which is connected to up to 4 different keywords
>> in the second table, and LIMIT the result to 12 (I want to allow the surfers
>> of the site to press back and next to see more products, so I'll make it
>> with OFFSET). I want to be able to order my result by a specific column,
>> but when I insert ORDER BY into the query (and any other query that I tried)
>> it becomes extremely slow. What can I do to solve this problem?
>
> Your question is too generic to answer specifically, but I suspect that
> if you use your un-ordered query as a subquery in the FROM clause and
> then order that it will work well. IE:
>
> SELECT * FROM (SELECT ...) a ORDER BY f1, f2, f3
>
> --
> Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED]
> Pervasive Software  http://pervasive.com  512-569-9461
[GENERAL] Question about Foreign key constraint causes "costly sequential scans"?
Greetings, I have a question about foreign key cost.

I have two tables A and B. A has 300,000 records, B has 3 records.

A (col1, col2, col3, ..., colN)
B (colB1 primary key, colB2)

I'd like to set up a foreign key constraint for A.col3, as the following:

CONSTRAINT Aclo3_fk FOREIGN KEY (col3) REFERENCES B(colB1)

But I got a warning msg from postgresql as:

foreign key constraint "Aclo3_fk" will require costly sequential scans

Some comments about it? Thanks a lot! Emi
Re: [GENERAL] ctid access is slow
Michael Fuhr <[EMAIL PROTECTED]> writes:
>> On Tuesday 23 August 2005 08:39, Ilja Golshtein wrote:
>>> select ctid from aaa where ctid in (select ctid from aaa limit 10);

> Aside from that, ctid is of type tid, and its equality operator
> isn't hashable.

Nor mergejoinable, so there's not much scope for a smart join plan. AFAIR without rereading the code, the only case that's actually fast is

WHERE ctid = constant [ OR ctid = constant ... ]

which of course is the same as

WHERE ctid IN (constant, constant, ...)

but not at all the same as "ctid IN (sub-select)".

>>> The original idea was to collect ctid's of records to delete
>>> and use this info in DELETE statement (and something similar
>>> with UPDATE), but performance is absolutely unacceptable.

Right, you can do that, but you have to actually collect the ctid's on the client side and incorporate them literally into the later DELETE command. This is probably a good idea anyway to be sure you are deleting exactly the rows you saw before, and not some other ones that happen to now match the query conditions you gave before. Be wary also that you can't trust a ctid to be valid longer than one transaction.

regards, tom lane
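A sketch of the collect-then-delete pattern described above; the table, condition, and ctid literals are made up for illustration, and everything must happen inside one transaction, since a ctid cannot be trusted beyond that:

```sql
BEGIN;

-- Step 1: the client collects the physical row addresses it wants to remove.
SELECT ctid FROM aaa WHERE some_condition LIMIT 10;
-- Suppose this returned '(0,1)', '(0,2)', and '(0,7)'.

-- Step 2: the client splices those values back in as literals, which hits
-- the fast "ctid = constant [OR ...]" case rather than an unhashable join:
DELETE FROM aaa
WHERE ctid = '(0,1)' OR ctid = '(0,2)' OR ctid = '(0,7)';

COMMIT;
```

The equivalent IN-list form, DELETE FROM aaa WHERE ctid IN ('(0,1)', '(0,2)', '(0,7)'), is the same fast case; only "ctid IN (sub-select)" falls off the cliff.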
Re: [GENERAL] How to limit database size
On Tue, Aug 23, 2005 at 01:19:11PM +0200, A. Kretschmer wrote:
> am 23.08.2005, um 15:12:54 +0530 mailte Vishnu folgendes:
> > How can I limit the maximum space uses by Posgres database ??. So that it
> > should not use total disk space available on system.
>
> You can use the tools from the os, under Linux for instance quotas.

Be aware that exceeding a disk quota might cause problems beyond a simple "Sorry, you can't insert any more records" error.

http://www.postgresql.org/docs/8.0/static/disk-full.html

Even with a quota in place, you should probably monitor the database's disk usage so you can take action before the quota is exceeded.

--
Michael Fuhr
[GENERAL] Import File
Hi, I'm trying to import a sequential txt file. The file has fixed columns, i.e. the first 10 positions are the primary key. The COPY command needs a delimiter, but the file is sequential, without a delimiter. I looked in the manual but I don't know if it is possible. My problem is that the file is very large (> 1G) and I don't have a lot of memory to change the file and insert delimiters. Thanks --Rodrigo Africani
Re: [GENERAL] Question about Foreign key constraint causes "costly sequential scans"?
Emi Lu <[EMAIL PROTECTED]> writes:
> But I got a warning msg from postgresql as:
> foreign key constraint "Aclo3_fk" will require costly sequential scans

Try to make the referencing and referenced columns the same datatype.

regards, tom lane
Re: [GENERAL] Question about Foreign key constraint causes "costly
On Tue, 2005-08-23 at 10:30 -0400, Emi Lu wrote:
> I'd like to setup foreign key constraint for A.col3, as the following:
> CONSTRAINT Aclo3_fk FOREIGN KEY (col3) REFERENCES B(colB1)
>
> But I got a warning msg from postgresql as:
>
> foreign key constraint "Aclo3_fk" will require costly sequential scans

Index the foreign key:

create index a_col3_ix on a (col3);

and then drop and recreate the foreign key and see if the warning is still there. As a rule I index foreign keys.
Re: [GENERAL] Question about Foreign key constraint causes "costly sequential scans"?
On Tue, Aug 23, 2005 at 10:30:14AM -0400, Emi Lu wrote:
> I'd like to setup foreign key constraint for A.col3, as the following:
> CONSTRAINT Aclo3_fk FOREIGN KEY (col3) REFERENCES B(colB1)
>
> But I got a warning msg from postgresql as:
>
> foreign key constraint "Aclo3_fk" will require costly sequential scans

Is there not a DETAIL message following this warning? It should explain the problem: the referring and the referred-to columns are of different types. Additionally, it's usually a good idea to create an index on the referring column (A.col3) to speed up referential integrity checks when modifying the referred-to table (B).

--
Michael Fuhr
Re: [GENERAL] Import File
On Tue, Aug 23, 2005 at 11:47:57AM -0300, Rodrigo Africani wrote:
> I'm trying to import a sequence txt file.
> The file has fix columns ... i.e the first 10 positions is the primary key ...
> The comand copy need a delimitier but the file is seq without a delimitier.
> I look in the manual but i don't won't if it is possible.
> My problem is the file is very large ... > 1G and I don't have a lot of memory
> to change the file and put delimitiers.

If you have enough disk space then you don't need a lot of memory to change the file: simply run it through a filter that reads one line at a time, changes it, and writes it to a new file. Such filters are trivial to write in Perl and other scripting languages. If you don't have enough disk space then you could pipe the filter's output into psql and use "COPY FROM stdin" -- this might be faster anyway because it involves less disk I/O.

--
Michael Fuhr
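To make the filter idea concrete, here is one way to sketch it with awk; the 10-character key width comes from the question, while the file names and sample data are made up. Because the file is processed one line at a time, the 1 GB size is not a memory problem:

```shell
# Build a tiny fixed-width sample: first 10 characters = key, rest = data.
printf '0000000001rest of row one\n0000000002rest of row two\n' > fixed.txt

# Stream the file, inserting a tab after position 10; only one line is
# ever held in memory, regardless of file size.
awk '{ printf "%s\t%s\n", substr($0, 1, 10), substr($0, 11) }' fixed.txt > delimited.txt

cat delimited.txt
```

The tab-delimited output can then be loaded with COPY using a tab delimiter, or piped straight into psql with "COPY ... FROM stdin" as suggested above, avoiding the intermediate file entirely.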
[GENERAL] Where to get 8.1 beta test version?
We want to test the 8.1 beta test version of PostgreSQL on a SUSE Linux box. What would be the best download for this? TIA, -Kevin
Re: [GENERAL] Question about Foreign key constraint causes "costly
Thanks a lot for all the help. I do not have warnings anymore :-)

>> I'd like to setup foreign key constraint for A.col3, as the following:
>> CONSTRAINT Aclo3_fk FOREIGN KEY (col3) REFERENCES B(colB1)
>>
>> But I got a warning msg from postgresql as:
>> foreign key constraint "Aclo3_fk" will require costly sequential scans
>
> Is there not a DETAIL message following this warning? It should explain
> the problem: the referring and the referred-to columns are of different
> types. Additionally, it's usually a good idea to create an index on the
> referring column (A.col3) to speed up referential integrity checks when
> modifying the referred-to table (B).
Re: [GENERAL] Where to get 8.1 beta test version?
My mistake.

- Any idea when it will get to beta test status?

- Is it currently stable enough to make a real-world test worthwhile?

- Would this be the snapshot you mentioned?
  http://wwwmaster.postgresql.org/download/mirrors-ftp?file=dev%2Fpostgresql-snapshot.tar.gz

Thanks, -Kevin

>>> Bruce Momjian 08/23/05 10:27 AM >>>
Kevin Grittner wrote:
> We want to test the 8.1 beta test version of PostgreSQL on a SUSE Linux box.
> What would be the best download for this?

We are not in beta yet, but you can grab the nightly snapshot from the ftp server.
Re: [GENERAL] Where to get 8.1 beta test version?
Kevin Grittner wrote:
> We want to test the 8.1 beta test version of PostgreSQL on a SUSE Linux box.
> What would be the best download for this?

We are not in beta yet, but you can grab the nightly snapshot from the ftp server.

--
Bruce Momjian                   | http://candle.pha.pa.us
pgman@candle.pha.pa.us          | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup.    | Newtown Square, Pennsylvania 19073
Re: [GENERAL] Where to get 8.1 beta test version?
Kevin Grittner wrote:
> My mistake.
>
> - Any idea when it will get to beta test status?
>
> - Is it currently stable enough to make a real-world test worthwhile?
>
> - Would this be the snapshot you mentioned?:
>
> http://wwwmaster.postgresql.org/download/mirrors-ftp?file=dev%2Fpostgresql-snapshot.tar.gz

All beta discussion should happen on hackers.

--
Bruce Momjian
Re: [GENERAL] Question about Foreign key constraint causes "costly
On Tue, Aug 23, 2005 at 02:53:33PM +, Matt Miller wrote:
> On Tue, 2005-08-23 at 10:30 -0400, Emi Lu wrote:
> > I'd like to setup foreign key constraint for A.col3, as the following:
> > CONSTRAINT Aclo3_fk FOREIGN KEY (col3) REFERENCES B(colB1)
> >
> > But I got a warning msg from postgresql as:
> >
> > foreign key constraint "Aclo3_fk" will require costly sequential scans
>
> Index the foreign key:
>
> create index a_col3_ix on a (col3);
>
> and then drop and recreate the foreign key and see if the warning is
> still there.

That's a good idea, but it's not the condition that elicits the "will require costly sequential scans" warning. That warning is caused by the keys being of different types (see ATAddForeignKeyConstraint() in src/backend/commands/tablecmds.c in the source code for 8.0 and later).

--
Michael Fuhr
Re: [GENERAL] Import File
On 23.08.2005 at 11:47:57 -0300, Rodrigo Africani wrote:
> Hi,
>
> I'm trying to import a sequence txt file.
> The file has fix columns ... i.e the first 10 positions is the primary key ...
> The comand copy need a delimitier but the file is seq without a delimitier.
> I look in the manual but i don't won't if it is possible.
> My problem is the file is very large ... > 1G and I don't have a lot of
> memory to change the file and put delimitiers.

There are lots of tools, like sed, awk, perl, ..., to manipulate a file. These tools can act as a filter, so big files aren't a problem.

Regards, Andreas
Re: [GENERAL] ctid access is slow
Hi!

>> Could anybody help me with this [artificial] query
>>
>> select ctid from aaa where ctid in (select ctid from aaa limit 10);

[skipped]

> I think using an indexed field would probably be faster for you,
> especially if you have a PK on the table.

I used to think ctid is the same as rowid in Oracle, where rowid access is the fastest. Is that wrong? After all, why is oid faster than ctid? I am considering using an index, of course. I just cannot believe it is the best I can do here.

> Barring that, make sure you have
> vacuumed/analyzed and send us explain analyze output.

I played with a fresh database. Server version is 8.0.3, enable_tidscan is on; it looks like hints have no effect.

Thanks.

--
Best regards, Ilja Golshtein
Re: [GENERAL] Where to get 8.1 beta test version?
On Tue, Aug 23, 2005 at 10:36:34 -0500, Kevin Grittner <[EMAIL PROTECTED]> wrote:
> - Any idea when it will get to beta test status?

Based on the discussions I have seen, the first beta will probably be released within a week.

> - Is it currently stable enough to make a real-world test worthwhile?

CVS pulls are usually in a state suitable for testing.
Re: [GENERAL] ctid access is slow
On Tue, Aug 23, 2005 at 05:30:31PM +0400, Ilja Golshtein wrote:
> >> Could anybody help me with this [artificial] query
> >>
> >> select ctid from aaa where ctid in (select ctid from aaa limit 10);
>
> [skipped]
>
> > I think using an indexed field would probably be faster for you,
> > especially if you have a PK on the table.
>
> I used to think ctid is the same as rowid in Oracle,
> where rowid access is the fastest. Is it wrong?

I don't know what rowid is internally, but in Postgres, ctid _is_ the fastest way to access a (single) tuple, because it's the physical address. However, since the ctid datatype does not implement hashjoin nor mergejoin, the plans are not as good when you have to access lots of tuples.

> After all, why oid is faster then ctid?

Accessing single values by oid, even when there is an index, will be slower than accessing single values by ctid; though in practice, probably there won't be a measurable difference.

If you are too worried about it, you could look at what is needed to implement hashjoin and mergejoin for ctids. I take it it isn't trivial, or it would be done already, but I don't think it's too hard (unless there is an implementation detail that makes it impossible).

--
Alvaro Herrera ()
"No one is more of a slave than he who thinks himself free without being so" (Goethe)
Re: [GENERAL] OCFS released as GPL
On Mon, Aug 22, 2005 at 08:55:00AM -0300, Bruno Almeida do Lago wrote:
> I haven't found any message on archives talking about this, so I'm posting it.
>
> The OCFS (Oracle Cluster File System) is now GPL - see more on
> http://oss.oracle.com/projects/ocfs/
>
> What do you think?

The file system is not the problem -- it's shared memory that's an issue for Postgres. There is a "distributed lock manager" (by Redhat I think) that could be the base for implementing something vaguely cluster-y, but there are lots of roadblocks to it. I assume Slony-II will be here way earlier, making it all moot.

--
Alvaro Herrera ()
"Right now the sectors on the hard disk run clockwise, but I heard a rumor that you can squeeze 0.2% more throughput by running them counterclockwise. It's worth the effort. Recommended." (Gerry Pourwelle)
Re: [GENERAL] ctid access is slow
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> If you are too worried about it, you could look at what is needed to
> implement hashjoin and mergejoin for ctids. I take it it isn't trivial,
> or it would be done already, but I don't think it's too hard (unless
> there is an implementation detail that makes it impossible).

It wouldn't be hard that I can see (just build hash and btree opclasses for tid), but I'm pretty unclear on why bother. There's no use-case for cross-table joins involving ctid, since you couldn't usefully store a ctid referencing another table. The example Ilja showed was quite artificial and should not convince anyone to expend effort on this. Perhaps there are more convincing examples, but let's see one.

regards, tom lane
[GENERAL] plpgsql: returning multiple named columns from function *simply*
Hi guys, First time (I think, certainly recently) posting to this mailing list. I've been casually using Postgres for a couple of years, but recently am going to be using it in a professional setting, so I figure it's time to get serious about learning the right way to write functions/stored procedures. In the past year, I've spent a lot of time writing MS SQL Server stored procedures, which are pretty good as they go, and I'm essentially trying to port some actual procedures, and my skill set at writing those, to Postgres. I'm finding it a bit difficult as there are (of course) some pretty fundamental differences. I think I've handled most of the important ones so far, but there's one that's been bugging me.

In MSSQL, I can write a stored procedure that does something like this:

CREATE PROCEDURE test(
    @lookup char(50))
WITH ENCRYPTION AS BEGIN
    -- ... a bunch of code to do some lookup, and then ...
    SELECT
        @Result1 AS Result1,
        @Result2 AS Result2,
        @Result3 AS Result3,
        @Result4 AS Result4
END
GO

and then when I call this procedure, I get a result row (as if it came from a SELECT on a table) which has the columns neatly labeled 'Result1', 'Result2', etc. Note that these column labels are rather arbitrary and not necessarily associated with a single table or perhaps even any existing column in a table.

The question is, how can I best (most easily and elegantly) handle this in plpgsql? I've spent a few hours researching it and it seems like you either use a single (or set of) record or composite types. The only way I could get it to work with records, though, was to specify the layout of the row on the actual call to the plpgsql function, which is undesirable. And if I use a composite type, I'm going to have to set up a separate composite type for every special return tuple I might want out of a function (I do this in a few different places; I want to return perhaps 2-4 *named* columns as results from a function call). I hope I've made what I'm looking to do clear.

Is there any slick way of handling this that allows the column naming to occur completely within the function itself and perhaps doesn't require an external table or composite type definition?

Thanks, John Lawler
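For archive readers, a composite-type sketch of the kind of function described above (all names here are hypothetical). It does require a CREATE TYPE per result shape, which is exactly the overhead the question complains about, but it keeps the column naming out of the call site:

```sql
-- A separate composite type defines the (arbitrary) column names.
CREATE TYPE test_result AS (
    result1 text,
    result2 text,
    result3 text,
    result4 text
);

CREATE FUNCTION test(lookup char(50)) RETURNS test_result AS $$
DECLARE
    r test_result;
BEGIN
    -- ... a bunch of code to do some lookup, and then ...
    r.result1 := 'found ' || lookup;
    r.result2 := 'b';
    r.result3 := 'c';
    r.result4 := 'd';
    RETURN r;
END;
$$ LANGUAGE plpgsql;

-- Called in the FROM clause, the columns come back named:
SELECT * FROM test('widget');
```

The then-upcoming 8.1 release adds OUT parameters to CREATE FUNCTION, which let the column names live entirely inside the function definition with no separate type.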
Re: [GENERAL] plpgsql: returning multiple named columns from function
> CREATE PROCEDURE test( @lookup char(50)) WITH ENCRYPTION AS BEGIN
> -- ... a bunch of code to do some lookup, and then ...
> SELECT @Result1 AS Result1, @Result2 AS Result2, @Result3 AS Result3, @Result4 AS Result4
> END GO
>
> and then when I call this procedure, I get a result row (like it came from a
> SELECT on a table) which has the columns neatly labeled with 'Result1',
> 'Result2', etc. Note that these column labels are rather arbitrary and not
> necessarily associated with a single table or perhaps even any existing
> column in a table.

I think what you are looking for is SETOF functions. http://www.postgresql.org/docs/8.0/interactive/functions-srf.html
Re: [GENERAL] plpgsql: returning multiple named columns from function
Joshua D. Drake wrote:
> I think what you are looking for is SetOF functions.
> http://www.postgresql.org/docs/8.0/interactive/functions-srf.html

Thanks for the response. The reference you indicated is talking about Set Returning Functions. I'm looking to return multiple *columns* from a function, not rows. Plus, the main part was to be able to have the columns (arbitrarily) named as if they'd been selected from a table. I hope that there's something about as easy as the example I cited from MS SQL.
Re: [GENERAL] ctid access is slow
On Tue, Aug 23, 2005 at 06:02:05PM +0400, Ilja Golshtein wrote:
> The only thing I am curios is ctid good for anything from user point
> of view?

No -- it changes far too frequently for that.

--
Alvaro Herrera ()
"A poet is a world enclosed in a man" (Victor Hugo)
Re: [GENERAL] plpgsql: returning multiple named columns from function
You can do this with a function that returns a refcursor (look up refcursor in the docs). You would call it something like this:

select mycursorfunct();
fetch all from return_cursor;

In this example I hardcode the cursor name return_cursor and then call both lines from a transaction. You could also retrieve the name of the cursor into a variable, then do something like (this is Delphi code):

connection.starttransaction;
try
  query1.sql.add('select mycursorfunct();');
  query1.open;
  refcursorname := query1.fieldbyname('mycursorfunct').asstring;
  query1.close;
  query1.sql.add('fetch all from ' + refcursorname);
  query1.open;
finally
  connection.commit;
end;

You won't be able to do it exactly like M$ SQL Server, but you can do something equivalent with a couple of extra lines of code. A refcursor takes a couple more lines of code on the client, but you don't have to use a type or a record. If you need an actual test function, let me know.

Hope this helps,

Tony Caduto
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql 8.x

> CREATE PROCEDURE test( @lookup char(50)) WITH ENCRYPTION AS BEGIN
> -- ... a bunch of code to do some lookup, and then ...
> SELECT @Result1 AS Result1, @Result2 AS Result2, @Result3 AS Result3, @Result4 AS Result4
> END GO
Re: [GENERAL] plpgsql: returning multiple named columns from function *simply*
John Lawler wrote:
> In MSSQL, I can write a stored procedure that does something like this:
>
> CREATE PROCEDURE test(
>   @lookup char(50))
> WITH ENCRYPTION AS BEGIN
>   -- ... a bunch of code to do some lookup, and then ...
>   SELECT
>     @Result1 AS Result1,
>     @Result2 AS Result2,
>     @Result3 AS Result3,
>     @Result4 AS Result4
> END

Not sure if a function like this does everything you may want, but it may work for you. Disclaimer: I have not actually used the proc with the "AS" for the column names, but I'd expect it "should" work.

    CREATE FUNCTION "getlogdata"("int4", "int4", "int4", "int4") RETURNS "refcursor" AS '
    DECLARE
        curs refcursor;
    BEGIN
        OPEN curs FOR
            SELECT logdata.datavalue AS Result1, logdata.timestamp AS Result2
            FROM logdata
            WHERE logfielddatatype = $1
              AND graphtargetlog = $2
              AND (timestamp >= $3 OR $3 = 0)
              AND (timestamp <= $4 OR $4 = 0)
            ORDER BY timestamp;
        RETURN curs;
    END;
    ' LANGUAGE 'plpgsql';
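For reference, a function like the one above would be used from psql roughly as follows. This is a sketch: the argument values are placeholders, and the portal name is whatever the returned refcursor reports (unassigned refcursor variables get an auto-generated portal name), so fetch from the name the SELECT actually returns.

```sql
BEGIN;

-- The function returns the cursor's name as its result
SELECT getlogdata(1, 2, 0, 0) AS curname;

-- Fetch from the returned portal name before COMMIT,
-- since the cursor is closed at transaction end.
-- Supposing the returned name was "<unnamed portal 1>":
FETCH ALL FROM "<unnamed portal 1>";

COMMIT;
```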
[GENERAL] drop table before create
Hi .. I have not been able to work out how to do this in Postgres 8 (pseudo-code):

    if exists table foo
        drop table foo;
    end
    create table foo;

If I go with

    drop table foo;
    create table foo;

then it barfs on an empty db. I can find the table name in pg_class but I am not sure of where to go from there. The assumption here is that the SQL is coming in on a script via the programmatic interface: slurp in a bunch of SQL commands and then fire them at the database. Perhaps it is just easier to 'dropdb' then 'createdb' and go from there. Many thanks, -- -mark. (probertm at acm dot org)
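One way to get a conditional drop in releases that lack a built-in "drop if exists" is a small plpgsql helper that consults pg_class first. This is a sketch; the function name drop_if_exists is invented here, and it ignores schemas (it checks relname only).

```sql
CREATE OR REPLACE FUNCTION drop_if_exists(tname text) RETURNS void AS '
BEGIN
    -- Only issue the DROP when an ordinary table by that name exists
    IF EXISTS (SELECT 1 FROM pg_class
               WHERE relname = tname AND relkind = ''r'') THEN
        EXECUTE ''DROP TABLE '' || tname;
    END IF;
    RETURN;
END;
' LANGUAGE 'plpgsql';

SELECT drop_if_exists('foo');
CREATE TABLE foo (id integer);
```

A script can then call the helper unconditionally before each CREATE TABLE without barfing on an empty database.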
Re: [GENERAL] ctid access is slow
Tom Lane wrote:
> It wouldn't be hard that I can see (just build hash and btree opclasses for
> tid), but I'm pretty unclear on why bother. There's no use-case for
> cross-table joins involving ctid, since you couldn't usefully store a ctid
> referencing another table. The example Ilja showed was quite artificial and
> should not convince anyone to expend effort on this. Perhaps there are more
> convincing examples, but let's see one.

Would it be useful for extremely static (read-only) data? The largest tables in my database are read-only for many months at a time (geospatial data which the vendor updates annually). I've occasionally wondered if storing ctids in tables that link to these tables, rather than the traditional id column, would help. (I never really bothered, though, since normal index scans were fast enough, and any future performance optimization will probably cache this data in memcached instead.)
[GENERAL] Optimize a VIEW question
I'm just starting out using VIEWs -- and mostly new to PostgreSQL -- and I'm trying to learn a little about reading the EXPLAIN ANALYZE output. I have the following view set up, and I'm wondering where to create indexes, and mostly whether I'm doing anything incorrectly or in a very expensive way.

CREATE VIEW class_list (
    id, name, class_time, location, workshop, review_mode,
    workshop_group, location_name, address, city, state, zip,
    region, region_name
) AS
    SELECT class.id, class.name, class.class_time, class.location,
           class.workshop, class.review_mode, class.workshop_group,
           location.name, address.id, address.city, address.state,
           address.zip, region.id, region.name
    FROM class, location, address, region
    WHERE class.location = location.id
      AND location.address = address.id
      AND location.region = region.id;

I'm not clear about the Seq Scan below. The region table is quite small, so am I correct that is why the planner is doing a seq scan on that table?

\d region
           Table "public.region"
   Column   |  Type   | Modifiers
------------+---------+---------------------------------------------------------
 id         | integer | not null default nextval('public.region_id_seq'::text)
 active     | boolean | not null default true
 sort_order | integer | not null default 1
 name       | text    | not null
Indexes:
    "region_pkey" primary key, btree (id)
    "region_name_key" unique, btree (name)

EXPLAIN ANALYZE select * from class_list where workshop = 28;

 Nested Loop  (cost=51.78..93.07 rows=9 width=157) (actual time=1.306..1.468 rows=6 loops=1)
   ->  Hash Join  (cost=51.78..76.87 rows=8 width=129) (actual time=1.245..1.299 rows=6 loops=1)
         Hash Cond: ("outer".id = "inner".region)
         ->  Seq Scan on region  (cost=0.00..20.00 rows=1000 width=36) (actual time=0.016..0.027 rows=10 loops=1)
         ->  Hash  (cost=51.76..51.76 rows=8 width=97) (actual time=1.019..1.019 rows=0 loops=1)
               ->  Hash Join  (cost=26.68..51.76 rows=8 width=97) (actual time=0.201..1.007 rows=6 loops=1)
                     Hash Cond: ("outer".id = "inner"."location")
                     ->  Seq Scan on "location"  (cost=0.00..20.00 rows=1000 width=44) (actual time=0.014..0.694 rows=104 loops=1)
                     ->  Hash  (cost=26.66..26.66 rows=7 width=57) (actual time=0.150..0.150 rows=0 loops=1)
                           ->  Index Scan using class_workshop_index on "class"  (cost=0.00..26.66 rows=7 width=57) (actual time=0.057..0.137 rows=6 loops=1)
                                 Index Cond: (workshop = 28)
   ->  Index Scan using address_pkey on address  (cost=0.00..2.01 rows=1 width=32) (actual time=0.013..0.015 rows=1 loops=6)
         Index Cond: ("outer".address = address.id)
 Total runtime: 1.853 ms
(14 rows)

By the way -- at one point I managed to hang PostgreSQL (7.4.8-16 on Debian Sid). I have not been able to make it happen again, but it seemed odd.

(gdb) bt
#0  0x081e51ee in tuplestore_gettuple ()
#1  0x0810c7f0 in ExecMaterial ()
#2  0x08102cb2 in ExecProcNode ()
#3  0x0810d8d5 in ExecNestLoop ()
#4  0x08102ceb in ExecProcNode ()
#5  0x081093a4 in ExecAgg ()
#6  0x08102c79 in ExecProcNode ()
#7  0x08101ecc in ExecutorRun ()
#8  0x0816f58b in PortalSetResultFormat ()
#9  0x0816f8c7 in PortalRun ()
#10 0x0816da9f in PostgresMain ()
#11 0x08148b4e in ClosePostmasterPorts ()
#12 0x0814a4e1 in PostmasterMain ()
#13 0x0811c2e7 in main ()

-- Bill Moseley [EMAIL PROTECTED]
[GENERAL] [newbie] server startup
Hi .. I am just playing with PostgreSQL 8.0.1. The server is running on a Solaris 8 platform with the kernel parameters tweaked as suggested. When I start the server, the logfile reports:

    $ pg_ctl start -l logfile
    postmaster starting
    $ cat logfile
    LOG: could not bind IPv6 socket: Cannot assign requested address
    HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
    LOG: could not bind socket for statistics collector: Cannot assign requested address
    ..
    LOG: database system is ready

So, the db is up and running locally, but I can't get to it from the network:

    $ psql -l -h dbhost
    psql: could not connect to server: Connection refused
    Is the server running on host "dbhost" and accepting TCP/IP connections on port 5432?

The build is from sunfreeware. Any thoughts on how to get my config up and working across the network? TIA, -- -mark. (probertm at acm dot org)
Re: [GENERAL] ctid access is slow
On Tuesday 23 August 2005 15:55, Alvaro Herrera wrote:
> On Tue, Aug 23, 2005 at 06:02:05PM +0400, Ilja Golshtein wrote:
> > The only thing I am curious about is: is ctid good for anything from a
> > user point of view?
>
> No -- it changes far too frequently for that.

Oh, I dunno... In general I'd agree with you, but I've seen a couple of (some would say hacky) use cases where you use the ctid to iterate over the columns returned in a row in a plpgsql function... :-)

-- Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Re: [GENERAL] [newbie] server startup
Hi .. On Tue, 2005-08-23 at 14:45 -0700, Mark Probert wrote:
> SO, the db is up and running locally but I can't get to it from the
> network.

With a friend's help, I worked out that I need to edit two files:

    postgresql.conf (listen_addresses = '*')
    pg_hba.conf (add "host all all 192.168.0.0/16 trust")

Now it is working. Thanks, -- -mark. (probertm at acm dot org)
Re: [GENERAL] ctid access is slow
""Ilja Golshtein"" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Hello! > >>On Tue, Aug 23, 2005 at 09:15:42AM -0400, Robert Treat wrote: >>> On Tuesday 23 August 2005 08:39, Ilja Golshtein wrote: >>> > >>> > select ctid from aaa where ctid in (select ctid from aaa limit 10); > >>Aside from that, ctid is of type tid, and its equality operator >>isn't hashable. > > It is the piece of knowledge I failed to deduce exploring > plans of queries ;(. > > So I have no better solution then creating indexed > field of serial type, have I? > > The only thing I am curios is ctid good for > anything from user point of view? The ctid value can be useful in a multi user application, to check whether a record has been changed by another user, before committing changes. Whenever a record is updated the ctid value will be changed, so by storing the ctid value when first fetching the record, that can be compared with the current ctid value before doing the update. > > Thanks a lot. > > -- > Best regards > Ilja Golshtein > > ---(end of broadcast)--- > TIP 6: explain analyze is your friend > ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] ctid access is slow
On Wed, Aug 24, 2005 at 09:26:10AM +0930, Jeff Eckermann wrote:
> The ctid value can be useful in a multi-user application, to check whether a
> record has been changed by another user before committing changes.
> Whenever a record is updated the ctid value will be changed, so by storing
> the ctid value when first fetching the record, that can be compared with the
> current ctid value before doing the update.

I believe that's not necessarily true. If you select a tuple and its ctid, and it's updated more than once with a vacuum in between, I believe it could end up back in the same position, which would mean the same ctid.

-- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com 512-569-9461
Re: [GENERAL] ctid access is slow
--- "Jim C. Nasby" <[EMAIL PROTECTED]> wrote: > On Wed, Aug 24, 2005 at 09:26:10AM +0930, Jeff > Eckermann wrote: > > The ctid value can be useful in a multi user > application, to check whether a > > record has been changed by another user, before > committing changes. > > Whenever a record is updated the ctid value will > be changed, so by storing > > the ctid value when first fetching the record, > that can be compared with the > > current ctid value before doing the update. > > I believe that's not necessarily true. If you select > a tuple and it's > ctid and it's updated more than once with a vacuum > in-between I believe > it could end up back in the same position, which > would mean the same > ctid. True. But the probability of that happening would generally be low enough not to bother the designers of most applications. > -- > Jim C. Nasby, Sr. Engineering Consultant > [EMAIL PROTECTED] > Pervasive Softwarehttp://pervasive.com > 512-569-9461 > __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] ERROR: database is being accessed by other users
Hi! I got the following psql error when executing a script:

    ERROR: database "test" is being accessed by other users

To solve this, I have to disconnect the database connection from pgAdmin III. May I know how I can configure the database to allow multiple user access?
Re: [GENERAL] ERROR: database is being accessed by other users
On Wed, Aug 24, 2005 at 10:10:55AM +0800, wolverine my wrote:
> I got the following psql's error when executing a script,
> ERROR: database "test" is being accessed by other users

What does the script do? Based on the error message I'd guess that you're trying to drop or rename the database.

> To solve this, I have to disconnect the database connection from the
> pgAdmin III.
>
> May I know how can I configure the database to allow multiple user access?

Databases do allow multiple user access, but PostgreSQL won't allow you to drop or rename a database if somebody else is using it.

-- Michael Fuhr
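Before dropping a database, a script can check who is connected via the statistics views (assuming the stats collector is enabled). This is a sketch; the database name "test" follows the thread above.

```sql
-- List backends connected to the database you want to drop
SELECT procpid, usename, current_query
  FROM pg_stat_activity
 WHERE datname = 'test';

-- Once no other sessions remain (and you are connected to a
-- different database, e.g. template1), the drop will succeed:
DROP DATABASE test;
```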
Re: [GENERAL] ctid access is slow
"Jim C. Nasby" <[EMAIL PROTECTED]> writes: > I believe that's not necessarily true. If you select a tuple and it's > ctid and it's updated more than once with a vacuum in-between I believe > it could end up back in the same position, which would mean the same > ctid. This is the reason for the recommendation that you don't trust a TID for longer than one transaction. If you select a row and see it has TID such and such, and then later try to fetch/update/delete that row by TID, the worst that can happen is that you'll not find the row because some other xact has already updated or deleted it. You can not find a different row in the TID slot, because VACUUM will not have removed a row that is possibly still visible to your transaction. (VACUUM has no idea whether you're running under SERIALIZABLE rules or not, and so it takes the conservative approach that any row you could ever possibly have seen as good is still interesting.) But this guarantee only lasts as long as your current transaction. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] ctid access is slow
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> This is the reason for the recommendation that you don't trust a TID
> for longer than one transaction. If you select a row and see it has
> TID such and such, and then later try to fetch/update/delete that row
> by TID, the worst that can happen is that you'll not find the row
> because some other xact has already updated or deleted it. You can not
> find a different row in the TID slot, because VACUUM will not have
> removed a row that is possibly still visible to your transaction.
> (VACUUM has no idea whether you're running under SERIALIZABLE rules or
> not, and so it takes the conservative approach that any row you could
> ever possibly have seen as good is still interesting.) But this
> guarantee only lasts as long as your current transaction.
>
> regards, tom lane

Just in case anyone following this thread gets a little confused, my response was somewhat tangential to the main discussion; I was talking of fetching the record by primary key or such, and then comparing the ctid values. Agreed that any other valid use of ctid is dubious.
Re: [GENERAL] plpgsql: returning multiple named columns from function
John Lawler <[EMAIL PROTECTED]> writes: > Plus, the main part was to be able to have the columns (arbitrarily) > named as if they'd been selected from a table. I hope that there's > something about as easy as the example I cited from MS SQL. In existing releases you need to create a named composite type (row type) and declare the function as returning that type. PG 8.1 will provide some syntactic sugar for this in the form of OUT parameters. There are examples in the docs ... regards, tom lane
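The composite-type approach described above looks roughly like this. This is a sketch with invented names (lookup_result, test_lookup) and placeholder values; only the general shape follows the docs.

```sql
-- A named row type gives the function's result arbitrary column names
CREATE TYPE lookup_result AS (
    result1 text,
    result2 text
);

CREATE OR REPLACE FUNCTION test_lookup(lookup char(50))
RETURNS lookup_result AS '
DECLARE
    r lookup_result;
BEGIN
    -- ... a bunch of code to do some lookup, and then ...
    r.result1 := ''first value'';
    r.result2 := ''second value'';
    RETURN r;
END;
' LANGUAGE 'plpgsql';

-- Selecting FROM the function yields named columns, much like a table
SELECT result1, result2 FROM test_lookup('key');
```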
Re: [GENERAL] ctid access is slow
On Tue, Aug 23, 2005 at 06:42:33PM -0700, Jeff Eckermann wrote:
> > I believe that's not necessarily true. If you select a tuple and its
> > ctid, and it's updated more than once with a vacuum in between, I
> > believe it could end up back in the same position, which would mean
> > the same ctid.
>
> True. But the probability of that happening would generally be low
> enough not to bother the designers of most applications.

Designers that don't care about their data, maybe. Here's the use case that was implied:

Application selects a bunch of data to present to the user to be edited. User edits data even though it's not locked in the database. Application gets the data and checks to see if it's changed. If not, *BLAM*, new changes are put into the database.

Now, if that check for changed data fails with a false negative, you just nuked data. A better solution is to use a combination of a timestamp and a sequence. Why both? Because it's possible for the clock to be set back (though this is something best avoided), and a sequence will eventually roll over. It's impossible to have a collision in this scenario unless you roll the clock way back AND reset the sequence (assuming you're using an integer sequence).

-- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com 512-569-9461
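A timestamp-plus-sequence variant of the same check might be sketched as follows; the table and column names here are invented, not from the thread.

```sql
CREATE TABLE customers (
    id       serial PRIMARY KEY,
    name     text NOT NULL,
    -- version counter drawn from a sequence, plus a timestamp;
    -- together they survive clock resets and sequence rollover
    row_ver  integer NOT NULL DEFAULT 0,
    modified timestamptz NOT NULL DEFAULT now()
);

-- Read id = 42 along with row_ver and modified, let the user edit,
-- then make the write conditional on both values being unchanged:
UPDATE customers
   SET name     = 'New Name',
       row_ver  = row_ver + 1,
       modified = now()
 WHERE id       = 42
   AND row_ver  = 7                                -- values read earlier;
   AND modified = '2005-08-23 18:42:33+00';        -- 0 rows = conflict
```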
Re: [GENERAL] ERROR: database is being accessed by other users
Hi, Michael. You are right, the script was trying to drop the database, and the error makes sense to me now. Thank you! On 8/24/05, Michael Fuhr <[EMAIL PROTECTED]> wrote: > On Wed, Aug 24, 2005 at 10:10:55AM +0800, wolverine my wrote: > > I got the following psql's error when executing a script, > > ERROR: database "test" is being accessed by other users > > What does the script do? Based on the error message I'd guess that > you're trying to drop or rename the database. > > > To solve this, I have to disconnect the database connection from the > > pgAdmin III. > > > > May I know how can I configure the database to allow multiple user access? > > Databases do allow multiple user access, but PostgreSQL won't allow > you to drop or rename a database if somebody else is using it. > > -- > Michael Fuhr
Re: [GENERAL] ecpg and C++
On Thu, Jul 28, 2005 at 05:07:04PM -0400, Peter L. Berghold wrote:
> If I wanted to embed SQL code in a C++ file is there a way to use ecpg
> to generate the proper C++ files?

Sorry for the late answer, but I have been on vacation and some business trips, so I'm just catching up on my mail backlog. What exactly do you want to get and do not? ecpg is supposed to be a C precompiler, but it should at least accept most C++ stuff as well. That of course means that the code is output as it was read, so it remains C++. The internal code produced remains C, of course, unless you implement a different output engine.

Michael
-- Michael Meskes Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED] Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!