Re: [GENERAL] clients and libs and versions (oh my)
On Tue, Aug 08, 2006 at 02:22:56PM -0700, Steve Crawford wrote:
> When version mismatches between the client and server are inevitable,
> what is less likely to get me into trouble - clients/libs newer than
> server or server newer than clients/libs?

Note that you can in theory install multiple versions of the client libraries. People have posted here on the list about setting up aliases to switch between versions:

$ pg7.4
$ psql
... runs psql version 7.4 ...
$ pg8.1
$ psql
... runs psql version 8.1 ...

That way, if version skew is bothering you, you can jump out and use the matching client version for the best results.

Have a nice day,
--
Martijn van Oosterhout   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to
> litigate.
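The alias trick Martijn mentions can be sketched as shell functions that put one version's bin directory first on PATH. The install prefixes below are assumptions, not from the thread; adjust them to wherever each version's client binaries actually live.

```shell
# Hypothetical install prefixes -- adjust to your own layout.
pg74() { export PATH="/usr/local/pgsql-7.4/bin:$PATH"; }
pg81() { export PATH="/usr/local/pgsql-8.1/bin:$PATH"; }

pg81   # from here on, `psql` resolves to the 8.1 binary first
```

Calling `pg74` later prepends the 7.4 directory instead, so whichever function ran last wins the PATH lookup.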
Re: [GENERAL] read only transaction, temporary tables
I should have added that I want to make further use of the temporary table after the COMMIT -- the rollback approach you propose makes it go away.

Carl Brune

On Tue, 8 Aug 2006, John DeSoi wrote:
> On Aug 8, 2006, at 1:25 PM, Carl R. Brune wrote:
>> I recently tried to do something like the following
>>
>>   BEGIN READONLY;
>>   ...
>>   CREATE TEMPORARY TABLE ABC AS SELECT ...
>>   ...
>>   COMMIT;
>>
>> and it failed because CREATE is not allowed within a read-only
>> transaction. The select is something long and complicated (pieced
>> together with php) and I'm just trying to be careful. What other
>> alternatives are there for accomplishing this? Preferably simple ones...
>
> How about:
>
>   BEGIN;
>   CREATE TEMPORARY TABLE ABC AS SELECT ...
>   ROLLBACK;
>
> John DeSoi, Ph.D.
> http://pgedit.com/
> Power Tools for PostgreSQL

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
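Since temporary tables are session-local and (by default) survive COMMIT, one alternative not spelled out in the thread is to build the table in its own short read-write transaction and then do the cautious work in a separate read-only transaction. A sketch (the `...` stand for the poster's long SELECT and later queries):

```sql
-- Build the temp table outside the read-only transaction;
-- it persists for the rest of the session.
BEGIN;
CREATE TEMPORARY TABLE abc AS SELECT ...;
COMMIT;

-- Now do the careful work with read-only protection.
BEGIN READ ONLY;
SELECT ... FROM abc;
COMMIT;
```

This keeps the read-only safety net for the complicated queries while still leaving the table usable after COMMIT.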
Re: [GENERAL] DROP TABLESPACE fails
On Tue, Aug 08, 2006 at 11:25:23AM -0500, Steve Peterson wrote:
> Note that if needed I can whack the database and start over very
> easily, if that's the most expedient way out of this.

That might be the easiest thing to do, but it might also be worth investigating what happened. It's possible that you've encountered a case where files should have been removed but weren't, in which case the developers might be interested in finding out why.

> >SELECT datname FROM pg_database WHERE oid = 16388;
> >
> > database 'postgres'

And that's the database you were connected to when making the following query? Just making sure.

> The query returns no rows with an IN clause of
>
>   ... WHERE c.relfilenode IN (17383, 17385, 17387, 17388);
>
> Note that during one of the copies the system became quiescent and I
> ended up stopping what I thought was the server process that had hung:
[...]
> 2006-08-07 16:47:56 LOG: server process (PID 5140) was terminated by signal 1

Maybe one of the developers can comment on whether this might have caused a problem.

--
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Accessing Database Data from C Function
On Tue, Aug 08, 2006 at 12:16:18PM -0700, [EMAIL PROTECTED] wrote:
> I'd like to write a C Function for Postgresql that can access data in
> the database, use that to conduct additional queries to find more
> information in the database and eventually return a result. I
> currently have the functionality I want implemented as php/C programs
> that first connect to the database, do the processing and then
> disconnect, but I want to make it what I think is called a "stored
> procedure" in other databases. It seems odd that I would write a C
> program to be included in the database that connects to the localhost
> - is that what I should do? Is that somehow more efficient than
> running it as a separate process?

Server-side functions written in C can use the Server Programming Interface (SPI) to query the database:

http://www.postgresql.org/docs/8.1/interactive/spi.html

Do you have a reason to use C instead of a higher-level language? Functions that are mostly queries are probably best done in SQL or PL/pgSQL, while text processing and some OS-level access (e.g., reading and writing files) can be done with PL/Perl, PL/Tcl, PL/Python, PL/Ruby, PL/R, etc. There's even a third-party PL/php if that's your preferred language:

http://projects.commandprompt.com/public/plphp

--
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
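To illustrate the point about query-heavy functions, a minimal SQL-language function might look like the following. The schema (an orders/line-items pair) is hypothetical, invented purely for illustration:

```sql
-- Hypothetical schema: total an order's line items without leaving
-- the server. The function body is just a query.
CREATE FUNCTION order_total(o_id integer) RETURNS numeric AS $$
    SELECT sum(price * quantity)
    FROM line_items
    WHERE order_id = $1;
$$ LANGUAGE sql STABLE;

SELECT order_total(42);
```

A function like this runs entirely inside the server, so there is no per-call connect/disconnect overhead as with the external php/C programs.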
Re: [GENERAL] clients and libs and versions (oh my)
Steve Crawford <[EMAIL PROTECTED]> writes:
> When version mismatches between the client and server are inevitable,
> what is less likely to get me into trouble - clients/libs newer than
> server or server newer than clients/libs?

For the most part I'd update the server first; the exception is that you'd better be running a pg_dump at least as new as the server. Also, psql's \d commands are pretty version-specific and frequently fail if the server is either up-rev or down-rev.

Whichever order you do it in (and most of the time it's probably not that critical), I'd try to avoid really serious gaps in versions. A 7.3 client against an 8.1 server, or vice versa, is likely to be a headache. We do try to maintain cross-version compatibility, but over more than a couple of releases the gotchas will accumulate.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Does DROP TABLE free up disk space immediately?
Gabor Siklos wrote:
> Does DROP TABLE free up disk space immediately?

It frees space at COMMIT.

--
Alvaro Herrera   http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] sizing of a server
Hi,

I would like to create a database server with Postgres for a web application. My DB size will be 20 GB. I will write once a day, with up to 10 to 15 simultaneous writes. How should I size my server (memory, CPU, ...)?

Regards,
gertrude
[GENERAL] Accessing Database Data from C Function
Hello,

I'd like to write a C Function for Postgresql that can access data in the database, use that to conduct additional queries to find more information in the database and eventually return a result. I currently have the functionality I want implemented as php/C programs that first connect to the database, do the processing and then disconnect, but I want to make it what I think is called a "stored procedure" in other databases. It seems odd that I would write a C program to be included in the database that connects to the localhost - is that what I should do? Is that somehow more efficient than running it as a separate process?

I've looked into C Functions but the examples I've found have talked about performing calculations on the arguments. Table function examples show how I might return more than one row or multi-columned rows, but I don't see how to actually access the data in the database.

I apologize if this question is answered somewhere. I suspect it is but I apparently don't know the terminology to look it up and have spent an hour conducting fruitless Google searches. A pointer to such a resource along with the topic I should check out would be more than helpful.

Thanks,
Jeff

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
[GENERAL] Does DROP TABLE free up disk space immediately?
Does DROP TABLE free up disk space immediately?

Thanks,
-Gabor
Re: [GENERAL] Dumping database using 8.1 or 7.1
> It was a little hassle though because I have to manually remove "\M"
> from the dump file. This is manageable. I am stuck though with one
> problem:
>
>   psql:si_t2:36: ERROR: missing data for column "remarks"
>   CONTEXT: COPY si_t2, line 145:
>
> Upon checking line 145, there are no weird characters. Upon checking
> the whole dump file, the db dump still contains sample data:
> "jonathan \" "

maybe data was: "jonathan \N" and that "\N" means a null value in COPY dialect...

--
regards,
Jaime Casanova

"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs and the universe trying to produce bigger and better idiots. So far, the universe is winning." Richard Cook

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] PostgreSQL on RAM Disk / tmpfs
On 8/8/06, Thomas F. O'Connell <[EMAIL PROTECTED]> wrote:
> On Aug 8, 2006, at 1:10 PM, Merlin Moncure wrote:
> > On 8/8/06, Thomas F. O'Connell <[EMAIL PROTECTED]> wrote:
> >> On Aug 3, 2006, at 1:26 PM, Merlin Moncure wrote:
> >> > if have super high write volumes, consider writing your insert
> >> > call in C. prepare your statement, and use the parameterized
> >> > version ExecPrepared(...).
> >>
> >> Can you point to a good example of this anywhere in the docs? I don't
> >> see ExecPrepared anywhere in the core documentation.
> >
> > well, it's actually PQexecPrepared()
> > http://www.postgresql.org/docs/8.1/interactive/libpq-exec.html
> >
> > do some tests and you should see a nice improvement over PQexec().
>
> Thanks! I remain curious, though: in the event that a RAM-disk-based
> architecture remains in place, do all traditional disk-based
> considerations go out the window? For instance, does trying to

if you are not syncing, the computer essentially is a giant ram disk. the operating system just pages ram to disk here and there to free up more ram for temporary demands. while kernel buffers are slower than local process memory, they are still awfully fast compared to disk drives. operating systems these days are pretty good about deciding what should and should not stay in ram, better than most people believe. it's *usually* a rookie sysadmin move to force things into memory this way. another common sysadmin misadventure is to turn off the o/s swap file.

> cluster same-table statements together in a transaction in an effort
> to reduce disk activity make any difference?

there is still a small savings due to optimizing transaction cleanup, although you have to handle failures, which can be a downside.

> And is the overall strategy of attempting to keep distance between
> checkpoints somewhat high (especially since the need for checkpointing
> overall is reduced) still a good basis?

depends.
with fsync=off, usually the only time you are worried about checkpoints is during bulk load, and i think in these cases you will want to increase wal segments and leave the bgwriter alone.

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: Practical maximums (was Re: [GENERAL] PostgreSQL theoretical
On Tue, 2006-08-08 at 13:48 -0500, Ron Johnson wrote:
> >> I'm imagining something similar to the following:
> >> mydb=# select pg_start_backup('mybackup');
> >>
> >> $ tar zcf - /path/to/pgdata | mux.script /dev/st0 /dev/st1 /dev/st2
> >> $ tar zcf - /path/to/tablespace1 | mux.script /dev/st3 /dev/st4 /dev/st5
> >> $ tar zcf - /path/to/tablespace2 | mux.script /dev/st6 /dev/st7 /dev/st8
> >
> > Oops, let me clarify. Those tar commands are independent and can be run
> > in parallel, which was the point I was trying to make. So, just run them
> > in the background in whatever way is most convenient.
>
> Open database or closed database?

See http://www.postgresql.org/docs/8.1/static/backup-online.html

In section 23.3.2, it suggests that you _don't_ need the same connection to do pg_start_backup('mybackup') as to do pg_stop_backup(). So, it appears you really need no database connections at all open during the process, but clients can freely connect if you want with no interruption. This backup procedure will not affect your applications or stop your database.

All you need to do is:
(1) connect, issue pg_start_backup('mybackup'), and disconnect
(2) back up the data dir + all tablespaces. You can do this in parallel if you want. You don't have to make a filesystem snapshot to do this; the restoration will fix any internal inconsistencies as long as you have archived the WAL files according to the above docs. So long as the backup makes it onto permanent media, and so have the WAL files, it should be fine.
(3) connect, issue pg_stop_backup(), and disconnect

I recommend reading the above link carefully and doing a few tests to make sure it works as you expect. But you certainly can do online backups (hot backups) with PITR, and the restoration as well. With PITR, the large portion of the data is moved using standard filesystem interaction, so you can do it in parallel with no problems.
Hope this helps, Jeff Davis ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Why is default value not working on insert?
Chris Hoover wrote:
> I have the following table:
>
> CREATE TABLE code_source (
>     csn_src int4 NOT NULL,
>     csn_type varchar(8) NOT NULL,
>     cs_code varchar(15) NOT NULL,
>     cs_desc_short varchar(30),
>     cs_desc_long text,
>     cs_remarks varchar(20),
>     cs_work_flag char(1),
>     cs_status char(1),
>     cs_manual_key bool NOT NULL DEFAULT false,
>     cs_create timestamp NOT NULL DEFAULT now(),
>     cs_live date NOT NULL,
>     cs_last_mod timestamp,
>     cs_expire date,
>     cs_last_mod_user varchar(12),
>     CONSTRAINT code_source_pkey PRIMARY KEY (csn_src, csn_type, cs_code),
>     CONSTRAINT code_source_fkey FOREIGN KEY (csn_src, csn_type)
>         REFERENCES code_source_name (csn_src, csn_type) MATCH SIMPLE
>         ON UPDATE CASCADE ON DELETE RESTRICT
> ) WITHOUT OIDS;
>
> As you can see, cs_create is set to not null with a default value of
> now(). However, when I run the following insert, it errors stating
> cs_create can not be null. Why is the default not working?
>
> insert into code_source (csn_src, csn_type, cs_code, cs_desc_short,
>     cs_desc_long, cs_remarks, cs_work_flag, cs_status, cs_manual_key,
>     cs_create, cs_live, cs_last_mod, cs_expire, cs_last_mod_user)
> values ('132', 'CODE', '49', 'Some Code', null, 'NEWCODE', null, null,
>     false, to_date(null,'mmdd'), to_date('19000101','mmdd'),
>     to_date('20040318','mmdd'), to_date('1231','mmdd'), 'MSBIUSERID');
>
> ERROR: null value in column "cs_create" violates not-null constraint
>
> The reason for the null being passed to to_date is that this is one of
> almost 90k lines of data we are trying to load, and the script was
> built to generate this code. Since to_date(null,'mmdd') returns a
> null, why is the default not working?
>
> Chris
>
> PG 8.1.3
> RH AS 4

Defaults are set when you do not specify a value, not when you try to set a value that violates a constraint (which is what NOT NULL is...). You need to have the script that generates this insert query leave that field out.

--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] PostgreSQL on RAM Disk / tmpfs
On Aug 8, 2006, at 1:10 PM, Merlin Moncure wrote:
> On 8/8/06, Thomas F. O'Connell <[EMAIL PROTECTED]> wrote:
> > On Aug 3, 2006, at 1:26 PM, Merlin Moncure wrote:
> > > if have super high write volumes, consider writing your insert call
> > > in C. prepare your statement, and use the parameterized
> > > version ExecPrepared(...).
> >
> > Can you point to a good example of this anywhere in the docs? I don't
> > see ExecPrepared anywhere in the core documentation.
>
> well, it's actually PQexecPrepared()
> http://www.postgresql.org/docs/8.1/interactive/libpq-exec.html
>
> do some tests and you should see a nice improvement over PQexec().

Thanks! I remain curious, though: in the event that a RAM-disk-based architecture remains in place, do all traditional disk-based considerations go out the window? For instance, does trying to cluster same-table statements together in a transaction in an effort to reduce disk activity make any difference? And is the overall strategy of attempting to keep distance between checkpoints somewhat high (especially since the need for checkpointing overall is reduced) still a good basis?

--
Thomas F. O'Connell
Sitening, LLC
http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] DB connectivity from a client machine
On my database server, I'm using a packaged version of postgres. I installed it using "yum install postgres", and this in turn searches for the rpm. So, it's a packaged version of postgresql.

Now as far as my client is concerned, which has the middle tier component written in C, I don't have any postgres there. Do you mean to say that I need to install the client version of postgres on the m/c hosting the middle tier component that is trying to connect to the postgres server?

~Jas

On 8/8/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Tue, 2006-08-08 at 16:25, Jasbinder Bali wrote:
> > Hi,
> > I have a middle tier application in C on one machine which doesn't
> > have postgres installed.
> > I have a machine that hosts my postgres database.
> > My middle tier application wants to connect to the Db server.
> >
> > Is it mandatory for the m/c that hosts the middle tier application to
> > have postgres installed.
> > Why I'm asking this is (though generally m/c hosting the middle tier
> > component can connect to the m/c hosting the Db without any DB client
> > installed) because i tried to run ecpg in the middle tier m/c and it
> > doesn't work. Well it would definitely not work coz ecpg comes with
> > postgres.
>
> Are you running a packaged version of postgresql (rpm / deb???) or
> installing from source?
>
> If it's packaged, you should be able to get away with just the
> postgresql and -client packages, no -server etc...
>
> If it's source compiled, look in the INSTALL file. From the 7.4.13
> INSTALL file we find this:
>
> Client-only installation: If you want to install only the client
> applications and interface libraries, then you can use these commands:
>
>   gmake -C src/bin install
>   gmake -C src/include install
>   gmake -C src/interfaces install
>   gmake -C doc install
Re: [GENERAL] DB connectivity from a client machine
On Tue, 2006-08-08 at 16:25, Jasbinder Bali wrote:
> Hi,
> I have a middle tier application in C on one machine which doesn't
> have postgres installed.
> I have a machine that hosts my postgres database.
> My middle tier application wants to connect to the Db server.
>
> Is it mandatory for the m/c that hosts the middle tier application to
> have postgres installed.
> Why I'm asking this is (though generally m/c hosting the middle tier
> component can connect to the m/c hosting the Db without any DB client
> installed) because i tried to run ecpg in the middle tier m/c and it
> doesn't work. Well it would definitely not work coz ecpg comes with
> postgres.

Are you running a packaged version of postgresql (rpm / deb???) or installing from source?

If it's packaged, you should be able to get away with just the postgresql and -client packages, no -server etc...

If it's source compiled, look in the INSTALL file. From the 7.4.13 INSTALL file we find this:

Client-only installation: If you want to install only the client applications and interface libraries, then you can use these commands:

  gmake -C src/bin install
  gmake -C src/include install
  gmake -C src/interfaces install
  gmake -C doc install

---(end of broadcast)---
TIP 6: explain analyze is your friend
[GENERAL] DB connectivity from a client machine
Hi,

I have a middle tier application in C on one machine which doesn't have postgres installed. I have a machine that hosts my postgres database. My middle tier application wants to connect to the Db server.

Is it mandatory for the m/c that hosts the middle tier application to have postgres installed? Why I'm asking this is (though generally the m/c hosting the middle tier component can connect to the m/c hosting the Db without any DB client installed) because I tried to run ecpg on the middle tier m/c and it doesn't work. Well, it would definitely not work because ecpg comes with postgres.

Can anyone suggest a workaround?

Thanks,
Jas
[GENERAL] clients and libs and versions (oh my)
When version mismatches between the client and server are inevitable, what is less likely to get me into trouble - clients/libs newer than server or server newer than clients/libs? We have dozens of client machines and multiple PG servers in a 24x7x365 operation and I'd like to deploy upgrades in the least risky way possible. Cheers, Steve ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Why is default value not working on insert?
On Tue, Aug 08, 2006 at 04:41:04PM -0400, Chris Hoover wrote:
> I have the following table:
> The reason for the null being passed to to_date is that this is one of
> almost 90k lines of data we are trying to load, and the script was
> built to generate this code. Since to_date(null,'mmdd') returns a
> null, why is the default not working?

If you try to insert a NULL into a not null column, you get an error. To trigger the default you either need to omit the column from the insert statement, or use the DEFAULT keyword.

Have a nice day,
--
Martijn van Oosterhout   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to
> litigate.
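Both fixes Martijn mentions can be sketched against the table from the original post. The column list here is abbreviated for illustration (it omits the other nullable columns of code_source):

```sql
-- Option 1: omit cs_create entirely so its column default applies.
INSERT INTO code_source (csn_src, csn_type, cs_code, cs_live)
VALUES ('132', 'CODE', '49', DATE '1900-01-01');

-- Option 2: keep the column in the list but pass the DEFAULT keyword.
INSERT INTO code_source (csn_src, csn_type, cs_code, cs_create, cs_live)
VALUES ('132', 'CODE', '49', DEFAULT, DATE '1900-01-01');
```

Either way, the server substitutes now() for cs_create instead of receiving an explicit NULL.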
Re: [GENERAL] Why is default value not working on insert?
2006/8/8, Chris Hoover <[EMAIL PROTECTED]>:
> (...) The reason for the null being passed to to_date is that this is
> one of almost 90k lines of data we are trying to load, and the script
> was built to generate this code. Since to_date(null,'mmdd') returns a
> null, why is the default not working?

Because you're trying to explicitly insert a NULL into a column which is specified as NOT NULL. (If you can't change the script to output DEFAULT or the explicit default value, the best workaround would be to create a trigger which converts any attempt to insert a NULL value into that column to the intended default value.)

Ian Barwick

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org
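The trigger workaround Ian describes could be sketched as follows. The function and trigger names are made up for illustration; a BEFORE ROW trigger fires before the NOT NULL constraint is checked, which is what makes this work:

```sql
-- Sketch: replace an explicit NULL in cs_create with the intended
-- default before the NOT NULL check runs.
CREATE FUNCTION cs_create_fill_default() RETURNS trigger AS $$
BEGIN
    IF NEW.cs_create IS NULL THEN
        NEW.cs_create := now();
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_cs_create_fill_default
    BEFORE INSERT ON code_source
    FOR EACH ROW EXECUTE PROCEDURE cs_create_fill_default();
```

With this in place the generated INSERT statements can keep passing to_date(null, ...) for cs_create without tripping the constraint.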
[GENERAL] Why is default value not working on insert?
I have the following table:

CREATE TABLE code_source (
    csn_src int4 NOT NULL,
    csn_type varchar(8) NOT NULL,
    cs_code varchar(15) NOT NULL,
    cs_desc_short varchar(30),
    cs_desc_long text,
    cs_remarks varchar(20),
    cs_work_flag char(1),
    cs_status char(1),
    cs_manual_key bool NOT NULL DEFAULT false,
    cs_create timestamp NOT NULL DEFAULT now(),
    cs_live date NOT NULL,
    cs_last_mod timestamp,
    cs_expire date,
    cs_last_mod_user varchar(12),
    CONSTRAINT code_source_pkey PRIMARY KEY (csn_src, csn_type, cs_code),
    CONSTRAINT code_source_fkey FOREIGN KEY (csn_src, csn_type)
        REFERENCES code_source_name (csn_src, csn_type) MATCH SIMPLE
        ON UPDATE CASCADE ON DELETE RESTRICT
) WITHOUT OIDS;

As you can see, cs_create is set to not null with a default value of now(). However, when I run the following insert, it errors stating cs_create can not be null. Why is the default not working?

insert into code_source (csn_src, csn_type, cs_code, cs_desc_short,
    cs_desc_long, cs_remarks, cs_work_flag, cs_status, cs_manual_key,
    cs_create, cs_live, cs_last_mod, cs_expire, cs_last_mod_user)
values ('132', 'CODE', '49', 'Some Code', null, 'NEWCODE', null, null,
    false, to_date(null,'mmdd'), to_date('19000101','mmdd'),
    to_date('20040318','mmdd'), to_date('1231','mmdd'), 'MSBIUSERID');

ERROR: null value in column "cs_create" violates not-null constraint

The reason for the null being passed to to_date is that this is one of almost 90k lines of data we are trying to load, and the script was built to generate this code. Since to_date(null,'mmdd') returns a null, why is the default not working?

Chris

PG 8.1.3
RH AS 4
Re: [GENERAL] read only transaction, temporary tables
On Aug 8, 2006, at 1:25 PM, Carl R. Brune wrote:
> I recently tried to do something like the following
>
>   BEGIN READONLY;
>   ...
>   CREATE TEMPORARY TABLE ABC AS SELECT ...
>   ...
>   COMMIT;
>
> and it failed because CREATE is not allowed within a read-only
> transaction. The select is something long and complicated (pieced
> together with php) and I'm just trying to be careful. What other
> alternatives are there for accomplishing this? Preferably simple ones...

How about:

  BEGIN;
  ...
  CREATE TEMPORARY TABLE ABC AS SELECT ...
  ...
  ROLLBACK;

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] DISTINCT to get distinct *substrings*?
Christoph Pingel <[EMAIL PROTECTED]> writes:
> Am 08.08.2006 um 19:49 schrieb Nikolay Samokhvalov:
>> don't use "DISTINCT ON" at all, it's evil :-) (why?
>> http://chernowiki.ru/index.php?node=38#A13)
> Thanks for the good advice! From reading this, it seems to be a
> *really* bad thing. And I didn't get it from the official
> documentation. :-)

That page is complaining about DISTINCT ON as it was defined in 1999. It's a lot harder to shoot yourself in the foot now:

regression=# select distinct on (ten) hundred from tenk1 order by unique2;
ERROR:  SELECT DISTINCT ON expressions must match initial ORDER BY expressions

I don't deny that it's nonstandard and pretty ugly, but sometimes it's just really hard to solve a problem any other way.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
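Applied to the URL problem earlier in this thread, the now-required form (DISTINCT ON expressions leading the ORDER BY) would look like this sketch:

```sql
-- One full URL per site root: the DISTINCT ON expression must come
-- first in ORDER BY; the second sort key decides which row survives
-- for each root.
SELECT DISTINCT ON (substring(attribute from '^http://[^/]*/')) attribute
FROM pg_atp
WHERE attribute LIKE 'http://%'
ORDER BY substring(attribute from '^http://[^/]*/'), attribute;
```

Without the matching ORDER BY prefix, the server now raises the error Tom shows above instead of silently returning an arbitrary row.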
Re: [GENERAL] restoring a backup, incompatible with server
Joe Lester <[EMAIL PROTECTED]> writes: > Development is Mac OS 10.4.0 (PPC). Production is Mac OS 10.4.7 > (Intel). Is that the kicker... PPC vs Intel? You don't get to move physical database files between different machine architectures... usually we see people griping about 32-bit vs 64-bit, but big to little endian is even less likely to work. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] DISTINCT to get distinct *substrings*?
Thanks for the input, I think I get this now. In my case, the query

SELECT DISTINCT ON (substring(attribute from '^http://[^/]*/')) attribute from pg_atp where attribute like 'http://%'

doesn't get me just the root of the URL, but the whole URL - but only for the first row for each individual root. While

SELECT DISTINCT substring(attribute from '^http://[^/]*/') from pg_atp where attribute like 'http://%'

does what I first intended - get a list of all (distinct) root URLs.

Wieder was gelernt. (Learnt something again. :-)

best regards,
Christoph

On 08.08.2006 at 20:36, Ben wrote:
> DISTINCT ON is extremely useful when you know what you're doing. It's
> postgres' version of oracle's first_value analytical function, and when
> you need it, nothing else really suffices.
>
> On Tue, 8 Aug 2006, Nikolay Samokhvalov wrote:
>> SELECT DISTINCT substring(attribute from '^http://[^/]*/') from pg_atp
>> where attribute like 'http://%';
>>
>> w/o DISTINCT there should be duplicates (if any)
>>
>> don't use "DISTINCT ON" at all, it's evil :-) (why?
>> http://chernowiki.ru/index.php?node=38#A13)
>>
>> On 8/8/06, Christoph Pingel web.de> wrote:
>>> Hello to the list,
>>> here's an SQL question, I hope it's not off topic. From a list of URLs I
>>> want to get only the distinct values of the *web sites* these URLs belong
>>> to, that is everything before and including the 3rd slash, and I think this
>>> should be possible within the DB. I would like to say something like
>>>
>>> SELECT substring(attribute from '^http://[^/]*/') from pg_atp where
>>> attribute like 'http://%'
>>>
>>> (which works) but get only the distinct values. SELECT DISTINCT ON
>>> substring.. doesn't work. Probably I haven't understood the semantics of
>>> the DISTINCT keyword. Can anybody help?
>>> thanks in advance
>>> Christoph

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] DISTINCT to get distinct *substrings*?
On 08.08.2006 at 19:49, Nikolay Samokhvalov wrote:
> SELECT DISTINCT substring(attribute from '^http://[^/]*/') from pg_atp
> where attribute like 'http://%';
>
> w/o DISTINCT there should be duplicates (if any)
>
> don't use "DISTINCT ON" at all, it's evil :-) (why?
> http://chernowiki.ru/index.php?node=38#A13)

Thanks for the good advice! From reading this, it seems to be a *really* bad thing. And I didn't get it from the official documentation. :-)

ok, SELECT DISTINCT works, and it seems that the results are ordered (by the substring) - is this the default behaviour or just by chance (and probably version dependent)?

best regards,
Christoph

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: Practical maximums (was Re: [GENERAL] PostgreSQL theoretical
Jeff Davis wrote:
> On Tue, 2006-08-08 at 09:47 -0700, Jeff Davis wrote:
>> I'm imagining something similar to the following:
>> mydb=# select pg_start_backup('mybackup');
>>
>> $ tar zcf - /path/to/pgdata | mux.script /dev/st0 /dev/st1 /dev/st2
>> $ tar zcf - /path/to/tablespace1 | mux.script /dev/st3 /dev/st4 /dev/st5
>> $ tar zcf - /path/to/tablespace2 | mux.script /dev/st6 /dev/st7 /dev/st8
>
> Oops, let me clarify. Those tar commands are independent and can be run
> in parallel, which was the point I was trying to make. So, just run them
> in the background in whatever way is most convenient.

Open database or closed database?

--
Ron Johnson, Jr.
Jefferson LA USA

Is "common sense" really valid? For example, it is "common sense" to white-power racists that whites are superior to blacks, and that those with brown skins are mud people. However, that "common sense" is obviously wrong.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] restoring a backup, incompatible with server
On Tue, 2006-08-08 at 13:06, Joe Lester wrote: > Hi, > > Development is Mac OS 10.4.0 (PPC). Production is Mac OS 10.4.7 > (Intel). Is that the kicker... PPC vs Intel? Yep. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] DISTINCT to get distinct *substrings*?
DISTINCT ON is extremely useful when you know what you're doing. It's postgres' version of oracle's first_value analytical function, and when you need it, nothing else really suffices.

On Tue, 8 Aug 2006, Nikolay Samokhvalov wrote:
> SELECT DISTINCT substring(attribute from '^http://[^/]*/') from pg_atp
> where attribute like 'http://%';
>
> w/o DISTINCT there should be duplicates (if any)
>
> don't use "DISTINCT ON" at all, it's evil :-) (why?
> http://chernowiki.ru/index.php?node=38#A13)
>
> On 8/8/06, Christoph Pingel <[EMAIL PROTECTED]> wrote:
>> Hello to the list,
>> here's an SQL question, I hope it's not off topic. From a list of URLs I
>> want to get only the distinct values of the *web sites* these URLs belong
>> to, that is everything before and including the 3rd slash, and I think
>> this should be possible within the DB. I would like to say something like
>>
>> SELECT substring(attribute from '^http://[^/]*/') from pg_atp where
>> attribute like 'http://%'
>>
>> (which works) but get only the distinct values. SELECT DISTINCT ON
>> substring.. doesn't work. Probably I haven't understood the semantics of
>> the DISTINCT keyword. Can anybody help?
>> thanks in advance
>> Christoph

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
[GENERAL] read only transaction, temporary tables
I recently tried to do something like the following:

BEGIN READ ONLY;
...
CREATE TEMPORARY TABLE ABC AS SELECT ...
...
COMMIT;

and it failed because CREATE is not allowed within a read-only
transaction. The SELECT is something long and complicated (pieced
together with PHP) and I'm just trying to be careful. What other
alternatives are there for accomplishing this? Preferably simple ones...

Thanks,

Carl Brune
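One possible approach, sketched here as an untested suggestion: PostgreSQL lets a transaction tighten its access mode from read-write to read-only mid-transaction (the reverse is refused once a query has run), so the CREATE can go first and the protective read-only mode can follow it. Worth verifying on your 8.x server before relying on it:

```sql
-- Sketch (assumes SET TRANSACTION READ ONLY is accepted after a
-- statement has already run, which current PostgreSQL allows):
BEGIN;
CREATE TEMPORARY TABLE abc AS SELECT ...;  -- the long, pieced-together SELECT
SET TRANSACTION READ ONLY;  -- everything from here on is read-only
-- ... careful, read-only work against abc ...
COMMIT;
-- abc is still available here, unlike with the ROLLBACK approach.
```

Unlike wrapping the whole thing in BEGIN READ ONLY, this protects only the statements after the CREATE, which is exactly the part being pieced together carefully.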
Re: [GENERAL] running PostGres without installing it
On 8/6/06, jesus martinez <[EMAIL PROTECTED]> wrote:
> Merlin, first of all, thanks for taking time to answer me.
>
> - where do i have to find the tutorial you said? do you have the URL?

http://pginstaller.projects.postgresql.org/silent.html

> - i actually didn't try the silent mode, because the regular installers
> i know don't always publish their command-line options... where can i
> get them?
>
> finally, is it too much to ask if you could send me the script file of
> your installer? (if you can't i will understand)

don't work at that company anymore, sorry =)

merlin
Re: [GENERAL] PostgreSQL on RAM Disk / tmpfs
On 8/8/06, Thomas F. O'Connell <[EMAIL PROTECTED]> wrote:
> On Aug 3, 2006, at 1:26 PM, Merlin Moncure wrote:
>> if you have super high write volumes, consider writing your insert
>> call in C. prepare your statement, and use the parameterized version
>> ExecPrepared(...).
>
> Can you point to a good example of this anywhere in the docs? I don't
> see ExecPrepared anywhere in the core documentation.

well, it's actually PQexecPrepared():

http://www.postgresql.org/docs/8.1/interactive/libpq-exec.html

do some tests and you should see a nice improvement over PQexec().

regards,
merlin
Re: [GENERAL] restoring a backup, incompatible with server
Hi,

Development is Mac OS 10.4.0 (PPC). Production is Mac OS 10.4.7 (Intel).
Is that the kicker... PPC vs Intel?

On Aug 8, 2006, at 1:46 PM, Talha Khan wrote:
> Hey Joe!!
>
> Which OS are you running on development server and production server??
>
> regards
> Talha Khan
>
> On 8/8/06, Joe Lester <[EMAIL PROTECTED]> wrote:
>> I'm having a little trouble restoring a backup (from the production
>> server to our development server).
>>
>> FATAL: database files are incompatible with server
>> DETAIL: The database cluster was initialized with PG_CONTROL_VERSION
>> 738394112, but the server was compiled with PG_CONTROL_VERSION 812.
>> HINT: It looks like you need to initdb.
Re: [GENERAL] PostgreSQL on RAM Disk / tmpfs
On Aug 3, 2006, at 1:26 PM, Merlin Moncure wrote:
> On 8/2/06, Thomas F. O'Connell <[EMAIL PROTECTED]> wrote:
>> I'm working on a postgres instance (8.1.2 running on Solaris 10) where
>> the data directory (including WAL) is being mounted on tmpfs. Based on
>> this, and with knowledge that fsync is disabled, I'm operating under
>> the assumption that recoverability is not a precondition for optimized
>> performance. With that assumption, I have several questions, some
>> performance-related, others internals-related:
>
> to be honest, I think the best approach is to simply write to the
> traditional filesystem and leave fsync off. writing to a ramdisk might
> be a bit faster, but you deprive the server memory from doing other
> things like caching and sorting. this might be more true for some o/s
> than others though. i'm just curious, what led you to do a ramdisk
> implementation (most people who ask questions about ramdisk have no
> idea what they are talking about, although you seem to).

That was how I found it. :) I think, though, that it was the result of
benchmarking a variety of on-disk RAID configurations with an eye toward
ever increasing write throughput.

>> 4. Considering that recoverability is not a precondition, is there an
>> easy patch that could be applied to the 8.1.x series from 8.1.4 on
>> that would allow disabling full_page_writes? For a database in RAM
>> with high write volume, is this setting even likely to make a
>> difference?
>
> I would suggest pulling 8.2dev (shortly beta) if you want this and
> experiment. it is perfectly stable. looking at the todo list, 8.2 also
> gets the multiple insert syntax, which is nice.
>
> if you have super high write volumes, consider writing your insert
> call in C. prepare your statement, and use the parameterized version
> ExecPrepared(...).

Can you point to a good example of this anywhere in the docs? I don't
see ExecPrepared anywhere in the core documentation.

--
Thomas F. O'Connell
Sitening, LLC
http://www.sitening.com/
3004B Poston Avenue
Nashville, TN 37203-1314
615-469-5150 x802
615-469-5151 (fax)
Re: [GENERAL] Restoring database from old DATA folder
> I tried to modify the parameters of the .CONF file when the pgsql-8.1
> service was running. The next time I started, it displayed an error:
> "The service did not listen to local host" (something like this).

If you were to do a

  su - postgres -c "pg_ctl -D <datadir> start"

what error information does it show?

Steve
Re: [GENERAL] Restoring database from old DATA folder
> Was the old postgres stopped when you created the copy?
> Was the new postgres stopped when you put the copy in?
> Did you restore correct permissions when you put the copy in?
> Is the new install the same version of postgres as the old one?
> What are the error messages?

I tried to modify the parameters of the .CONF file when the pgsql-8.1
service was running. The next time I started, it displayed an error:
"The service did not listen to local host" (something like this). So I
copied the old DATA folder and un-installed PostgreSQL. I again
re-installed it and replaced the new DATA folder with the old one. Both
previous and new are the same versions and the same settings were used
during installation.
Re: [GENERAL] DISTINCT to get distinct *substrings*?
SELECT DISTINCT substring(attribute from '^http://[^/]*/')
FROM pg_atp
WHERE attribute LIKE 'http://%';

w/o DISTINCT there should be duplicates (if any)

don't use "DISTINCT ON" at all, it's evil :-)
(why? http://chernowiki.ru/index.php?node=38#A13)

On 8/8/06, Christoph Pingel <[EMAIL PROTECTED]> wrote:
> Hello to the list,
>
> here's an SQL question, I hope it's not off topic. From a list of URLs
> I want to get only the distinct values of the *web sites* these URLs
> belong to, that is everything before and including the 3rd slash, and
> I think this should be possible within the DB. I would like to say
> something like
>
> SELECT substring(attribute from '^http://[^/]*/') from pg_atp where
> attribute like 'http://%'
>
> (which works) but get only the distinct values. SELECT DISTINCT ON
> substring.. doesn't work. Probably I haven't understood the semantics
> of the DISTINCT keyword. Can anybody help?
>
> thanks in advance
> Christoph

--
Best regards,
Nikolay
[GENERAL] DISTINCT to get distinct *substrings*?
Hello to the list,

here's an SQL question, I hope it's not off topic. From a list of URLs I
want to get only the distinct values of the *web sites* these URLs
belong to, that is everything before and including the 3rd slash, and I
think this should be possible within the DB. I would like to say
something like

SELECT substring(attribute from '^http://[^/]*/') from pg_atp where
attribute like 'http://%'

(which works) but get only the distinct values. SELECT DISTINCT ON
substring.. doesn't work. Probably I haven't understood the semantics of
the DISTINCT keyword. Can anybody help?

thanks in advance
Christoph
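For what it's worth, the `SELECT DISTINCT ON substring..` attempt likely failed on syntax alone: DISTINCT ON takes a parenthesized expression list. A sketch using the table and column names from the question:

```sql
-- Plain DISTINCT over the computed substring does what the question
-- asks for -- one row per distinct site prefix:
SELECT DISTINCT substring(attribute from '^http://[^/]*/')
FROM pg_atp
WHERE attribute LIKE 'http://%';

-- DISTINCT ON needs parentheses around its expression(s), and it
-- keeps one whole row per distinct key -- overkill here, where only
-- the key itself is wanted:
SELECT DISTINCT ON (substring(attribute from '^http://[^/]*/'))
       attribute
FROM pg_atp
WHERE attribute LIKE 'http://%';
```

The first form is the simpler fit for this question; DISTINCT ON earns its keep when other columns of the chosen row are needed too.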
[GENERAL] restoring a backup, incompatible with server
I'm having a little trouble restoring a backup (from the production
server to our development server).

I recently updated both production and development servers to postgres
8.1.4 from 8.0.x. When I updated production, I did a fresh initdb and
imported from a previous dump. This server is running well.

Then I did my normal restore-from-backup routine... I copied the $PGDATA
folder on production over to development (making sure that the necessary
log file was copied over as well into the pg_xlog folder). However, when
I tried to start the development server I got this message:

FATAL: database files are incompatible with server
DETAIL: The database cluster was initialized with PG_CONTROL_VERSION
738394112, but the server was compiled with PG_CONTROL_VERSION 812.
HINT: It looks like you need to initdb.

I'm a little confused since the $PGDATA folder from production was a
fresh initdb on 8.1.4... and I'm copying it to the development server,
which is also 8.1.4.

Any advice would be appreciated. Thanks.
Re: Practical maximums (was Re: [GENERAL] PostgreSQL theoretical
On Tue, 2006-08-08 at 09:47 -0700, Jeff Davis wrote:
> I'm imagining something similar to the following:
> mydb=# select pg_start_backup('mybackup');
>
> $ tar zcf - /path/to/pgdata | mux.script /dev/st0 /dev/st1 /dev/st2
> $ tar zcf - /path/to/tablespace1 | mux.script /dev/st3 /dev/st4 /dev/st5
> $ tar zcf - /path/to/tablespace2 | mux.script /dev/st6 /dev/st7 /dev/st8

Oops, let me clarify. Those tar commands are independent and can be run
in parallel, which was the point I was trying to make. So, just run them
in the background in whatever way is most convenient.

Regards,
Jeff Davis
Re: Practical maximums (was Re: [GENERAL] PostgreSQL theoretical
On Mon, 2006-08-07 at 21:10 -0500, Ron Johnson wrote:
>> What situation are you worried about here exactly?
>
> Backing up a large database in a limited amount of time.
>
> If the DB is large enough, and the time is short enough, then the
> single-reader pg_dump can not read the data fast enough (especially
> if other operations are pounding the disk system) to meet the time
> limit.
>
> Thus, the need (some times) for multiple readers.

Oh, I see. You're not only worried about write speed (to tape), but also
the ability of pg_dump to read from all your disk spindles at once to
get a fast enough read speed.

If you use PITR, then during the backup you can copy the data directory
in any order that you want. It should be pretty easy to get it to copy
from multiple streams that way. You can use tar to create a single
stream out of each tablespace, and then multiplex the stream to multiple
tapes so that it can write quickly enough.

It's not as easy to create multiple reading streams using pg_dump,
because each transaction can have at most one COPY going at a time. You
might be able to hack up something, but I think PITR is the way to go.

> But, you say: do PITR and weekly full backups. Ok. Say you do
> Saturday night backups and nightly PITR backups. And the disk
> system barfs hard on Friday. If the Tuesday night PITR tape has a
> parity error, you're hosed and have lost 3 days of data.

By modifying the scripts you use to multiplex the data, you can employ
some redundancy. I don't think it would be hard to include duplicate
blocks or parity blocks in the multiplex script. By creating multiple
reading streams, one for each tablespace, and then writing each stream
to multiple tapes (with parity blocks), you should be able to back up at
the speed your hardware allows, and restore at the speed your hardware
allows. Furthermore, with PITR you can do incremental backups
continuously and wait 'til the weekend to do a full backup.

I'm imagining something similar to the following:

mydb=# select pg_start_backup('mybackup');

$ tar zcf - /path/to/pgdata | mux.script /dev/st0 /dev/st1 /dev/st2
$ tar zcf - /path/to/tablespace1 | mux.script /dev/st3 /dev/st4 /dev/st5
$ tar zcf - /path/to/tablespace2 | mux.script /dev/st6 /dev/st7 /dev/st8

mydb=# select pg_stop_backup(); -- in same connection as pg_start_backup

Where mux.script can write to three tapes and use parity blocks. Adjust
based on the number of tapes you actually have.

Hope this helps,
Jeff Davis
Re: [GENERAL] DROP TABLESPACE fails
At 07:06 PM 8/7/2006, Michael Fuhr wrote:
> On Mon, Aug 07, 2006 at 05:36:49PM -0500, Steve Peterson wrote:
>> I'm doing some tests with version 8.1.4 on Windows XP and am having
>> trouble recovering some disk space.
>>
>> The tests involve loading some largeish CSV files in via COPY. One
>> of the COPY commands failed and I'm trying to recover the disk space
>> without having to completely reinitialize things.
>
> You might be able to recover disk space by running VACUUM FULL
> against the table you were loading. Have you tried that? Are you
> running VACUUM (with or without FULL) regularly?

It's a new install as of yesterday; some DDL and the COPY operations are
the only things that have been done to it. I've done a VACUUM and a
VACUUM FULL on it with no effect.

Note that if needed I can whack the database and start over very easily,
if that's the most expedient way out of this.

>> DROP TABLESPACE x returns 'tablespace "x" is not empty', but if I
>> examine the 'referenced by' tab in pgAdmin nothing is listed. The
>> tablespace directory contains several very large files.
>>
>> Can these files be deleted safely with the server shut down?
>
> I'd recommend investigating what the files are before deleting them.
> Have you queried the system catalogs directly to see if you can
> identify the files?
>
> In the tablespace directory should be a file named PG_VERSION and
> zero or more numbered subdirectories. Those numbers correspond to
> database oids, so make sure you're connected to the right database.
> For example, if a subdirectory is named 16388 then you can find out
> what database it is with the following query:
>
> SELECT datname FROM pg_database WHERE oid = 16388;

database 'postgres'

> The large files are probably within a database's subdirectory and
> their names are probably numbers.
> For example, if under the tablespace's 16388 subdirectory are files
> named 706712, 706715, and 706717, then you can find out what they are
> by connecting to the database identified above and querying pg_class:
>
> SELECT t.spcname, n.nspname, c.relname, c.relfilenode, c.relkind
> FROM pg_class AS c
> LEFT JOIN pg_namespace AS n ON n.oid = c.relnamespace
> LEFT JOIN pg_tablespace AS t ON t.oid = c.reltablespace
> WHERE c.relfilenode IN (706712, 706715, 706717);

The directory contains the following files:

17383
17383.1
17385
17385.1
17387
17388

The query returns no rows with an IN clause of

... WHERE c.relfilenode IN (17383, 17385, 17387, 17388);

Note that during one of the copies the system became quiescent and I
ended up stopping what I thought was the server process that had hung:

2006-08-07 16:34:00 LOG: checkpoints are occurring too frequently (12 seconds apart)
2006-08-07 16:34:00 HINT: Consider increasing the configuration parameter "checkpoint_segments".
2006-08-07 16:34:09 LOG: checkpoints are occurring too frequently (9 seconds apart)
2006-08-07 16:34:09 HINT: Consider increasing the configuration parameter "checkpoint_segments".
2006-08-07 16:34:19 LOG: checkpoints are occurring too frequently (10 seconds apart)
2006-08-07 16:34:19 HINT: Consider increasing the configuration parameter "checkpoint_segments".
2006-08-07 16:35:00 LOG: autovacuum: processing database "template1"
2006-08-07 16:36:04 LOG: autovacuum: processing database "postgres"
2006-08-07 16:37:08 LOG: autovacuum: processing database "template1"
2006-08-07 16:38:11 LOG: autovacuum: processing database "postgres"
2006-08-07 16:39:16 LOG: autovacuum: processing database "template1"
2006-08-07 16:40:19 LOG: autovacuum: processing database "postgres"
2006-08-07 16:41:23 LOG: autovacuum: processing database "template1"
2006-08-07 16:42:27 LOG: autovacuum: processing database "postgres"
2006-08-07 16:43:28 LOG: autovacuum: processing database "template1"
2006-08-07 16:44:29 LOG: autovacuum: processing database "postgres"
2006-08-07 16:45:33 LOG: autovacuum: processing database "template1"
2006-08-07 16:46:35 LOG: autovacuum: processing database "postgres"
2006-08-07 16:47:39 LOG: autovacuum: processing database "template1"
2006-08-07 16:47:56 LOG: server process (PID 5140) was terminated by signal 1
2006-08-07 16:47:56 LOG: terminating any other active server processes
2006-08-07 16:47:56 LOG: all server processes terminated; reinitializing
2006-08-07 16:47:56 LOG: database system was interrupted at 2006-08-07 16:39:19 Central Daylight Time
2006-08-07 16:47:56 LOG: checkpoint record is at 0/ED0AC1E8
2006-08-07 16:47:56 LOG: redo record is at 0/ED0AC1E8; undo record is at 0/0; shutdown FALSE
2006-08-07 16:47:56 LOG: next transaction ID: 1995; next OID: 25168
2006-08-07 16:47:56 LOG: next MultiXactId: 1; next MultiXactOffset: 0
2006-08-07 16:47:56 LOG: database system was not properly shut down; automatic recovery in progress
2006-08-07 16:47:56 LOG: record with zero length at 0/ED0AC230
2006-08-07 16:47:56 LOG: redo is not required
2006-08-07 16:47:56 LOG: database system is ready
2006-08-07 16:47:56 LOG: transaction ID wrap lim
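The relfilenode lookup quoted above can also be run in reverse: list everything the catalogs place in the tablespace, so that any file on disk whose numeric name matches none of the returned relfilenode values is presumably a leftover from the failed COPY. A sketch (tablespace name "x" as in the original post; run it in each database that has a subdirectory under the tablespace):

```sql
-- Relations the catalogs think live in tablespace "x"; disk files
-- whose numeric names match no relfilenode here are candidates for
-- being orphans of the interrupted COPY.
SELECT c.relname, c.relkind, c.relfilenode
FROM pg_class AS c
JOIN pg_tablespace AS t ON t.oid = c.reltablespace
WHERE t.spcname = 'x';
```

If that query returns nothing in any database, it supports the theory that the files are orphans, which is the case the developers would want reported.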
Re: [GENERAL] I need a function
On 8/8/06, Feri@ <[EMAIL PROTECTED]> wrote:
> Hello to wholes, I am new therein and I am going by of mysql to
> postgres, somebody serious so kind to explain me as I can obtain in an
> array the names of the draws of a given base of data in PostGres. am
> programming in PHP, sorry for my English, i speak spanish. thank you
> very much.

What do you mean by "draws of a given [database]"? Tables? Columns?

Try something like:

SELECT * FROM pg_class

Search the manual for pg_class: this should help a lot.

Hope it helps.

t.n.a.
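If "draws" does turn out to mean tables, a slightly narrower query than bare `SELECT * FROM pg_class` might be (a sketch; it is roughly what the pg_tables system view exposes):

```sql
-- Ordinary tables only (relkind 'r'), skipping the system schemas:
SELECT n.nspname AS schemaname, c.relname AS tablename
FROM pg_class AS c
JOIN pg_namespace AS n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```

From PHP, fetching that result set as an array is then just an ordinary query against the server.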
Re: [GENERAL] Dumping database using 8.1 or 7.1
"Jonathan Vallar" <[EMAIL PROTECTED]> writes:
> I am stuck though with one problem
>
> psql:si_t2:36: ERROR: missing data for column "remarks"
> CONTEXT: COPY si_t2, line 145:

This basically means that the data line hasn't got the right number of
column separators. Count the tabs ...

regards, tom lane
Re: [GENERAL] Dumping database using 8.1 or 7.1
Hi guys!

I was able to dump my database created on Postgres 7.1.2 to Postgres
8.1.4. It was a little hassle though because I had to manually remove
"\M" from the dump file. This is manageable.

I am stuck though with one problem:

psql:si_t2:36: ERROR: missing data for column "remarks"
CONTEXT: COPY si_t2, line 145:

Upon checking line 145, there are no weird characters. Upon checking the
whole dump file, the db dump still contains sample data like:

"jonathan \""

I manually removed "\" and the new line. The problem still persists.
According to the table schema, the "remarks" field allows blank or null
values.

The database also has missing triggers and missing stored procedures. I
assume some tables are corrupted. The pg_dump of postgres 7.1.2 is
unable to dump the db tables, triggers and stored procs. The pg_dump
utility returns an error that it cannot dump the missing triggers, but
they are still in the database catalogue.

Does this mean that I really cannot dump my database?

Thanks!

Regards,
Jonathan

On 8/8/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Jonathan Vallar wrote:
>> Thanks sir.
>>
>> Is it safe to use the Postgres 8.1.4 pg_dump utility to dump data
>> from a postgres 7.1.2 database?
>
> Yes.
>
> --
> Alvaro Herrera
> http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
Re: [GENERAL] XPath question - big trouble
> Since you brought up the subject, will this
>
> //[EMAIL PROTECTED] = "02"]
>
> be slower than
>
> /mydocument/body/chapter/contents/[EMAIL PROTECTED] = "02"]
>
> when handling a big document? I mean from the query time point of
> view ...

I have no idea about the postgres XML implementation internals, I never
used it, but for other XML processors I know the speed would be the
same, and I would expect the same for postgres too.

Cheers,
Csaba.