Re: [HACKERS] SSL certificate info on SQL level and HSM support for libpq
On 2006.05.19 at 10:02:34 +0200, Martijn van Oosterhout wrote:

> On Fri, May 19, 2006 at 10:33:52AM +0400, Victor B. Wagner wrote:
> > 1. Am I correct that these functions have to be INTERNAL? Or is it
> > possible to get access to the MyProcPort variable (on the Windows
> > platform too) from a dynamically loadable object?
>
> You should be able to have these in a contrib module, Postgres exports
> all symbols.

Thanks, it really works this way. BTW, what should I do if I want to submit an entirely new directory into contrib - shall I make a patch against the current source tree, or would just an archive of this directory do?

> > 3. Almost the same question about arbitrary-precision integer values.
> > Where to look up C functions to convert a decimal/hexadecimal/binary
> > string representing an integer of 128 bits or so into the Postgres
> > NUMERIC type.
>
> numeric_in converts a text string to a numeric. Don't know about 128
> bit strings though.

This works. But I was unable to find information about this function in the docs, and had to resort to looking for its usage in the backend sources. I was also unable to find a useful description of the DirectFunctionCallN family of functions, which seem to be necessary to call one 'C'-language PostgreSQL function from another. It is clear enough from examples in the sources, but...

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
Re: [HACKERS] problem with PQsendQuery/PQgetResult and COPY FROM statement
On Fri, May 19, 2006 at 11:51:00AM -0700, [EMAIL PROTECTED] wrote: > I attempt to execute a "COPY test FROM STDIN" using PQsendQuery. > Subsequently, after the appropriate poll()/PQconsumeInput/PQisBusy > calls, PQgetResult returns tens of millions of (invalid?) non-null > PGresults. This behavior seems incorrect, and sometimes causes my > application to exhaust memory and crash. In addition to the issues other people have pointed out, when using COPY you transfer the data with PQputCopyData()/PQgetCopyData(). Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/ > From each according to his ability. To each according to his ability to > litigate. signature.asc Description: Digital signature
Re: [HACKERS] String Similarity
Try contrib/pg_trgm...

Chris

Mark Woodward wrote:
> I have a side project that needs to "intelligently" know if two strings
> are contextually similar. Think about how CDDB information is collected
> and sorted. It isn't perfect, but there should be enough information to
> be usable. Think about this:
>
> "pink floyd - dark side of the moon - money"
> "dark side of the moon - pink floyd - money"
> "money - dark side of the moon - pink floyd"
>
> etc. To a human, these strings are almost identical. Similarly:
>
> "dark floyd of money moon pink side the"
>
> Is a puzzle to be solved by 13 year old children before the movie starts.
>
> My post has three questions:
>
> (1) Does anyone know of an efficient and numerically quantified method of
> detecting these sorts of things? I currently have a fairly inefficient and
> numerically bogus solution that may be the only non-impossible solution for
> the problem.
>
> (2) Does anyone see a need for this feature in PostgreSQL? If so, what kind
> of interface would be best accepted as a patch? I am currently returning a
> match likelihood between 0 and 100.
>
> (3) Is there also a desire for a Levenshtein distance function for text and
> varchars? I experimented with it, and was forced to write the function in
> item #1.

--
Christopher Kings-Lynne
Technical Manager
CalorieKing
Tel: +618.9389.8777
Fax: +618.9389.8444
[EMAIL PROTECTED]
www.calorieking.com
Re: [HACKERS] FW: iDefense Q2 2006 Vulnerability Challenge
"Magnus Hagander" <[EMAIL PROTECTED]> writes: > For those that haven't already seen it, this might give some extra > exposure to PostgreSQL wrt vulnerability research. Though I think nobody > will have a chance to find one (I just don't see how you could possibly > get root through postgresql, since we refuse to run as root), other > things might be exposed by someone who's poking around. Yeah, I think they've really done the database community a disservice by defining interesting exploits as being only those resulting in root. An exploit that lets you get database superuser privs would be the appropriate criterion here, IMHO. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] problem with PQsendQuery/PQgetResult and COPY FROM statement
[EMAIL PROTECTED] writes: > I attempt to execute a "COPY test FROM STDIN" using PQsendQuery. > Subsequently, after the appropriate poll()/PQconsumeInput/PQisBusy > calls, PQgetResult returns tens of millions of (invalid?) non-null > PGresults. You should fix your code to pay some attention to what those results are. I'm betting they are error results. > This behavior seems incorrect, and sometimes causes my > application to exhaust memory and crash. Well, that's because you're not PQclear'ing a result when done with it. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: Porting MSSQL to PGSQL (Was: [HACKERS] [OT] MySQL is bad, but THIS bad?)
Mischa, > Somebody earlier was mentioning, why no automatic transformer from > Transact-SQL to PLPGSQL (maybe with a bunch of glue routines). The grammar > is not a problem, though you have to wonder at all the wired-in keywords > (T-SQL always felt like COBOL). Actually, porting TSQL to PL/pgSQL would be very hard. I speak as an expert TSQL developer. For example, most data manipulation in TSQL is done through updatable cursors, something we don't currently support. Also, T-SQL uses un-ordered, callable parameters for SPs, something which we *also* don't support. > Other issues: stored procs returning multiple result sets; "print" > statements; SELECT TOP n PERCENT; COMPUTE-expressions (subtotals); and some > of the @@global variables that are hard to emulate @@IDENTITY being the > main problem in older T-SQL code. Yeah, but @@IDENTITY sucks. Most MSSQL developers are glad to leave it behind. ;-) -- Josh Berkus PostgreSQL @ Sun San Francisco ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] problem with PQsendQuery/PQgetResult and COPY FROM statement
On May 19 11:51, [EMAIL PROTECTED] wrote:
> if (PQsendQuery(conn, "COPY test FROM STDIN") > 0) {
>     retrieve(conn, 20);

Shouldn't you be send()'ing instead of retrieve()'ing? "COPY tbl FROM STDIN" requests data *from* the client, to be copied from stdin into tbl.

Regards.
Re: [HACKERS] [ADMIN] does wal archiving block the current client connection?
On Sun, 21 May 2006, Simon Riggs wrote: I've been futzing with trying to reproduce the original problem for a few days and so far postgres seems to be just fine with a long delay on archiving, so now I'm rather at a loss. In fact, I currently have 1,234 xlog files in pg_xlog, but the archiver is happily archiving one every 5 minutes. Perhaps I'll try a long delay followed by a failure to see if that could be it. So the chances of the original problem being archiver related are receding... This is possible, but I guess I should try and reproduce the actual problem with the same archive_command script and a CIFS mount just to see what happens. Perhaps the real root of the problem is elsewhere, it just seems strange since the archive_command is the only postgres related process that accesses the CIFS share. More later. -- Jeff Frost, Owner <[EMAIL PROTECTED]> Frost Consulting, LLC http://www.frostconsultingllc.com/ Phone: 650-780-7908 FAX: 650-649-1954 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Porting MSSQL to PGSQL (Was: [HACKERS] [OT] MySQL is bad, but THIS bad?)
On Thursday 18 May 2006 12:38, Josh Berkus wrote:
> Personally, I'd go after MSSQL before I bothered with MySQL.

Sure, let's make *migration* easier for those who wake up and smell the BS, but migration can (and probably should) be one-way.

Somebody earlier was mentioning, why no automatic transformer from Transact-SQL to PLPGSQL (maybe with a bunch of glue routines). The grammar is not a problem, though you have to wonder at all the wired-in keywords (T-SQL always felt like COBOL). The stumbling blocks are not in language, but function. Many of those functions are rarely used, but some big ones are quite common ...

T-SQL has statement-level triggers, and they get used a lot (some big apps ONLY put code in triggers). Statement-level triggers are very efficient for maintaining aggregates; the closest PG has are rewrite rules.

Other issues: stored procs returning multiple result sets; "print" statements; SELECT TOP n PERCENT; COMPUTE expressions (subtotals); and some of the @@global variables that are hard to emulate, @@IDENTITY being the main problem in older T-SQL code. OpenXML is cool, but such a pig that its difficulty of emulation is probably not an issue.

There are plenty of things that happily go away, or can be implemented with a client wrapper; for example, BULK INSERT and BACKUP. Other things just have no equivalent, and amount to no-ops in a PG world (partition functions). A few things require some custom metadata tables (MSSQL "RULE" != PG "RULE").

If you want to convince MSSQL users to move over to PG, statement-level triggers (with "OLD" and "NEW" rowsets) are a bottom-line requirement. ... For high-end MSSQL shops, a high value is being able to trace and profile (EXPLAIN) every client SQL command from the server side ... with plenty of options for selective trace.
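To make the gap concrete: PostgreSQL does have FOR EACH STATEMENT triggers, but without the OLD/NEW rowsets described above, so aggregate maintenance falls back to per-row triggers. A sketch of that workaround (table, column, and function names here are all hypothetical):

```sql
-- Summary table maintained by a row-level trigger. With T-SQL-style
-- statement triggers and NEW/OLD rowsets, one trigger call could fold
-- in a whole multi-row INSERT; here the function fires once per row.
CREATE TABLE order_totals (customer_id int PRIMARY KEY, total numeric);

CREATE OR REPLACE FUNCTION maintain_total() RETURNS trigger AS $$
BEGIN
    UPDATE order_totals
       SET total = total + NEW.amount
     WHERE customer_id = NEW.customer_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_total_trg
    AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE maintain_total();
```

A bulk INSERT of a million rows pays a million trigger invocations under this scheme, which is exactly the efficiency argument for statement-level triggers with rowsets.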
[HACKERS] problem with PQsendQuery/PQgetResult and COPY FROM statement
I attempt to execute a "COPY test FROM STDIN" using PQsendQuery. Subsequently, after the appropriate poll()/PQconsumeInput/PQisBusy calls, PQgetResult returns tens of millions of (invalid?) non-null PGresults. This behavior seems incorrect, and sometimes causes my application to exhaust memory and crash. The postgres version is 8.1.3.

I can reproduce the problem in about 50 lines of C. I include below (1) the code, (2) a psql dump of the table in question, (3) the code's output. I'd appreciate any insight or suggestions you may have.

Thanks,

max poletto

== (1) code

// compile with: g++ -Wall -O -o pgtest pgtest.cc -lpq

#include <poll.h>
#include <errno.h>
#include <stdio.h>
#include <time.h>

#include <vector>
using namespace std;

#include <libpq-fe.h>

void retrieve(PGconn *conn, time_t timeout)
{
    vector<PGresult *> res;
    while (1) {
        int r;
        do {
            struct pollfd pfds[1];
            pfds[0].fd = PQsocket(conn);
            pfds[0].events = POLLIN;
            pfds[0].revents = 0;
            r = poll(pfds, sizeof(pfds) / sizeof(struct pollfd), 1000 * timeout);
        } while (r < 0 && errno == EINTR);
        if (r <= 0 || !PQconsumeInput(conn))
            return;
        int i = 0;
        PGresult *oldr = 0;
        while (!PQisBusy(conn)) {
            PGresult *r = PQgetResult(conn);
            if (r) {
                res.push_back(r);
                if (++i % 500 == 0) {
                    printf("%d results\n", i);
                }
                if (r == oldr) {
                    printf("r==oldr (%p)\n", r);
                }
                oldr = r;
            } else {
                printf("PQgetResult return 0 after %d results\n", i);
                return;
            }
        }
    }
}

int main()
{
    PGconn *conn = PQconnectdb("dbname=testdb user=postgres");
    if (!conn)
        return -1;
    if (PQsendQuery(conn, "COPY test FROM STDIN") > 0) {
        retrieve(conn, 20);
        if (PQputCopyEnd(conn, 0) < 0)
            return -1;
    }
    PQfinish(conn);
    return 0;
}

== (2) psql session

[EMAIL PROTECTED] psql testdb postgres
Welcome to psql 8.1.3, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

testdb=# \d test
     Table "public.test"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer |
 b      | integer |
 c      | integer |

testdb=# select * from test;
 a | b | c
---+---+---
 1 | 2 | 3
(1 row)

testdb=#

== (3) output

[EMAIL PROTECTED] ./pgtest
500 results
1000 results
1500 results
2000 results
2500 results
PQgetResult return 0 after 25649299 results
4.640u 4.696s 0:09.34 99.8% 0+0k 0+0io 0pf+0w

In this toy example, the process VM size exceeds 2GB before PQgetResult finally returns 0. The real application, which has already allocated ~1GB prior to the query, runs out of memory (3GB limit on 32-bit linux) before PQgetResult ever returns 0.
Re: [HACKERS] [ADMIN] does wal archiving block the current client connection?
Jeff Frost <[EMAIL PROTECTED]> writes: > Well now, will you look at this: > postgres 20228 1 0 May17 ?00:00:00 postgres: archiver process > postgres 20573 1 0 May17 ?00:00:00 postgres: archiver process > postgres 23817 23810 0 May17 pts/11 00:00:00 postgres: archiver process > 23810 is the running postmaster: > postgres 23810 1 0 May17 pts/11 00:03:01 > /usr/local/pgsql-8.1.3/bin/postm Well, there's our smoking gun. IIRC, all the failures you showed us are consistent with race conditions caused by multiple archiver processes all trying to do the same tasks concurrently. Do you frequently stop and restart the postmaster? Because I don't see how you could get into this state without having done so. I've just been looking at the code, and the archiver does commit hara-kiri when it notices its parent postmaster is dead; but it only checks that in the outer loop. Given sufficiently long delays in the archive_command, that could be a long time after the postmaster died; and in the meantime, successive executions of the archive_command could be conflicting with those launched by a later archiver incarnation. Seems we need an interlock to ensure there's not more than one archiver active. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] [ADMIN] does wal archiving block the current client connection?
On Sun, 2006-05-21 at 14:16 -0700, Jeff Frost wrote: > On Fri, 19 May 2006, Simon Riggs wrote: > > >> Now I can run my same pg_bench, or do you guys > >> have any other suggestions on attempting to reproduce the problem? > > > > No. We're back on track to try to reproduce the original error. > > I've been futzing with trying to reproduce the original problem for a few > days > and so far postgres seems to be just fine with a long delay on archiving, so > now I'm rather at a loss. In fact, I currently have 1,234 xlog files in > pg_xlog, but the archiver is happily archiving one every 5 minutes. Perhaps > I'll try a long delay followed by a failure to see if that could be it. So the chances of the original problem being archiver related are receding... -- Simon Riggs EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] [ADMIN] does wal archiving block the current client connection?
On Fri, 19 May 2006, Simon Riggs wrote: Now I can run my same pg_bench, or do you guys have any other suggestions on attempting to reproduce the problem? No. We're back on track to try to reproduce the original error. I've been futzing with trying to reproduce the original problem for a few days and so far postgres seems to be just fine with a long delay on archiving, so now I'm rather at a loss. In fact, I currently have 1,234 xlog files in pg_xlog, but the archiver is happily archiving one every 5 minutes. Perhaps I'll try a long delay followed by a failure to see if that could be it. -- Jeff Frost, Owner <[EMAIL PROTECTED]> Frost Consulting, LLC http://www.frostconsultingllc.com/ Phone: 650-780-7908 FAX: 650-649-1954 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[HACKERS] FW: iDefense Q2 2006 Vulnerability Challenge
For those that haven't already seen it, this might give some extra exposure to PostgreSQL wrt vulnerability research. Though I think nobody will have a chance to find one (I just don't see how you could possibly get root through postgresql, since we refuse to run as root), other things might be exposed by someone who's poking around. //Magnus > -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] > Sent: Wednesday, May 17, 2006 7:15 AM > To: bugtraq@securityfocus.com; [EMAIL PROTECTED]; > full-disclosure@lists.grok.org.uk > Subject: iDefense Q2 2006 Vulnerability Challenge > > iDefense Labs is pleased to announce the launch of next > installment in our quarterly vulnerability challenge. Last > quarter's challenge focused on critical vulnerabilities in > Microsoft products and was a great success. We would like to > thank everyone that forwarded submissions prior to the > deadline on March 31, 2006. We look forward to announcing > award winners once public advisories become available for the > vulnerabilities. > > For the second quarter of 2006, we're shifting the focus from > vendor to technology. This time around, we're focusing on > database vulnerabilities. For submissions received before > June 30, 2006, iDefense Labs will pay $10,000 for each > vulnerability submission that results in the discovery of a > remotely exploitable database vulnerability that meets the > following criteria. 
> > - Technologies: > - Oracle Database 10G > - Microsoft SQL Server 2005 > - IBM DB Universal Database 8.2 > - MySQL 5.0 > - PostgreSQL 8.1 > - The vulnerability must be original and not previously > disclosed either > publicly or to the vendor by another party > - The vulnerability must be remotely exploitable in a default > installation of one of the targeted technologies > - The vulnerability must exist in the latest version of the affected > technology with all current patches/upgrades applied > - The vulnerability cannot be caused by or require third > party software > - The vulnerability must result in root access on the target machine > - The vulnerability must not require the use of authentication > credentials > - The vulnerability must receive the vendor's maximum severity ranking > when the advisory is published (if applicable). > > In order to qualify, the submission must be sent during the > current quarter and be received by midnight EST on June 30, > 2006. The $10,000 prizes will be paid out following > confirmation with the affected vendor and will be paid in > addition to any amount paid for the vulnerability when it is > first accepted. Only the initial submission for a given > vulnerability will qualify for the reward and a maximum of > six awards will be paid out. Should more than six submissions > qualify, the first six submissions will receive the reward. > > Further details on the iDefense Vulnerability Contributor > Program (VCP) can be found at: > > http://labs.idefense.com/vcp.php > > Michael Sutton > Director, iDefense Labs > > > ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] COMMIT leads to ROLLBACK
On 5/21/06, Gurjeet Singh <[EMAIL PROTECTED]> wrote:
> One more question comes to mind. IIRC, Oracle doesn't need you to rollback
> the whole transaction if one statement fails (like a constraint violation
> in this case)!!! Does the standard dictate that an error in a transaction
> should force a rollback? I could be wrong about Oracle; I do not have an
> Oracle installation to check, can somebody verify this?
>
> Regards,
> Gurjeet.

that is only possible if you use savepoints, and we have them in postgres too. more than that is beyond the spec, i think...

--
regards,
Jaime Casanova

"Programming today is a race between software engineers striving to build
bigger and better idiot-proof programs and the universe trying to produce
bigger and better idiots. So far, the universe is winning."
Richard Cook
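For the record, the savepoint pattern Jaime refers to looks like this, reusing the t1/t2 tables from earlier in the thread (a sketch, not a captured session):

```sql
BEGIN;
INSERT INTO t1 VALUES (5);           -- succeeds
SAVEPOINT before_risky;
INSERT INTO t2 VALUES (42);          -- fails: foreign key violation
ROLLBACK TO SAVEPOINT before_risky;  -- transaction is usable again
INSERT INTO t1 VALUES (6);           -- still works
COMMIT;                              -- the two t1 rows commit normally
```

Without the ROLLBACK TO SAVEPOINT, the transaction would stay in the aborted state and the final COMMIT would report ROLLBACK, as shown in the original post.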
Re: [HACKERS] COMMIT leads to ROLLBACK
One more question comes to mind. IIRC, Oracle doesn't need you to rollback the whole transaction if one statement fails (like a constraint violation in this case)!!! Does the standard dictate that an error in a transaction should force a rollback? I could be wrong about Oracle; I do not have an Oracle installation to check, can somebody verify this?

Regards,
Gurjeet.

On 5/21/06, Jaime Casanova <[EMAIL PROTECTED]> wrote:
> there was a thread about that two months ago... here's Tom's response:
> http://archives.postgresql.org/pgsql-hackers/2006-03/msg00786.php
Re: [HACKERS] COMMIT leads to ROLLBACK
refer: [HACKERS] Commit turns into rollback?

On 3/17/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> So it's not the fact that it rolls back that bugs you, it's the way that
> the action is reported? We could talk about changing that maybe --- it
> wouldn't break existing scripts AFAICS. It might break applications though.

Yes, we should change the message. A silent SUCCESS message leads the onlooker (script/application) to believe that everything was okay. As Peter mentioned in the above-mentioned thread:

> The standard does address the issue of transactions that cannot be
> committed because of an error. In 16.6 GR 6 it basically says that if the
> transaction cannot be completed (here: because of a constraint violation),
> then an exception condition should be raised. That is, the transaction is
> over but you get an error.

As far as breaking-the-applications goes, I think we'll be catching the broken applications rather than breaking any well-written application (which detects a DML error as soon as it occurs, and issues a rollback).

These are a few scenarios after entering an ABORTed transaction state:

1) =# END;
   ROLLBACK
2) =# ROLLBACK;
   ROLLBACK
3) =# COMMIT;
   ERROR: The transaction has been ended with a ROLLBACK.

So, in effect, all these statements do end the ABORTed transaction with a ROLLBACK. Whereas situations (1) and (2) behave just as expected, situation (3) needs to be implemented with an appropriate/correct error message.

Regards,
Gurjeet.

On 5/21/06, Jaime Casanova <[EMAIL PROTECTED]> wrote:
> On 5/21/06, Gurjeet Singh <[EMAIL PROTECTED]> wrote:
> > Hi All,
> >
> > While trying to implement a recent TODO item, I noticed this behaviour:
> >
> > test=# select * from t2;
> > ERROR: current transaction is aborted, commands ignored until end of
> > transaction block
> > test=# commit;
> > ROLLBACK
> >
> > Either:
> > 1) COMMIT in an aborted transaction should lead to an 'ERROR:
> > Cannot COMMIT an aborted transaction.'
> > 2) At least a 'WARNING: transaction is being rolled back to last
> > known consistent state.' should precede the success (ROLLBACK)
> > message.
> >
> > Regards,
> > Gurjeet.
>
> there was a thread about that two months ago... here's Tom's response:
> http://archives.postgresql.org/pgsql-hackers/2006-03/msg00786.php
Re: [HACKERS] COMMIT leads to ROLLBACK
On 5/21/06, Gurjeet Singh <[EMAIL PROTECTED]> wrote: Hi All, While trying to implement a recent TODO item, I noticed this behaviour: test=# drop table t2; drop table t1; DROP TABLE DROP TABLE test=# create table t1(a int primary key ); NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "t1_pkey" for table "t1" CREATE TABLE test=# create table t2( a int references t1(a )); CREATE TABLE test=# begin; BEGIN test=# insert into t2 values( 5 ); ERROR: insert or update on table "t2" violates foreign key constraint "t2_a_fkey" DETAIL: Key (a)=(5) is not present in table "t1". test=# select * from t2; ERROR: current transaction is aborted, commands ignored until end of transaction block test=# commit; ROLLBACK Issuing a COMMIT or an END as the last command leads to a ROLLBACK. Although the behaviour is correct, shouldn't the user be atleast warned about having issued a wrong command to end the transaction? An application might believe that everything was OK if it recieves a SUCCESS for a COMMIT, although the data (INSERT or any other subsequent DML) never made it to the data-files because the COMMIT was converted into a ROLLBACK!!! Either: 1) COMMIT in an aborted transaction should lead to an 'ERROR: Cannot COMMIT an aborted transaction.' 2) At least a 'WARNING: transaction is being rolled back to last known consistent state.' should precede the success (ROLLBACK) message. Regards, Gurjeet. there was a thread about that two months ago... here's Tom's response: http://archives.postgresql.org/pgsql-hackers/2006-03/msg00786.php -- regards, Jaime Casanova "Programming today is a race between software engineers striving to build bigger and better idiot-proof programs and the universe trying to produce bigger and better idiots. So far, the universe is winning." Richard Cook ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] COMMIT leads to ROLLBACK
"Gurjeet Singh" <[EMAIL PROTECTED]> writes: > Issuing a COMMIT or an END as the last command leads to a > ROLLBACK. Although the behaviour is correct, shouldn't the user be > atleast warned about having issued a wrong command to end the > transaction? Please see the archives. This has been discussed ad nauseam before, and no consensus has emerged to change it. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[HACKERS] COMMIT leads to ROLLBACK
Hi All,

While trying to implement a recent TODO item, I noticed this behaviour:

test=# drop table t2; drop table t1;
DROP TABLE
DROP TABLE
test=# create table t1(a int primary key );
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "t1_pkey" for table "t1"
CREATE TABLE
test=# create table t2( a int references t1(a ));
CREATE TABLE
test=# begin;
BEGIN
test=# insert into t2 values( 5 );
ERROR: insert or update on table "t2" violates foreign key constraint "t2_a_fkey"
DETAIL: Key (a)=(5) is not present in table "t1".
test=# select * from t2;
ERROR: current transaction is aborted, commands ignored until end of transaction block
test=# commit;
ROLLBACK

Issuing a COMMIT or an END as the last command leads to a ROLLBACK. Although the behaviour is correct, shouldn't the user be at least warned about having issued a wrong command to end the transaction? An application might believe that everything was OK if it receives a SUCCESS for a COMMIT, although the data (INSERT or any other subsequent DML) never made it to the data-files because the COMMIT was converted into a ROLLBACK!!!

Either:

1) COMMIT in an aborted transaction should lead to an 'ERROR: Cannot COMMIT an aborted transaction.'
2) At least a 'WARNING: transaction is being rolled back to last known consistent state.' should precede the success (ROLLBACK) message.

Regards,
Gurjeet.
Re: [HACKERS] Compression and on-disk sorting
On Fri, May 19, 2006 at 01:39:45PM -0500, Jim C. Nasby wrote:
> > Do you have any stats on CPU usage? Memory usage?
>
> I've only been taking a look at vmstat from time-to-time, and I have yet
> to see the machine get CPU-bound. Haven't really paid much attention to
> memory. Is there anything in particular you're looking for? I can log
> vmstat for the next set of runs (with a scaling factor of 1). I plan
> on doing those runs tonight...

I've got some more info on zlib's memory usage:

Compression: 5816 bytes + 256KB buffer = approx 261KB
Decompression: 9512 bytes + 32KB buffer = approx 42KB

As Tom said, you only run one compression at a time but logtape doesn't know that. It can only free the compression structures on Rewind or Freeze, neither of which are run until the merge pass. I don't understand the algorithm enough to know if it's safe to rewind the old tape in selectnewtape. That would seem to defeat the "freeze if only one tape" optimisation.

One final thing, with trace_sort=on on my machine I get this with compression:

LOG: performsort done (except 28-way final merge): CPU 1.48s/7.49u sec elapsed 10.24 sec
LOG: external sort ended, 163 disk blocks used: CPU 1.48s/7.49u sec elapsed 10.30 sec

and without compression:

LOG: performsort done (except 28-way final merge): CPU 2.85s/1.90u sec elapsed 14.76 sec
LOG: external sort ended, 18786 disk blocks used: CPU 2.88s/1.90u sec elapsed 15.70 sec

This indicates an awful lot of I/O waiting, some 60% of the time without compression. The compression has cut the I/O wait from 10 sec to 1.5 sec at the expense of 5.5 sec of compression time. If you had a faster compression algorithm (zlib is not that fast) the results would be even better...

Have a nice day,
--
Martijn van Oosterhout http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to
> litigate.

signature.asc
Description: Digital signature