[HACKERS] Buildfarm's opsrey goes green...
Hi,

My buildfarm member opsrey has turned green, thanks to the following two things:

* the removal of the contrib module tsearch (which was miscompiling)
* the removal from my config of plperl and pltcl

My installations of perl and tcl link to pthread, and postgresql does not, hence the crash in the tests. NetBSD 2.0 does not have all the necessary threadsafe calls to pass the thread-safety test made during configure.

Regards,

Rémi Zara
Re: [HACKERS] [COMMITTERS] pgsql: Basic documentation for ROLEs.
On Sat, Jul 30, 2005 at 12:19:41AM -0400, Bruce Momjian wrote:
> I have just loaded the patches list with all outstanding patches that
> need consideration, and updated the open items list:
>
>   http://momjian.postgresql.org/cgi-bin/pgpatches
>   http://momjian.postgresql.org/cgi-bin/pgopenitems

The main "shared dependency" patch is applied. I still owe a patch to implement "DROP OWNED" and "REASSIGN OWNED", to drop or give away objects owned by a list of roles.

--
Alvaro Herrera ()
"Crear es tan difícil como ser libre" ("Creating is as difficult as being free") (Elsa Triolet)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org
Re: [HACKERS] PL/Perl list value return causes segfault
On Fri, Jul 29, 2005 at 11:24:37PM -0400, Bruce Momjian wrote:
> Would someone who knows perl update plperl.sgml and send me a patch?
>
> Also, is this still true in 8.1:
>
>     In the current implementation, if you are fetching or returning
>     very large data sets, you should be aware that these will all go
>     into memory.

That's no longer true. Please find enclosed a new patch :)

Cheers,
D
--
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!

Index: doc/src/sgml/plperl.sgml
===
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/plperl.sgml,v
retrieving revision 2.42
diff -c -r2.42 plperl.sgml
*** doc/src/sgml/plperl.sgml    13 Jul 2005 02:10:42 -    2.42
--- doc/src/sgml/plperl.sgml    30 Jul 2005 05:42:56 -
***
*** 46,52 ****
     To create a function in the PL/Perl language, use the standard
!    syntax:

  CREATE FUNCTION funcname (argument-types) RETURNS return-type AS $$
      # PL/Perl function body
--- 46,57 ----
     To create a function in the PL/Perl language, use the standard
!    syntax.  A PL/Perl function must always return a scalar value.  You
!    can return more complex structures (arrays, records, and sets)
!    in the appropriate context by returning a reference.
!    Never return a list.  Here follows an example of a PL/Perl
!    function.
!
  CREATE FUNCTION funcname (argument-types) RETURNS return-type AS $$
      # PL/Perl function body
***
*** 282,288 ****
!    PL/Perl provides two additional Perl commands:
--- 287,293 ----
!    PL/Perl provides three additional Perl commands:
***
*** 293,303 ****
     spi_exec_query(query [, max-rows])
     spi_exec_query(command)

!    Executes an SQL command.  Here is an example of a query
!    (SELECT command) with the optional maximum
!    number of rows:

  $rv = spi_exec_query('SELECT * FROM my_table', 5);
--- 298,315 ----
     spi_exec_query(query [, max-rows])
     spi_exec_query(command)
+    spi_query(command)
+    spi_fetchrow(command)
+
!    spi_exec_query executes an SQL command and
!    returns the entire rowset as a reference to an array of hash
!    references.  You should only use this command when you know
!    that the result set will be relatively small.  Here is an
!    example of a query (SELECT command) with the
!    optional maximum number of rows:
!
  $rv = spi_exec_query('SELECT * FROM my_table', 5);
***
*** 345,351 ****
  INSERT INTO test (i, v) VALUES (3, 'third line');
  INSERT INTO test (i, v) VALUES (4, 'immortal');
!
  CREATE FUNCTION test_munge() RETURNS SETOF test AS $$
      my $rv = spi_exec_query('select i, v from test;');
      my $status = $rv->{status};
      my $nrows = $rv->{processed};
--- 357,363 ----
  INSERT INTO test (i, v) VALUES (3, 'third line');
  INSERT INTO test (i, v) VALUES (4, 'immortal');
!
  CREATE OR REPLACE FUNCTION test_munge() RETURNS SETOF test AS $$
      my $rv = spi_exec_query('select i, v from test;');
      my $status = $rv->{status};
      my $nrows = $rv->{processed};
***
*** 360,366 ****
  SELECT * FROM test_munge();
!
--- 372,416 ----
  SELECT * FROM test_munge();
!
!    spi_query and spi_fetchrow
!    work together as a pair for rowsets which may be large, or for cases
!    where you wish to return rows as they arrive.
!    spi_fetchrow works only with
!    spi_query.  The following example illustrates how
!    you use them together:
!
! CREATE TYPE foo_type AS (the_num INTEGER, the_text TEXT);
!
! CREATE OR REPLACE FUNCTION lotsa_md5 (INTEGER) RETURNS SETOF foo_type AS $$
!     use Digest::MD5 qw(md5_hex);
!     my $file = '/usr/share/dict/words';
!     my $t = localtime;
!     elog(NOTICE, "opening file $file at $t" );
!     open my $fh, '<', $file # ooh, it's a file access!
!         or elog(ERROR, "Can't open $file for reading: $!");
!     my @words = <$fh>;
!     close $fh;
!     $t = localtime;
!     elog(NOTICE, "closed file $file at $t");
!     chomp(@words);
!     my $row;
!     my $sth = spi_query("SELECT * FROM generate_series(1,$_[0]) AS b(a)");
!     while (defined ($row = spi_fetchrow($sth))) {
!         return_next({
!             the_num => $row->{a},
!             the_text => md5_hex($words[rand @words])
!         });
!     }
!     return;
! $$ LANGUAGE plperlu;
!
! SELECT * from lotsa_md5(500);
***
*** 716,724 ****
!    In the current implementation, if you are fetching or returning
!    very large data sets, you should be aware that these will all go
!    into memory.
--- 766,776 ----
!    If you are fetching
[HACKERS] Updated open items
I have just loaded the patches list with all outstanding patches that need consideration, and updated the open items list:

   http://momjian.postgresql.org/cgi-bin/pgpatches
   http://momjian.postgresql.org/cgi-bin/pgopenitems

We will need to make some decisions on what goes into 8.1.

--
Bruce Momjian                   | http://candle.pha.pa.us
pgman@candle.pha.pa.us          | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup.    | Newtown Square, Pennsylvania 19073
Re: [HACKERS] [COMMITTERS] pgsql: Basic documentation for ROLEs.
Alvaro Herrera wrote:
> On Thu, Jul 28, 2005 at 01:59:10PM -0400, Tom Lane wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > > I just noticed the "createuser" and "dropuser" pages may need
> > > adjustments as well ... are you still working on this?
> >
> > The programs themselves need adjustment, too :-(.  I have a TODO note
> > to look at them, but would be grateful if someone else could take a
> > whack at it.
>
> I'll take a look.
>
> Would you post your whole to-do list for roles?
>
> Also, what do we have in the open items list?  The current list at
> http://candle.pha.pa.us/cgi-bin/pgopenitems is outdated.  I'd correct it
> as:

I have just loaded the patches list with all outstanding patches that need consideration, and updated the open items list:

   http://momjian.postgresql.org/cgi-bin/pgpatches
   http://momjian.postgresql.org/cgi-bin/pgopenitems

--
Bruce Momjian
Re: [HACKERS] Constraint Exclusion on all tables
Simon Riggs wrote:
> On Sun, 2005-07-24 at 17:57 +0900, Tatsuo Ishii wrote:
> > It seems current CE implementation ignores UPDATE, DELETE queries. Is
> > this an intended limitation?
>
> Yes, it does not currently optimise the execution of UPDATE/DELETE
> against a parent table.
>
> This is not an intended long-term limitation and I hope to fix this
> also. The code for this is actually in a different place to the code for
> SELECT, so I need to do extra work to fix that. My priority for CE was
> to provide for the most common rolling window use cases on very large
> databases, so in those cases UPDATEs or DELETEs against large tables are
> actually fairly suicidal statements; that meant that feature had a lower
> implementation priority, but as I say, I will get to that.

Here are some new TODO items added based on this discussion:

* Allow EXPLAIN to identify tables that were skipped because of enable_constraint_exclusion
* Allow EXPLAIN output to be more easily processed by scripts
* Allow enable_constraint_exclusion to work for UNIONs like it does for inheritance
* Allow enable_constraint_exclusion to work for UPDATE and DELETE queries

--
Bruce Momjian
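For readers following the thread, here is a minimal sketch of the rolling-window case Simon describes, with hypothetical table and partition names; note the setting spelled "enable_constraint_exclusion" in the TODO items was released as constraint_exclusion:

```sql
-- Hypothetical monthly partitions; each child's CHECK constraint lets
-- the planner prove the child cannot hold matching rows and skip it.
CREATE TABLE log (ts date, msg text);
CREATE TABLE log_2005_06 (CHECK (ts >= '2005-06-01' AND ts < '2005-07-01'))
    INHERITS (log);
CREATE TABLE log_2005_07 (CHECK (ts >= '2005-07-01' AND ts < '2005-08-01'))
    INHERITS (log);

SET constraint_exclusion = on;

-- Only log_2005_07 is scanned here; per the thread, the equivalent
-- UPDATE or DELETE against "log" would still visit every child.
SELECT * FROM log WHERE ts = '2005-07-15';
```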
Re: [HACKERS] bgwriter, inherited temp tables TODO items?
Added to TODO:

* Prevent inherited tables from expanding temporary subtables of other sessions

---

Thomas F. O'Connell wrote:
> On Jul 21, 2005, at 1:22 PM, Bruce Momjian wrote:
> > Thomas F. O'Connell wrote:
> >> I'm switching the aftermath of this thread --
> >> http://archives.postgresql.org/pgsql-general/2005-07/msg00501.php --
> >> to -hackers since it raised issues of potential concern to developers.
> >>
> >> At various points in the thread, Tom Lane said the following:
> >>
> >> "I have an old note to myself that persistent write errors could "clog"
> >> the bgwriter, because I was worried that after an error it would
> >> stupidly try to write the same buffer again instead of trying to make
> >> progress elsewhere. (CVS tip might be better about this, I'm not sure.)
> >> A dirty buffer for a file that doesn't exist anymore would certainly
> >> qualify as a persistent failure."
> >>
> >> and
> >>
> >> "Hmm ... a SELECT from one of the "actual tables" would then scan the
> >> temp tables too, no?
> >>
> >> Thinking about this, I seem to recall that we had agreed to make the
> >> planner ignore temp tables of other backends when expanding an
> >> inheritance list --- but I don't see anything in the code implementing
> >> that, so it evidently didn't get done yet."
> >>
> >> I don't immediately see TODO items corresponding to these. Should
> >> there be some? Or do these qualify as bugs and should they be
> >> submitted to that queue?
> >
> > Would you show a query that causes the problem so I can properly word
> > the TODO item for inheritance and temp tables?
>
> It's really more of a timing issue than a specific query issue.
> Here's a scenario:
>
> CREATE TABLE parent ( ... );
>
> begin thread1:
> CREATE TEMP TABLE child ( ... ) INHERITS ( parent );
>
> begin thread2:
> while( 1 ) {
>     SELECT ... FROM parent WHERE ...;
> }
>
> end thread1 (thereby dropping the temp table at the end of session)
>
> At this point, the file is gone, but, as I understand it, the planner
> not ignoring temp tables of other backends means that thread2 is
> inappropriately accessing the temp table "child" as it performs
> SELECTs, thus causing potential dirty buffers in bgwriter, which at
> some point during the heavy activity of the tight SELECT loop, will
> have the file yanked out from under it and will throw a "No such
> file" error.
>
> So I guess the core issue is the failure of the planner to limit
> access to temp tables.
>
> Tom seems to come pretty close to a TODO item in his analysis in my
> opinion. Something like:
>
> "Make the planner ignore temp tables of other backends when expanding
> an inheritance list."
>
> --
> Thomas F. O'Connell
> Co-Founder, Information Architect
> Sitening, LLC
> http://www.sitening.com/

--
Bruce Momjian
Re: [HACKERS] PL/Perl list value return causes segfault
Would someone who knows perl update plperl.sgml and send me a patch?

Also, is this still true in 8.1:

   In the current implementation, if you are fetching or returning
   very large data sets, you should be aware that these will all go
   into memory.

---

Andrew Dunstan wrote:
> David Fetter wrote:
> > On Tue, Jul 12, 2005 at 03:45:55PM -0400, Andrew Dunstan wrote:
> >> In perl, if there is any ambiguity it is the called function that is
> >> responsible for checking, not the caller. See "perldoc -f wantarray".
> >> PLPerl explicitly passes G_SCALAR as a flag on all calls to plperl
> >> routines. So returning a list is a case of pilot error.
> >
> > Is this a kind of pilot error that documents could help avert in some
> > useful way?
>
> Sure. "A plperl function must always return a scalar value. More complex
> structures (arrays, records, and sets) can be returned in the
> appropriate context by returning a reference. A list should never be
> returned." Salt to taste and insert where appropriate.
>
> cheers
>
> andrew

--
Bruce Momjian
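Andrew's wording can be illustrated with a small sketch, assuming 8.1-era PL/Perl; the function names here are made up for the example:

```sql
-- Fine: a plain scalar return value.
CREATE FUNCTION perl_max(integer, integer) RETURNS integer AS $$
    return $_[0] > $_[1] ? $_[0] : $_[1];
$$ LANGUAGE plperl;

-- Fine: a composite value returned as a hash *reference*, which is
-- itself a scalar.
CREATE FUNCTION perl_pair() RETURNS record AS $$
    return { a => 1, b => 'two' };
$$ LANGUAGE plperl;

-- Pilot error: "return (1, 'two');" returns a Perl list, which is
-- what triggered the segfault this thread is about.
```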
Re: [HACKERS] Vacuum summary?
Added to TODO:

* Add system view to show free space map contents

---

Simon Riggs wrote:
> On Tue, 2005-07-12 at 14:56 -0700, Joshua D. Drake wrote:
> > > It'd be relatively easy I think to extract the current FSM statistics
> > > in a function that could be invoked separately from VACUUM. Not sure
> > > how we ought to return 'em though --- the VACUUM way of a bunch of INFO
> > > messages is a bit old-fashioned. Maybe a statistics view?
> >
> > That would work for me.
>
> Sounds good.
>
> I would also like the statistics view to show when all the FSM tracked
> pages are used up for a particular relation and the relation needs
> vacuuming. That way we can integrate it with autovacuum.
>
> Best Regards, Simon Riggs

--
Bruce Momjian
Re: [HACKERS] Must be owner to truncate?
Stephen Frost wrote:
> * Jim C. Nasby ([EMAIL PROTECTED]) wrote:
> > On Thu, Jul 07, 2005 at 01:48:59PM -0400, Tom Lane wrote:
> > > I don't really agree with the viewpoint that truncate is just a quick
> > > DELETE, and so I do not agree that DELETE permissions should be enough
> > > to let you do a TRUNCATE.
> >
> > What about adding a truncate permission? I would find it useful, as it
> > seems would others.
>
> That would be acceptable for me as well. I'd prefer it just work off
> delete, but as long as I can grant truncate to someone w/o giving them
> ownership rights on the table I'd be happy.

Added to TODO:

* Add TRUNCATE permission

  Currently only the owner can TRUNCATE a table because triggers are not
  called, and the table is locked in exclusive mode.

--
Bruce Momjian
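As a sketch of the current behavior the TODO item addresses (role and table names hypothetical, assuming an 8.1-era server):

```sql
CREATE TABLE t (i integer);
GRANT DELETE ON t TO app_user;

-- As app_user:
DELETE FROM t;   -- allowed by the DELETE grant
TRUNCATE t;      -- fails: only the table's owner may truncate

-- The proposed TODO item would make something like
--   GRANT TRUNCATE ON t TO app_user;
-- possible without handing over ownership of the table.
```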
Re: [HACKERS] PQescapeIdentifier
Christopher Kings-Lynne wrote:
> How about a PQescapeIdentifier function in libpq? :)

Good idea, added to TODO.

--
Bruce Momjian
Re: [HACKERS] More buildfarm stuff
Jim C. Nasby wrote:
> My buildfarm machine
> (http://pgbuildfarm.org/cgi-bin/show_history.pl?nm=octopus&br=HEAD)
> is SMP, so if anything we need UP testing.

My UP 4.11-STABLE box is accessible again. If someone wants, I can set up another buildfarm member...

LER

--
Larry Rosenman  http://www.lerctr.org/~ler
Phone: +1 972-414-9812  E-Mail: ler@lerctr.org
US Mail: 3535 Gaspar Drive, Dallas, TX 75220-3611 US
Re: [HACKERS] More buildfarm stuff
On Tue, Jul 26, 2005 at 10:17:05PM +0200, Palle Girgensohn wrote:
> --On tisdag, juli 26, 2005 15.17.57 -0400 Tom Lane <[EMAIL PROTECTED]> wrote:
> > Larry Rosenman writes:
> > > On Jul 26 2005, Jim C. Nasby wrote:
> > > > So the question now is: how do we fix the issue with threaded python?
> >
> > > how do we get libc_r into the mix on FreeBSD 4.11?
> >
> > A possible compromise is to add -lc_r to LIBS if (a) --enable-python
> > and (b) platform is one of those known to need it.
> >
> > regards, tom lane
>
> I think most people use the ports when using postgresql with FreeBSD.
>
> There are a bunch of ports, one for the server, another for plpython, yet
> another for plperl. Hence, if the ports are used, the server will be
> configured separately from the plpython.so, and the above suggestion will
> not do. OTOH, the port for the server has a bunch of options (chosen using
> dialog(1)), where one is:
>
> "Link w/ libc_r, used by plpython"
>
> It defaults to off, but it is pretty obvious that if you need plpython, you
> should check that option.
>
> If it is on, the following happens:
>
> --
> .if ${OSVERSION} < 500016
> PTHREAD_CFLAGS?= -D_THREAD_SAFE
> PTHREAD_LIBS?= -pthread
> .elif ${OSVERSION} < 502102
> PTHREAD_CFLAGS?= -D_THREAD_SAFE
> PTHREAD_LIBS?= -lc_r
> .else
> PTHREAD_CFLAGS?=
> PTHREAD_LIBS?= -pthread
> .endif
> --

That works for /usr/ports/databases/postgresql*, but IMHO it'd be nice if the PostgreSQL source just dealt with this...

> BTW, I do have 4.11 boxes running SMP, shall I run another test, on SMP?
> Tip-of-trunk with --enable-python?
>
> /Palle

My buildfarm machine (http://pgbuildfarm.org/cgi-bin/show_history.pl?nm=octopus&br=HEAD) is SMP, so if anything we need UP testing.

--
Jim C. Nasby, Database Consultant  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"
Re: [HACKERS] [Testperf-general] dbt2 & opteron performance
On Fri, Jul 29, 2005 at 01:11:35PM -0700, Mark Wong wrote:
> On Fri, 29 Jul 2005 14:57:42 -0500 "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> > On Fri, Jul 29, 2005 at 12:51:57PM -0700, Mark Wong wrote:
> > > > Not sure I fully understand what you're trying to say, but it seems like
> > > > it might still be worth trying my original idea of just turning all 80
> > > > disks into one giant RAID0/striped array and see how much more bandwidth
> > > > you get out of that. At a minimum it would allow you to utilize the
> > > > remaining spindles, which appear to be unused right now.
> > >
> > > I have done that before actually, when the tablespace patch came out. I
> > > was able to get almost 40% more throughput with half the drives than
> > > striping all the disks together.
> >
> > Wow, that's a pretty stunning difference... any idea why?
> >
> > I think it might be very useful to see some raw disk IO benchmarks...
>
> A lot of it has to do with how the disk is being accessed. The log is
> ideally doing sequential writes, some tables are only read, some
> read/write. The varying access patterns between tables/log/indexes can
> negatively conflict with each other.

Well, separating logs from everything else does make a lot of sense. Still interesting that you've been able to see so much gain.

> Some of it has to do with how the OS deals with file systems. I think
> on linux there is a page buffer flush daemon per file system. A real OS
> person can answer this part better than me.

So, about testing with FreeBSD :P

--
Jim C. Nasby, Database Consultant  [EMAIL PROTECTED]
Re: [HACKERS] Chocked
Bruce Momjian wrote:
> Josh Berkus wrote:
> > OHP,
> >
> > > title says : Mysql: The world most advanced opensource database.
> >
> > Just to head this off: no, it doesn't.
> >
> > It says: MySQL: The world's most popular open source database
> >
> > That's been their slogan for quite a while. It's not precisely accurate
> > either, depending on your version of "popular", but it does provide a
> > nice contrast between us and them.
>
> They tool their current slogan as a response to ours.

took

Sorry.

--
Bruce Momjian
Re: [HACKERS] Chocked
Josh Berkus wrote:
> OHP,
>
> > title says : Mysql: The world most advanced opensource database.
>
> Just to head this off: no, it doesn't.
>
> It says: MySQL: The world's most popular open source database
>
> That's been their slogan for quite a while. It's not precisely accurate
> either, depending on your version of "popular", but it does provide a
> nice contrast between us and them.

They tool their current slogan as a response to ours.

--
Bruce Momjian
Re: [HACKERS] [Testperf-general] dbt2 & opteron performance
On Fri, 29 Jul 2005 13:19:06 -0700 "Luke Lonergan" <[EMAIL PROTECTED]> wrote:
> Mark,
>
> On 7/29/05 12:51 PM, "Mark Wong" <[EMAIL PROTECTED]> wrote:
> > Adaptec 2200s
>
> Have you tried non-RAID SCSI controllers in this configuration? When we
> used the Adaptec 2120s previously, we got very poor performance using SW
> RAID (though much better than HW RAID) compared to simple SCSI controllers.
>
> See attached, particularly the RAW RESULTS tab. Comments welcome :-)

No, we actually don't have any non-RAID SCSI controllers to try...

Mark
Re: [HACKERS] [Testperf-general] dbt2 & opteron performance
On Fri, 29 Jul 2005 13:35:32 -0700 Josh Berkus wrote:
> Mark,
>
> > I have done that before actually, when the tablespace patch came out. I
> > was able to get almost 40% more throughput with half the drives than
> > striping all the disks together.
>
> That's not the figures you showed me. In your report last year it was 14%,
> not 40%.

Sorry I wasn't clear, I'll elaborate. In the BOF at LWE-SF 2004, I did report a 13% improvement, but at the same time I also said I had not quantified it as well as I would have liked and was still working on a better physical disk layout. For LWE-Boston 2005, I did a little better and reported 35% (and misquoted myself to say 40%) here in these slides:

   http://developer.osdl.org/markw/presentations/lwebos2005bof.sxi

In that test I still had not separated the primary keys into separate tablespaces. I would imagine there is more throughput to be gained by doing that. I have the build scripts do that now, but again haven't quite quantified it yet.

Mark
Re: [HACKERS] [Testperf-general] dbt2 & opteron performance
Mark,

> I have done that before actually, when the tablespace patch came out. I
> was able to get almost 40% more throughput with half the drives than
> striping all the disks together.

That's not the figures you showed me. In your report last year it was 14%, not 40%.

--
Josh Berkus
Aglio Database Solutions
San Francisco
Re: [HACKERS] [Testperf-general] dbt2 & opteron performance
On Fri, 29 Jul 2005 14:57:42 -0500 "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> On Fri, Jul 29, 2005 at 12:51:57PM -0700, Mark Wong wrote:
> > > Not sure I fully understand what you're trying to say, but it seems like
> > > it might still be worth trying my original idea of just turning all 80
> > > disks into one giant RAID0/striped array and see how much more bandwidth
> > > you get out of that. At a minimum it would allow you to utilize the
> > > remaining spindles, which appear to be unused right now.
> >
> > I have done that before actually, when the tablespace patch came out. I
> > was able to get almost 40% more throughput with half the drives than
> > striping all the disks together.
>
> Wow, that's a pretty stunning difference... any idea why?
>
> I think it might be very useful to see some raw disk IO benchmarks...

A lot of it has to do with how the disk is being accessed. The log is ideally doing sequential writes, some tables are only read, some read/write. The varying access patterns between tables/log/indexes can negatively conflict with each other.

Some of it has to do with how the OS deals with file systems. I think on linux there is a page buffer flush daemon per file system. A real OS person can answer this part better than me.

Mark
Re: [HACKERS] [Testperf-general] dbt2 & opteron performance
On Fri, Jul 29, 2005 at 12:51:57PM -0700, Mark Wong wrote:
> > Not sure I fully understand what you're trying to say, but it seems like
> > it might still be worth trying my original idea of just turning all 80
> > disks into one giant RAID0/striped array and see how much more bandwidth
> > you get out of that. At a minimum it would allow you to utilize the
> > remaining spindles, which appear to be unused right now.
>
> I have done that before actually, when the tablespace patch came out. I
> was able to get almost 40% more throughput with half the drives than
> striping all the disks together.

Wow, that's a pretty stunning difference... any idea why?

I think it might be very useful to see some raw disk IO benchmarks...

--
Jim C. Nasby, Database Consultant  [EMAIL PROTECTED]
Re: [HACKERS] [Testperf-general] dbt2 & opteron performance
On Fri, 29 Jul 2005 14:39:08 -0500 "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> On Thu, Jul 28, 2005 at 05:00:44PM -0700, Mark Wong wrote:
> > On Thu, 28 Jul 2005 16:55:55 -0700 Mark Wong <[EMAIL PROTECTED]> wrote:
> > > On Thu, 28 Jul 2005 18:48:09 -0500 "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> > > > On Thu, Jul 28, 2005 at 04:15:31PM -0700, Mark Wong wrote:
> > > > > On Thu, 28 Jul 2005 17:17:25 -0500 "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> > > > > > On Wed, Jul 27, 2005 at 07:32:34PM -0700, Josh Berkus wrote:
> > > > > > > > This 4-way has 8GB of memory and four Adaptec 2200s controllers
> > > > > > > > attached to 80 spindles (eight 10-disk arrays). For those
> > > > > > > > familiar with the schema, here is a visual of the disk layout:
> > > > > > > >
> > > > > > > > http://www.osdl.org/projects/dbt2dev/results/dev4-015/layout-6.html
> > > > > >
> > > > > > Have you by chance tried it with the logs and data just going to
> > > > > > separate RAID10s? I'm wondering if a large RAID10 would do a better
> > > > > > job of spreading the load than segmenting things to specific drives.
> > > > >
> > > > > No, haven't tried that. That would reduce my number of spindles as I
> > > > > scale up. ;) I have the disks attached as JBODs and use LVM2 to
> > > > > stripe the disks together.
> > > >
> > > > I'm confused... why would it reduce the number of spindles? Is
> > > > everything just striped right now? You could always s/RAID10/RAID0/.
> > >
> > > RAID10 requires a minimum of 4 devices per LUN, I think. At least 2
> > > devices in a mirror, at least 2 mirrored devices to stripe.
> > >
> > > RAID0 wouldn't be any different than what I have now, except if I use
> > > hardware RAID I can't stripe across controllers. That's treating LVM2
> > > striping equal to software RAID0 of course.
> >
> > Oops, spindles was the wrong word to describe what I was losing. But I
> > wouldn't be able to spread the reads/writes across as many spindles if I
> > have any mirroring.
>
> Not sure I fully understand what you're trying to say, but it seems like
> it might still be worth trying my original idea of just turning all 80
> disks into one giant RAID0/striped array and see how much more bandwidth
> you get out of that. At a minimum it would allow you to utilize the
> remaining spindles, which appear to be unused right now.

I have done that before actually, when the tablespace patch came out. I was able to get almost 40% more throughput with half the drives than striping all the disks together.

Mark
Re: [HACKERS] [Testperf-general] dbt2 & opteron performance
On Thu, Jul 28, 2005 at 05:00:44PM -0700, Mark Wong wrote:
> On Thu, 28 Jul 2005 16:55:55 -0700 Mark Wong <[EMAIL PROTECTED]> wrote:
> > On Thu, 28 Jul 2005 18:48:09 -0500 "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> > > On Thu, Jul 28, 2005 at 04:15:31PM -0700, Mark Wong wrote:
> > > > On Thu, 28 Jul 2005 17:17:25 -0500 "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> > > > > On Wed, Jul 27, 2005 at 07:32:34PM -0700, Josh Berkus wrote:
> > > > > > > This 4-way has 8GB of memory and four Adaptec 2200s controllers
> > > > > > > attached to 80 spindles (eight 10-disk arrays). For those
> > > > > > > familiar with the schema, here is a visual of the disk layout:
> > > > > > >
> > > > > > > http://www.osdl.org/projects/dbt2dev/results/dev4-015/layout-6.html
> > > > >
> > > > > Have you by chance tried it with the logs and data just going to
> > > > > separate RAID10s? I'm wondering if a large RAID10 would do a better
> > > > > job of spreading the load than segmenting things to specific drives.
> > > >
> > > > No, haven't tried that. That would reduce my number of spindles as I
> > > > scale up. ;) I have the disks attached as JBODs and use LVM2 to stripe
> > > > the disks together.
> > >
> > > I'm confused... why would it reduce the number of spindles? Is
> > > everything just striped right now? You could always s/RAID10/RAID0/.
> >
> > RAID10 requires a minimum of 4 devices per LUN, I think. At least 2
> > devices in a mirror, at least 2 mirrored devices to stripe.
> >
> > RAID0 wouldn't be any different than what I have now, except if I use
> > hardware RAID I can't stripe across controllers. That's treating LVM2
> > striping equal to software RAID0 of course.
>
> Oops, spindles was the wrong word to describe what I was losing. But I
> wouldn't be able to spread the reads/writes across as many spindles if I
> have any mirroring.

Not sure I fully understand what you're trying to say, but it seems like it might still be worth trying my original idea of just turning all 80 disks into one giant RAID0/striped array and see how much more bandwidth you get out of that. At a minimum it would allow you to utilize the remaining spindles, which appear to be unused right now.

--
Jim C. Nasby, Database Consultant  [EMAIL PROTECTED]
Re: [HACKERS] Chocked
On Friday 29 July 2005 10:33, ohp@pyrenet.fr wrote:
> Who copied?
>
> I've been to mysql site 2 mn ago (did'nt occur since at least 6 months)
> title says : Mysql: The world most advanced opensource database.

I just checked and it states (exactly what it has for years) "The world's most popular open source database"

> Isn't it the title for postgresql?
>
> It seems weird for both projects to have the same claim (although it's
> true for postgreql...)
>
> Regards

-- 
Darcy Buskermolen
Wavefire Technologies Corp.
http://www.wavefire.com
ph: 250.717.0200
fx: 250.763.1759

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [HACKERS] Chocked
OHP,

> title says : Mysql: The world most advanced opensource database.

Just to head this off: no, it doesn't. It says:

    MySQL: The world's most popular open source database
                     ^

That's been their slogan for quite a while. It's not precisely accurate either, depending on your version of "popular", but it does provide a nice contrast between us and them.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
[HACKERS] Chocked
Who copied?

I've been to mysql site 2 mn ago (did'nt occur since at least 6 months)
title says : Mysql: The world most advanced opensource database.

Isn't it the title for postgresql?

It seems weird for both projects to have the same claim (although it's true for postgreql...)

Regards

-- 
Olivier PRENANT                 Tel: +33-5-61-50-97-00 (Work)
15, Chemin des Monges           +33-5-61-50-97-01 (Fax)
31190 AUTERIVE                  +33-6-07-63-80-64 (GSM)
FRANCE                          Email: ohp@pyrenet.fr
--
Make your life a dream, make your dream a reality. (St Exupery)

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [HACKERS] --enable-thread-safety on Win32
"Dave Page" writes:
> However, in all but one place in libpq we don't use errno anyway
> (actually 2, but one is a bug anyway) because we use GetLastError()
> instead (which tested thread-safe as well, FWIW). The only place it's
> used is PQoidValue():
>
>     result = strtoul(res->cmdStatus + 7, &endptr, 10);
>     if (!endptr || (*endptr != ' ' && *endptr != '\0') || errno == ERANGE)
>         return InvalidOid;
>     else
>         return (Oid) result;
>
> We don't believe strtoul() works with GetLastError(), unfortunately. One
> (hackish) solution would be to check that it doesn't return 0 or
> ULONG_MAX.

I'm not sure why we bother with an overflow check there at all. It'd be worth checking that there is a digit at cmdStatus + 7, but as long as there is one, it's difficult to see why an overflow check is needed. The only justification that comes to mind is that if someday there are versions of Postgres that have 64-bit OIDs, you could get an overflow here if you had a 32-bit-OID libpq talking to a 64-bit server. However, I don't see a particularly good reason to return InvalidOid instead of an overflowed value anyway in that situation. For PQoidValue, InvalidOid is supposed to mean "there is no OID in this command status", not "there is an OID but I cannot represent it".

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
Re: [HACKERS] --enable-thread-safety on Win32
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Dave Page
> Sent: 28 July 2005 16:16
> To: Bruce Momjian; Tom Lane
> Cc: PostgreSQL-development
> Subject: Re: [HACKERS] --enable-thread-safety on Win32
>
> > > OK, but I would then like someone to actually run the tests we do in
> > > thread_test.c and make sure they _would_ pass on Win32.
> >
> > OK, I will work on this, and subsequently fixing configure etc.

OK, I have the thread test working with the full pthreads library on Windows, and everything passes except errno (well, and getpwuid, which we don't have anyway). It is supposed to be thread-safe when apps are built against either libcmt.lib or msvcrt.dll (which we use), however it still seems to fail on MinGW.

From what I can find, the 'usual' way to make errno thread-safe is by using _beginthreadex() instead of CreateThread(), but that is done by the application of course, not libpq (in the test case, it would be done by pthreads). See http://www.microsoft.com/msj/0799/Win32/Win320799.aspx for a discussion of this if interested.

However, in all but one place in libpq we don't use errno anyway (actually 2, but one is a bug anyway) because we use GetLastError() instead (which tested thread-safe as well, FWIW). The only place it's used is PQoidValue():

    result = strtoul(res->cmdStatus + 7, &endptr, 10);
    if (!endptr || (*endptr != ' ' && *endptr != '\0') || errno == ERANGE)
        return InvalidOid;
    else
        return (Oid) result;

We don't believe strtoul() works with GetLastError(), unfortunately. One (hackish) solution would be to check that it doesn't return 0 or ULONG_MAX. Any suggestions?

Aside from this issue, the hacked test app and the changes to make all this compile are done.

Regards, Dave

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Re: [HACKERS] [PATCHES] Dbsize backend integration
Patch applied. Thanks.

/contrib/dbsize removed. New functions:

    pg_tablespace_size
    pg_database_size
    pg_relation_size
    pg_complete_relation_size
    pg_size_pretty

---------------------------------------------------------------------------

Dave Page wrote:
> > -----Original Message-----
> > From: Bruce Momjian [mailto:[EMAIL PROTECTED]
> > Sent: 06 July 2005 04:11
> > To: Tom Lane
> > Cc: Dave Page; Christopher Kings-Lynne; Robert Treat; Dawid Kuroczko;
> > Andreas Pflug; PostgreSQL-patches; PostgreSQL-development
> > Subject: Re: [HACKERS] [PATCHES] Dbsize backend integration
> >
> > Tom Lane wrote:
> > >
> > > pg_relation_size plus pg_complete_relation_size is fine. Ship it...
> >
> > OK.
>
> Updated version attached.
>
> Regards, Dave.

Content-Description: dbsize.c
[ Attachment, skipping... ]

Content-Description: dbsize.patch
[ Attachment, skipping... ]

-- 
Bruce Momjian                   | http://candle.pha.pa.us
pgman@candle.pha.pa.us          | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup.    | Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [HACKERS] [PATCHES] Patch to fix plpython on OS X
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> On Sunday, 24 July 2005 at 17:53, Tom Lane wrote:
>> I'm wondering why we still have a README there at all --- it's entirely
>> superseded by the SGML documentation.
>>
>> http://developer.postgresql.org/docs/postgres/regress-evaluation.html

> I think we kept it there so people can read it during the installation.

Yeah. I desisted from deleting it after I noticed that there are provisions for re-generating it over in the doc/src/sgml Makefile. However, I'm now wondering why it's not handled exactly like INSTALL --- ie, don't keep it in CVS, but auto-generate it during tarball build. The current manual procedure definitely isn't keeping it up to date.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [HACKERS] [PATCHES] Patch to fix plpython on OS X
On Sunday, 24 July 2005 at 17:53, Tom Lane wrote:
> I'm wondering why we still have a README there at all --- it's entirely
> superseded by the SGML documentation.
>
> http://developer.postgresql.org/docs/postgres/regress-evaluation.html

I think we kept it there so people can read it during the installation.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster