Re: [HACKERS] Weird ecpg failures on buildfarm NetBSD members
Kris Jurka <[EMAIL PROTECTED]> writes:
> On Mon, 9 Jul 2007, Tom Lane wrote:
>> Today's puzzler for the curious:
> It turns out that this failure was caused by pulling in pg's own printf
> implementation to the resulting ECPG program.

Hah!  Nice detective work, Kris.

> Calling printf("%.*f\n", -1, 14.7) results in "14" from pg_printf and
> "14.70" from NetBSD's.

So does this represent a bug or shortcoming in pg_printf?  A quick look at
the spec says that "A negative precision is taken as if the precision were
omitted", and rounding to int doesn't sound like the appropriate behavior
for bare %f.

			regards, tom lane

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
http://www.postgresql.org/about/donate
Re: [HACKERS] Weird ecpg failures on buildfarm NetBSD members
On Mon, 9 Jul 2007, Tom Lane wrote:

> Today's puzzler for the curious: Last night the buildfarm reported two
> ECPG-Check failures
> http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=salamander&dt=2007-07-08%2017:30:00
> http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=canary&dt=2007-07-08%2017:30:01
> which promptly went away again.  Judging by the timestamps these must
> have been induced by Joe's PQconnectionUsedPassword() patch and fixed by
> my subsequent tweaking, but how the heck did that result in an ecpg
> failure?  I think that the cause must have had something to do with his
> inclusion of postgres_fe.h into libpq-fe.h, which I took out on the
> grounds that it was an unacceptable pollution of client code namespace.
> But exactly why/how did that break ecpg, and why did the failure only
> manifest on NetBSD machines?

It turns out that this failure was caused by pulling in pg's own printf
implementation to the resulting ECPG program.  The failing test
(dyntest.pgc) prints its output using:

	printf ("%.*f\n", PRECISION, DOUBLEVAR);

Calling printf("%.*f\n", -1, 14.7) results in "14" from pg_printf and
"14.70" from NetBSD's.  This would only happen on machines where we don't
use the system-provided printf, which is why it was only seen on NetBSD,
although it could have been seen on mingw as well.

Kris Jurka

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
[HACKERS] "Running readonly queries on PITR slaves" status update
Hi

After struggling with understanding xlog.c and friends far enough to be
able to refactor StartupXLOG to suit the needs of concurrent recovery, I
think I've finally reached a workable (but still a bit hacky) solution.

My design is centered around the idea of a bgreplay process that takes over
the role of the bgwriter in readonly mode, and continuously replays WALs as
they arrive.  But since recovery during startup is still necessary (we need
to bring a filesystem-level backup into a consistent state - past
minRecoveryLoc - before allowing connections), this means doing recovery in
two steps, from two different processes.

I've changed StartupXLOG to only recover up to minRecoveryLoc in readonly
mode, and to skip all steps that are not required if no writes to the
database will be done later (especially creating a checkpoint at the end of
recovery).  Instead, it posts the pointer to the last recovered xlog record
to shared memory.  bgreplay then uses that pointer for an initial call to
ReadRecord to set up WAL reading for the bgreplay process.  Afterwards, it
repeatedly calls ReplayXLOG (new function), which always replays at least
one record (if there is one, otherwise it returns false), until it reaches
a safe restart point.

Currently, in my test setup, I can start a slave in readonly mode and it
will do initial recovery, bring postgres online, and continuously recover
from inside bgreplay.  There isn't yet any locking between wal replay and
queries.  I'll add that locking during the next few days, which should
result in a very early prototype.  The next steps will then be finding a
way to flush backend caches after replaying code that modified system
tables, and (related) finding a way to deal with the flatfiles.

I'd appreciate any comments on this, especially those pointing out problems
that I overlooked.

greetings, Florian Pflug

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
http://www.postgresql.org/about/donate
Re: [HACKERS] ReadRecord, EndRecPtr and XLOG_SWITCH
Florian G. Pflug wrote: Please disregard - I was confusing xlogid with xlog segments, so most of my mail was nonsense. I've fixed my problem by storing not the EndRecPtr, but rather the ReadRecPtr, in shmem and rereading the last already applied record in my bgreplay process. Then I can just use ReadRecord(NULL), and things Just Work. Sorry for the noise & greetings Florian Pflug ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[HACKERS] ReadRecord, EndRecPtr and XLOG_SWITCH
Hi

When ReadRecord encounters an XLOG_SWITCH record, it does

	EndRecPtr.xrecoff += XLogSegSize - 1;
	EndRecPtr.xrecoff -= EndRecPtr.xrecoff % XLogSegSize;

which seems to set xrecoff to either 0 (if it was 0) or to XLogSegSize (if
it was > 0).  Note that xrecoff == XLogSegSize is kind of "denormalized" -
the normalized version would have xrecoff == 0, and xlogid = xlogid+1.

Passing this "denormalized" EndRecPtr to ReadRecord again to read the next
record then triggers a PANIC ("invalid record offset at ??/1000").
Passing NULL to ReadRecord to read the next record works, because it takes
care to align the EndRecPtr to the next page, thereby fixing the
"denormalization".

Is there a reason not to do the same for non-NULL arguments to ReadRecord?
Or is there some failure case that the current behaviour protects against?

The reason I stumbled over this is that I want to restart archive recovery
from a "bgreplay" process - I tried passing the EndRecPtr via shmem, and
using it as my initial argument to ReadRecord, and thereby stumbled over
this behaviour.

greetings, Florian Pflug

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message can get
through to the mailing list cleanly
Re: [HACKERS] PQescapeBytea* version for parameters
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> Do we want something like this which provides a PQescapeByteaParam for
>> escaping bytea strings before passing them as text-mode parameters in
>> PQexecParam?
>
> Seems a lot easier and more efficient to just pass out-of-line bytea
> parameters as binary mode.

Well that's definitely true.  The case in hand was a PHP application where
the driver doesn't seem to automatically use binary mode and doesn't
provide any way for the application to select it either.  It expects the
user code to handle the escaping for all parameters using PQescape*
functions.  But there is no candidate function to handle bytea ascii
parameters.  I'm sure it can be done in PHP directly though.

Incidentally it seems even using PQescapeBytea with standard conforming
strings set is still corrupting the byteas, so there may be an actual bug
somewhere.  Haven't had a chance to look into it yet though.

--
 Gregory Stark
 EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
http://www.postgresql.org/about/donate
Re: [HACKERS] psql/pg_dump vs. dollar signs in identifiers
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Unless you're doing multi-line regex, what's the point of a $ anywhere
> but the end of the expression?  Am I missing something?  Likewise with ^.

Leaving out the backslashes, you can do things like

	(foo$|baz|qux)(baz|qux|)

to say that all 9 combinations of those two tokens are valid except that
foo must be followed by the empty second half.  But it can always be
refactored into something more normal like

	(foo|((baz|qux)(baz|qux)?))

> I'm inclined to escape $ as Tom suggested.

Yeah, I have a tendency to look for the most obscure counter-example if
only to be sure I really understand precisely how obscure it is.  I do
agree that it's not a realistic concern.  Especially since I never even
realized we handled regexps here at all :)

IIRC some regexp engines don't actually treat $ specially except at the
end of the regexp.  Tom's just suggesting doing the same thing here, where
complicated regexps are even *less* likely and dollars as literals more.

--
 Gregory Stark
 EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Re: [HACKERS] psql/pg_dump vs. dollar signs in identifiers
On Mon, Jul 09, 2007 at 07:04:27PM +0100, Gregory Stark wrote:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>
>> Now, because we surround the pattern with ^...$ anyway, I can't offhand
>> see a use-case for putting $ with its regexp meaning into the pattern.
>
> It's possible to still usefully use $ in the regexp, but its existence
> at the end means there should always be a way to write the regexp
> without needing another one inside.

Unless you're doing multi-line regex, what's the point of a $ anywhere but
the end of the expression?  Am I missing something?  Likewise with ^.

I'm inclined to escape $ as Tom suggested.

--
 Jim Nasby                                      [EMAIL PROTECTED]
 EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
Re: [HACKERS] psql/pg_dump vs. dollar signs in identifiers
Gregory Stark <[EMAIL PROTECTED]> writes:
> Incidentally, are these really regexps?  I always thought they were globs.

They're regexps under the hood, but we treat . as a schema separator and
translate * to .*, which makes it look like mostly a glob scheme.  But you
can make use of brackets, |, +, ...

			regards, tom lane

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
http://www.postgresql.org/about/donate
Re: [HACKERS] psql/pg_dump vs. dollar signs in identifiers
"Tom Lane" <[EMAIL PROTECTED]> writes:

> Now, because we surround the pattern with ^...$ anyway, I can't offhand
> see a use-case for putting $ with its regexp meaning into the pattern.

It's possible to still usefully use $ in the regexp, but its existence at
the end means there should always be a way to write the regexp without
needing another one inside.

Incidentally, are these really regexps?  I always thought they were globs.
And experiments seem to back up my memory:

	postgres=# \d foo*
	    Table "public.foo^bar"
	 Column |  Type   | Modifiers
	--------+---------+-----------
	 i      | integer |

	postgres=# \d foo.*
	Did not find any relation named "foo.*".

> Comments?

The first half of the logic applies to ^ as well.  There's no use case for
regexps using ^ inside.  You would have to use quotes to create the table,
but we could have \d foo^* work:

	postgres=# \d foo^*
	Did not find any relation named "foo^*".

--
 Gregory Stark
 EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message can get
through to the mailing list cleanly
Re: [HACKERS] Warm standby stall -- what debug info would help?
>>> On Mon, Jul 9, 2007 at 11:36 AM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote:
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
>> [2007-07-07 18:24:27.692 CDT] 5962 LOG:  restored log file
>> "000000010000000C000000DA" from archive
>> [2007-07-07 18:24:28.051 CDT] 5962 LOG:  restored log file
>> "000000010000000C000000DB" from archive
>> [2007-07-09 08:21:50.200 CDT] 5904 LOG:  received fast shutdown request
>> [2007-07-09 08:21:50.201 CDT] 5962 FATAL:  could not restore file
>> "000000010000000C000000DC" from archive: return code 15
>
> Evidently it was waiting for the restore_command script to give it back
> a file.  So the problem is within your restore script.  Eyeing the
> script, the only obvious thing that could block it is existence of
> /var/pgsql/data/county/$countyName/wal-files/rsync-in-progress

Sorry for the noise.  It wasn't the rsync file but something even more
obvious that I managed to misread.  The rsync was failing to copy the
files from the counties to the directory where recovery reads them.  I
could have sworn I checked that, but I clearly messed up.

-Kevin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] Warm standby stall -- what debug info would help?
On Mon, 2007-07-09 at 12:36 -0400, Tom Lane wrote:
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
>> [2007-07-07 18:24:27.692 CDT] 5962 LOG:  restored log file
>> "000000010000000C000000DA" from archive
>> [2007-07-07 18:24:28.051 CDT] 5962 LOG:  restored log file
>> "000000010000000C000000DB" from archive
>> [2007-07-09 08:21:50.200 CDT] 5904 LOG:  received fast shutdown request
>> [2007-07-09 08:21:50.201 CDT] 5962 FATAL:  could not restore file
>> "000000010000000C000000DC" from archive: return code 15
>
> Evidently it was waiting for the restore_command script to give it back
> a file.  So the problem is within your restore script.

Agreed.

Kevin, can you try pg_standby please?  It would help me to diagnose
problems faster if that doesn't work.  TIA.

--
 Simon Riggs
 EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose
an index scan if your joining column's datatypes do not match
[HACKERS] psql/pg_dump vs. dollar signs in identifiers
An example being discussed on the jdbc list led me to try this:

	regression=# create table a$b$c (f1 int);
	CREATE TABLE
	regression=# \d a$b$c
	Did not find any relation named "a$b$c".

It works if you use quotes:

	regression=# \d "a$b$c"
	    Table "public.a$b$c"
	 Column |  Type   | Modifiers
	--------+---------+-----------
	 f1     | integer |

The reason it doesn't work without quotes is that processSQLNamePattern()
thinks this:

	 * Inside double quotes, or at all times if force_escape is true,
	 * quote regexp special characters with a backslash to avoid
	 * regexp errors.  Outside quotes, however, let them pass through
	 * as-is; this lets knowledgeable users build regexp expressions
	 * that are more powerful than shell-style patterns.

and of course $ is a regexp special character, so it bollixes up the match.

Now, because we surround the pattern with ^...$ anyway, I can't offhand
see a use-case for putting $ with its regexp meaning into the pattern.
And since we do allow $ as a non-first character of identifiers, there is
a use-case for expecting it to be treated like an ordinary character.  So
I'm thinking that $ ought to be quoted whether it's inside double quotes
or not.  This change would affect psql's describe commands as well as
pg_dump -t and -n patterns.

Comments?

			regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message can get
through to the mailing list cleanly
Re: [HACKERS] Warm standby stall -- what debug info would help?
>>> On Mon, Jul 9, 2007 at 10:54 AM, in message <[EMAIL PROTECTED]>, "Kevin Grittner" <[EMAIL PROTECTED]> wrote: > We're running a number of warm standby instances on one server. One of them > stalled on Saturday. When I found that this morning, I confirmed that the > files were in the directory from which it should be pulling the WAL files. > The logs showed normal processing up to the stall, with no messages > afterwards. I tried a restart and it resumed warm standby status and caught > up quickly. No, actually it stalled on the same WAL file. I've got another one in the same state that I haven't touched yet. I'll work on gathering what info I can think of, but if there's something in particular you would like to see, let me know. More in a bit. Should this be on the bugs list instead of hackers? -Kevin ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] Warm standby stall -- what debug info would help?
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> [2007-07-07 18:24:27.692 CDT] 5962 LOG:  restored log file
> "000000010000000C000000DA" from archive
> [2007-07-07 18:24:28.051 CDT] 5962 LOG:  restored log file
> "000000010000000C000000DB" from archive
> [2007-07-09 08:21:50.200 CDT] 5904 LOG:  received fast shutdown request
> [2007-07-09 08:21:50.201 CDT] 5962 FATAL:  could not restore file
> "000000010000000C000000DC" from archive: return code 15

Evidently it was waiting for the restore_command script to give it back a
file.  So the problem is within your restore script.  Eyeing the script,
the only obvious thing that could block it is existence of
/var/pgsql/data/county/$countyName/wal-files/rsync-in-progress

			regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message can get
through to the mailing list cleanly
Re: [HACKERS] Idea: Comments on system catalogs?
On Wed, Jul 04, 2007 at 01:03:20PM +0200, Dawid Kuroczko wrote:
> Hello.
>
> I think it could be a nice idea to put descriptions from
> http://www.postgresql.org/docs/8.2/static/catalogs.html
> into system catalogs itself.  I.e., make a bunch of
>
> COMMENT ON COLUMN pg_class.relname
>    IS 'Name of the table, index, view, etc.';
> ...
> COMMENT ON COLUMN pg_class.relkind
>    IS 'r = ordinary table, i = index, S = sequence, v = view,
> c = composite type, t = TOAST table';
>
> and so on.
>
> I think it could be helpful, when you're writing your own selects
> on system catalogs.
>
> Perhaps it could be optional (as a contrib .sql file).
>
> If you like the idea, I could prepare a script which will
> convert documentation into .sql file with series of
> COMMENT ON .. IS ...;

Actually, this does exist for some things in the catalog; I suspect it
just wasn't done in the past (perhaps Postgres didn't originally have
comments).  I think it would be a useful addition.  But I think it'd need
to be more than just a .sql file (initdb would probably need to be
modified).  Ideally, we'd be able to suck the info out of the appropriate
.sgml files.

--
 Jim Nasby                                      [EMAIL PROTECTED]
 EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
[HACKERS] Warm standby stall -- what debug info would help?
We're running a number of warm standby instances on one server.  One of
them stalled on Saturday.  When I found that this morning, I confirmed
that the files were in the directory from which it should be pulling the
WAL files.  The logs showed normal processing up to the stall, with no
messages afterwards.  I tried a restart and it resumed warm standby status
and caught up quickly.

It seems like this is probably a PostgreSQL bug of some sort, although
maybe someone can spot a problem in our recovery script.  If it happens
again, what information should I gather before the restart to help find
the cause?

-Kevin

PGBACKUP:/var/pgsql/data/county/ozaukee/data/pg_log # tail postgresql-2007-07-07_00.log
[2007-07-07 09:44:41.392 CDT] 5962 LOG:  restored log file "000000010000000C000000D2" from archive
[2007-07-07 12:24:53.597 CDT] 5962 LOG:  restored log file "000000010000000C000000D3" from archive
[2007-07-07 12:24:53.984 CDT] 5962 LOG:  restored log file "000000010000000C000000D4" from archive
[2007-07-07 12:24:54.351 CDT] 5962 LOG:  restored log file "000000010000000C000000D5" from archive
[2007-07-07 14:10:42.208 CDT] 5962 LOG:  restored log file "000000010000000C000000D6" from archive
[2007-07-07 14:10:42.634 CDT] 5962 LOG:  restored log file "000000010000000C000000D7" from archive
[2007-07-07 15:23:41.717 CDT] 5962 LOG:  restored log file "000000010000000C000000D8" from archive
[2007-07-07 18:24:26.933 CDT] 5962 LOG:  restored log file "000000010000000C000000D9" from archive
[2007-07-07 18:24:27.692 CDT] 5962 LOG:  restored log file "000000010000000C000000DA" from archive
[2007-07-07 18:24:28.051 CDT] 5962 LOG:  restored log file "000000010000000C000000DB" from archive

PGBACKUP:/var/pgsql/data/county/ozaukee/data/pg_log # cat postgresql-2007-07-09_00.log
[2007-07-09 08:21:50.200 CDT] 5904 LOG:  received fast shutdown request
[2007-07-09 08:21:50.201 CDT] 5962 FATAL:  could not restore file "000000010000000C000000DC" from archive: return code 15
[2007-07-09 08:21:50.485 CDT] 5904 LOG:  startup process (PID 5962) exited with exit code 1
[2007-07-09 08:21:50.485 CDT] 5904 LOG:  aborting startup due to startup process failure
[2007-07-09 08:21:50.555 CDT] 5960 LOG:  logger shutting down

PGBACKUP:/var/pgsql/data/county/ozaukee/data/pg_log # cat postgresql-2007-07-09_082151.log | grep -v 'starting up'
[2007-07-09 08:21:51.718 CDT] 19076 LOG:  database system was interrupted while in recovery at log time 2007-07-07 15:00:02 CDT
[2007-07-09 08:21:51.718 CDT] 19076 HINT:  If this has occurred more than once some data may be corrupted and you may need to choose an earlier recovery target.
[2007-07-09 08:21:51.718 CDT] 19076 LOG:  starting archive recovery
[2007-07-09 08:21:51.725 CDT] 19076 LOG:  restore_command = "/usr/local/backup/recovery.sh %f %p"
[2007-07-09 08:21:52.079 CDT] 19076 LOG:  restored log file "000000010000000C000000D9" from archive
[2007-07-09 08:21:52.079 CDT] 19076 LOG:  checkpoint record is at C/D920
[2007-07-09 08:21:52.079 CDT] 19076 LOG:  redo record is at C/D920; undo record is at 0/0; shutdown FALSE
[2007-07-09 08:21:52.079 CDT] 19076 LOG:  next transaction ID: 0/10700115; next OID: 1387338
[2007-07-09 08:21:52.079 CDT] 19076 LOG:  next MultiXactId: 1; next MultiXactOffset: 0
[2007-07-09 08:21:52.079 CDT] 19076 LOG:  automatic recovery in progress
[2007-07-09 08:21:52.081 CDT] 19076 LOG:  redo starts at C/D968
[2007-07-09 08:21:52.429 CDT] 19076 LOG:  restored log file "000000010000000C000000DA" from archive
[2007-07-09 08:21:52.860 CDT] 19076 LOG:  restored log file "000000010000000C000000DB" from archive

PGBACKUP:/var/pgsql/data/county/ozaukee/data # grep -Ev '^([[:space:]]+)?($|#)' postgresql.conf
listen_addresses = '*'			# what IP address(es) to listen on;
port = 5445				# (change requires restart)
max_connections = 50			# (change requires restart)
shared_buffers = 160MB			# min 128kB or max_connections*16kB
temp_buffers = 10MB			# min 800kB
work_mem = 30MB				# min 64kB
maintenance_work_mem = 160MB		# min 1MB
max_fsm_pages = 204800			# min max_fsm_relations*16, 6 bytes each
bgwriter_lru_percent = 20.0		# 0-100% of LRU buffers scanned/round
bgwriter_lru_maxpages = 200		# 0-1000 buffers max written/round
bgwriter_all_percent = 10.0		# 0-100% of all buffers scanned/round
bgwriter_all_maxpages = 600		# 0-1000 buffers max written/round
wal_buffers = 1MB			# min 32kB
checkpoint_segments = 10		# in logfile segments, min 1, 16MB each
checkpoint_timeout = 30min		# range 30s-1h
archive_timeout = 3600			# force a logfile segment switch after this
seq_page_cost = 0.1			# measured on an arbitrary scale
random_page_cost = 0.1			# same scale as above
effective_cache_size = 3GB
redirect_stderr = on
[HACKERS] Weird ecpg failures on buildfarm NetBSD members
Today's puzzler for the curious:

Last night the buildfarm reported two ECPG-Check failures

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=salamander&dt=2007-07-08%2017:30:00
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=canary&dt=2007-07-08%2017:30:01

which promptly went away again.  Judging by the timestamps these must have
been induced by Joe's PQconnectionUsedPassword() patch and fixed by my
subsequent tweaking, but how the heck did that result in an ecpg failure?

I think that the cause must have had something to do with his inclusion of
postgres_fe.h into libpq-fe.h, which I took out on the grounds that it was
an unacceptable pollution of client code namespace.  But exactly why/how
did that break ecpg, and why did the failure only manifest on NetBSD
machines?

I don't really have time to investigate this, but would like to know what
happened.

			regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [HACKERS] Threaded Python on BSD ...
On Jul 9, 2007, at 12:24 PM, Marko Kreen wrote:
> On 7/9/07, Hans-Juergen Schoenig <[EMAIL PROTECTED]> wrote:
>> does anybody remember why threaded python is not allowed on some
>> flavors of BSD?
>
> AFAIR the problem is they use separate libc for threaded things,
> and main postgres is (and will be) linked with non-threaded libc.

ok, so some linking tweaks should be enough to make this work.  this is
doable (to make BSD fundamentalists happy here).  i was just thinking of
some BSD compliance thing which would be worse ...

many thanks,

	hans

--
Cybertec Geschwinde & Schönig GmbH
Gröhrmühlgasse 26, 2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
Re: [HACKERS] Threaded Python on BSD ...
On 7/9/07, Hans-Juergen Schoenig <[EMAIL PROTECTED]> wrote:
> does anybody remember why threaded python is not allowed on some
> flavors of BSD?

AFAIR the problem is they use separate libc for threaded things,
and main postgres is (and will be) linked with non-threaded libc.

--
 marko

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
[HACKERS] Threaded Python on BSD ...
hello all ...

does anybody remember why threaded python is not allowed on some flavors
of BSD?  i was surprised to read this in the configure script ...

	# threaded python is not supported on bsd's
	echo "$as_me:$LINENO: checking whether Python is compiled with thread support" >&5
	echo $ECHO_N "checking whether Python is compiled with thread support... $ECHO_C" >&6
	pythreads=`${PYTHON} -c "import sys; print int('thread' in sys.builtin_module_names)"`
	if test "$pythreads" = "1"; then
	  echo "$as_me:$LINENO: result: yes" >&5
	  echo "${ECHO_T}yes" >&6
	  case $host_os in
	  openbsd*|freebsd*)
	    { { echo "$as_me:$LINENO: error: threaded Python not supported on this platform" >&5
	    echo "$as_me: error: threaded Python not supported on this platform" >&2;}
	    { (exit 1); exit 1; }; }

is there an issue with BSD itself or is it just a matter of linking the
backend against pthreads?  the problem is that this is a bit of a
showstopper for skytools on BSD ...

many thanks,

	hans

--
Cybertec Geschwinde & Schönig GmbH
Gröhrmühlgasse 26, 2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
Re: [HACKERS] Implementation of new operators inside the PostgreSQL
rupesh bajaj wrote:
> We are currently generating the patch and are ready to discuss all
> issues in the hackers list.  Since this process will take some time, is
> it permissible for us to release our version on our institute
> <http://dsl.serc.iisc.ernet.in> website?

Yes, sure.  You should read the BSD license PostgreSQL is released under,
if you're not familiar with it already:
http://www.postgresql.org/about/licence.html

In short, you're free to modify and release whatever you want, as long as
you keep the copyright notices and the license text intact.

--
 Heikki Linnakangas
 EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message can get
through to the mailing list cleanly
Re: [HACKERS] Implementation of new operators inside the PostgreSQL
Hi,

We are currently generating the patch and are ready to discuss all issues
in the hackers list.  Since this process will take some time, is it
permissible for us to release our version on our institute
<http://dsl.serc.iisc.ernet.in> website?

Thanks and Regards,
Rupesh & Sharat

On 7/8/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> rupesh bajaj wrote:
>> I am mailing on behalf of the Database Systems Lab, Indian Institute of
>> Science (IISc), Bangalore, India.  We have implemented three new
>> operators internal to the PostgreSQL 8.1.2 engine in order to support
>> queries on multilingual data (english and hindi as of now).  It can be
>> extended to support any other language.  In the process, we have
>> augmented the parser, rewriter, planner and optimizer to support such
>> queries.  We want to make a release of our version now.
>>
>> Could you please let me know if there is any standard procedure to be
>> followed for release of PostgreSQL.  Also please let me know if we can
>> release our version on the official PostgreSQL site - www.postgresql.org
>
> The normal procedure is to discuss the feature and design on
> pgsql-hackers first, and then send a patch against CVS HEAD to
> pgsql-patches for review.
>
> The first thing you need to do is to convince people that the feature is
> worth having.  What does it provide that you can't do with the current
> feature set?  How does it work from user's point of view?
>
> After that you need to discuss the design.  Are all those changes to the
> parser, rewriter, planner and optimizer really necessary?  How does it
> interact with all the other features, like tsearch2 and indexes?
>
> Since you've already done the work, you can just submit the patch as it
> is for people to look at, in addition to the above, but it's extremely
> unlikely that it will be accepted as it is.
>
> --
>  Heikki Linnakangas
>  EnterpriseDB  http://www.enterprisedb.com
Re: [HACKERS] tsearch2: language or encoding
On Fri, 2007-07-06 at 15:43 +0900, Tatsuo Ishii wrote:
> I'm wondering if a tsearch's configuration is bound to a language or an
> encoding.  If it's bound to a language, there's a serious design
> problem, I would think.  An encoding or charset is not necessarily
> bound to a single language.  We can find such examples everywhere (I'm
> not talking about Unicode here).  LATIN1 includes English and several
> european languages.  EUC-JP includes English and Japanese etc.  And
> since we specify encoding as a char's property, not language, I would
> say the configuration should be bound to an encoding.

Perhaps the encoding could suggest a default language, but I see no direct
connection in many cases between language and encoding, especially for
European languages and encodings.

--
 Simon Riggs
 EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Re: [HACKERS] Bgwriter strategies
On Fri, 2007-07-06 at 10:55 +0100, Heikki Linnakangas wrote:
> We need to get the requirements straight.
>
> One goal of bgwriter is clearly to keep just enough buffers clean in
> front of the clock hand so that backends don't need to do writes
> themselves until the next bgwriter iteration.  But not any more than
> that, otherwise we might end up doing more writes than necessary if
> some of the buffers are redirtied.

The purpose of the WAL/shared buffer cache is to avoid having to write all
of the data blocks touched by a transaction to disk before end of
transaction, thus reducing request response time.  That purpose is only
fulfilled iff using the shared buffer cache does not require us to write
out someone else's dirty buffers, while avoiding our own.  The bgwriter
exists specifically to clean the dirty buffers, so that users do not have
to clean theirs or anybody else's dirty buffers.

> To deal with bursty workloads, for example a batch of 2 GB worth of
> inserts coming in every 10 minutes, it seems we want to keep doing a
> little bit of cleaning even when the system is idle, to prepare for the
> next burst.  The idea is to smoothen the physical I/O bursts; if we
> don't clean the dirty buffers left over from the previous burst during
> the idle period, the I/O system will be bottlenecked during the bursts,
> and sit idle otherwise.

In short, bursty workloads are the normal situation.  When capacity is not
saturated the bgwriter can utilise the additional capacity to reduce
statement response times.  It is standard industry practice to avoid
running a system at peak throughput for long periods of time, so DBT-2
does not represent a normal situation.  This is because response times are
only predictable on a non-saturated system, and most apps have some
implicit or explicit service level objective.  However, the server needs
to cope with periods of saturation, so must be able to perform efficiently
during those times.

So I see there are two modes of operation:

 i) dirty block write offload when capacity is available
ii) efficient operation when the server is saturated

DBT-2 represents only the second mode of operation; the two modes are
equally important, yet mode i) is the ideal situation.

> To strike a balance between cleaning buffers ahead of possible bursts
> in the future and not doing unnecessary I/O when no such bursts come, I
> think a reasonable strategy is to write buffers with usage_count=0 at a
> slow pace when there's no buffer allocations happening.

Agreed.

--
 Simon Riggs
 EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Bgwriter strategies
On Thu, 2007-07-05 at 21:50 +0100, Heikki Linnakangas wrote:
> All test runs were also patched to count the # of buffer allocations,
> and # of buffer flushes performed by bgwriter and backends.  Here's
> those results (I hope the indentation gets through properly):
>
>                        imola-336   imola-337   imola-340
> writes by checkpoint       38302       30410       39529
> writes by bgwriter        350113     2205782     1418672
> writes by backends       1834333      265755      787633
> writes total             2222748     2501947     2245834
> allocations              2683170     2657896     2699974

These results may show that the minimum bgwriter_delay of 10ms may be too
large for the workloads: whatever the strategy used, the bgwriter spends
too much time sleeping when it should be working.

--
 Simon Riggs
 EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org