Re: [HACKERS] CRCs
Tom Lane wrote:
> Instead of a partial row CRC, we could just as well use some other bit
> of identifying information, say the row OID. Given a block CRC on the
> heap page, we'll be pretty confident already that the heap page is OK,
> we just need to guard against the possibility that it's older than the
> index item. Checking that there is a valid tuple at the slot indicated
> by the index item, and that it has the right OID, should be a good
> enough (and cheap enough) test.

I would hardly call an additional 4 bytes for OID per index entry cheap.

Andreas
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Lamar Owen wrote:
> Ok, I have a first set of 7.1beta3 RPMs uploading now. ... pgaccess
> currently will not run unless you reconfigure to use -i in the startup.
> This is also being fixed in the RPMset -- there is a change necessary
> in postgresql.config, I just have to do the change.

In my experience, pgaccess will use the Unix socket if the hostname is left blank. Is this not the case with your RPMs?

-- 
Oliver Elphick   [EMAIL PROTECTED]
Isle of Wight    http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C
"For I know that my redeemer liveth, and that he shall stand at the latter day upon the earth"  Job 19:25
[HACKERS] subselect bug?
While the query below is OK:

select * from table_a a where
  (select data_a from table_a where id = a.id) <>
  (select data_b from table_a where id = a.id);

this fails:

select * from table_a a where
  ((select data_a from table_a where id = a.id) <>
   (select data_b from table_a where id = a.id));
ERROR:  parser: parse error at or near "<>"

Does anybody know why?
-- Tatsuo Ishii
[HACKERS] locale and multibyte together in 7.1
I use Postgres 7.1 on FreeBSD 4.0. I configure, build and install it with:

./configure --enable-locale --enable-multibyte --with-perl
gmake
gmake install
initdb -E KOI8

The problem is: when the database encoding and the client encoding are different, 'locale' features such as 'upper' etc. don't work. When these two encodings are equal, all is OK. Example (comments are marked by --):

tolik=# \l
        List of databases
 Database  | Owner | Encoding
-----------+-------+----------
 cmw       | cmw   | ALT
 template0 | tolik | KOI8
 template1 | tolik | KOI8
 tolik     | tolik | ALT       -- database 'tolik' has ALT (one of the Russian) encoding
(4 rows)

tolik=# \c
You are now connected to database tolik as user tolik.
tolik=# \encoding KOI8         -- I change client encoding to KOI8, another Russian encoding
tolik=# select upper ('×ÙÂÏÒ');  -- argument is a Russian word in lowercase
 upper
-------
 ×ÙÂÏÒ                          -- result doesn't change
(1 row)

tolik=# \encoding ALT          -- I set client encoding equal to the DB encoding
tolik=# select upper ('×ÙÂÏÒ');
 upper
-------
 ÷ùâïò                          -- now it works; the result is the same word in uppercase :(
(1 row)

I didn't observe this behavior in the 6.* versions of Postgres. Any ideas? Or help?

-- 
Anatoly K. Lasareff   Email: [EMAIL PROTECTED]
http://tolikus.hq.aaanet.ru:8080   Phone: (8632)-710071
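One plausible reading of the symptom is that the server applies upper() to the client's bytes under the wrong character set, so no case mapping is found. A small illustration outside PostgreSQL (plain Python; the byte values are the KOI8-R word shown in the session above):

```python
# The five KOI8-R bytes sent by the client (the word shown above).
word_koi8 = bytes([0xD7, 0xD9, 0xC2, 0xCF, 0xD2])

# Misinterpreted under a different single-byte charset (Latin-1 here),
# the bytes happen to map to characters with no lowercase-to-uppercase
# mapping, so upper() changes nothing -- the "result doesn't change"
# symptom in the psql session.
misread = word_koi8.decode("latin-1")
assert misread.upper() == misread

# Decoded under the encoding the client actually used, upper() works.
correct = word_koi8.decode("koi8_r")
assert correct == "выбор"
assert correct.upper() == "ВЫБОР"
```

This is only a model of the symptom, not of the 7.1 server internals; the actual interaction between --enable-locale and the encoding-conversion code may differ.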
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Lamar Owen writes:
> Ok, I have a first set of 7.1beta3 RPMs uploading now. These RPMs pass
> regression on my home RedHat 6.2 machine, which has all locale
> environment variables disabled (/etc/sysconfig/i18n deleted and a reboot).

Some thoughts:

Re: rpm-pgsql-7.1beta3.patch

| diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib postgresql-7.1beta3/src/Makefile.shlib
| --- postgresql-7.1beta3.orig/src/Makefile.shlib  Wed Dec  6 14:37:08 2000
| +++ postgresql-7.1beta3/src/Makefile.shlib  Mon Jan 15 01:50:04 2001
| @@ -160,7 +160,7 @@
|  ifeq ($(PORTNAME), linux)
|    shlib := lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)
| -  LINK.shared = $(COMPILER) -shared -Wl,-soname,$(soname)
| +  LINK.shared = $(COMPILER) -shared -Wl
|  endif

This cannot possibly be right.

| diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib~ postgresql-7.1beta3/src/Makefile.shlib~

???

| -#!/usr/local/bin/perl -w
| +#!/usr/bin/perl -w

(and more of these for Python) I think this should be fixed to read

#! /usr/bin/env perl

Any comments?

Re: spec file

| # I hope this works
|
| %ifarch ia64
| ln -s linux_i386 src/template/linux
| %endif

It definitely won't...

| # If libtool installed, copy some files
| if [ -d /usr/share/libtool ]
| then
|   cp /usr/share/libtool/config.* .
| fi

This is useless (because the config.* files are not in src/ anymore) and (if it were fixed) not recommendable, because config.{guess,sub} is not compatible with itself, *especially* in terms of Linux recognition. You really should use the ones PostgreSQL comes with.

| %ifarch ppc
| NEW_CFLAGS=`echo $CFLAGS|xargs -n 1|grep -v "\-O"|xargs -n 100`
| NEW_CFLAGS="$NEW_CFLAGS -O0"

This is no longer necessary.

| ./configure --enable-hba --enable-locale --with-CXX --prefix=/usr \

There is no option called '--enable-hba'. And I think you're supposed to use %{configure}.

| %if %tkpkg
| --with-tk --with-x \
| %endif

There is no '--with-x'.
'--with-tk' is the default if '--with-tcl' was given; you should use '--without-tk' if you don't want it.

| %if %jdbc
| --with-java \
| %endif

There is no such option.

| %ifarch alpha
| --with-template=linux_alpha \
| %endif

This won't work and is not necessary.

| make COPT="$NEW_CFLAGS" DESTDIR=$RPM_BUILD_ROOT/usr all

You should set CFLAGS when you run configure. (%{configure} will do that.) DESTDIR is only useful when you run 'make install'. And DESTDIR should not include /usr.

| make all PGDOCS=unpacked -C doc

Not sure what this is supposed to do, but I don't think it will do what you expect. The docs are installed automatically.

| mkdir -p $RPM_BUILD_ROOT/usr/{include/pgsql,lib,bin}
| mkdir -p $RPM_BUILD_ROOT%{_mandir}

You don't need that; the directories are made automatically.

| make DESTDIR=$RPM_BUILD_ROOT -C src install

No '-C src'.

| # copy over the includes needed for SPI development.
| pushd src/include
| /lib/cpp -M -I. -I../backend executor/spi.h | \
|   xargs -n 1 | \
|   grep \\W | \
|   grep -v ^/ | \
|   grep -v spi.o | \
|   grep -v spi.h | \
|   sort | \
|   cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql

I think the standard installed set of headers is sufficient.

| %if %pgaccess
| # pgaccess installation

Pgaccess is installed automatically when you configure --with-tcl.

| # Move the PL's to the right place
| mv $RPM_BUILD_ROOT/usr/lib/pl*.so $RPM_BUILD_ROOT/usr/share/postgresql

You should not put architecture-specific files into share/. I'm sure this is in violation of the FHS. (I'm amazed createlang finds it there.)

Re: sub-packages

* pg_id should be in server
* What were the last thoughts about renaming the nothing package to -clients?
* pg_upgrade won't work, so you might as well not install it. It will probably be disabled before we release.
* You're missing pg_config in the -devel package.

These are the things I could find at first glance. ;-)

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://yi.org/peter-e/
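As an aside, the CFLAGS-stripping pipeline quoted from the ppc section of the spec file can be exercised outside rpm. A sketch (the sample CFLAGS value is made up):

```shell
# The pipeline quoted from the spec: split $CFLAGS one word per line,
# drop every flag containing -O, rejoin the rest, then force -O0.
CFLAGS="-O2 -g -fsigned-char"
NEW_CFLAGS=`echo $CFLAGS | xargs -n 1 | grep -v "\-O" | xargs -n 100`
NEW_CFLAGS="$NEW_CFLAGS -O0"
printf '%s\n' "$NEW_CFLAGS"
```

With the sample value above this prints "-g -fsigned-char -O0", i.e. the optimization level is forcibly reset while the other flags survive.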
Re: [HACKERS] subselect bug?
Tatsuo Ishii [EMAIL PROTECTED] writes:
> select * from table_a a where
>   ((select data_a from table_a where id = a.id) <>
>    (select data_b from table_a where id = a.id));
> ERROR:  parser: parse error at or near "<>"

I think I finally got this right ... see if you can break the revised grammar I just committed ...

regards, tom lane
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Peter Eisentraut wrote:
> Lamar Owen writes:
> > Ok, I have a first set of 7.1beta3 RPMs uploading now.
> | diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib postgresql-7.1beta3/src/Makefile.shlib
> | - LINK.shared = $(COMPILER) -shared -Wl,-soname,$(soname)
> | + LINK.shared = $(COMPILER) -shared -Wl
> | endif
> This cannot possibly be right.

It's what you recommended a while back. See the discussions on -soname from the libpq.so.2.1 versus libpq.so.2.0 thread a while back.

> | diff -uNr postgresql-7.1beta3.orig/src/Makefile.shlib~ postgresql-7.1beta3/src/Makefile.shlib~
> ???

Leftover Kedit baggage. Those are easy to forget to rm during patch build, particularly at 2:00 AM -- but there's no harm in it being there for now. And it won't be there in -2.

> | -#!/usr/local/bin/perl -w
> | +#!/usr/bin/perl -w
> (and more of these for Python) I think this should be fixed to read
> #! /usr/bin/env perl

No, for a RedHat or any other Linux distribution, /usr/bin is where perl and python (or their symlinks) will always live. Although you missed the redundant patches in the regression tree :-).

Re: spec file

> | # I hope this works
> | %ifarch ia64
> | ln -s linux_i386 src/template/linux
> | %endif
> It definitely won't...

7.0 required it. Baggage from the previous build.

> | # If libtool installed, copy some files
> | if [ -d /usr/share/libtool ]
> | then
> |   cp /usr/share/libtool/config.* .
> | fi
> This is useless (because the config.* files are not in src/ anymore) and
> (if it were fixed) not recommendable because config.{guess,sub} is not
> compatible with itself, *especially* in terms of Linux recognition. You
> really should use the ones PostgreSQL comes with.

Trond can answer that one more effectively than can I, as that's his insertion. Of course, I've got to reorg the destination to match the source tree's reorg.

> | %ifarch ppc
> | NEW_CFLAGS=`echo $CFLAGS|xargs -n 1|grep -v "\-O"|xargs -n 100`
> | NEW_CFLAGS="$NEW_CFLAGS -O0"
> This is no longer necessary.

Depends on the convolutions the particular build of rpm itself is doing. This is a fix for the broken rpm setup found on Linux-PPC, as found by Tom Lane. It would be marvelous if this were expendable at this juncture.

> | ./configure --enable-hba --enable-locale --with-CXX --prefix=/usr \
> There is no option called '--enable-hba'. And I think you're supposed to
> use %{configure}.

If it works now, I'll use it. Version 7.0 and prior wouldn't work with %{configure}. And --enable-hba has existed at some point in time.

> | %if %tkpkg
> | --with-tk --with-x \
> | %endif
> There is no '--with-x'. '--with-tk' is the default if '--with-tcl' was
> given; you should use '--without-tk' if you don't want it.

There was in the past a --with-x. So I need to change that to check for the negation of tkpkg and use --without-tk if so.

> | %if %jdbc
> | --with-java \
> | %endif
> There is no such option.

Hmmm. I don't remember when that one got placed there.

> | %ifarch alpha
> | --with-template=linux_alpha \
> | %endif
> This won't work and is not necessary.

More 7.0 and prior baggage. The patches for alpha at one point (6.5 through 7.0.3) have required this -- of course, with the need for the alpha patches gone, the need for the special config step is also gone. One more piece of baggage I missed.

> | make COPT="$NEW_CFLAGS" DESTDIR=$RPM_BUILD_ROOT/usr all
> You should set CFLAGS when you run configure. (%{configure} will do that.)
> DESTDIR is only useful when you run 'make install'. And DESTDIR should
> not include /usr.

Yes, if you'll notice, I fixed the DESTDIR in the install. But, of course, it's not needed (nor is it used) in the build itself. Again, I'll use %{configure} when I verify that it works properly (if it does, that will be a very Good Thing for all involved). But, again, you're seeing baggage that 7.0 and prior required in order to build (well, except DESTDIR).

> | make all PGDOCS=unpacked -C doc
> Not sure what this is supposed to do, but I don't think it will do what
> you expect. The docs are installed automatically.

Well, they are _now_. But not in 7.0 and prior. Again, more old baggage.

> | mkdir -p $RPM_BUILD_ROOT/usr/{include/pgsql,lib,bin}
> | mkdir -p $RPM_BUILD_ROOT%{_mandir}
> You don't need that, the directories are made automatically.

They are _now_. But before, when the make install put things in the 'wrong' place, it was required to make the directories before doing the copies and moves necessary.

> | make DESTDIR=$RPM_BUILD_ROOT -C src install
> No '-C src'.

Not anymore, at least. *sigh* A lot of baggage has accumulated in the spec due to 7.0 and prior's slightly brain-dead build. I got rid of a lot of it -- but
[HACKERS] $PGDATA/base/???
On older versions of PG, 7.0 included, in the $PGDATA/base folder you could see the names of the databases for that $PGDATA. Now all I see is:

$ ls -l
total 16
drwx------  2 postgres  wheel  1536 Jan 12 15:42 1
drwx------  2 postgres  wheel  1536 Jan 12 15:41 18719
drwx------  2 postgres  wheel  1536 Jan 12 15:42 18720
drwx------  2 postgres  wheel  1536 Jan 15 15:59 18721

Is there a way to relate this to the names of the databases? Why the change? Or am I missing something key here?

- Brandon
b. palmer, [EMAIL PROTECTED]   pgp: www.crimelabs.net/bpalmer.pgp5
Re: [HACKERS] $PGDATA/base/???
bpalmer wrote:
> On older versions of PG, 7.0 included, in the $PGDATA/base folder you
> could see the names of the databases for that $PGDATA. Now all I see is:

No longer.

> Is there a way to relate this to the names of the databases? Why the
> change? Or am I missing something key here..

See the thread on the renaming in the archives. In short, this is part of Vadim's work on WAL -- the new naming makes certain things easier for WAL. Utilities to relate the new names to the actual database/table names _do_ need to be written, however. The information exists in one of the system catalogs now -- it just has to be made accessible.
-- 
Lamar Owen   WGCR Internet Radio   1 Peter 4:11
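Until such a utility appears, the mapping can be pulled from the system catalogs by hand. A sketch of the queries involved, run from psql (column names as found in the 7.1 sources; not verified against a live 7.1 install):

```sql
-- Map the $PGDATA/base/<number> directory names to database names.
SELECT oid, datname FROM pg_database ORDER BY oid;

-- Within a database, map the numeric file names to relations.
SELECT relfilenode, relname FROM pg_class ORDER BY relfilenode;
```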
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Lamar Owen [EMAIL PROTECTED] writes: | %ifarch ppc | NEW_CFLAGS=`echo $CFLAGS|xargs -n 1|grep -v "\-O"|xargs -n 100` | NEW_CFLAGS="$NEW_CFLAGS -O0" This is no longer necessary. Depends on the convolutions the particular build of rpm itself is doing. This is a fix for the broken rpm setup found on Linux-PPC, as found by Tom Lane. It would be marvelous if this would be expendable at this juncture. It is. 7.1 builds cleanly on PPC without any CFLAGS hackery. I think we can even survive the -fsigned-char stupidity now ;-) I think the standard installed set of headers is sufficient. Is it? It _wasn't_ sufficient for SPI development at 7.0. Have the headers and the headers install been fixed to install _all_ necessary development headers, SPI included? No, nothing's been done about that AFAIK. I'm not sure the RPMs should be taking it on themselves to solve the problem, however. regards, tom lane
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Tom Lane wrote: Lamar Owen [EMAIL PROTECTED] writes: doing. This is a fix for the broken rpm setup found on Linux-PPC, as found by Tom Lane. It would be marvelous if this would be expendable at this juncture. It is. 7.1 builds cleanly on PPC without any CFLAGS hackery. I think we can even survive the -fsigned-char stupidity now ;-) Oh, good. Makes it much cleaner. Care to test that theory? :-) I think the standard installed set of headers is sufficient. Is it? It _wasn't_ sufficient for SPI development at 7.0. Have the headers and the headers install been fixed to install _all_ necessary development headers, SPI included? No, nothing's been done about that AFAIK. I'm not sure the RPMs should be taking it on themselves to solve the problem, however. Just trying to make the postgresql-devel rpm complete, as per request. Since the folk who build from source usually still have the source tree around to do SPI development in, it's not as big of an issue for that install. Of course, if the consensus is that the RPM's simply track what the source tarball does, then that can also be arranged. But, the precedent of the RPMset fixing difficulties with the source install has already been set with the upgrading procedure. Arguments about why those wishing to do SPI development should install a full source tree aside, I'm simply providing an RPM-specific requested feature. -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Tom Lane wrote: Let me know when you think the 7.1 RPM specfile is stable enough to be worth testing, and I'll try to build PPC RPMs. Ok. Should be coincident with -2. I'm planning to have a -2 out later this week. -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Trond Eivind Glomsrød writes:
> We have a libtool tuned to work with lots of platforms, like ia64, s390
> etc... this makes sure it's used.

We don't use libtool. Nor does libtool care about the processor.
-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://yi.org/peter-e/
Re: [HACKERS] $PGDATA/base/???
Is there a way to relate this to the names of the databases? Why the change? Or am I missing something key here.. See the thread on the renaming in the archives. In short, this is part of Vadim's work on WAL -- the new naming makes certain things easier for WAL. Utilities to relate the new names to the actual database/table names _do_ need to be written, however. The information exists in one of the system catalogs now -- it just has to be made accessible. Yes, I am hoping to write this utility before 7.1 final. Maybe it will have to be in /contrib. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup.| Drexel Hill, Pennsylvania 19026
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Peter Eisentraut [EMAIL PROTECTED] writes:
> Trond Eivind Glomsrød writes:
> > We have a libtool tuned to work with lots of platforms, like ia64, s390
> > etc... this makes sure it's used.
> We don't use libtool.

Doing so would be a good thing.

> Nor does libtool care about the processor.

As you can see from the actual code segment, only the config.{guess,sub} files are copied.
-- 
Trond Eivind Glomsrød, Red Hat, Inc.
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Lamar Owen writes:
> > | - LINK.shared = $(COMPILER) -shared -Wl,-soname,$(soname)
> > | + LINK.shared = $(COMPILER) -shared -Wl
> > This cannot possibly be right.
> It's what you recommended a while back. See the discussions on -soname
> from the libpq.so.2.1 versus libpq.so.2.0 thread a while back.

The patch I recommended was

- LDFLAGS_SL := -Bdynamic -shared -soname $(shlib)
+ LDFLAGS_SL := -Bdynamic -shared -soname lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION)

but that's not what your patch does. The issue is fixed; you shouldn't patch anything.

> > I think this should be fixed to read
> > #! /usr/bin/env perl
> No, for a RedHat or any other Linux distribution, /usr/bin is where perl
> and python (or their symlinks) will always live.

I was thinking in terms of fixing this in the source tree.

> > There is no '--with-x'. '--with-tk' is the default if '--with-tcl' was
> > given; you should use '--without-tk' if you don't want it.
> There was in the past a --with-x.

But it never did anything.
-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://yi.org/peter-e/
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Peter Eisentraut [EMAIL PROTECTED] writes:
> Trond Eivind Glomsrød writes:
> > > We don't use libtool.
> > Doing so would be a good thing.
> Not if our code is more portable than libtool's.

And this is the case? libtool covers pretty much everything... and you don't need to use it for every target.

> > > Nor does libtool care about the processor.
> > As you can see from the actual code segment, only the config.{guess,sub}
> > files are copied.
> But you argued that this is because your config.guess supports s390 and
> ia64 (which ours does as well)

It may do so now, but I'm pretty sure it hasn't always done so... and even if it does, it doesn't hurt.
-- 
Trond Eivind Glomsrød, Red Hat, Inc.
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Lamar Owen [EMAIL PROTECTED] writes:
> Peter Eisentraut wrote:
> > Trond Eivind Glomsrød writes:
> > > We have a libtool tuned to work with lots of platforms, like ia64,
> > > s390 etc... this makes sure it's used.
> > We don't use libtool. Nor does libtool care about the processor.
> In particular, this was and is a RedHat-made change. It does not break
> anything that I am aware of, and allows the distributions to do their
> thing as well.

Note that this wasn't included in Red Hat Linux 7... it's been done since then, and I don't remember doing it myself (which of course doesn't mean I didn't do it :) - it might have been done for the S/390 port, by the people working on that.

> So, Trond, what sort of tunings have been performed that make the files
> in question need to be copied? I'm sure you have a good reason; I am
> just curious as to what it is.

For most apps, it's just a question of configure working vs. configure failing on IA64 (there is no "tuning" as such, my choice of words wasn't too good). There may be something similar for S/390.
-- 
Trond Eivind Glomsrød, Red Hat, Inc.
Re: [HACKERS] RPMS for 7.1beta3 being uploaded.
Trond Eivind Glomsrød wrote:
> Lamar Owen [EMAIL PROTECTED] writes:
> > In particular, this was and is a RedHat-made change. It does not break
> > anything that I am aware of, and allows the distributions to do their
> > thing as well.
> Note that this wasn't included in Red Hat Linux 7... it's been done since
> then, and I don't remember doing it myself (which of course doesn't mean
> I didn't do it :) - it might have been done for the S/390 port, by the
> people working on that.

A non-conditional version (the conditional is my change) was included as far back as RedHat 6.2.

> For most apps, it's just a question of configure working vs. configure
> failing on IA64 (there is no "tuning" as such, my choice of words wasn't
> too good). There may be something similar for S/390.

Can we test both ways (after your current project is done)?
-- 
Lamar Owen   WGCR Internet Radio   1 Peter 4:11
Re: [HACKERS] CRCs
Andreas SB Zeugswetter wrote:
> Tom Lane wrote:
> > Instead of a partial row CRC, we could just as well use some other bit
> > of identifying information, say the row OID. ... Checking that there is
> > a valid tuple at the slot indicated by the index item, and that it has
> > the right OID, should be a good enough (and cheap enough) test.
> I would hardly call an additional 4 bytes for OID per index entry cheap.

"Cheap enough" is very different from "cheap". Undetected corruption may be arbitrarily expensive when it finally manifests itself. That said, maybe storing just the low byte or two of the OID in the index would be good enough. Also, maybe the OID would be there by default, but could be ifdef'd out if the size of the indices affects you noticeably, and you know that your equipment (unlike most) really does implement strict write ordering.

Nathan Myers   [EMAIL PROTECTED]
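To make the space/safety trade-off concrete, here is a sketch (plain Python with invented names, not backend code) of the check with only the low two bytes of the OID kept in each index entry:

```python
OID_TAG_BITS = 16  # keep only the low two bytes per index entry

def oid_tag(oid):
    """The truncated OID as it would be stored in an index entry."""
    return oid & ((1 << OID_TAG_BITS) - 1)

def slot_matches(index_tag, heap_oid):
    """Accept the heap slot only if the tuple's OID agrees with the tag."""
    return oid_tag(heap_oid) == index_tag

# A stale slot reused by an unrelated tuple is caught unless the new
# tuple's OID happens to collide in the low 16 bits: a 1-in-65536
# chance per stale entry, bought for 2 bytes instead of 4.
assert slot_matches(oid_tag(274157), 274157)
assert not slot_matches(oid_tag(274157), 274158)
```

The truncation only weakens the cross-check against stale index items; the block CRC on the heap page itself is unaffected.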
[HACKERS] Re: Why is LockClassinfoForUpdate()'s mark4update a good idea?
Tom Lane wrote:
> Why does LockClassinfoForUpdate() insist on doing heap_mark4update?

Because I want to guard the target pg_class tuple by myself. I don't think we could rely on the assumption that the lock on the corresponding relation is held. For example, AlterTableOwner() doesn't seem to open the corresponding relation.

> As far as I can see, this accomplishes nothing except to break concurrent
> index builds. If I do
>     create index tenk1_s1 on tenk1(stringu1);
>     create index tenk1_s2 on tenk1(stringu2);
> in two psqls at approximately the same time, the second one fails with
>     ERROR: LockStatsForUpdate couldn't lock relid 274157

This is my fault. The error could be avoided by retrying to acquire the lock like "SELECT FOR UPDATE" does.

Regards,
Hiroshi Inoue
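The retry Hiroshi mentions can be sketched abstractly: on a concurrent update, follow the tuple's update chain to the newest version and try the mark again, as SELECT FOR UPDATE does, instead of raising an error. A toy model (invented names, plain Python rather than backend C):

```python
# Schematic only: "fetch" and "try_mark" stand in for reading a tuple
# version and for heap_mark4update(); "next_tid" stands in for t_ctid.

def lock_newest_version(fetch, try_mark, tid, max_retries=100):
    for _ in range(max_retries):
        tup = fetch(tid)
        result = try_mark(tup)
        if result == "ok":
            return tup
        if result == "updated":      # lost the race: follow the chain
            tid = tup["next_tid"]
            continue
        raise RuntimeError("couldn't lock tuple: " + result)
    raise RuntimeError("too many concurrent updates")

# Toy demonstration: version 1 was superseded by version 2, so the
# first mark attempt reports "updated" and the retry lands on 2.
versions = {1: {"tid": 1, "next_tid": 2}, 2: {"tid": 2, "next_tid": 2}}
locked = lock_newest_version(
    versions.get,
    lambda t: "ok" if t["tid"] == 2 else "updated",
    1,
)
assert locked["tid"] == 2
```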
Re: [HACKERS] copy from stdin; bug?
> Yes, here is the output:
>
>  datname   | datdba | encoding | datpath
> -----------+--------+----------+-----------
>  template1 |     31 |        5 | template1
>  map       |   1003 |        5 | map
>  helyes    |   1003 |        5 | helyes
>
> I found that if I put a space behind the letters ([o with accent][a-z][\t])
> before the tab, it works correctly... but without the space it corrupts
> the database...

The encoding of your databases is UNICODE in all cases, so you need to input data as UTF-8. I guess you are trying to input ISO-8859-1 encoded data; that is the source of the problem. Here are possible solutions:

1) input data as UTF-8
2) create a new database using encoding LATIN1: createdb -E LATIN1...
3) upgrade to 7.1, which has the capability to do an automatic conversion between UTF-8 and ISO-8859-1.

-- Tatsuo Ishii
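The failure mode is easy to reproduce outside the database: ISO-8859-1 bytes for accented characters are simply not valid UTF-8 sequences. A quick check in Python (the sample string is made up):

```python
# An accented character encoded in ISO-8859-1 is a single high byte
# (0xF3 for the o-acute mentioned above) ...
latin1_bytes = "helyes-ó".encode("latin-1")
assert latin1_bytes.endswith(b"\xf3")

# ... which is not a valid UTF-8 sequence, so a UNICODE database has
# no consistent interpretation for it.
try:
    latin1_bytes.decode("utf-8")
    valid_utf8 = True
except UnicodeDecodeError:
    valid_utf8 = False
assert not valid_utf8

# The same text transcoded to UTF-8 (solution 1) round-trips cleanly.
utf8_bytes = "helyes-ó".encode("utf-8")
assert utf8_bytes.decode("utf-8") == "helyes-ó"
```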
[HACKERS] Re: Why is LockClassinfoForUpdate()'s mark4update a good idea?
Hiroshi Inoue [EMAIL PROTECTED] writes: Tom Lane wrote: Why does LockClassinfoForUpdate() insist on doing heap_mark4update? Because I want to guard the target pg_class tuple by myself. I don't think we could rely on the assumption that the lock on the corresponding relation is held. For example, AlterTableOwner() doesn't seem to open the corresponding relation. Possibly AlterTableOwner is broken. Not sure that it matters though, because heap_update won't update a tuple anyway if another process committed an update first. That seems to me to be sufficient locking; exactly what is the mark4update adding? (BTW, I notice that a lot of heap_update calls don't bother to check the result code, which is probably a bug ...) As far as I can see, this accomplishes nothing except to break concurrent index builds. If I do create index tenk1_s1 on tenk1(stringu1); create index tenk1_s2 on tenk1(stringu2); in two psqls at approximately the same time, the second one fails with ERROR: LockStatsForUpdate couldn't lock relid 274157 This is my fault. The error could be avoided by retrying to acquire the lock like "SELECT FOR UPDATE" does. I have a more fundamental objection, which is that if you think that this is necessary for index creation then it is logically necessary for *all* types of updates to system catalog tuples. I do not like that answer, mainly because it will clutter the system considerably --- to no purpose. The relation-level locks are necessary anyway for schema updates, and they are sufficient if consistently applied. Pre-locking the target tuple is *not* sufficient, and I don't think it helps anyway if not consistently applied, which it certainly is not at the moment. regards, tom lane
[HACKERS] Re: Why is LockClassinfoForUpdate()'s mark4update a good idea?
Tom Lane wrote: Hiroshi Inoue [EMAIL PROTECTED] writes: Tom Lane wrote: Why does LockClassinfoForUpdate() insist on doing heap_mark4update? Because I want to guard the target pg_class tuple by myself. I don't think we could rely on the assumption that the lock on the corresponding relation is held. For example, AlterTableOwner() doesn't seem to open the corresponding relation. Possibly AlterTableOwner is broken. Not sure that it matters though, because heap_update won't update a tuple anyway if another process committed an update first. That seems to me to be sufficient locking; exactly what is the mark4update adding? I like neither unexpected errors nor doing the wrong thing by handling tuples which aren't guaranteed to be up-to-date. After mark4update, the tuple is guaranteed to be up-to-date and heap_update won't fail even though some commands etc neglect to lock the correspoding relation. Isn't it proper to guard myself as much as possible ? (BTW, I notice that a lot of heap_update calls don't bother to check the result code, which is probably a bug ...) As far as I can see, this accomplishes nothing except to break concurrent index builds. If I do create index tenk1_s1 on tenk1(stringu1); create index tenk1_s2 on tenk1(stringu2); in two psqls at approximately the same time, the second one fails with ERROR: LockStatsForUpdate couldn't lock relid 274157 This is my fault. The error could be avoided by retrying to acquire the lock like "SELECT FOR UPDATE" does. I have a more fundamental objection, which is that if you think that this is necessary for index creation then it is logically necessary for *all* types of updates to system catalog tuples. I do not like that answer, mainly because it will clutter the system considerably --- to no purpose. The relation-level locks are necessary anyway for schema updates, and they are sufficient if consistently applied. 
Pre-locking the target tuple is *not* sufficient, and I don't think it helps anyway if not consistently applied, which it certainly is not at the moment. regards, tom lane
[HACKERS] Re: Why is LockClassinfoForUpdate()'s mark4update a good idea?
Hiroshi Inoue [EMAIL PROTECTED] writes: I like neither unexpected errors nor doing the wrong thing by handling tuples which aren't guaranteed to be up-to-date. After mark4update, the tuple is guaranteed to be up-to-date and heap_update won't fail even though some commands etc neglect to lock the correspoding relation. Isn't it proper to guard myself as much as possible ? If one piece of the system "guards itself" and others do not, what have you gained? Not much. What I want is a consistently applied coding rule that protects all commands; and the simpler that coding rule is, the more likely it is to be consistently applied. I do not think that adding mark4update improves matters when seen in this light. The code to do it is bulky and error-prone, and I have no confidence that it will be done right everywhere. In fact, at the moment I'm not convinced that it's done right anywhere. The uses of mark4update for system-catalog updates are all demonstrably broken right now, and the ones in the executor make use of a hugely complex and probably buggy qualification re-evaluation mechanism. What is the equivalent of qual re-evaluation for a system catalog tuple, anyway? regards, tom lane
Re: [HACKERS] subselect bug?
> Tatsuo Ishii [EMAIL PROTECTED] writes:
> > select * from table_a a where
> >   ((select data_a from table_a where id = a.id) <>
> >    (select data_b from table_a where id = a.id));
> > ERROR:  parser: parse error at or near "<>"
> I think I finally got this right ... see if you can break the revised
> grammar I just committed ...

Thanks. Works fine now.
-- Tatsuo Ishii
[HACKERS] 7.1beta3-2 RPMset uploading.
Uploading now. Should show up on ftp.postgresql.org soon. Look in /pub/dev/test-rpms. BETA TEST USE ONLY.

Tom, try out a PPC build on this one. I know of one problem that I have to fix -- postgresql-perl fails dependencies for libpq.so (I backed out the patch to Makefile.shlib). A --nodeps install installs it OK, and the test.pl script (/usr/share/perl5/test.pl) passes its tests.

Fixes include:
* plpgsql and pltcl are now in /usr/lib where they belong.
* The includes in the devel RPM were split; now they are all in /usr/include/postgresql. This is a change from prior releases.
* Baggage from prior RPMs removed from the spec file.
* pg_config in the -devel rpm.
* pg_upgrade removed.

And others -- see the changelog in the spec file. BETA TEST USE ONLY!
-- 
Lamar Owen   WGCR Internet Radio   1 Peter 4:11
[HACKERS] Re: Why is LockClassinfoForUpdate()'s mark4update a good idea?
Tom Lane wrote:
> Hiroshi Inoue [EMAIL PROTECTED] writes:
> > I like neither unexpected errors nor doing the wrong thing by handling
> > tuples which aren't guaranteed to be up-to-date. After mark4update, the
> > tuple is guaranteed to be up-to-date and heap_update won't fail even
> > though some commands etc neglect to lock the corresponding relation.
> > Isn't it proper to guard myself as much as possible?
> If one piece of the system "guards itself" and others do not, what have
> you gained? Not much.

??? A part of the system that guards itself at least won't produce a bad result. If one piece of the system "guards others" and the others do not, both may produce bad results. Locking a class info by locking the corresponding relation is such a mechanism. However, I don't think we could introduce this mechanism for all system catalogs. I implemented LockClassinfoForUpdate() for the following reasons:

1) pg_class is the most significant relation.
2) LockClassinfoForUpdate() adds few new conflicts by locking the pg_class tuple, because locking the corresponding relation locks the pg_class entity implicitly unless some stuff neglects to lock the corresponding relation.

Regards,
Hiroshi Inoue
[HACKERS] Re: MS Access vs IS NULL (was Re: [BUGS] Bug in SQL functions that use a NULL parameter directly)
Anyone recall anything about that? A quick search of my archives didn't turn up the discussion that I thought I remembered. Hmm. Maybe now we know what you dream about at night ;) - Thomas
Re: [HACKERS] Re: AW: Re: GiST for 7.1 !!
Tom Lane writes:
> [EMAIL PROTECTED] writes:
> > on which configure didn't detect the absence of libz.so
> Really? Details please. It's hard to see how it could have messed up on that.

I didn't look well enough -- I apologize. The library is there, but ld.so believes it is not:

typhoon> postmaster
ld.so.1: postmaster: fatal: libz.so: open failed: No such file or directory
Killed

> Odd. Can you show us the part of config.log that relates to zlib?

configure:4179: checking for zlib.h
configure:4189: gcc -E conftest.c >/dev/null 2>conftest.out
configure:4207: checking for inflate in -lz
configure:4226: gcc -o conftest conftest.c -lz -lgen -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses 1>&5
configure:4660: checking for crypt.h

This doesn't tell me much. But I modified configure to exit right after this, without removing conftest*, and when I ran conftest it came back with the same message:

typhoon> ./conftest
ld.so.1: ./conftest: fatal: libz.so: open failed: No such file or directory
Killed

> It's strange that configure's check to see if zlib is linkable should
> succeed, only to have the live startup fail.

It is. In this line:

if { (eval echo configure:4226: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then

why is conftest tested for size instead of being executed?

> Is it possible that you ran configure with a different library search
> path (LD_LIBRARY_PATH or local equivalent) than you are using now?

No, I didn't alter it. I am using the system-wide settings.

> It's suspicious that the error message mentions libz.so when the actual
> file name is libz.so.1, but I still don't see how that could result in
> configure's link test succeeding but the executable not running.

That puzzles me as well. It seems to be because there is no libz.so on the system. For if I do this:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/customer/selkovjr/lib
ln -s /usr/openwin/lib/libz.so.1 ~/lib/libz.so

the libz problem is gone, only to be followed by the next one:

typhoon> ./conftest
ld.so.1: ./conftest: fatal: libreadline.so: open failed: No such file or directory

The odd thing is, there is no libreadline.so* on this system. Here's the corresponding part of config.log:

configure:3287: checking for library containing readline
configure:3305: gcc -o conftest conftest.c -ltermcap -lcurses 1>&5
Undefined                       first referenced
 symbol                             in file
readline                            /var/tmp/ccxxiW3R.o
ld: fatal: Symbol referencing errors. No output written to conftest
collect2: ld returned 1 exit status
configure: failed program was:
#line 3294 "configure"
#include "confdefs.h"
/* Override any gcc2 internal prototype to avoid an error.  */
/* We use char because int might match the return type of a gcc2
   builtin and then its argument prototype would still apply.  */
char readline();

int main() {
readline()
; return 0; }
configure:3327: gcc -o conftest conftest.c -lreadline -ltermcap -lcurses 1>&5

This system is probably badly misconfigured, but it would be great if configure could see that. By the way, would you mind if I asked you to log in and take a look? Is there a phone number where I can get you with the password? I am not sure whether such tests could be of any value, but it's the only Sun machine available to me for testing.

Thank you,
--Gene
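On the question of why the link test passes while the program fails at run time: the configure excerpt quoted above only does "test -s conftest", i.e. it checks that the link step left a non-empty file and never executes the result, so a binary whose shared libraries cannot be resolved at run time still "passes". A minimal shell illustration, using an ordinary data file standing in for the conftest binary:

```shell
# "test -s" asks only "does the file exist and is it non-empty?" --
# it says nothing about whether the file can actually be run.
printf 'not really an executable\n' > conftest
if test -s conftest; then
  echo "configure-style check: success"
fi
if ./conftest 2>/dev/null; then
  echo "runs fine"
else
  echo "but actually running it fails"
fi
rm -f conftest
```

Running the test program (or at least a trivial AC_TRY_RUN-style probe) would catch exactly the libz.so/libreadline.so situation described here, at the cost of breaking cross-compilation.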