Re: [HACKERS] Re: AW: Re: GiST for 7.1 !!
Tom Lane writes:
> [EMAIL PROTECTED] writes:
> > on which configure didn't detect the absence of libz.so
> Really?  Details please.  It's hard to see how it could have messed up on that.

I didn't look well enough -- I apologize. The library is there, but ld.so believes it is not:

typhoon> postmaster
ld.so.1: postmaster: fatal: libz.so: open failed: No such file or directory
Killed

> Odd.  Can you show us the part of config.log that relates to zlib?

configure:4179: checking for zlib.h
configure:4189: gcc -E conftest.c >/dev/null 2>conftest.out
configure:4207: checking for inflate in -lz
configure:4226: gcc -o conftest conftest.c -lz -lgen -lnsl -lsocket -ldl -lm -lreadline -ltermcap -lcurses 1>&5
configure:4660: checking for crypt.h

This doesn't tell me much. But I modified configure to exit right after this, without removing conftest*, and when I ran conftest it came back with the same message:

typhoon> ./conftest
ld.so.1: ./conftest: fatal: libz.so: open failed: No such file or directory
Killed

> It's strange that configure's check to see if zlib is linkable should succeed, only to have the live startup fail.

It is. In this line:

if { (eval echo configure:4226: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then

why is conftest tested for size instead of being executed?

> Is it possible that you ran configure with a different library search path (LD_LIBRARY_PATH or local equivalent) than you are using now?

No, I didn't alter it. I am using the system-wide settings.

> It's suspicious that the error message mentions libz.so when the actual file name is libz.so.1, but I still don't see how that could result in configure's link test succeeding but the executable not running.

That puzzles me as well. It seems to be because there is no libz.so on the system.
For if I do this:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/customer/selkovjr/lib
ln -s /usr/openwin/lib/libz.so.1 ~/lib/libz.so

the libz problem is gone, only to be followed by the next one:

typhoon> ./conftest
ld.so.1: ./conftest: fatal: libreadline.so: open failed: No such file or directory

The odd thing is, there is no libreadline.so* on this system. Here's the corresponding part of config.log:

configure:3287: checking for library containing readline
configure:3305: gcc -o conftest conftest.c -ltermcap -lcurses 1>&5
Undefined                       first referenced
 symbol                             in file
readline                            /var/tmp/ccxxiW3R.o
ld: fatal: Symbol referencing errors. No output written to conftest
collect2: ld returned 1 exit status
configure: failed program was:
#line 3294 "configure"
#include "confdefs.h"
/* Override any gcc2 internal prototype to avoid an error.  */
/* We use char because int might match the return type of a gcc2
   builtin and then its argument prototype would still apply.  */
char readline();

int main() {
readline()
; return 0; }
configure:3327: gcc -o conftest conftest.c -lreadline -ltermcap -lcurses 1>&5

This system is probably badly misconfigured, but it would be great if configure could see that.

By the way, would you mind if I asked you to log in and take a look? Is there a phone number where I can reach you with the password? I am not sure whether such tests could be of any value, but it's the only Sun machine available to me for testing.

Thank you,
--Gene
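[Editorial note: the question about `test -s` is the crux of the failure reported above -- autoconf's link test checks only that the link step produced a non-empty output file; it never executes the binary, so a program that links cleanly but cannot be loaded by ld.so at run time still passes. A minimal sketch of the distinction, with no compiler involved and an illustrative file name:]

```shell
# `test -s FILE` asks only "does FILE exist with size > 0" -- it never runs it.
printf 'not really an executable\n' > conftest

if test -s conftest; then
  echo "configure-style check: pass"   # this is all the link test verifies
fi

# Actually executing the file is the stronger check, and it can still fail
# (here: not executable; in the reported case: ld.so cannot find libz.so).
./conftest 2>/dev/null || echo "actually running it: fail"

rm -f conftest
```

A run test (AC_TRY_RUN-style) would have caught the ld.so failure, at the cost of breaking cross-compilation, which is why configure defaults to link-only checks.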
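[Editorial note: the symlink workaround shown above follows a common pattern -- give the run-time linker the unversioned name it is asking for by pointing it at the versioned file from a directory on LD_LIBRARY_PATH. A sketch with dummy files; the paths are stand-ins for the real /usr/openwin/lib case:]

```shell
libdir=$(mktemp -d)                           # stands in for ~/lib
touch "$libdir/libz.so.1"                     # stands in for the versioned library
ln -s "$libdir/libz.so.1" "$libdir/libz.so"   # the unversioned name ld.so wanted
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$libdir"
ls -l "$libdir"                               # both names now resolve
rm -rf "$libdir"
```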
Re: [HACKERS] Re: AW: Re: GiST for 7.1 !!
Tom Lane writes:
> [EMAIL PROTECTED] writes:
> > ... SunOS typhoon 5.7 Generic_106541-10 sun4u sparc SUNW,Ultra-1
> > on which configure didn't detect the absence of libz.so
> Really?  Details please.  It's hard to see how it could have messed up on that.

Tom,

I didn't look well enough -- I apologize. The library is there, but ld.so believes it is not:

typhoon> postmaster
ld.so.1: postmaster: fatal: libz.so: open failed: No such file or directory
Killed

This may very well be just my ISP's problem. Anyway, the details are:

1. My (relevant) environment:

LD_LIBRARY_PATH=/usr/openwin/lib:/usr/lib:/usr/ucblib:/usr/ccs/lib
PGLIB=/home/customer/selkovjr/pgsql/lib
PGDATA=/home/customer/selkovjr/pgsql/data
PATH=/usr/local/vendor/SUNWspro/bin:/usr/local/bin:/usr/local/gnu/bin:/usr/local/GNU/bin:/usr/sbin:/usr/bin:/usr/ccs/bin:/usr/ucb:/etc:/usr/etc:/usr/openwin/bin:/home/customer/selkovjr/bin:./usr/local/bin::/home/customer/selkovjr/pgsql/bin

2. I built postgres (from the snapshot of Jan 13) with:

./configure --prefix=/home/customer/selkovjr/pgsql
make
make install

3. initdb worked.

4. The library in question is in /usr/openwin/lib:

typhoon> ls -l /usr/openwin/lib | grep libz
-rwxr-xr-x   1 root     bin        97836 Sep 23  1999 libz.a
-rwxr-xr-x   1 root     bin        70452 Sep 23  1999 libz.so.1

I can't think of anything else. Is there a one-liner to test libz? I believe I have successfully tested and run 6.5.3 in the same environment.

--Gene
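[Editorial note: as for a one-liner to test libz -- one sketch, assuming gcc and the zlib development header are installed (zlibVersion() is part of zlib's public API), is to compile and run a trivial program against -lz, which exercises both link-time and run-time resolution:]

```shell
cat > /tmp/ztest.c <<'EOF'
#include <stdio.h>
#include <zlib.h>
int main(void) { printf("zlib %s\n", zlibVersion()); return 0; }
EOF
# The link succeeds if -lz can be resolved at build time; running the result
# fails with the same ld.so error as above if libz cannot be found at run time.
gcc -o /tmp/ztest /tmp/ztest.c -lz && /tmp/ztest
rm -f /tmp/ztest /tmp/ztest.c
```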
Re: [HACKERS] Re: AW: Re: GiST for 7.1 !!
I am sorry I wasn't listening -- I might have helped by at least answering the direct questions and by testing. I have, in fact, positively tested both my and Oleg's code in today's snapshot on a number of Linux and FreeBSD systems. I failed on this one:

SunOS typhoon 5.7 Generic_106541-10 sun4u sparc SUNW,Ultra-1

on which configure didn't detect the absence of libz.so.

I don't think my applications are affected by Oleg's changes. But I understand the tension that arose during the past few days, and even though I am now satisfied with the agreement you seem to have achieved, I could hardly have influenced it in any reasonable way. I am as sympathetic with the need for smooth and solid code control as I am with promoting great features (or, in this case, just keeping a feature alive). So, if I had been around at the time I was asked to vote, I wouldn't have known how. I usually find it difficult to take sides in "Motherhood vs. Clean Air" debates.

It is true that throwing a core during a regression test does give one a black eye. It is also true that there are probably hundreds of possible users, ignorant of GiST, trying to invent surrogate solutions.

As far as I am concerned, I will be satisfied with whatever solution you arrive at. I am pleased that in this neighborhood, reason prevails over faith.

--Gene
Re: [HACKERS] Indexing for geographic objects?
Tom Lane wrote:
> Oleg Bartunov [EMAIL PROTECTED] writes:
> > We've done some work with GiST indices and found a little problem with the optimizer.
> >
> > test=# set enable_seqscan = off;
> > SET VARIABLE
> > test=# explain select * from test where s @ '1.05 .. 3.95';
> > NOTICE:  QUERY PLAN:
> > Index Scan using test_seg_ix on test  (cost=0.00..369.42 rows=5000 width=12)
> > EXPLAIN
> >
> > % ./bench.pl -d test -b 100 -i
> > total: 1.71 sec; number: 100; for one: 0.017 sec; found 18 docs
> I'd venture that the major problem here is bogus estimated selectivities for rtree/gist operators.

Yes, the problem is, I didn't have the foggiest idea how to estimate selectivity, nor did I have any stats when I developed the type. Before 7.0, I had some success using the selectivity estimators of another datatype (I think that was int, but I am not sure). In 7.0, most of those estimators were gone and I probably chose the wrong ones, or none at all, just so I could get it to work again. The performance was good enough for my taste, so I had even forgotten that was an issue. I know, I know: 'good enough' is never good. I apologize.

--Gene
Re: [HACKERS] Indexing for geographic objects?
Franck Martin wrote:
> I have already created geographical objects which contain an MBR (Minimum Bounding Rectangle) in their structure, so it is a question of rewriting your code to change the access from the cube structure to the MBR structure inside my geoobject (cf. http://fmaps.sourceforge.net/). Look in the CVS for the latest. I have been slack lately on the project, but I'm not forgetting it.

I see where you are aiming. I definitely want to be around when it starts working.

> Quickly I ran through the code, and I think your cube is strictly speaking a box, which is also an MBR.

Yes, cube is definitely a misnomer -- it suggests things are equihedral, which they aren't. I am still looking for a short name or an acronym that would indicate it is a box with an arbitrary number of dimensions. With your application, you will surely benefit from smaller and faster code geared specifically for 3D.

> However I didn't see the case of intersection, which is the main question when you want to display objects that are visible inside a box.

The procedure is there, it is called cube_inter, but there is no operator for it.

> I suppose your code is under GPL, and you have no problem with me using it, provided I put your name and credits somewhere.

No problem at all -- I will be honored if you use it. Was I careless enough not to include a license? It's not exactly GPL -- it's completely unrestricted. I should have said that somewhere.

Good luck,
--Gene
Re: [HACKERS] Indexing for geographic objects?
Oleg Bartunov [EMAIL PROTECTED] wrote:
> I'm also interested in GiST and would be happy if somebody could provide a workable example. I have an idea to use GiST indices for our fulltext search system.

I recently replied to Franck Martin in regard to this indexing question, but I didn't think the subject was popular enough for me to contaminate the list(s). You prove me wrong. Here goes:

To: Franck Martin [EMAIL PROTECTED]
From: [EMAIL PROTECTED]
Reply-to: [EMAIL PROTECTED]
Subject: Re: [HACKERS] Indexing for geographic objects?
In-reply-to: [EMAIL PROTECTED]
Comments: In-reply-to Franck Martin [EMAIL PROTECTED] message dated "Sat, 25 Nov 2000 10:43:16 +1300."
Mime-Version: 1.0 (generated by tm-edit 7.108)
Date: Sat, 25 Nov 2000 02:56:03 -0600

It is probably possible to hook up an extension directly with the R-tree methods available in postgres -- if you stare at the code long enough and figure out how to use the correct strategies. I chose an easier path years ago and I am still satisfied with the results. Check out the GiST -- a general access method built on top of R-tree to provide a user-friendly interface to it and to allow indexing of more abstract types, for which a straight R-tree is not directly applicable.

I have a small set of complete data types, of which a couple illustrate the use of GiST indexing with geometrical objects, in:

http://wit.mcs.anl.gov/~selkovjr/pg_extensions/

If you are using a pre-7.0 postgres, grab the file contrib.tgz; otherwise take contrib-7.0.tgz. The difference is insignificant, but the pre-7.0 version will not fit the current schema. Unpack the source into postgresql-*/contrib and follow the instructions in the README files. The types of interest for you will be seg and cube. You will find pointers to the original sources and docs in the CREDITS section of the README file. I also have a version of the original example code in pggist-patched.tgz, but I have not checked whether it works with current postgres.
It should not be difficult to fix it if it doesn't -- the recent development in the optimizer area has made certain things unnecessary.

You might want to check out a working example of the segment data type at:

http://wit.mcs.anl.gov/EMP/indexing.html

(search the page for 'KM')

I will be glad to help, but I would also recommend sending more sophisticated questions to Joe Hellerstein, the leader of the original postgres team that developed GiST. He was very helpful whenever I turned to him during the early stages of my data type project.

--Gene
Re: [HACKERS] Unhappy thoughts about pg_dump and objects inherited from template1
Jan Wieck wrote:
> Tom Lane wrote:
> > Philip Warner [EMAIL PROTECTED] writes:
> > > Where would you store the value if not in pg_database?
> > No other ideas at the moment. I was just wondering whether there was any way to delete it entirely, but it seems like we want to have the value for template0 available.
> The old way of hardwiring knowledge into pg_dump was definitely not as good. To make pg_dump failsafe, we'd IMHO need to freeze all objects that come with template0 copying.

Here's another (somewhat) unhappy thought: what if there are objects in template1 or other databases that one doesn't want to dump or restore? This is very much the case for user-defined types, which usually consist of multiple dozens of components. Currently, pg_dump picks them up based on their oid, whether or not they are sitting in template1, and dumps them in a non-restorable and non-portable manner along with the user data. Consequently, I have to write filters to pluck the type code out of the dump. The filters are ugly, unreliable, and have to be maintained in sync with the types.

Picture this, though: if int and float were user-defined types, would anyone be happy seeing them in every dump? Or, even worse, responding to "object already exists" kinds of problems during restore?

Not that I couldn't get by like this; but since everybody seems unhappy too, maybe it's a good moment to consider a special 'dump' attribute for every object in the schema? The attribute could be looked at by dump and restore tools and set by whatever rules one may find appropriate.

--Gene
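[Editorial note: the kind of dump filter being complained about might look like the following sketch. The type name (seg) and the dump lines are hypothetical stand-ins; real type definitions span multiple lines, which is exactly why such pattern-based filters are fragile and must track the types they filter:]

```shell
# A made-up fragment of a plain-text pg_dump output.
cat > /tmp/dump.sql <<'EOF'
CREATE FUNCTION seg_in(opaque) RETURNS seg;
CREATE TYPE seg (input = seg_in);
CREATE TABLE measurements (id int4, range seg);
EOF

# Drop the type machinery, keep the user data definitions.
grep -v -E "^CREATE (TYPE|FUNCTION) seg" /tmp/dump.sql

rm -f /tmp/dump.sql
```

Here only the CREATE TABLE line survives; a 'dump' attribute checked by pg_dump itself would make this whole class of script unnecessary.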