Re: [GENERAL] Problems with kerberos 4 authentication
Rodney McDuff [EMAIL PROTECTED] writes:

> I've compiled postgresql 6.3.2 with kerberos 4 support (using the
> KTH-KRB Ebones distribution) on an Alpha running DU4.0D.

I've been using KTH Kerberos IV with PostgreSQL for a long time, and it's always been working great, until very recently (about which more later). Right now, I use PostgreSQL 6.4.2, under NetBSD on i386 and Sparc systems, with no problems.

> I create a postgres_dbms principle in /etc/srvtab (and arranged for the
> postmaster to be able to read this file) and made the appropriate
> modifications to the pg_hba.conf file.

(It's "principal", not "principle", by the way.) You probably shouldn't do it this way, since it means opening up access to your main srvtab file more than you should be comfortable with. Use ksrvutil to create a separate srvtab for PostgreSQL, and modify the Makefile.global file in the main PostgreSQL src directory after configure, before make, to point at it.

> I then used kinit to get a krbtgt (ticket granting ticket), which shows
> up under klist. I then type "psql database" and get a "User
> authentication failed" error. Running the postmaster in debug mode shows
> "pg_krb4_recvauth: kerberos error: Can't decode authenticator
> (krb_rd_req)" (which is a kstatus of RD_AP_UNDEC=31). But what's really
> weird is that I successfully get a postgres_dbms ticket from the KDC
> (which shows up both in the kerberos logs and under klist).

I see the exact same behavior with the current CVS version of PostgreSQL, and have been trying to find time to study it more carefully and post a description of the problem. I assume something was done to the communication between the front end and back end that broke Kerberos. I can't recall whether I ever ran the actual 6.3.2 -- I've mostly been tracking CVS -- but I can confirm that 6.4.2 is OK, so you might want to upgrade to that before going further with your problem.

-tih
-- 
Popularity is the hallmark of mediocrity.  --Niles Crane, "Frasier"
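The separate-srvtab setup Tom suggests might look roughly like this. This is only a sketch: the srvtab path, the ksrvutil invocation, the KRB_SRVTAB variable name, and the example network address are assumptions for a KTH-KRB installation and a 6.4-era pg_hba.conf (whose lines take the form host/database/address/mask/authtype), not something taken from the message above.

```
# Pull the service key into its own srvtab (hypothetical path),
# then make it readable only by the postmaster's account:
#   ksrvutil -f /usr/local/pgsql/etc/postgres.srvtab get
#   chown postgres /usr/local/pgsql/etc/postgres.srvtab
#   chmod 400 /usr/local/pgsql/etc/postgres.srvtab

# In src/Makefile.global (after configure, before make), point the
# build at that file, e.g.:
#   KRB_SRVTAB = /usr/local/pgsql/etc/postgres.srvtab

# In pg_hba.conf, enable krb4 authentication for the hosts that
# should use it (example address/mask):
#   host  all  130.102.0.0  255.255.0.0  krb4
```

The point of the separate file is simply that the postmaster then never needs read access to the system-wide /etc/srvtab.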
[GENERAL] GIS/GPS Experiences with pgsql?
On Wed, 17 Feb 1999, Peter T Mount wrote:

> [snip]
> If the TIGER/Line data is raster, and each feature (polygon, line,
> circle, etc.) doesn't exceed the block size, then postgresql should be
> able to handle it.
> [snip]

Vector not raster. Right?

Actually, it's just text. Here's a sample record:

10003 43140280 B Smallwood Road A31 13131891899301893018 9501 9501 227 222 -82521645+33638976 -82528956+33639940

...the CD-ROM "database" is about 600MB. It should present no problem to extract the important data w/perl.

In related news, I read on slashdot.org today, in the "Bruce Perens Resigns From OSI" article:

"...I'm Bruce Perens. You may know me as the primary author of the Debian Free Software Guidelines and the Open Source Definition. I wrote the Electric Fence malloc() debugger, and some pieces of Debian. And you may remember me for having brought the TIGER map database to free software. If you want copies of that, you can get them through Dale Scheetz..."

Anybody know WTF he is talking about?

--bryan
-- 
Failure is not an option. It comes bundled with your Microsoft product.
---
Bryan R. Mattern  [EMAIL PROTECTED]  http://www.datapace.com
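The extraction Bryan has in mind can be sketched as follows (in Python rather than Perl, for illustration). It assumes only what the sample record shows: the trailing coordinate pairs are signed integers encoding longitude and latitude in millionths of a degree; real TIGER/Line records are fixed-width, so a production parser would slice by column instead of using a regex.

```python
import re

def decode_coord(field: str) -> float:
    """Convert a signed TIGER/Line coordinate field like '-82521645'
    (millionths of a degree, sign included) to decimal degrees."""
    return int(field) / 1_000_000

def extract_endpoints(record: str):
    """Find trailing longitude/latitude pairs in a raw record line.
    Each pair is two adjacent signed integer fields: lon then lat."""
    pairs = re.findall(r'([+-]\d+)([+-]\d+)', record)
    return [(decode_coord(lon), decode_coord(lat)) for lon, lat in pairs]

sample = "-82521645+33638976 -82528956+33639940"
print(extract_endpoints(sample))
# → [(-82.521645, 33.638976), (-82.528956, 33.63994)]
```

That puts the sample segment's endpoints near 82.5°W, 33.6°N, which is consistent with the record being a road segment from a Georgia county file.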
Re: [GENERAL] GIS/GPS Experiences with pgsql?
On Thu, 18 Feb 1999, Gregory Maxwell wrote:

> On Wed, 17 Feb 1999, Peter T Mount wrote:
> 
> > [snip]
> > If the TIGER/Line data is raster, and each feature (polygon, line,
> > circle, etc.) doesn't exceed the block size, then postgresql should be
> > able to handle it.
> > [snip]
> 
> Vector not raster. Right?

Yes, the Tiger data is vector.

Peter

-- 
Peter T Mount [EMAIL PROTECTED]
Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
Java PDF Generator: http://www.retep.org.uk/pdf
RE: [GENERAL] How to improve query performance?
The only suggestion I have is to do the sort after you get the data back; Perl's pretty good at that. Let me know what the timings are. I went to the site, and it looks like it only takes ~3-5 seconds to get the data to my browser and format it.

	-DEJ

-----Original Message-----
I did up an online survey over the weekend, and it's gotten a little on the...slow side :(  Unfortunately, I can't see where I can speed it up any, so I'm asking for any suggestions, if it's possible.

EXPLAIN on the query I'm using shows:

Sort  (cost=5455.34 size=0 width=0)
  ->  Aggregate  (cost=5455.34 size=0 width=0)
        ->  Group  (cost=5455.34 size=0 width=0)
              ->  Sort  (cost=5455.34 size=0 width=0)
                    ->  Seq Scan on op_sys  (cost=5455.34 size=39024 width=12)

The query itself is:

my $OSlisting = "\
    select count(sys_type) as tot_sys_type, sys_type \
    from op_sys \
    where sys_type is not null \
    group by sys_type \
    order by tot_sys_type desc;";

The table looks like:

Table = op_sys
+--------------+----------+--------+
| Field        | Type     | Length |
+--------------+----------+--------+
| ip_number    | text     | var    |
| sys_type     | text     | var    |
| browser_type | text     | var    |
| entry_added  | datetime | 8      |
| probe        | bool     | 1      |
+--------------+----------+--------+
Indices: op_sys_ip
         op_sys_type

The table holds ~120k records right now, and the above query returns ~1100. To get a feel for the speed it returns, see http://www.hub.org/OS_Survey

I can't think of any way to improve the speed, and yes, I do a 'vacuum analyze' on it periodically (did one just before the above EXPLAIN)...

One other note... it's a v6.4.2 server, running on a PII with 384MB of RAM and FreeBSD 3.0-STABLE...

Marc G. Fournier
Systems Administrator @ hub.org
primary: [EMAIL PROTECTED]  secondary: scrappy@{freebsd|postgresql}.org
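The client-side approach suggested above — pull the raw rows and do the counting/sorting in the application instead of GROUP BY ... ORDER BY in SQL — can be sketched like this (in Python rather than Perl, purely for illustration; the sample rows are made up, not real survey data):

```python
from collections import Counter

# Hypothetical rows as they might come back from a plain
# "SELECT sys_type FROM op_sys WHERE sys_type IS NOT NULL":
rows = ["Linux", "FreeBSD", "Linux", "Windows", "Linux", "FreeBSD"]

# Count occurrences and sort by count, descending, on the client;
# this replaces the GROUP BY / ORDER BY tot_sys_type DESC in SQL.
counts = Counter(rows)
ranked = counts.most_common()
print(ranked)
# → [('Linux', 3), ('FreeBSD', 2), ('Windows', 1)]
```

Whether this wins depends on where the time goes: it trades the backend's Sort/Group/Aggregate plan for shipping ~120k rows to the client, so it helps only if the server-side sort is the actual bottleneck.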