Re: [HACKERS] [Bizgres-general] WAL bypass for INSERT, UPDATE and
Bruce Momjian pgman@candle.pha.pa.us writes:

> TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ]. Tables using
> non-default logging should not use referential integrity with
> default-logging tables.

I have to say this smells way too much like MySQL for me to feel
comfortable. But that's just my opinion. :)

-Doug

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] Shared memory and memory context question
[EMAIL PROTECTED] writes:

> 1. Change memory context to TopMemoryContext and palloc everything
> there. (However, I believe this still isn't shared between processes?)

Nope.

> 2. Use the shmem functions in src/backend/storage/ipc/shmem.c to create
> a chunk of shared memory and use this (Although I would like to avoid
> writing my own memory manager to carve up the space).
>
> 3. Somehow create shared memory using the shmem functions, and set a
> memory context to live *inside* this shared memory, which my trigger
> functions can then switch to. Then use palloc() and pfree() without
> worrying.

You'd have to do one of the above, but #2 is probably out because all
shared memory is allocated to various purposes at startup and there is
none free at runtime (as I understand it).

For #3, how do you plan to have a memory context shared by multiple
backends with no synchronization? If two backends try to do allocation
or deallocation at the same time you will get corruption, as I don't
think palloc() and pfree() do any locking (they currently never allocate
from shared memory).

You should probably think very carefully about whether you can get
along without using additional shared memory, because it's not that
easy to do.

-Doug
Re: [HACKERS] Multiple logical databases
Mark Woodward [EMAIL PROTECTED] writes:

> The point is, that I have been working with this sort of use case for
> a number of years, and being able to represent multiple physical
> databases as one logical db server would make life easier. It was a
> brainstorm I had while I was setting up this sort of system for the
> [n]th time.

It sounds like all that would be needed is a kind of smart proxy: it
has a list of database clusters on the machine and the databases they
contain, and speaks enough of the protocol to recognize the startup
packet and reroute it internally to the right cluster. I've heard
'pgpool' mentioned here; from a quick look at the docs it looks
similar, but not quite what you want.

So your databases would listen on 5433, 5434, etc. and the proxy would
listen on 5432 and route everything properly. If a particular cluster
is not up, the proxy could just error out the connection.

Hmm, that'd be fun to write if I ever find the time...

-Doug
Re: [HACKERS] Multiple logical databases
Mark Woodward [EMAIL PROTECTED] writes:

> It is similar to a proxy, yes, but that is just part of it. The setup
> and running of these systems should all be managed.

All that requires is some scripts that wrap pg_ctl and bring the right
instances up and down, perhaps with a web interface on top of them. I
don't see any need to put that functionality in the proxy.

-Doug
Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI Installer for
Devrim GUNDUZ [EMAIL PROTECTED] writes:

> http://pgfoundry.org/projects/pgnixinstaller/
>
> We are actively looking for developers for the project. Please drop me
> an e-mail if you want to join this project. We will use Python, so you
> need to be a Python guy to join the project. We are in the planning
> phase; if you join us early, we will be able to share more ideas.

What value does this bring to systems that have a good package system
and up-to-date repositories? I can install Postgres today on Ubuntu
using a GUI tool, and install another GUI tool to configure and
administer it.

For systems like Solaris I can see it maybe being a win. Are you going
to work with the underlying system's package manager, or put
everything in /usr/local?

-Doug
Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI
Devrim GUNDUZ [EMAIL PROTECTED] writes:

> On Mon, 2006-01-30 at 20:03 -0500, Doug McNaught wrote:
>> What value does this bring to systems that have a good package system
>> and up-to-date repositories? I can install Postgres today on Ubuntu
>> using a GUI tool, and install another GUI tool to configure and
>> administer it.
>
> You can install, but what if you need different configure options than
> the package provides? This means a rebuild of the package. Instead, we
> will build and install that package via the installer.

That's actually a pretty cool idea--compile and generate debs/rpms that
reflect the user's choices, then install them. But the dependency on a
compiler adds a twist of complexity: "sorry, you need to install the
following system packages (gcc, etc.) before you can install Postgres
as you've configured it." Not horrible, but perhaps intimidating for
the GUI crowd? :) Is gcc in the bog-standard default install on FC
these days?

Certainly you can install pre-built binaries without a compiler, and
let the user choose database location, autovacuum settings and stuff
like that.

Good luck!

-Doug
Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI
Marc G. Fournier [EMAIL PROTECTED] writes:

> On Tue, 31 Jan 2006, Devrim GUNDUZ wrote:
>> On my RHEL boxes, I never ever recompile the kernel, since Red Hat
>> does not provide support if I do so :)
>
> Is everything 'loadable modules' then? I can't imagine you have some
> mammoth kernel running on your system, do you? With every conceivable
> piece of hardware configured in?

Yes, vendor kernels are very modular--most drivers, packet filtering,
SCSI etc. are all loadable modules. You can of course build your own
kernel with only the drivers you need built in, but it usually doesn't
make very much difference. The module system works, in general,
extremely well.

-Doug
Re: [HACKERS] EINTR error in SunOS
Martijn van Oosterhout kleptog@svana.org writes:

> I would vote for the kernel, if the server didn't respond within 5
> seconds, to simply return EIO. At least we know how to handle that...

You can do this now by mounting 'soft' and setting the timeout
appropriately. Whether it's really the best idea, well...

-Doug
Re: [HACKERS] EINTR error in SunOS
Doug Royer [EMAIL PROTECTED] writes:

> From the Linux 'nfs' man page:
>
>   intr   If an NFS file operation has a major timeout and it is hard
>          mounted, then allow signals to interrupt the file operation
>          and cause it to return EINTR to the calling program. The
>          default is to not allow file operations to be interrupted.
>
> From the Solaris 'mount_nfs' man page:
>
>   intr | nointr
>          Allow (do not allow) keyboard interrupts to kill a process
>          that is hung while waiting for a response on a hard-mounted
>          file system. The default is intr, which makes it possible
>          for clients to interrupt applications that may be waiting
>          for a remote mount.
>
> The Solaris and Linux defaults seem to be the opposite of each other.

Actually they're the same, though differently worded. "Major timeout"
means the server has not responded for N milliseconds, not that the
client has decided to time out the request. If 'hard' is set, the
client will keep trying indefinitely, though you can interrupt it if
you've specified 'intr'. So I think we are saying the same thing.

> You can get EINTR with hard+intr mounts.

Yes, *only* if the user specifically decides to send a signal, or if
the application uses SIGALRM or whatever. I agree that if you expect
'intr' to be used, your code needs to handle EINTR.

> I am not sure what you get with soft mounts on a timeout.

The Linux manpage implies you get EIO.

-Doug
Re: [HACKERS] EINTR error in SunOS
Doug Royer [EMAIL PROTECTED] writes:

> The MOUNT options are opposite:
>
>   Linux NFS mount   - defaults to nointr
>   Solaris NFS mount - defaults to intr

Oh, right--I didn't realize that was what you were talking about.

-Doug
Re: [HACKERS] EINTR error in SunOS
Doug Royer [EMAIL PROTECTED] writes:

> Yes - if you assume that EINTR only happens on NFS mounts. My point is
> that, independent of NFS, the error checking that I have found in the
> code is not complete even for non-NFS file systems. The read() and
> write() Linux man pages do NOT specify that EINTR is an NFS-only
> error:
>
>   EINTR  The call was interrupted by a signal before any data was
>          read.

Right, but I think that's because read() and write() also work on
sockets and serial ports, which are always interruptible. I have not
heard of local-disk filesystem code on any Unix I've seen ever giving
EINTR--a process waiting for disk is always in D state, which means
it's not interruptible by signals. If I have the time maybe I'll
grovel through the Linux sources and verify this, but I'm pretty sure
of it.

I'm not a PG internals expert by any means, but my $0.02 on this is
that we should:

a) recommend NOT using NFS for the database storage
b) if NFS must be used, recommend 'hard,nointr' mounts
c) treat EINTR as an I/O error (I don't know how easy this would be)
d) say "if you mount 'soft' and lose data, tough luck for you"

-Doug
Re: [HACKERS] EINTR error in SunOS
Doug Royer [EMAIL PROTECTED] writes:

> The 'intr' option to NFS is not the same as EINTR. It means 'if the
> server does not respond for a while, then return an EINTR', just like
> any other disk read() or write() does when it fails to reply.

No, you're thinking of 'soft'. 'intr' (which is actually a modifier to
the 'hard' setting) causes the I/O to hang until the server comes back
or the process gets a signal (in which case EINTR is returned).

-Doug
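To make the distinction concrete, here are hypothetical /etc/fstab
entries for the two behaviors discussed above (the server name and
paths are made up; the option names are from the Linux nfs man page):

```
# Hang until the server returns; a signal can abort the I/O with EINTR.
fileserver:/export/pgdata  /mnt/pgdata  nfs  hard,intr        0 0

# Give up after the retry limit and return EIO to the application.
fileserver:/export/pgdata  /mnt/pgdata  nfs  soft,timeo=600   0 0
```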
Re: [HACKERS] Feature freeze date for 8.1
Tom Lane [EMAIL PROTECTED] writes:

> Dave Held [EMAIL PROTECTED] writes:
>> How about an optional second connection to send keepalive pings? It
>> could be unencrypted and non-blocking. If authentication is needed on
>> the ping port (which it doesn't seem like it would need to be), it
>> could be very simple, like this:
>
> This seems to have nothing whatever to do with the stated problem?

Yeah--one of the original scenarios was "firewall drops DB connection
because it's inactive." Pinging over a second socket does nothing to
address this. If you want to make sure network connection X is up,
testing network connection Y, which happens to be between the same two
processes, is only helpful in a limited set of circumstances.

-Doug
Re: [GENERAL] [HACKERS] plPHP in core?
Robert Treat [EMAIL PROTECTED] writes:

> If by "stripped down" you mean without postgresql database support
> then I'll grant you that, but it is no different than any other pl
> whose parent language requires postgresql to be installed. If
> packagers are able to handle those languages, then why can't they do
> the same with PHP?

Other languages don't require PG to be installed in order to compile
them. For example, you can build Perl (with no Postgres on the
system), build Postgres, and then build DBD::Pg as a completely
separate step.

-Doug
Re: [HACKERS] sigint psql
John DeSoi [EMAIL PROTECTED] writes:

> I'm communicating with psql via a pipe stream. This works pretty well,
> but one problem I have is trying to cancel an operation. If I send a
> SIGINT, psql dies. In looking at the source I gather this is because
> it assumes I'm in non-interactive mode (pset.notty is true). I was
> wondering if there was some way to work around this short of
> recompiling the source. I need to do the same thing on Windows. Thanks
> for any suggestions.

On Unix you could run psql through a pty rather than a pipe. No idea
what the Windows equivalent would be.

-Doug
Re: [HACKERS] win32 performance - fsync question
E. Rodichev [EMAIL PROTECTED] writes:

> On Thu, 17 Feb 2005, Christopher Kings-Lynne wrote:
>> Fsync is so that when your computer loses power without warning, you
>> will have no data loss. If you turn it off, you run the risk of
>> losing data if you lose power.
>
> This problem is addressed by the file system (fsck, journalling
> etc.). Is it reasonable to handle it directly within the application?

No, it's not addressed by the file system. fsync() tells the OS to
make sure the data is on disk. Without that, the OS is free to just
keep the WAL data in memory cache, and a power failure could cause
data from committed transactions to be lost (we don't report commit
success until fsync() tells us the file data is on disk).

-Doug
Re: [HACKERS] Help me recovering data
Christopher Kings-Lynne [EMAIL PROTECTED] writes:

> This might seem like a stupid question, but since this is a massive
> data loss potential in PostgreSQL, what's so hard about having the
> checkpointer or something check the transaction counter when it runs,
> and either issue a db-wide vacuum if it's about to wrap, or simply
> disallow any new transactions?

I think autovac-in-backend is the preferred solution to this, and it's
definitely on the TODO list...

-Doug
Re: [HACKERS] spi and other languages
Sibtay Abbas [EMAIL PROTECTED] writes:

> We can call SQL statements like SPI_execute("SELECT * FROM sometable")
> from the SPI interface. My question is: can we enter other procedural
> languages as well, like PL/pgSQL statements?

You can call a function written in another procedural language by
executing a SELECT:

    SPI_execute("SELECT myfunction()")

But you can't execute arbitrary code in other languages using SPI (or
from the client side, for that matter)--it has to be wrapped up in a
function.

-Doug
Re: [HACKERS] how can i add my own procedural language?
Thomas Hallgren [EMAIL PROTECTED] writes:

> Sibtay, you normally don't deal with parsing, planning etc. at all
> from within a language handler.

Unless you're implementing a language from scratch rather than linking
in an existing interpreter. In which case, the PL/pgSQL source is a
good example.

-Doug
[HACKERS] Port report--Linux/sparc32
I just compiled 8.0beta5 on my old Sparc 5. All tests passed. This is
running Debian 3.0 with a 2.2.20 kernel. Sure took a long time. :)

I can test on an ia32/RedHat 6.2 machine if that would be helpful.

-Doug
Re: [GENERAL] [HACKERS] Adding Reply-To: listname to Lists
Marc G. Fournier [EMAIL PROTECTED] writes:

> No, the poster will still be included as part of the headers ... what
> happens, at least under Pine, is that I am prompted whether I want to
> honor the reply-to; if I hit 'y', then the other headers *are*
> stripped and the mail is sent right back to the list ...

I'm in the "Reply-To considered harmful" camp. I also don't see any
real evidence that the current setup is causing problems.

-Doug
Re: [Testperf-general] Re: [HACKERS] ExclusiveLock
Bort, Paul [EMAIL PROTECTED] writes:

> One other thought: how does static RAM compare to disk speed nowadays?
> A 1GB flash drive might be reasonable for the WAL if it can keep up.

Flash RAM wears out; it's not suitable for a continuously-updated
application like WAL.

-Doug
Re: [Plperlng-devel] Re: [HACKERS] Concern about new PL/Perl
Joshua D. Drake [EMAIL PROTECTED] writes:

>> Thank you ... I knew you guys celebrated later than us, just didn't
>> know why ... do you guys celebrate Remembrance Day same as us, or
>> different too? Ours is Nov 11 ...
>
> I don't even know what Remembrance Day is ;)

WWI ended on November 11, 1918. The British and their (ex-)colonies
take the holiday more seriously than we do, possibly because they lost
a million dead in that war... We have it too, but we call it Veterans'
Day.

-Doug
Re: [HACKERS] java.lang.ClassNotFoundException
Ulrich, Azar S. [EMAIL PROTECTED] writes:

> ...and I do have the classpath set to where pg73jdbc3.jar is located,
> via:
>
>   export CLASSPATH=.:~/lib

You need to put the JAR file itself in the classpath:

    export CLASSPATH=~/lib/pg73jdbc3.jar:<other stuff>

-Doug
Re: [HACKERS] Supporting Encryption in Postgresql
Paul Tillotson [EMAIL PROTECTED] writes:

> Given that the client does not write pages to the disk, this would be
> back-end encryption.

Just out of curiosity, what threat model does this sort of encryption
protect against? Surely any attacker who can read the files off the
disk can also get the password used to encrypt them. Or would this be
provided by the client and kept in RAM only? If I have root- or
postgres-level access to the machine, I can snarf the encryption key
out of RAM even if it's never written to disk.

I don't see what this (backend page-level encryption) would buy you
over just using an encrypted partition, which is already available on
most OSs...

-Doug

--
Let us cross over the river, and rest under the shade of the trees.
  --T. J. Jackson, 1863
Re: [HACKERS] Indexed views?
Tiago Wright [EMAIL PROTECTED] writes:

> Yes, thanks Tom. This is precisely what I was missing. I searched the
> archives for the reason why this is so, but I found only one message
> mentioning the MVCC mechanism. Can you point me in the right
> direction? I would like to understand the issue.

Short answer: MVCC tuple visibility status isn't (and can't be) stored
in the index. So the backend has to visit the actual tuple to see if
it is visible to the transaction that is currently running.

> IMHO, a change in this area could deliver great performance
> improvements.

Hard to say how it would work, but come up with a good design and
quality patch and it'll probably go in. :)

-Doug
Re: [HACKERS] Indexed views?
Alvaro Herrera [EMAIL PROTECTED] writes:

> On Tue, Sep 07, 2004 at 07:58:56PM -0400, Doug McNaught wrote:
>> Hard to say how it would work, but come up with a good design and
>> quality patch and it'll probably go in. :)
>
> Probably not. This has been discussed before; what's needed is for the
> visibility information to be stored in the index as well. This is hard
> and inefficient to do, because it requires updating the index at the
> same time that the heap is updated. That is a bad proposition as soon
> as there is more than one index, and when there is a seqscan involved
> (i.e. no index), because it means a lot of extra I/O.

Yeah, hence the smiley.

-Doug
Re: [HACKERS] fsync and hardware write cache
[EMAIL PROTECTED] writes:

> Something to think about: if you run PostgreSQL with fsync on, but you
> use the hardware write cache on your disk drives, how likely are you
> to lose data? Obviously, this is a fairly limited problem, as it only
> applies to power-down (which you can control) or power loss, where the
> risks may be reduced but not eliminated with a UPS. Does it make sense
> to add a platform-specific call that will flush the write cache when
> fsync is enabled?

AIUI, recent versions of the Linux kernel are supposed to do this for
you, but not all drives honor the flush command, so you're still at
the mercy of your disk vendor...

-Doug
Re: [HACKERS] fsync vs open_sync
[EMAIL PROTECTED] writes:

>> Just out of interest, what happens to the difference if you use
>> *ext3* (perhaps with data=writeback)?
>
> Actually, I was working for a client, so it wasn't a general
> exploratory exercise, but I can say that early on we discovered that
> ext3 was about the worst file system for PostgreSQL. We gave up on it
> and decided to use ext2.

I'd be interested in which ext3 mount options you used--I can see how
anything other than 'data=writeback' could be a performance killer.
I've been meaning to run a few tests myself, but haven't had the
time...

-Doug
Re: [HACKERS] FW: postgres shmat request fails on win98 (cygwin)
Mike G [EMAIL PROTECTED] writes:

> Hello, I will bring it up with the postgresql hackers.
>
> PS - Sorry for the new posting. I read these via digest.

Neither the Cygwin nor the (beta) Windows-native version of PostgreSQL
is supported in any way on Windows 9x/ME, AFAIK. Anyone trying to run
it there is strictly on their own.

-Doug
Re: [HACKERS] Selecting a specific row
Cason, Kenny [EMAIL PROTECTED] writes:

> Is there an easy way to select, say, the 15th row in a table? I can't
> use a sequence number because rows will sometimes be deleted,
> resulting in the 15th row now being a different row. I need to be able
> to select the 15th row regardless of whether it is the same 15th row
> as the last select.

You can use LIMIT and OFFSET in your SELECT for this, e.g.

    SELECT * FROM mytable ORDER BY somecol LIMIT 1 OFFSET 14;

but bear in mind that rows are not stored in any guaranteed order, so
use ORDER BY if you need a stable result.

-Doug
Re: [HACKERS] Version Numbering -- The great debate
Tom Lane [EMAIL PROTECTED] writes:

> Huh? That is exactly counter to most people's expectations about
> version numbering. N.0 is the unstable release, N.1 is the one with
> some bugs shaken out. If we release a 7.5, people will expect it to be
> less buggy than 7.4, and I'm not sure we can promise that.

I agree with this, and from my non-authoritative viewpoint as a user
and rabid advocate, I think we should be going with 8.0 for this
release as well. :)

-Doug
Re: [HACKERS] User Quota Implementation
Jonah H. Harris [EMAIL PROTECTED] writes:

> This leads me to the first question I asked... do you want me to pull
> the latest CVS and patch it, or distribute my patch for 7.4.3?

Latest CVS, no question. It would be going into 7.6 (or whatever) at
the earliest...

-Doug
Re: [HACKERS] xeon processors
Christopher Browne [EMAIL PROTECTED] writes:

> Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Jaime
> Casanova) would write:
>> Can anyone tell me if postgresql has problems with xeon processors?
>> If so, is there any fix or project to fix it?
>
> Well, there's a known issue that IA-32 systems with more than 4GB of
> memory use an extension known as PAE to bank-switch between the banks
> of memory.

AIUI, it's not really bank switching in the old 8-bit sense. Rather,
there is a big linear 36-bit physical address space, and the
processor's page tables have been extended so they can point to any
page in this space. Any one process still sees a 4GB (32-bit) address
space, since that's how big the registers are.

> Any time you switch banks, there's a fair little bit of work to be
> done. That includes multitasking systems that need to context switch
> a few thousand times per second.

I don't think this is any more overhead than a normal context
switch--cache misses, TLB flush etc.

> The fix for this problem is to rewrite all of your applications so
> that they become conscious of which bits of memory they're using so
> they can tune their own behaviour. This, of course, requires
> discarding useful notions such as virtual memory that are _assumed_
> by most modern operating systems.

This is only necessary if you need to address more than 32 bits' worth
of address space in a single process, which is not always the case on
server-class systems (on scientific/numerical workloads, it's often a
big win).

> In which case you are certainly right: the fix is to buy hardware
> that hasn't been hacked up so badly.

64-bit systems get cheaper all the time... :)

-Doug
Re: [HACKERS] Accessing Specific Schemas
Cason, Kenny [EMAIL PROTECTED] writes:

> I'm having trouble accessing specific schemas and wonder if maybe I
> haven't installed something properly in 7.4.2. Here is what is
> happening:
>
>   SELECT * FROM INFORMATION_SCHEMA.SCHEMATA;
>   ERROR: parser: parse error at or near "."
>
> This error appears when I try to access ANY specific schema. Help!

Sounds like you're talking to an older server that doesn't understand
schemas. What does "select version()" return?

-Doug
Re: [HACKERS] initdb initial password
Magnus Hagander [EMAIL PROTECTED] writes:

>>> I would like to add the capability for initdb to accept the password
>>> for the superuser account at invocation. Right now, I can use
>>> --pwprompt or -W to have it ask for a password. But for the Win32
>>> GUI installer I'd like to ask for the password in the installer, and
>>> pass it to initdb. Considering how it's done in different places,
>>> what's the preferred way to do this? Command-line parameter?
>>> Environment variable? Other (what would that be?)
>>
>> There's a reason why it's done that way, which is that the others are
>> all insecure. At least on some Unixen.
>
> Other binaries accept the password as an environment variable. Are you
> saying that it's secure to pass it as an environment variable to
> psql/pg_dump/etc. but not to initdb? If so, care to enlighten me as to
> why this is different (I'm clearly not seeing why..)?

The environment variable is there for backward compatibility, but it's
deprecated. There's no reason to enable that functionality in new
code.

-Doug
Re: [HACKERS] enabling tcpip_socket by default
Greg Stark [EMAIL PROTECTED] writes:

> Doug McNaught [EMAIL PROTECTED] writes:
>> Java doesn't support Unix domain sockets. If you want to use JDBC,
>> you have to use TCP sockets.
>
> That doesn't follow. That just means you can't implement a Unix domain
> socket driver using only Java. Is there some reason the driver has to
> be a pure Java driver?

Well, no. But then you get into the horror that is JNI, and the need
to maintain that glue layer (there's a substantial impedance mismatch
between libpq and the JDBC API) on all platforms.

> I had always assumed the JDBC driver isn't currently pure Java and is
> just an API wrapper around libpq. Writing and maintaining a pure Java
> driver would be much more work and be much slower for no practical
> gain.

The JDBC driver is and always has been pure Java, and I'm damn glad.
If your app is all Java, all you have to do is add the driver jar to
the classpath and you're done--no libpq, no .so version mismatches,
etc. AFAIK (I'm not one of the developers) there are no plans to write
a JNI layer just for Unix sockets.

And I doubt very much that it's that much slower in most cases,
especially with modern JIT compilers. You'll take a bit of a hit from
the TCP overhead, but in practice for most apps it won't be an issue.

-Doug
Re: [HACKERS] Postgres Optimization: IO and Data Organization
[EMAIL PROTECTED] (James Thornton) writes:

> I want to understand how Postgres organizes data and handles IO
> operations so that I will better know how to optimize a Postgres
> database server. I am looking for answers to specific questions and
> pointers to where this stuff is documented.

If you haven't read the Postgres docs in detail, that would be a good
place to start. :)

> How does Postgres organize its data? For example, is it grouped
> together on the disk, or is it prone to be spread out over the disk?
> Does vacuum reorganize the data? (Seeking to minimize disk head
> movement.)

Tables and indexes are stored in disk files in the filesystem, so PG
relies on the OS to lay out data on the disk.

> How does Postgres handle sequential IO? Does it treat it specially,
> such as issuing large IO operations that span block boundaries?

The WAL (write-ahead log), a sort of journal, is written sequentially.
I'm not too familiar with whether WAL writes are ganged together if
possible, but I would think so.

> How does Postgres handle direct IO (operations directly to disk,
> bypassing the buffer cache)? Will it issue multiple asynchronous IO
> operations?

No direct I/O, no async I/O. A background checkpoint process handles a
lot of the data writeback I/O.

> Is Postgres always one process per client, or can it spawn additional
> processes to parallelise some operations, such as a nested-loops join?

One process per client connection. Right now there is no spawning of
additional worker processes.

> Is there a recommended file system to use for Postgres data, such as
> ext2 or another non-journaling FS?

You definitely want a journaled FS or the equivalent, since losing
filesystem metadata on a crash can ruin your whole day, not to mention
the fsck times... There doesn't seem to be a clear winner in the
"which FS" debate. If you use ext3, it's probably fastest to mount
with 'data=writeback' for your DB partition, since you can rely on PG
to journal the data writes. Most other FS's only journal metadata
anyway.

Hope this helps!

-Doug
Re: [HACKERS] plpgsql Integer Concat To String
Paul Punett [EMAIL PROTECTED] writes:

> Hi, I want to concatenate a counter to a string in a loop in plpgsql:
>
>   DECLARE
>       counter integer := 1;
>       IdSet char := 'UniqueId';
>       IdForEachRun varchar;
>   BEGIN
>       IdForEachRun := IdSet || counter;    -- (PROBLEM HERE)

    IdForEachRun := IdSet || counter::text;

-Doug
Re: [HACKERS] PostgreSQL port to pure Java?
Shridhar Daithankar [EMAIL PROTECTED] writes: Jokes and facts aside, I can't help thinking how much better it would have been if postgresql was in C++. We could easily plug multiple implementations of underlying subsystems without mucking much in base code.. That's easy to do in any language if your code is carefully designed for it (look at the Linux kernel for an example in C) and hard to do even in C++, if the design isn't suited. So your assertion that C++ would have magically enabled pluggability doesn't hold water. Wild thought anyways.. Indeed. -Doug ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [HACKERS] [GENERAL] Should we consider empty fields as NULL values when
Nagib Abi Fadel [EMAIL PROTECTED] writes: The issue is that since the column col1 is defined as character with not null attribute, shouldn't we deny such inserts (i mean inserting empty fields)??? NULL and the empty string '' are *completely* different things. If you want to disallow empty strings as well as NULL, use a CHECK constraint on that column. -Doug ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
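A minimal sketch of that suggestion (the table and column names here are hypothetical, not taken from the original report):

```sql
-- NOT NULL alone only stops NULL; the CHECK also rejects the empty string.
CREATE TABLE t (
    col1 varchar(10) NOT NULL CHECK (col1 <> '')
);

-- INSERT INTO t VALUES (NULL);  -- rejected by NOT NULL
-- INSERT INTO t VALUES ('');    -- rejected by the CHECK constraint
```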
Re: [HACKERS] PostgreSQL port to pure Java?
Ivelin Ivanov [EMAIL PROTECTED] writes: Has this subject been discussed before? I did not find any references to it in the archives. I think the phrase not gonna happen was invented for this subject. :) -Doug ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [HACKERS] PostgreSQL 7.3.4 gets killed by SIG_KILL
Magnus Naeslund(t) [EMAIL PROTECTED] writes: Doug McNaught wrote: Linux is probably killing your process because it (the kernel) is low on memory. Unfortunately, this happens more often with older versions of the kernel. Add more RAM/swap or figure out how to make your query use less memory... -Doug Well this just isn't the case. There is no printout in kernel logs/dmesg (as it would be if the kernel killed it in an OOM situation). I have 1 GB of RAM, and 1.5 GB of swap (swap never touched). Ahh, that's an additional piece of information that you didn't supply earlier. ;) Though your system memory is ample, is it possible that you're hitting a ulimit() on the stack size or heap size or something? I'm not sure what signal you'd get in such a case, though. Is it possible to somehow find out what process sent the KILL (or if it's the kernel) ? Not that I know of, unless it's in a logfile somewhere. You could try strace(8) on the backend running the query--that might give you some more info. I find this very weird to say the least... Yah. You might also consider running a more recent kernel, especially with such a big machine. 2.2.X never did play that well with large amounts of RAM... -Doug ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] PostgreSQL 7.3.4 gets killed by SIG_KILL
Magnus Naeslund(t) [EMAIL PROTECTED] writes: I have this big table running on an old linux install (kernel 2.2.25). I've COPYed some tcpip logs into a table created as such: Linux is probably killing your process because it (the kernel) is low on memory. Unfortunately, this happens more often with older versions of the kernel. Add more RAM/swap or figure out how to make your query use less memory... -Doug ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [HACKERS] *sigh*
Randolf Richardson [EMAIL PROTECTED] writes: What about adding a total number of rows value to the internal header of each table which gets incremented/decremented after each row INSERT/DELETE has been committed. This way, a generic count(*) by itself could simply return this value without any delay at all. Because different sessions have a (validly) different concept of what that number should be, due to MVCC. -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] *sigh*
Gaetano Mendola [EMAIL PROTECTED] writes: Doug McNaught wrote: Because different sessions have a (validly) different concept of what that number should be, due to MVCC. The count(*) information can be versioned too, am I wrong? I'm able to create a trigger that stores the count(*) information in a special table; why not implement the same in a built-in way? This has been discussed extensively before (last time was a few months ago); please search the archives if you're curious. -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
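For reference, the trigger idea Gaetano describes can be sketched roughly as below (the table, function, and trigger names are hypothetical, and the function body uses the old quoted-body plpgsql syntax of that era). Note that every writer then serializes on the counter row, which is one reason this hasn't been built in:

```sql
CREATE TABLE rowcount (tbl name PRIMARY KEY, n bigint NOT NULL);

CREATE FUNCTION maintain_rowcount() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE rowcount SET n = n + 1 WHERE tbl = TG_RELNAME;
    ELSE  -- DELETE
        UPDATE rowcount SET n = n - 1 WHERE tbl = TG_RELNAME;
    END IF;
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER mytable_rowcount AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE maintain_rowcount();
```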
Re: [HACKERS] 7.5 Plans
Shridhar Daithankar [EMAIL PROTECTED] writes: Tom Lane wrote: 1. You can't easily generate a clean diff of your local version against the original imported from postgresql.org. The changes you actually made get buried in a mass of useless $Foo$ diff lines. Stripping those out is possible in theory but painful. Is that the reason linux does not use CVS? I thought so at least. It's probably one of a very large number of reasons. For a project on the scale of Linux, CVS basically sucks--it's too centralized and has no concept of changesets. -Doug ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] A rough roadmap for internationalization fixes
Tom Lane [EMAIL PROTECTED] writes: Peter Eisentraut [EMAIL PROTECTED] writes: I wasn't aware that glib had this. I'll look. Of course the trouble with relying on glibc is that we'd have no solution for platforms that don't use glibc. glib != glibc. glib is the low-level library used by GTK and GNOME for basic data structures, character handling etc. It's LGPL AFAIK, which would seem to rule out direct use from a licensing perspective. -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] Release cycle length
Joshua D. Drake [EMAIL PROTECTED] writes: Hello, Personally I am for long release cycles, at least for major releases. In fact as of 7.4 I think there should possibly be a slow down in releases with more incremental releases (minor releases) throughout the year. That would pretty much mean changing the 'minor releases only for serious bugfixes' philosophy. Is that what you are advocating? People are running their companies and lives off of PostgreSQL, they should be able to rely on a specific feature set, and support from the community for longer. If 7.3.4 works for you, there's nothing to stop you running it until the end of time... If you can't patch in bugfixes yourself, you should be willing to pay for support. Commercial companies like Red Hat don't support their releases indefinitely for free; why should the PG community be obligated to? Also, we very rarely remove features--AUTOCOMMIT on the server is about the only one I can think of. -Doug ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] 7.4RC1 planned for Monday
Marc G. Fournier [EMAIL PROTECTED] writes: On Thu, 30 Oct 2003, David Fetter wrote: Any chance of putting up a torrent for it? I'd be happy to host, but I'd have to get the link on the downloads page somehow :) Put up a what ... ? Google for BitTorrent. It's a pretty darn cool app but I don't think the PG source tarball is really big enough to be worth the trouble... It's great for things like CD images though. -Doug ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] O_DIRECT in freebsd
Christopher Kings-Lynne [EMAIL PROTECTED] writes: FreeBSD 4.9 was released today. In the release notes was: 2.2.6 File Systems A new DIRECTIO kernel option enables support for read operations that bypass the buffer cache and put data directly into a userland buffer. This feature requires that the O_DIRECT flag is set on the file descriptor and that both the offset and length for the read operation are multiples of the physical media sector size. Is that of any use? Linux and Solaris have had this for a while. I'm pretty sure it's been discussed before--search the archives. I think the consensus was that it might be useful for WAL writes, but would be a fair amount of work and would introduce portability issues... -Doug ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] O_DIRECT in freebsd
scott.marlowe [EMAIL PROTECTED] writes: I would think the biggest savings could come from using directIO for vacuuming, so it doesn't cause the kernel to flush buffers. Would that be just as hard to implement? Two words: cache coherency. -Doug ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Thoughts on maintaining 7.3
Joshua D. Drake [EMAIL PROTECTED] writes: Yes, please. Please, please do not force all users to accept new features in stable trees. What if the feature does break compatibility with old features? What if it is truly a new feature? One example would be that we are considering reworking pg_dump/restore a bit to support batch uploads and interactive mode. It would not break compatibility with anything but would greatly enhance one's ability to actually backup and restore large volume sets. Well, since those are separate programs and not intimately tied to the backend, you could distribute them separately for people who need them... -Doug ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] massive quotes?
Jon Jensen [EMAIL PROTECTED] writes: On Wed, 10 Sep 2003, Alvaro Herrera wrote: On Wed, Sep 10, 2003 at 10:35:18PM +0200, Andreas Pflug wrote: I never agreed that a client solution would be satisfying. While frontends might try to hide some ugliness of the syntax to the user for single functions, editing large scripts with many functions is still suffering from massive quotes. Oh, and you will be feeding those scripts to the backend through what? I don't know what he'd be using, but I use Perl/DBI for things like that. Sure, I could spawn psql instances, but it's a lot less efficient and is quite different from using DBI directly. But Perl/DBI does escaping for you, so all you'd have to do is: $sth = $dbh->prepare("CREATE FUNCTION foo(x text) RETURNS text AS ? LANGUAGE 'plpgsql'"); $sth->execute($function_body); where $function_body is the unescaped form of the function. So there's no need for a COPY-style mechanism, you can use the current CREATE FUNCTION syntax without having to escape everything yourself. The same argument applies to JDBC. -Doug ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] plpgsql doesn't coerce boolean expressions to boolean
Andreas Pflug [EMAIL PROTECTED] writes: Tom Lane wrote: 2. Throw an error if the expression doesn't return boolean. I'd opt for 2. It's quite common that newer compilers will detect more bogus coding than older ones. There might be existing functions that break from this because they rely on the current feature, but there are probably others that will throw an exception, revealing bad coding (and delivering correct results just by chance, I've seen this more than once...) I agree, and option 2 also makes sure that bad code will fail cleanly, rather than possibly changing behavior and causing data loss/corruption. I agree with another poster that deprecation in 7.4 and removal in 7.5 might make sense. -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] plpgsql doesn't coerce boolean expressions to boolean
Tom Lane [EMAIL PROTECTED] writes: Doug McNaught [EMAIL PROTECTED] writes: I agree with another poster that deprecation in 7.4 and removal in 7.5 might make sense. How would we deprecate it exactly? Throw a NOTICE? I was thinking of just a mention in the release notes that we've found this problem and intend to fix it after giving people a release cycle to adjust. Not that people read release notes... :( -Doug ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [HACKERS] pgsql in shared lib
ivan [EMAIL PROTECTED] writes: hi, is it possible to compile postgres (after some small modifications) to a shared .so or .dll, and use it like normal postgres, but without any server and so on? Not without very major code changes. -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] Win32 native port
Dann Corbit [EMAIL PROTECTED] writes: Cygwin requires a license for commercial use. Use in the sense of distributing applications linked against it, yes. In this case I don't think it's a problem. The output of 'flex' and 'bison' is not required to be GPL (there is a specific exception in the Bison license for this), and we're not distributing any Cygwin code, or any binaries linked with it, merely using it to generate parts of the PG source tree. -Doug ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [HACKERS] [DEFAULT] Daily digest v1.3954 (22 messages)
Serguei A. Mokhov [EMAIL PROTECTED] writes: On Wed, 3 Sep 2003, Alvaro Herrera wrote: On Wed, Sep 03, 2003 at 09:19:33PM -0400, Serguei A. Mokhov wrote: On the contrary, it could show the transaction level for the case of nested transactions: foo**=# Ugh... pretty ugly. What if you are within the 50th transaction block? Not that it will be common, but ... Make it a two-line thing then ;-) or a number. And how often is anyone going to do a 50-level-deep transaction interactively in psql anyhow? If the prompt is too long you should be using a script anyway... Anyhow, it's moot at the moment since we don't have nesting. -Doug ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Doubt w.r.t vacuum
Shridhar Daithankar [EMAIL PROTECTED] writes: Hi, I was just wondering over it. This is for difference between vacuum full and vacuum analyze. Can somebody enlighten, 1. IIRC vacuum recovers/reuses dead tuples generated from update but can not do so for delete? Why? YDNRC. 2. Vacuum full locks entire table, is it possible that it locks a page at a time and deal with it. It will make vacuum full non-blocking at the cost of letting it run for a longer time. Or is it that the defragmentation algorithm needs more than a page? This I don't know, but I imagine that if what you suggest was easy to do it would have been done, and there would have been no need for two different kinds of VACUUM. -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Really odd corruption problem: cannot open pg_aggregate: No such file or directory
Adam Haberlach [EMAIL PROTECTED] writes: So, one of the many machines that I support seems to have developed an incredibly odd and specific corruption that I've never seen before. Whenever a query requiring an aggregate is attempted, it spits out: cannot open pg_aggregate: No such file or directory and fails. Why not use 'strace' to see what file the backend is actually trying to open? -Doug ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Pre-allocation of shared memory ...
Bruce Momjian [EMAIL PROTECTED] writes: Tom Lane wrote: AFAIK the only good way around this problem is to use another OS with a more rational design for handling low-memory situations. No other Unix does anything remotely as brain-dead as what Linux does. Or bug your favorite Linux kernel hacker to fix the kernel. Is there no sysctl way to disable such kills? The -ac kernel patches from Alan Cox have a sysctl to control memory overcommit--you can set it to track memory usage and fail allocations when memory runs out, rather than the random kill behavior. I'm not sure whether those have made it into the stock kernel yet, but the vendor kernels (such as Red Hat's) might have it too. -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Cursors and backwards scans and SCROLL
Christopher Kings-Lynne [EMAIL PROTECTED] writes: I'd be in favour of creating whole sets of backwards-compatibility GUC's whenever we break backwards compatibility. eg. use_72_compat = yes use_73_compat = yes That sounds like a recipe for a maintenance nightmare to me. -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] regression failure in CVS HEAD
Neil Conway [EMAIL PROTECTED] writes: About 1 in every 5 runs of the (parallel) regression tests are failing for me with CVS HEAD: the triggers, inherit, vacuum, sanity_check, and misc tests fail. I can make the failures occur fairly consistently by running make check over and over again until the problem crops up. The platform is Linux 2.4, gcc 3.2. I've attached the regression.diffs file. Any ideas on what the cause might be? Hardware? If it's a software bug, you'd generally expect it to happen each and every time... -Doug ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[HACKERS] Yaarrgh! CVS remote buffer overflow
It's all over Slashdot: http://security.e-matters.de/advisories/012003.html -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] [HACKERS] I feel the need for speed. What am I doing wrong?
Dann Corbit [EMAIL PROTECTED] writes: No analyze for 7.1.3. Just ran vacuum a few minutes before the query. No boost at all. VACUUM or VACUUM ANALYZE? Standalone ANALYZE was not in 7.1 but VACUUM ANALYZE does what you need to do... -Doug ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] OS/400 support?
Justin Clift [EMAIL PROTECTED] writes: OS/400 is the operating system on the IBM AS/400 series of midrange computers: Info: http://search400.techtarget.com/sDefinition/0,,sid3_gci331973,00.html IBM AS/400 page: http://www-132.ibm.com/content/home/store_IBMPublicUSA/en_US/eServer/iSeries/ Not sure if it's Unix-y or not. Just had the question come through the Advocacy site request form. Will see if anyone else has further info, and then ask the requestor for further details if not. AFAIK it's not Unix-y *at* *all*. The best/only way to run PG on that hardware without an immense porting effort would be to run Linux alongside OS/400 in an LPAR (logical partition). -Doug ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] big text field - message type 0x44
Lee Kindness [EMAIL PROTECTED] writes: Tom Lane writes: Okay, so it seems -D_REENTRANT is the appropriate fix. We could either add that to the template/solaris file, or just add a note to FAQ_Solaris advising that it be added to the configure switches if people intend to use libpq in threaded programs. Is there any cost or downside to just adding it always in template/solaris? However, _REENTRANT is not a Solarisism... On all (recent) UNIX systems it toggles on correct handling for thread specific instances of historically global variables (eg errno). It should be considered for all platforms if libpq is intended to be used from threaded programs. I know libpq is officially non-threadsafe, but is there anything in there that would actually cause a problem, assuming either a connection per thread or proper locking on the application's part? Most of the data in the library seems to be per-connection... -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] float4 problem
Oleg Bartunov [EMAIL PROTECTED] writes: May be I miss something, but seems there is a problem with float4 in 7.2.3 and 7.3RC1 (6.5.3 works fine): test=# create table t ( a float4); CREATE TABLE test=# insert into t values (0.1); INSERT 32789 1 test=# select * from t where a=0.1; a --- (0 rows) I'm guessing this is because 0.1 is not directly representable as a binary floating point number, and literal floating constants are float8 not float4, and 0.1::float4 != 0.1::float8. Same problem that causes queries on int2 fields not to use an index unless you cast the constants in the query... -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
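A sketch of the workaround implied here, using the table from the report: cast the literal so both sides of the comparison are float4.

```sql
-- The bare literal 0.1 is a float8; the stored float4 widens to a
-- slightly different value, so a = 0.1 finds nothing.  Match the types:
SELECT * FROM t WHERE a = 0.1::float4;
```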
Re: [HACKERS] Compile problem on FreeBSD/Alpha
Christopher Kings-Lynne [EMAIL PROTECTED] writes: chriskl@alpha:~/pgsql-head$ bison --version GNU Bison version 1.28 Upgrade Bison to 1.50 or later. Earlier versions can't handle the size of the current grammar. -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Optimizer Question/Suggestion - numbers after
Philip Warner [EMAIL PROTECTED] writes: At 07:39 PM 2/11/2002 +1100, Philip Warner wrote: The latter time is actually quite good; when the machine is more heavily loaded it goes up to 1ms. We currently vacuum/analyze daily, and analyze hourly. Why not vacuum hourly (regular non-blocking vacuum, not FULL)? -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
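That schedule could look something like this in cron (the user, database name, and timing are hypothetical):

```
# Plain (non-blocking) vacuum with statistics update, every hour
0 * * * *  postgres  vacuumdb --analyze --quiet mydb
```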
Re: [Fwd: Re: [HACKERS] PG functions in Java: maybe use gcj?]
Barry Lind [EMAIL PROTECTED] writes: If we had to supply gcj along with PostgreSQL in order for PostgreSQL to work, I guess that would mean gcj was incorporated in PostgreSQL - that would mean PostgreSQL would become subject to GPL protection. Not true--mere aggregation (shipping two things together) does not require all the aggregated software to be GPL'd. And you're not linking to the gcj compiler, just running it. This particular part of the problem is a non-issue. -Doug ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Request for supported platforms
Tom Lane [EMAIL PROTECTED] writes: Bruce Momjian [EMAIL PROTECTED] writes: Folks. start sending in those platform reports, OS name and version number please. I've checked CVS tip on: HPUX 10.20, using both gcc and vendor's cc PPC Linux Mac OS X 10.1 I get the following on Linux/Sparc, Debian 3.0: make[3]: Entering directory `/home/doug/src/pgsql/src/backend/utils/mb/conversion_procs/ascii_and_mic' gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -I../../../../../../src/include -c -o ascii_and_mic.o ascii_and_mic.c ascii_and_mic.c:19: syntax error before `extern' ascii_and_mic.c:21: syntax error before `extern' make[3]: *** [ascii_and_mic.o] Error 1 make[3]: Leaving directory `/home/doug/src/pgsql/src/backend/utils/mb/conversion_procs/ascii_and_mic' make[2]: *** [all] Error 2 make[2]: Leaving directory `/home/doug/src/pgsql/src/backend/utils/mb/conversion_procs' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/doug/src/pgsql/src' make: *** [all] Error 2 My gcc version: doug@varsoon:~/src/pgsql$ gcc -v Reading specs from /usr/lib/gcc-lib/sparc-linux/2.95.4/specs gcc version 2.95.4 20011002 (Debian prerelease) This is CVS tip as of about 11:30 EST Saturday. Looking into it, we have in ascii_and_mic.c: PG_FUNCTION_INFO_V1(ascii_to_mic) PG_FUNCTION_INFO_V1(mic_to_ascii) Putting a semicolon after each such line fixes that compile, but there are other files under conversion_procs with the same problem. Is my gcc not expanding the macro properly? -Doug ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [HACKERS] Request for supported platforms
Tom Lane [EMAIL PROTECTED] writes: Doug McNaught [EMAIL PROTECTED] writes: make[3]: Entering directory `/home/doug/src/pgsql/src/backend/utils/mb/conversion_procs/ascii_and_mic' gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -I../../../../../../src/include -c -o ascii_and_mic.o ascii_and_mic.c ascii_and_mic.c:19: syntax error before `extern' ascii_and_mic.c:21: syntax error before `extern' That should be fixed as of now. OK, compile went fine, but I get multiple regression test failures: test geometry ... FAILED select_views ... FAILED foreign_key ... FAILED limit... FAILED plpgsql ... FAILED copy2... FAILED temp ... FAILED domain ... FAILED rangefuncs ... FAILED prepare ... FAILED without_oid ... FAILED conversion ... FAILED truncate ... FAILED alter_table ... FAILED I have attached a gzipped copy of regression.diffs. Let me know if I can supply any other help. -Doug regression.diffs.gz Description: regression diffs ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Request for supported platforms
Doug McNaught [EMAIL PROTECTED] writes: OK, compile went fine, but I get multiple regression test failures: test geometry ... FAILED After realizing that my disk had filled up (thanks Alvaro) I reran the tests and 'geometry' is the only failure. I'm guessing this is due to floating-point differences? If this is OK, then Linux/Sparc (Debian 3.0) is a PASS. -Doug ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Postgresql and multithreading
D. Hageman [EMAIL PROTECTED] writes: This in many ways is a bogus argument in that 1) postgresql runs on more than just Linux and 2) amount of memory that can be addressed by a process is tunable up to the point that it reaches a hardware limitation. 1) The OP specifically asked about Linux threads. 2) True up to a point--Linux (and most other Unices) reserve some part of the VM address space for the kernel. On 64-bit this is a non-issue, on 32-bit it's quite important now that you can put 4+GB in a machine. It also should be noted that when a process reaches such a size that it better have a good reason. Now let us do a gedanken experiment and say you do have a good reason - fork a couple of these and your machine will thrash like nothing else ... also that whole hardware limitation will come into play sooner rather than later ... True enough. The only real use I can see for gobs of memory on a 32-bit PAE machine with PG is to give each process its own big hunk of 'sortmem' for doing large sorts. If you have 64 GB in the machine setting 'sortmem' to 1GB or so starts to look reasonable... -Doug ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Postgresql and multithreading
Steve Wolfe [EMAIL PROTECTED] writes: On the recurring debate of threading vs. forking, I was giving it a few thoughts a few days ago, particularly with concern to Linux's memory model. On IA32 platforms with over 4 gigs of memory, any one process can only see up to 3 or 4 gigs of that. Having each postmaster fork off as a new process obviously would allow a person to utilize very copious quantities of memory, assuming that (a) they were dealing with concurrent PG sessions, and (b) PG had reason to use the memory. I'm not entirely clear on threading in Linux - would it provide the same benefits, or would it suddenly lock you into a 3-gig memory space? Linux threads are basically processes that share the same VM space, so you'd be limited to 3GB or whatever, since that's what a VM space can see. -Doug ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] [GENERAL] Security implications of (plpgsql) functions
Joe Conway [EMAIL PROTECTED] writes: Tom Lane wrote: A depth limit for PL-function recursion is perhaps feasible, but I can't say that I care for it a whole lot ... anyone have better ideas? Is there any way to recognize infinite recursion by analyzing the saved execution tree -- i.e. can we assume that a function that calls itself, with the same arguments with which it was called, constitutes infinite recursion? Solved the halting problem lately? ;) Someone determined to DoS could probably get around any practical implementation of your idea, using dummy argument, mutual recursion or whatever. -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Use of sync() [was Re: Potential Large Performance Gain in WAL synching]
Tom Lane [EMAIL PROTECTED] writes: Doug McNaught [EMAIL PROTECTED] writes: In my understanding, it means all currently dirty blocks in the file cache are queued to the disk driver. The queued writes will eventually complete, but not necessarily before sync() returns. I don't think subsequent write()s will block, unless the system is low on buffers and has to wait until dirty blocks are freed by the driver. We don't need later write()s to block. We only need them to not hit disk before the sync-queued writes hit disk. So I guess the question boils down to what 'queued to the disk driver' means --- has the order of writes been determined at that point? It's certainly possible that new write(s) get put into the queue alongside old ones--I think the Linux block layer tries to do this when it can, for one. According to the manpage, Linux used to wait until everything was written to return from sync(), though I don't *think* it does anymore. But that's not mandated by the specs. So I don't think we can rely on such behavior (not reordering writes across a sync()), though it will probably happen in practice a lot of the time. AFAIK there isn't anything better than sync() + sleep() as far as the specs go. Yes, it kinda sucks. ;) -Doug ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Proposed LogWriter Scheme, WAS: Potential Large Performance Gain in WAL synching
Tom Lane [EMAIL PROTECTED] writes: Curtis Faith [EMAIL PROTECTED] writes: The log file would be opened O_DSYNC, O_APPEND every time. Keep in mind that we support platforms without O_DSYNC. I am not sure whether there are any that don't have O_SYNC either, but I am fairly sure that we measured O_SYNC to be slower than fsync()s on some platforms. And don't we preallocate WAL files anyway? So O_APPEND would be irrelevant? -Doug ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Use of sync() [was Re: Potential Large Performance Gain in WAL synching]
Tom Lane [EMAIL PROTECTED] writes: In practice I am not sure there is a problem. The local man page for sync() says The writing, although scheduled, is not necessarily complete upon return from sync. Now if scheduled means will occur before any subsequently-commanded write occurs then we're fine. I don't know if that's true though ... In my understanding, it means all currently dirty blocks in the file cache are queued to the disk driver. The queued writes will eventually complete, but not necessarily before sync() returns. I don't think subsequent write()s will block, unless the system is low on buffers and has to wait until dirty blocks are freed by the driver. -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] [GENERAL] Performance while loading data and indexing
Bruce Momjian [EMAIL PROTECTED] writes: Can anyone clarify if data=writeback is safe for PostgreSQL. Specifically, are the data files recovered properly or is this option only for a filesystem containing WAL? data=writeback means that no data is journaled, just metadata (which is like XFS or Reiser). An fsync() call should still do what it normally does, commit the writes to disk before returning. data=journal journals all data and is the slowest and safest. data=ordered writes out data blocks before committing a journal transaction, which is faster than full data journaling (since data doesn't get written twice) and almost as safe. data=writeback is noted to keep obsolete data in the case of some crashes (since the data may not have been written yet) but a completed fsync() should ensure that the data is valid. So I guess I'd probably use data=ordered for an all-on-one-fs installation, and data=writeback for a WAL-only drive. Hope this helps... -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
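As a concrete (hypothetical) illustration of that split-filesystem recommendation, the fstab entries might look like this -- device names and mount points are invented for the example:

```
# /etc/fstab -- hypothetical layout
# data directory: journal data-block ordering for extra safety
/dev/sda2  /var/lib/pgsql      ext3  defaults,data=ordered    1 2
# WAL-only drive: metadata journaling is enough; fsync() still works
/dev/sdb1  /var/lib/pgsql/wal  ext3  defaults,data=writeback  1 2
```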
Re: [HACKERS] [GENERAL] Performance while loading data and indexing
Tom Lane [EMAIL PROTECTED] writes: We'd be happiest with a filesystem that journals its own metadata and not the user data in the file(s). I dunno if there are any. ext3 with data=writeback? (See my previous message to Bruce). -Doug ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] [GENERAL] Performance while loading data and indexing
Bruce Momjian [EMAIL PROTECTED] writes: Doug McNaught wrote: Tom Lane [EMAIL PROTECTED] writes: We'd be happiest with a filesystem that journals its own metadata and not the user data in the file(s). I dunno if there are any. ext3 with data=writeback? (See my previous message to Bruce). OK, so that makes ext3 crash safe without lots of overhead? Metadata is journaled so you shouldn't lose data blocks or directory entries. Some data blocks (that haven't been fsync()'ed) may have old or wrong data in them, but I think that's the same as ufs, right? And WAL replay should take care of that. It'd be very interesting to do some tests of the various journaling modes. I have an old K6 that I might be able to turn into a hit-the-reset-switch-at-random-times machine. What kind of tests should be run? -Doug ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [HACKERS] many idle processes
John Liu [EMAIL PROTECTED] writes: I tried to understand what causes too many pgsql idle processes. Can the postmaster automatically age and clean up those unused idle processes? Those processes are attached to open client connections. If you don't like them, change your client to close connections when it's not using them, but your app will be slower since creating a new connection (and backend process) takes some time. Is there a catalog to track those psql processes - what their functions are, who issued them, etc.? There is one backend process per open client connection, plus the postmaster, which handles creating new connections. -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] Vote totals for SET in aborted transaction
Marc G. Fournier [EMAIL PROTECTED] writes: Just as a stupid question here ... but, why do we wrap single queries into a transaction anyway? IMHO, a transaction is meant to tell the backend to remember this sequence of events, so that if it fails, you can roll it back ... with a single INSERT/UPDATE/DELETE, why 'auto-wrapper' it with a BEGIN/END? Well, a single query (from the user's perspective) may involve a function call that itself executes one or more other queries. I think you want these to be under transactional control. Plus, it's my understanding that the whole MVCC implementation depends on everything being in a transaction. -Doug ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] PSQL \x \l command issues
Shad [EMAIL PROTECTED] writes: I just recently upgraded from 7.0.x to 7.2.1. I installed from postgresql-7.2.1-2PGDG.i386.rpm on a Linux Redhat 7.1 system. I was able to resolve most dependencies, except for it telling me that I needed libreadline.so.4, which ldconfig -p|grep readline showed me I already had, so forced a --nodeps on it. Here's a self-explanatory paste of what happens when I use \x or \l in PSQL It looks like you may still have some of the old installation around--what does select version(); tell you? -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] new food for the contrib/ directory
Alexandre Dulaunoy [EMAIL PROTECTED] writes: first comment : * a special directory with ./contrib/gpl ? Doesn't really change anything. second comment : * I don't really understand your position regarding the GNU General Public License. The GPL is offering multiple advantages for a big project and software like PostgreSQL. For example : Not open for discussion. See the FAQ. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Index Scans become Seq Scans after VACUUM ANALYSE
mlw [EMAIL PROTECTED] writes: It took a bike ride to think about this one. The supposed advantage of a sequential read over a random read, in an active multitasking system, is a myth. Disagree. Execute a number of queries at the same time, and the expected benefit of a sequential scan goes out the window. The OS will be fetching blocks, more or less, at random. If readahead is active (and it should be for sequential reads) there is still a pretty good chance that the next few disk blocks will be in cache next time you get scheduled. If your disk is thrashing that badly, you need more RAM and/or more spindles; using an index will just put even more load on the i/o system. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [HACKERS] Suggestion for optimization
Dann Corbit [EMAIL PROTECTED] writes: It would be nice if total table cardinality could be maintained live. So (after the initial vacuum) we update the cardinality for each table in the system table (or perhaps add an entry to the table itself). There are two reasons why this is an important optimization. Firstly, it is a psychological benefit for both benchmarks and customers when doing a select count(*) from tablename. This is something that pops up all the time in benchmarks and customers do it too, in order to get a feel for speed. By storing the current number and incrementing for every insert and decrementing for every delete, the count(*) case with no where clause can return the value instantly. How would this work with MVCC? -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Suggestion for optimization
Dann Corbit [EMAIL PROTECTED] writes: If this is true (even after a commit) then MVCC is a very bad thing. No transactions occur, and two people ask the same question yet get different answers. Doesn't that scare anyone? That would mean (among other things) that Postgresql cannot be used for a data warehouse. Have you read the doc chapter about MVCC? Sounds like you don't quite understand how it works yet. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [HACKERS] Suggestion for optimization
Dann Corbit [EMAIL PROTECTED] writes: How would this work with MVCC? Whenever a commit occurs, the pending inserts are totaled into the sum and the pending deletes are subtracted. It can be a list in memory or whatever. Maybe you are referring to the old (expired) rows being stored until vacuum? Perhaps I really don't understand your question or the issues involved. Why does MVCC complicate issues? Because the row count depends on what transactions have committed when yours starts. Also, you will see the count(*) reflecting INSERTs in your transaction, but others won't until you commit. So there is no well-defined concept of cardinality under MVCC--it depends on which rows are visible to which transactions. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
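The visibility rule Doug is describing can be demonstrated with a toy model (this is an illustration of snapshot visibility, not PostgreSQL's actual tuple-header logic; the transaction ids and helper names are invented):

```python
# Sketch: each tuple records the inserting transaction id (xmin).
# A snapshot sees a tuple only if its inserting transaction had
# committed when the snapshot was taken, or is the viewer's own.
tuples = []      # list of (xmin, payload)
committed = set()

def insert(xid, payload):
    tuples.append((xid, payload))

def count_visible(snapshot, own_xid=None):
    return sum(1 for xmin, _ in tuples
               if xmin in snapshot or xmin == own_xid)

insert(100, "row A")
committed.add(100)
snap1 = set(committed)     # transaction T1 takes its snapshot here

insert(101, "row B")       # T2 inserts but has not yet committed

# T1 and T2 now disagree about count(*): there is no single
# "true" cardinality to keep in a shared counter.
t1_count = count_visible(snap1)                # T1 sees 1 row
t2_count = count_visible(snap1, own_xid=101)   # T2 sees 2 rows
```

A single maintained total, as proposed, would have to be wrong for at least one of these two concurrent viewers.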
Re: [HACKERS] [GENERAL] Re : Solaris Performance - Profiling (Solved)
mlw [EMAIL PROTECTED] writes: I noticed poor performance on Solaris, does one see this problem when compiling PostgreSQL with gcc on Solaris? Since it's libc that's the culprit, I would imagine so. As a suggestion, why not find the *best* version of qsort available, anywhere, and always use that version on all platforms? Because qsort() is *supposed* to be optimized by the vendor for their platform, perhaps even written in assembler. It makes sense to trust the vendor except when their implementation is provably pessimized. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] [GENERAL] Re : Solaris Performance - Profiling (Solved)
mlw [EMAIL PROTECTED] writes: Because qsort() is *supposed* to be optimized by the vendor for their platform, perhaps even written in assembler. It makes sense to trust the vendor except when their implementation is provably pessimized. Perhaps *supposed* to be optimized, but, in reality, are they? Is it a realistic expectation? I think most vendors do a pretty good job. Don't forget, optimizing a routine like that depends a lot on the cache size and behavior of the CPU and other architecture-dependent stuff. qsort() is a great sort for very random data; when data is mostly in the correct order, it is very bad. Perhaps replacing it with an alternate sort would improve performance in general. Actually, the C standard says nothing about what algorithm should be used for qsort(); it's simply supposed to be a fast in-memory sort. The qsort() name is just a historical artifact. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
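The already-ordered pathology mlw mentions applies to naive pivot selection, which decent libc implementations avoid (median-of-three pivots, or a different algorithm entirely). A toy illustration of the degenerate case, assuming a first-element pivot:

```python
import random
import sys

# Sketch: a naive first-element-pivot quicksort. Returns the sorted
# list and the maximum recursion depth reached.
def quicksort(xs, depth=0):
    if len(xs) <= 1:
        return xs, depth
    pivot, rest = xs[0], xs[1:]
    lo, d1 = quicksort([x for x in rest if x < pivot], depth + 1)
    hi, d2 = quicksort([x for x in rest if x >= pivot], depth + 1)
    return lo + [pivot] + hi, max(d1, d2)

sys.setrecursionlimit(10000)
n = 500
_, depth_random = quicksort(random.sample(range(n), n))
_, depth_sorted = quicksort(list(range(n)))

# On already-sorted input the naive pivot degenerates to one
# recursion level per element (depth n-1); random input stays
# near the expected O(log n) depth.
```

So whether "mostly sorted" input is bad depends entirely on the implementation, which is exactly why the standard leaves the algorithm unspecified.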
Re: [HACKERS] [GENERAL] v7.2.1 Released: Critical Bug Fix
Dean Hill [EMAIL PROTECTED] writes: I was wondering if it is documented as to exactly how to do a minor upgrade. I've not been able to find it in the past, and I end up doing a full install, dump/reload. I'm running postgresql on nt/2000 using cygwin. Thanks -Dean Minor upgrades do not require a dump/restore; the on-disk file format remains the same. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] pg_dump 2GB limit?
Laurette Cisneros [EMAIL PROTECTED] writes: The archives search is not working on postgresql.org so I need to ask this question... We are using postgresql 7.2 and when dumping one of our larger databases, we get the following error: File size limit exceeded (core dumped) We suspect pg_dump. Is this true? Why would there be this limit in pg_dump? Is it scheduled to be fixed? This means one of two things: 1) Your ulimits are set too low, or 2) Your pg_dump wasn't compiled against a C library with large file support (greater than 2GB). Is this on Linux? -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
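Cause #1 can be checked from a script before rebuilding anything; a sketch using Python's resource module (Unix-only, and the 2GB threshold here is just the limit mentioned above):

```python
import resource

# Sketch: see whether a per-process file-size ulimit could explain
# a dump dying with "File size limit exceeded".
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)

two_gb = 2 * 1024 ** 3
if soft != resource.RLIM_INFINITY and soft < two_gb:
    print("ulimit -f caps files below 2GB: soft limit =", soft)
else:
    print("file-size ulimit looks fine; suspect missing large-file "
          "support in the C library pg_dump was built against")
```

The same check is `ulimit -f` in the shell that launches pg_dump.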
Re: [HACKERS] pg_dump 2GB limit?
Laurette Cisneros [EMAIL PROTECTED] writes: Hi, I'm on Red Hat. Here's the uname info: Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown That's an old and buggy kernel, BTW--you should install the errata upgrades. What do I need to do to turn on large file support in the compile? Make sure you are running the latest kernel and libs, and AFAIK 'configure' should set it up for you automatically. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Again, sorry, caching.
Jeff Davis [EMAIL PROTECTED] writes: A library implies that the application is running long enough to actually hear the notification. Web apps start up, read from the database, and before any cache is needed they're done and the next one starts up, reading again from the database. Only currently open connections receive the notification. If your web app works this way then you already don't care about performance. People doing scalable web apps these days use connection pooling and session data kept in memory, so you already have a persistent layer running (whether it's your JVM, Apache process for mod_perl or PHP, or whatever). Really big apps definitely have a long-running daemon process that handles caching, session management (so you can have multiple webservers) etc etc... -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
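The pooling pattern Doug describes boils down to something like this sketch (FakeConnection is a hypothetical stand-in for a real driver connection; a production pool would also handle broken connections and timeouts):

```python
import queue

# Sketch: a fixed set of long-lived connections handed out to
# short-lived web requests, so per-request connection setup
# (and backend fork) cost disappears.
class FakeConnection:
    def query(self, sql):
        return "result of %s" % sql

class ConnectionPool:
    def __init__(self, size, connect=FakeConnection):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())   # connections created once, up front

    def get(self, timeout=5):
        return self._pool.get(timeout=timeout)  # blocks if pool exhausted

    def put(self, conn):
        self._pool.put(conn)            # returned, not closed

pool = ConnectionPool(size=2)
conn = pool.get()
result = conn.query("SELECT 1")
pool.put(conn)   # the connection outlives the request that used it
```

Because the pooled connections persist across requests, they are also exactly the kind of long-lived clients that can usefully LISTEN for cache-invalidation notifications.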
Re: [HACKERS] My only post with regard to query caching
Adam Haberlach [EMAIL PROTECTED] writes: MySQL is doing some things right. They are providing useful documentation. They are adding features that target the web market (they may be adding them incorrectly, however). If we expect PostgreSQL to beat MySQL in anything but My database is transactionally secure and We have a GEQO optimizer pissing wars, we'll need to start becoming a little more competitive in the raw speed arena. I feel that this optimization, while it may not be trivial, is fairly low-hanging fruit that can help. I may even try to implement it, but I make no guarantees. Looks like the onus is on you and mlw to come up with a design for the query cache mechanism, based on knowledge of PG internals, that intelligently addresses ACID and MVCC issues, and propose it. I think the core developers would certainly be willing to look at such a design proposal. Then, if they like it, you get to implement it. ;) In other words, and I say this in the nicest possible way, talk is cheap. -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Platform comparison ...
Dale Anderson [EMAIL PROTECTED] writes: Hello Group, [snip: why would PG be better than MSSQL?] Better isn't meaningful except in the context of the problem you're trying to solve. There will be some problems where PG is right, some where MSSQL works better, and some where neither is the best choice. Reasons you might prefer PG: * No licensing costs, period * Runs on free operating systems * Runs on Unix, if you prefer that as a server environment * Object-relational technology * Extensibility (not only functions, but datatypes, index types, etc) * Open Source (no vendor lockin) Reasons you might prefer MSSQL: * Need for MS extensions * Easier setup (perhaps) for non-DBA/sysadmin types * Management's desire for single-source * Performance advantages for some workloads * Windows server environment (PG runs on Windows, but only through a Unix emulation layer--I personally wouldn't run it in production, but then again I wouldn't run Windows in production:) Both offer commercial support, ACID compliance, stored procedures/functions, and the other stuff that people expect from a real database. Hope this helps... -Doug -- Doug McNaught Wireboard Industries http://www.wireboard.com/ Custom software development, systems and network consulting. Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD... ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]