>
>> Actually, I think it's a lot more accurate to compare PostgreSQL and
>> MySQL as FreeBSD vs Linux from about 5 years ago. Back then FreeBSD was
>> clearly superior from a technology standpoint, and clearly playing
>> second-fiddle when it came to users. And now, Linux is actually
>> technicall
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>> Mark Woodward wrote:
>>> Again, there is so much code for MySQL, a MySQL emulation layer, MEL
>>> for
>>> short, could allow plug and play compatibility for open source, and
>>> closed
>>>
> Jim C. Nasby wrote:
>> Maybe a compatibility layer isn't worth doing, but I certainly think
>> it's very much worthwhile for the community to do everything possible to
>> encourage migration from MySQL. We should be able to lay claim to the most
>> advanced and most popular OSS database.
>>
>
> We'll
> Jim C. Nasby wrote:
>> On Wed, May 17, 2006 at 09:35:34PM -0400, John DeSoi wrote:
>>> On May 17, 2006, at 8:08 PM, Mark Woodward wrote:
>>>
>>>> What is the best way to go about creating a "plug and play,"
>>>> PostgreSQL
Sorry to interrupt, but I have had the "opportunity" to have to work with
MySQL. This nice little gem is packed away in the reference for
mysql_use_result().
"On the other hand, you shouldn't use mysql_use_result() if you are doing
a lot of processing for each row on the client side, or if the out
> Mark Woodward wrote:
>>>After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] ("Mark
>>>Woodward") belched out:
>
>>>I'm not keen on the Windows .ini file style sectioning; that makes it
>>>look like a mix between a shell scr
> After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] ("Mark
> Woodward") belched out:
>>> Mark Woodward wrote:
>> Like I have repeated a number of times, sometimes, there is more than
>> one
>> database cluster on a machine. The pr
> Mark Woodward wrote:
>> > Mark,
>> >
>> >> Well, I'm sure that one "could" use debian's solution, but that's the
>> >> problem, it isn't PostgreSQL's solution. Shouldn't PostgreSQL provide
>> >>
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> My frustration level often kills any desire to contribute to open
>> source.
>> Sometimes, I think that open source is doomed. The various projects I
>> track and use are very frustrating, they remind me
> Mark,
>
>> Well, I'm sure that one "could" use debian's solution, but that's the
>> problem, it isn't PostgreSQL's solution. Shouldn't PostgreSQL provide
>> the mechanisms? Will debian support FreeBSD? NetBSD? Is it in the
>> PostgreSQL admin manual?
>>
>> We are talking about a feature, like pg_
> On Mon, Feb 27, 2006 at 11:48:50AM -0500, Mark Woodward wrote:
>> Well, I'm sure that one "could" use debian's solution, but that's the
>> problem, it isn't PostgreSQL's solution. Shouldn't PostgreSQL provide
>> the
>> mecha
> On Mon, Feb 27, 2006 at 09:39:59AM -0500, Mark Woodward wrote:
>> It isn't just "an" environment variable, it is a number of variables and
>> a
>> mechanism. Besides, "profile," from an admin's perspective, is for
>> managing users, not
> Mark Woodward wrote:
>> > If you require a policy, then YOU are free to choose the policy that
>> > YOU need. You're not forced to accept other peoples' policies that
>> > may conflict with things in your environment.
>>
>> The problem
> Mark Woodward wrote:
>
>> I'm not sure that I agree. At least in my experience, I wouldn't have
>> more
>> than one installation of PostgreSQL in a production machine. It is
>> potentially problematic.
>>
>
> I agree with you for production env
> Quoth [EMAIL PROTECTED] ("Mark Woodward"):
>>> Mark Woodward wrote:
>>>> As a guy who administers a lot of systems, sometimes over the span of
>>>> years, I cannot overstate the need for "a" place for the admin to
>>>> fi
> Mark Woodward wrote:
>
>> As a guy who administers a lot of systems, sometimes over the span of
>> years, I cannot overstate the need for "a" place for the admin to
>> find
>> what databases are on the machine and where they are located.
>>
>
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>>> pg_config --sysconfdir
>
>> Hmm, that doesn't show up with pg_config --help.
>
> It's in 8.1.
>
>> One of my difficulties with PostgreSQL is that there is no
>> "standardize
> Mark Woodward wrote:
>> The pg_config program needs to display more information, specifically
>> where the location of pg_service.conf would reside.
>
> pg_config --sysconfdir
Hmm, that doesn't show up with pg_config --help.
[EMAIL PROTECTED]:~$ pg_config --sys
The pg_config program needs to display more information, specifically
where the location of pg_service.conf would reside.
Also, I know I've been harping on this for years (literally), but since
the PostgreSQL programs already have the notion that there is some static
directory in which to locate f
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>
>> DNS isn't always a better solution than /etc/hosts, both have their pros
>> and cons. The /etc/hosts file is very useful for "instantaneous,"
>> reliable, and redundant name lookups. DNS se
> Mark Woodward wrote:
>> Don't get me wrong, DNS, as it is designed, is PERFECT for the
>> distributed nature of the internet, but replication of fairly static
>> data under the control of a central authority (the admin) is better.
>
> What about this zeroconf/
> Martijn van Oosterhout writes:
>> I think the major issue is that most such systems (like RFC2782) deal
>> only with finding the hostname:port of the service and don't deal with
>> usernames/passwords/dbname. What we want is a system that not only
>> finds the service, but tells you enough to co
> On Sun, 2006-02-19 at 10:00 -0500, Mark Woodward wrote:
>> > On Fri, Feb 03, 2006 at 08:05:48AM -0500, Mark Woodward wrote:
>> >> Like I said, in this thread of posts, yes there are ways of doing
>> this,
>> >> and I've been doing it for years. I
> On Sun, Feb 19, 2006 at 10:00:01AM -0500, Mark Woodward wrote:
>> > It turns out what you like actually exists, lookup the "service"
>> > parameter in the connectdb string. It will read the values for the
>> > server, port, etc from a pg_service.conf fi
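The "service" parameter mentioned just above reads connection settings from pg_service.conf. A minimal sketch, written to the current directory for illustration (the real file lives under the directory reported by `pg_config --sysconfdir`; the service name, host, and port here are hypothetical):

```shell
# Hypothetical service entry; section name, host, and port are invented here.
cat > pg_service.conf <<'EOF'
[freedb]
host=db1.example.com
port=5433
dbname=freedb
EOF
# A client would then connect with something like: psql "service=freedb"
grep -c '^\[freedb\]' pg_service.conf
```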
> On Fri, Feb 03, 2006 at 08:05:48AM -0500, Mark Woodward wrote:
>> Like I said, in this thread of posts, yes there are ways of doing this,
>> and I've been doing it for years. It is just one of the rough edges that
>> I
>> think could be smoother.
>>
>
> Mark Woodward wrote:
>
>>>If I am a road warrior I want to be able to connect, run my dynamic dns
>>>client, and go.
>>>
>>>
>>>
>>In your scenario of working as a road warrior, you are almost
>>certainly not going to be able to ha
>
> If I am a road warrior I want to be able to connect, run my dynamic dns
> client, and go.
>
> HUPing the postmaster every 30 minutes sounds horrible, and won't work
> for what strikes me as the scenario that needs this most. And we surely
> aren't going to build TTL logic into postgres.
>
> I
> Mark Woodward wrote:
>
>>>Added to TODO:
>>>
>>>o Allow pg_hba.conf to specify host names along with IP
>>> addresses
>>>
>>> Host name lookup could occur when the postmaster reads the
>>> pg_hba.
>
> Added to TODO:
>
> o Allow pg_hba.conf to specify host names along with IP addresses
>
> Host name lookup could occur when the postmaster reads the
> pg_hba.conf file, or when the backend starts. Another
> solution would be to reverse lookup the connection
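If the TODO item above were implemented, an entry might look like the hypothetical first data line below. This is only a sketch: the pg_hba.conf of that era accepts IP/CIDR addresses, so the host-name form is assumed, not real syntax.

```shell
# First data line is the hypothetical host-name form; the second is the
# form that already exists (IP address with CIDR mask).
cat > pg_hba.conf.sketch <<'EOF'
# TYPE  DATABASE  USER  ADDRESS              METHOD
host    all       all   laptop.example.com   md5
host    all       all   192.168.0.0/24       md5
EOF
awk '$4 == "laptop.example.com" {print "hostname entry present"}' pg_hba.conf.sketch
```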
I think we've talked about this a couple times over the years, but I'm not
sure whether it was resolved or not.
The post about load testing with SQLite showed PostgreSQL poorly.
Yea, I know, it was the Windows port not being optimized, I can see that,
but it raises something else. A good set of bas
> On 2/11/06, Andrej Ricnik-Bay wrote:
>> Has anyone here seen this one before? Do the values
>> appear realistic?
>>
>> http://www.sqlite.org/cvstrac/wiki?p=SpeedComparison
>
> The values appear to originate from an intrinsically flawed test setup.
>
> Just take the first test. The database has t
> Mark Woodward wrote:
>> My question was based on an observation that ANALYZE and VACUUM are
>> necessary, both for different reasons. The system or tools must be
>> able to detect substantial changes in the database and at least run
>> analyze if failing to do so wou
> Mark Woodward wrote:
>> I know this is a kind of stupid question, but postgresql does not
>> behave well when the system changes in a major way without at least
>> an analyze. There must be something that can be done to protect the
>> casual user (or busy sometimes ab
I was thinking about how forgetting to run analyze while developing a table
loader program caused PostgreSQL to run away and use up all the memory.
Is there some way that postgres or psql can know that it substantially
altered the database and run analyze?
I know this is a kind of stupid question, bu
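One way to protect a loader like the one described is to make the load script itself responsible for refreshing statistics. A sketch with hypothetical table and file names (written to a file, since it needs a live cluster to run):

```shell
# Hypothetical loader script: bulk COPY followed immediately by ANALYZE,
# so the planner never sees the freshly loaded table with stale statistics.
cat > load.sql <<'EOF'
COPY cdtitles FROM '/data/cdtitles.csv' WITH CSV;
ANALYZE cdtitles;
EOF
grep -c 'ANALYZE' load.sql
```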
> On Fri, Feb 10, 2006 at 09:57:12AM -0500, Mark Woodward wrote:
>> > In most practical situations, I think
>> > exceeding work_mem is really the best solution, as long as it's not
>> > by more than 10x or 100x. It's when the estimate is off by many
> Rick Gigger <[EMAIL PROTECTED]> writes:
>> However if hashagg truly does not obey the limit that is supposed to
>> be imposed by work_mem then it really ought to be documented. Is
>> there a misunderstanding here and it really does obey it? Or is
>> hashagg an exception but the other work_mem a
> Martijn van Oosterhout writes:
>> When people talk about disabling the OOM killer, it doesn't stop the
>> SIGKILL behaviour,
>
> Yes it does, because the situation will never arise.
>
>> it just causes the kernel to return -ENOMEM for
>> malloc() much much earlier... (ie when you still actually
> Stephen Frost <[EMAIL PROTECTED]> writes:
>
>> * Tom Lane ([EMAIL PROTECTED]) wrote:
>> > Greg Stark <[EMAIL PROTECTED]> writes:
>> > > It doesn't seem like a bad idea to have a max_memory parameter that
>> if a
>> > > backend ever exceeded it would immediately abort the current
>> > > transactio
> On Thu, Feb 09, 2006 at 02:03:41PM -0500, Mark Woodward wrote:
>> > "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> >> Again, regardless of OS used, hashagg will exceed "working memory" as
>> >> defined in postgresql.conf.
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Again, regardless of OS used, hashagg will exceed "working memory" as
>> defined in postgresql.conf.
>
> So? If you've got OOM kill enabled, it can zap a process whether it's
> stri
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> I think it is still a bug. While it may manifest itself as a pg crash on
>> Linux because of a feature with which you have issue, the fact remains
>> that PG is exceeding its working memory limit.
>
> The p
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Still, I would say that this is extremely bad behavior for not having
>> stats, wouldn't you think?
>
> Think of it as a kernel bug.
While I respect your viewpoint that the Linux kernel should not kill an
o
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> -> HashAggregate (cost=106527.68..106528.68 rows=200
>> width=32)
>>Filter: (count(ucode) > 1)
>>-> Seq Scan on cdtitles (cost=0.00..96888.12
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> PostgreSQL promptly uses all available memory for the query and
>> subsequently crashes.
>
> I'll bet a nickel this is on a Linux machine with OOM kill enabled.
> What does the postmaster log show
More info: the machine has 512M RAM and 512M swap
Work mem is set to: work_mem = 1024
This shouldn't have crashed, should it?
> PostgreSQL promptly uses all available memory for the query and
> subsequently crashes.
>
> I'm sure it can be corrected with a setting, but should it crash?
>
> freedb=#
PostgreSQL promptly uses all available memory for the query and
subsequently crashes.
I'm sure it can be corrected with a setting, but should it crash?
freedb=# create table ucode as select distinct ucode from cdtitles group
by ucode having count(ucode)>1 ;
server closed the connection unexpectedly
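A hedged workaround sketch for a query like the one above: run ANALYZE first so the planner has real row estimates instead of its default guess, and optionally disable hash aggregation so the sort-based path (which spills to disk) is chosen. The table name comes from the thread; the settings are illustrative, and the script is written to a file since it needs the live cluster from the thread.

```shell
# work_mem is an integer in kilobytes in the 8.x releases discussed here.
cat > workaround.sql <<'EOF'
ANALYZE cdtitles;
SET work_mem = 32768;
SET enable_hashagg = off;
CREATE TABLE ucode AS
  SELECT ucode FROM cdtitles GROUP BY ucode HAVING count(ucode) > 1;
EOF
grep -c '^SET' workaround.sql
```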
>
>
> Q Beukes wrote:
>
>>Hello,
>>
>>Is there not some other alternative to pg_hba.conf?
>>
>>I have the problem where the system administrators at our company
>>obviously have access to the whole filesystem, and our database records
>>needs to be hidden even from them.
>>
>>With pg_hba.conf that
> Hello,
>
> Is there not some other alternative to pg_hba.conf?
>
> I have the problem where the system administrators at our company
> obviously have access to the whole filesystem, and our database records
> needs to be hidden even from them.
If they have full access, then they have FULL access
> On Mon February 6 2006 05:17, Mark Woodward wrote:
>> I posted some source to a shared memory sort of thing to the group, as
>> well as to you, I believe.
>
> Indeed, and it looks rather interesting. I'll have a look through it
> when
> I
> have a
> On Sun February 5 2006 16:16, Tom Lane wrote:
>> AFAICT the data structures you are worried about don't have any readily
>> predictable size, which means there is no good way to keep them in
>> shared memory --- we can't dynamically resize shared memory. So I think
>> storing the rules in a tabl
Hi!!
I was just browsing the message and saw yours. I have actually written a
shared memory system for PostgreSQL.
I've done some basic bench testing, and it seems to work, but I haven't
given it the big QA push yet.
My company, Mohawk Software, is going to release a bunch of PostgreSQL
extenss
>
> On Feb 3, 2006, at 6:47 AM, Chris Campbell wrote:
>
>> On Feb 3, 2006, at 08:05, Mark Woodward wrote:
>>
>>> Using the "/etc/hosts" file or DNS to maintain host locations for
>>> is a
>>> fairly common and well known practice, but there
> On Feb 3, 2006, at 12:43, Rick Gigger wrote:
>
>> If he had multiple ips couldn't he just make them all listen only
>> on one specific ip (instead of '*') and just use the default port?
>
> Yeah, but the main idea here is that you could use ipfw to forward
> connections *to other hosts* if you wa
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>
>> The point is, that I have been working with this sort of "use case" for
>> a
>> number of years, and being able to represent multiple physical databases
>> as one logical db server would m
> Mark Woodward schrieb:
> ...
>> Unless you can tell me how to insert live data and indexes to a cluster
>> without having to reload the data and recreate the indexes, then I
>> hardly
>> think I am "misinformed." The ad hominem attack wasn't nessisa
> Mark Woodward wrote:
>> From an administration perspective, a single point of admin would
>> seem like a logical and valuable objective, no?
>
> I don't understand why you are going out of your way to separate your
> databases (for misinformed reasons, it appears) an
> On Thu, 2 Feb 2006, Mark Woodward wrote:
>
>> Now, the answer, obviously, is to create multiple postgresql database
>> clusters and run postmaster for each logical group of databases, right?
>> That really is a fine idea, but
>>
>> Say, in pgsql, I do th
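The "one cluster per logical group" answer referred to above boils down to separate data directories and ports. A sketch, with hypothetical paths and ports, written to a file since it needs the server binaries to run:

```shell
# Two independent clusters, each with its own system catalogs and port.
cat > clusters.sh <<'EOF'
initdb -D /var/lib/pgsql/cluster_a
initdb -D /var/lib/pgsql/cluster_b
pg_ctl -D /var/lib/pgsql/cluster_a -o "-p 5432" -l a.log start
pg_ctl -D /var/lib/pgsql/cluster_b -o "-p 5433" -l b.log start
EOF
grep -c '^initdb' clusters.sh
```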
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> One of the problems with the current PostgreSQL design is that all the
>> databases operated by one postmaster server process are interlinked at
>> some core level. They all share the same system tables. If one d
I am working on an issue that I deal with a lot, there is of course a
standard answer, but maybe it is something to think about for PostgreSQL
9.0 or something. I think I finally understand what I have been fighting
for a number of years. When I have been grousing about postgresql
configuration, th
> On Mon, 30 Jan 2006, Mark Woodward wrote:
>
>> It gets so frustrating sometimes, it isn't so black and white, there are
>> many levels of gray. The PostgreSQL project is trying so hard to be
>> neutral, that it is making itself irrelevant.
>
> We are mak
> On Mon, Jan 30, 2006 at 04:35:15PM -0500, Mark Woodward wrote:
>> It gets so frustrating sometimes, it isn't so black and white, there are
>> many levels of gray. The PostgreSQL project is trying so hard to be
>> neutral, that it is making itself irrelevant.
>
> On Sun, Jan 29, 2006 at 03:15:06PM -0500, Mark Woodward wrote:
>> > Postgres generally seems to favor extensibility over integration, and
>> I
>> > generally agree with that approach.
>>
>> I generally agree as well, but.
>>
>> I think th
> David Fetter <[EMAIL PROTECTED]> writes:
>> I also think this would make a great pgfoundry project :)
>
> Yeah ... unless there's some reason that it needs to be tied to PG
> server releases, it's better to put it on pgfoundry where you can
> have your own release cycle.
>
I don't need pgfoundry,
>
>
> Mark Woodward wrote:
>
>>XML is not really much more than a language, it says virtually nothing
>>about content. Content requires custom parsers.
>>
>>
>
> Really? Strange I've been dealing with it all this time without having
> to contr
>
> [removing -patches since no patch was attached]
> This sounds highly specialised, and probably more appropriate for a
> pgfoundry project.
>
> In any case, surely the whole point about XML is that you shouldn't need
> to construct custom parsers. Should we include a specialised parser for
> every
I have a fairly simple extension I want to add to contrib. It is an XML
parser that is designed to work with a specific dialect.
I have a PHP extension called xmldbx, it allows the PHP system to
serialize its web session data to an XML stream. (or just serialize
variables) PHP's normal serializer
>> Well, if you want PostgreSQL to act a specific way, then you are going
>> to
>> have to set up the defaults somehow, right?
>
> Of course, which is why we could use a global table for most of it.
What if you wish to start the same database cluster with different settings?
>
>>
>> Which is clea
> Hello,
>
> As I have been laboring over the documentation of the postgresql.conf
> file for 8.1dev it seems that it may be useful to rip out most of the
> options in this file?
>
> Considering many of the options can already be altered using SET why
> not make it the default for many of them?
>
>
e pre-formatted database?
I would say the pre-formatted database is easier to manage. There are
hundreds of individual zip files, each containing 10 or so data
files.
> Mark Woodward wrote:
>> It is 4.4G in space in a gzip package.
>>
>> I'll mail a DVD to two pe
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Actually, there isn't a setting to just dump the table definitions and
>> the
>> data. When you dump the schema, it includes all the tablespaces,
>> namespaces, owners, etc.
>
>> Just
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>>> I'm too lazy to run an experiment, but I believe it would. Datum is
>>> involved in almost every function-call API in the backend. In
>>> particular this means that it would affect performance-cr
> Am Donnerstag, den 04.08.2005, 10:26 -0400 schrieb Mark Woodward:
>> I haven't seen this option, and does anyone think it is a good idea?
>>
>> An option to pg_dump and maybe pg_dumpall, that dumps only the table
>> declarations and the data. No owners, tablespa
It is 4.4G in space in a gzip package.
I'll mail a DVD to two people who promise to host it for Hackers.
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>>> 2. Performance. Doing this would require widening Datum to 64 bits,
>>> which is a system-wide performance hit on 32-bit machines.
>
>> Do you really think it would make a measurable difference
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Why is there collision? It is because the number range of an OID is
>> currently smaller than the possible usage.
>
> Expanding OIDs to 64 bits is not really an attractive answer, on several
> grounds:
>
I haven't seen this option, and does anyone think it is a good idea?
An option to pg_dump and maybe pg_dumpall, that dumps only the table
declarations and the data. No owners, tablespaces, nothing.
This, I think, would allow more generic PostgreSQL data transfers.
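The existing flags already get part of the way toward such a "declarations plus data" dump. A sketch written to a file rather than run, since it needs a live database; the database name is hypothetical:

```shell
# -O/--no-owner omits ALTER ... OWNER commands; -x/--no-privileges omits
# GRANT/REVOKE. Tablespace clauses would still appear in dumps of that era.
cat > dump.sh <<'EOF'
pg_dump -O -x mydb > mydb_plain.sql
EOF
grep -c 'pg_dump' dump.sh
```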
>> It's been running for about an hour now, and it is up to 3.3G.
>>
>> pg_dump tiger | gzip > tiger.pgz
>
> | bzip2 > tiger.sql.bz2 :)
>
I find bzip2 FAR slower than is justified by the gain in compression.
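The trade-off is easy to see even with gzip alone on synthetic data; bzip2 pushes the same curve further, spending more CPU for a smaller file. A sketch on generated text, not a real dump:

```shell
# Semi-compressible sample: base64-encoded random bytes carry roughly
# 6 bits of entropy per character, so gzip can shave some of it off.
head -c 200000 /dev/urandom | base64 > sample.txt
orig=$(wc -c < sample.txt)
fast=$(gzip -1 -c sample.txt | wc -c)
best=$(gzip -9 -c sample.txt | wc -c)
echo "original=$orig fast=$fast best=$best"
```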
> * Mark Woodward ([EMAIL PROTECTED]) wrote:
>> > How big dumped & compressed? I may be able to host it depending on
>> how
>> > big it ends up being...
>>
>> It's been running for about an hour now, and it is up to 3.3G.
>
> Not too bad.
> * Mark Woodward ([EMAIL PROTECTED]) wrote:
>> I just finished converting and loading the US census data into
>> PostgreSQL
>> would anyone be interested in it for testing purposes?
>>
>> It's a *LOT* of data (about 40+ Gig in PostgreSQL)
>
> How big du
> I was reminded again today of the problem that once a database has been
> in existence long enough for the OID counter to wrap around, people will
> get occasional errors due to OID collisions, eg
>
> http://archives.postgresql.org/pgsql-general/2005-08/msg00172.php
>
> Getting rid of OID usage i
, 2005 at 05:00:16PM -0400, Mark Woodward wrote:
>>
>>
>>>I just finished converting and loading the US census data into
>>> PostgreSQL
>>>would anyone be interested in it for testing purposes?
>>>
>>>It's a *LOT* of data (about 40+ Gig i
I just finished converting and loading the US census data into PostgreSQL
would anyone be interested in it for testing purposes?
It's a *LOT* of data (about 40+ Gig in PostgreSQL)
>> -Original Message-
>> From: Marian POPESCU [mailto:[EMAIL PROTECTED]
>> Sent: Friday, April 01, 2005 8:06 AM
>> To: pgsql-hackers@postgresql.org
>> Subject: Re: [HACKERS] ARC patent
>>
>> >>>Neil Conway <[EMAIL PROTECTED]> writes:
>> >>>
>> >>>
>> FYI, IBM has applied for a patent on
> There is an updated survey of open source developers:
>
> http://flosspols.org/survey/survey_part.php?groupid=sd
>
It was very long; it says "45" questions, but many of those questions have
multiple parts with drop-down menus.
Tedious!!
Also, it seems to be looking for sexual harassment issues as
> On Mon, 28 Mar 2005, Mark Woodward wrote:
>
>>> Hi there,
>>>
>>> while learning inkscape I did a sketch of picture describing
>>> history of relational databases. It's available from
>>> http://mira.sai.msu.su/~megera/pgsql/
>>
>
> Hi there,
>
> while learning inkscape I did a sketch of picture describing
> history of relational databases. It's available from
> http://mira.sai.msu.su/~megera/pgsql/
Is there a direct line from INGRES to Postgres? I was under the impression
that Postgres is a "new" lineage started after INGRES
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Sorry, that's not true. At least in the USA, any entity that can be
>> identified can own and control copyright. While it is true, however,
>> that
>> there can be ambiguity, an informal body, say
> Mark Woodward wrote:
>> As the copyright owner, "The PostgreSQL Global Development Group,"
>> has the right to license the documentation any way they see fit. For
>> PHP to sub-license the documentation, it legally has to be transferred
>> in writing. Ver
> Peter Eisentraut wrote:
>> Mark Woodward wrote:
>> > I would say that "The PostgreSQL Global Development Group" or its
>> > representatives (I'm assuming Tom, Bruce, and/or Marc Fournier) just
>> > has to give something written, that says
> Mark Woodward wrote:
>> I would say that "The PostgreSQL Global Development Group" or its
>> representatives (I'm assuming Tom, Bruce, and/or Marc Fournier) just
>> has to give something written, that says Christopher Kings-Lynne of
>> "your ad
> Tom Lane wrote:
>> You can't just randomly rearrange the pg_enc enum without forcing an
>> initdb, because the numeric values of the encodings appear in system
>> catalogs (eg pg_conversion).
>
> Oh, those numbers appear in the catalogs? I didn't realize that.
>
> I will force an initdb.
>
Doe
>>>Uh, but that's what the BSD license allows --- relicensing as any other
>>>license, including commercial.
>>
>> The point remains that Chris, by himself, does not hold the copyright on
>> the PG docs and therefore cannot assign it to anyone.
>>
>> ISTM the PHP guys are essentially saying that th
> Mark Woodward wrote:
>> > Christopher Kings-Lynne wrote:
>> >> > I really don't intend to do that, and it does seem to happen a lot.
>> I
>> >> am
>> >> > the first to admit I lack tact, but often times I view the
>>
> I'm currently adding support for the v3 protocol in PHP pgsql extension.
> I'm wondering if anyone minds if I lift documentation wholesale from
> the PostgreSQL docs for the PHP docs for these functions. For instance,
> the fieldcodes allowed for PQresultErrorField, docs on
> PQtransactionStat
> Christopher Kings-Lynne wrote:
>> > I really don't intend to do that, and it does seem to happen a lot. I
>> am
>> > the first to admit I lack tact, but often times I view the decisions
>> made
>> > as rather arbitrary and lacking a larger perspective, but that is a
>> rant I
>> > don't want to g