Re: [GENERAL] pgpass file type restrictions

2017-10-19 Thread Andrew Dunstan


On 10/19/2017 09:20 AM, Desidero wrote:
> I agree that it would be better for us to use something other than
> LDAP, but unfortunately it's difficult to convince the powers that be
> that we can/should use something else that they are not yet prepared
> to properly manage/audit. We are working towards it, but we're not
there yet. It's not really an excuse, but until the industry password
> policies are modified to outright ban passwords, many businesses will
> probably be in this position.
>
> In any event, is the use case problematic enough that it would prevent
> the proposed changes from being implemented? I could submit a patch to
> postgres hackers if necessary, but if it's undesirable I can figure
> out something else.
>

Please don't top-post on the PostgreSQL lists.

You said you wanted to allow anonymous pipes, but I think what you
really want is a named pipe.

I don't see any reason in principle to disallow use of a named pipe as a
password file. It could be a bit of a footgun, though, since writing to
the fifo would block until it was opened by the client, so you'd need to
be very careful about that.
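
The blocking behaviour is easy to see outside libpq. Below is a sketch in Python simulating both sides (the path and password line are invented, and note that today's libpq would still reject the fifo as a non-plain file — this only illustrates the semantics a patched client would rely on):

```python
import os, tempfile, threading

fifo = os.path.join(tempfile.mkdtemp(), "pgpass.fifo")
os.mkfifo(fifo, 0o600)

def writer():
    # open() for writing blocks until some reader opens the fifo --
    # this is the footgun: a foreground writer would hang here.
    with open(fifo, "w") as f:
        f.write("localhost:5432:mydb:app:s3cret\n")

t = threading.Thread(target=writer)
t.start()

# Simulate the client side (what libpq would do if PGPASSFILE pointed here):
with open(fifo) as f:
    line = f.read().strip()
t.join()
print(line)
```

In practice the decrypting writer would have to run in the background, started just before the client connects, precisely because of that blocking open.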

cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgpass file type restrictions

2017-10-19 Thread Andrew Dunstan


On 10/19/2017 02:12 AM, Tom Lane wrote:
> Desidero <desid...@gmail.com> writes:
>> I’m running into problems with the restriction on pgpass file types. When
>> attempting to use something like an anonymous pipe for a passfile, psql
>> throws an error stating that it only accepts plain files.
>> ...
>> Does anyone know why it’s set up to avoid using things like anonymous pipes
>> (or anything but "plain files")?
> A bit of digging in the git history says that the check was added here:
>
> commit 453d74b99c9ba6e5e75d214b0d7bec13553ded89
> Author: Bruce Momjian <br...@momjian.us>
> Date:   Fri Jun 10 03:02:30 2005 +
> 
> Add the "PGPASSFILE" environment variable to specify to the password
> file.
> 
> Andrew Dunstan
> 
> and poking around in the mailing list archives from that time finds
> what seems to be the originating thread:
>
> https://www.postgresql.org/message-id/flat/4123BF8C.5000909%40pse-consulting.de
>
> There's no real discussion there of the check for plain-file-ness.
> My first guess would have been that the idea was to guard against
> symlink attacks; but then surely the stat call needed to have been
> changed to lstat?  So I'm not quite sure of the reasoning.  Perhaps
> Andrew remembers.



That was written 12 years ago. I'm afraid my memory isn't that good.


>
>> If it matters,
>> I'm trying to use that so I can pass a decrypted pgpassfile into postgres
>> since my company is not allowed to have unencrypted credentials on disk
>> (yes, I know that it's kind of silly to add one layer of abstraction, but
>> it's an industry rule we can't avoid).
> I cannot get excited about that proposed use-case, though.  How is a pipe
> any more secure than a plain file with the same permissions?



If it's not allowed to reside on disk, put it on a RAM disk?
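
A minimal way to follow that suggestion on Linux, sketched in Python (assumptions: /dev/shm is a RAM-backed tmpfs mount, which holds on most Linux systems, and the hostname/credentials are invented — in real use you would decrypt into this file and point PGPASSFILE at it):

```python
import os, stat, tempfile

# /dev/shm is RAM-backed tmpfs on most Linux systems; fall back to the
# ordinary temp dir where it does not exist.
base = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
passfile = os.path.join(base, "pgpass.%d" % os.getpid())

# Create with 0600 permissions, which libpq requires of a password file.
fd = os.open(passfile, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("db.example.com:5432:mydb:app:s3cret\n")

os.environ["PGPASSFILE"] = passfile    # psql/libpq would read it from here
mode = stat.S_IMODE(os.stat(passfile).st_mode)
print(oct(mode))
os.remove(passfile)    # scrub it as soon as the connection is established
```

The plaintext never touches persistent storage, which is usually what "no unencrypted credentials on disk" policies actually require.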


>
> My thought is that you shouldn't be depending on passwords at all, but
> on SSL credentials or Kerberos auth, both of which libpq supports fine.
>



Yeah, we need to be convincing people with high security needs to get
out of the password game. It's a losing battle.



cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services






Re: [GENERAL] json function question

2016-02-24 Thread Andrew Dunstan



On 02/24/2016 09:41 AM, Tom Lane wrote:


However, it looks to me like row_to_json already does pretty much the
right thing with nested array/record types:

regression=# select
row_to_json(row(1,array[2,3],'(0,1)'::int8_tbl,array[(1,2),(3,4)]::int8_tbl[]));
                                   row_to_json
----------------------------------------------------------------------------------
 {"f1":1,"f2":[2,3],"f3":{"q1":0,"q2":1},"f4":[{"q1":1,"q2":2},{"q1":3,"q2":4}]}
(1 row)

So the complaint here is that json_populate_record fails to be an inverse
of row_to_json.



Right.




I'm not sure about Andrew's estimate that it'd be a large amount of work
to fix this.  It would definitely require some restructuring of the code
to make populate_record_worker (or some portion thereof) recursive, and
probably some entirely new code for array conversion; and making
json_populate_recordset behave similarly might take refactoring too.





One possible shortcut if we were just handling arrays and not nested 
composites would be to mangle the json array to produce a Postgres array 
literal. But if we're handling nested composites as well that probably 
won't pass muster and we would need to decompose all the objects fully 
and reassemble them into Postgres objects. Maybe it won't take as long 
as I suspect. If anyone actually does it I'll be interested to find out 
how long it took them :-)


cheers

andrew





Re: [GENERAL] json function question

2016-02-24 Thread Andrew Dunstan



On 02/24/2016 09:11 AM, David G. Johnston wrote:
On Wednesday, February 24, 2016, Andrew Dunstan <and...@dunslane.net> wrote:



Having json(b)_populate_record recursively process nested complex
objects would be a large undertaking. One thing to consider is
that json arrays are quite different from Postgres arrays: they
are essentially one-dimensional heterogeneous lists, not
multi-dimensional homogeneous matrices. So while a Postgres array
that's been converted to a json array should in principle be
convertible back, an arbitrary json array could easily not be.


An arbitrary json array should be one-dimensional and homogeneous - 
seems like that should be easy to import.  The true concern is that 
not all PostgreSQL arrays are capable of being represented in json.




Neither of these things is true AFAIK.

1. The following is a 100% legal json array, about as heterogeneous as 
can be:


   [ "a" , 1, true, null, [2,false], {"b": null} ]


2. Having implemented the routines to convert Postgres arrays to json 
I'm not aware of any which can't be converted. Please supply an example 
of one that can't.
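
That heterogeneity is easy to confirm with any JSON parser — a sketch using Python's json module on the array from point 1:

```python
import json

arr = json.loads('[ "a", 1, true, null, [2, false], {"b": null} ]')

# One array, six distinct element types -- nothing like a Postgres array,
# whose elements must all share a single element type.
kinds = [type(x).__name__ for x in arr]
print(kinds)
```

Every element parses to a different type, so there is no single Postgres element type this array could map onto.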



cheers

andrew






Re: [GENERAL] json function question

2016-02-24 Thread Andrew Dunstan



On 02/23/2016 02:54 PM, Tom Lane wrote:

Dan S  writes:

I have this table, data and query:
create table test
(
 id int,
 txt text,
 txt_arr text[],
 f float
);
insert into test
values
(1,'jkl','{abc,def,fgh}',3.14159),(2,'hij','{abc,def,fgh}',3.14159),(2,null,null,null),(3,'def',null,0);
select j, json_populate_record(null::test, j)
from
(
 select to_json(t) as j from test t
) r;
ERROR:  malformed array literal: "["abc","def","fgh"]"
DETAIL:  "[" must introduce explicitly-specified array dimensions.
Is it a bug or how am I supposed to use the populate function ?

AFAICS, json_populate_record has no intelligence about nested container
situations.  It'll basically just push the JSON text representation of any
field of the top-level object at the input converter for the corresponding
composite-type column.  That doesn't work if you're trying to convert a
JSON array to a Postgres array, and it wouldn't work for sub-object to
composite column either, because of syntax discrepancies.

Ideally this would work for arbitrarily-deeply-nested array+record
structures, but it looks like a less than trivial amount of work to make
that happen.


If I try an equivalent example with hstore it works well.

hstore hasn't got any concept of substructure in its field values, so
it's hard to see how you'd create an "equivalent" situation.

One problem with fixing this is avoiding backwards-compatibility breakage,
but I think we could do that by saying that we only change behavior when
(a) json sub-value is an array and target Postgres type is an array type,
or (b) json sub-value is an object and target Postgres type is a composite
type.  In both cases, current code would fail outright, so there's no
existing use-cases to protect.  For other target Postgres types, we'd
continue to do it as today, so for example conversion to a JSON column
type would continue to work as it does now.

I'm not sure if anything besides json[b]_populate_record needs to change
similarly, but we ought to look at all those conversion functions with
the thought of nested containers in mind.

regards, tom lane

PS: I'm not volunteering to do the work here, but it seems like a good
change to make.




Historically, we had row_to_json before we had json_populate_record, and 
a complete round-trip wasn't part of the design anyway AFAIR. Handling 
nested composites and arrays would be a fairly large piece of work, and 
I'm not available to do it either.


A much simpler way to get some round-trip-ability would be to have a row 
to json converter that would stringify instead of decomposing nested 
complex objects, much as hstore does. That would be fairly simple to do, 
and the results should be able to be fed straight back into 
json(b)_populate_record. I'm not volunteering to do that either, but the 
work involved would probably be measured in hours rather than days or 
weeks. Of course, the json produced by this would be ugly and the 
stringified complex objects would be opaque to other json processors. 
OTOH, many round-trip applications don't need to process the serialized 
object on the way around. So this wouldn't be a cure-all but it might 
meet some needs.
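
The shape of that stringify-instead-of-decompose proposal can be sketched outside the database — plain Python standing in for the hypothetical converter, with an invented `row` and columns:

```python
import json

row = {"id": 1, "tags": ["abc", "def"], "addr": {"city": "Oslo", "zip": "0150"}}

# Stringify nested containers instead of decomposing them, as hstore does:
flat = {k: (json.dumps(v, sort_keys=True) if isinstance(v, (dict, list)) else v)
        for k, v in row.items()}
serialized = json.dumps(flat, sort_keys=True)

# The nested values are opaque strings to other JSON processors...
back = json.loads(serialized)
print(type(back["tags"]).__name__)    # comes back as str, not list

# ...but they round-trip: one more parse recovers the original value.
assert json.loads(back["tags"]) == ["abc", "def"]
assert json.loads(back["addr"]) == {"city": "Oslo", "zip": "0150"}
```

The trade-off is exactly as described: the output is ugly and the nested values are unreadable to generic JSON consumers, but the producer can always get its original structure back.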


Having json(b)_populate_record recursively process nested complex 
objects would be a large undertaking. One thing to consider is that json 
arrays are quite different from Postgres arrays: they are essentially 
one-dimensional heterogeneous lists, not multi-dimensional homogeneous 
matrices. So while a Postgres array that's been converted to a json 
array should in principle be convertible back, an arbitrary json array 
could easily not be.
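
The "multi-dimensional homogeneous matrix" requirement can be made concrete with a shape check — an illustration only, not Postgres's actual validation algorithm:

```python
def matrix_dims(v):
    """Return the dimensions of a rectangular nested-list matrix,
    or None if the nesting is ragged (not convertible)."""
    if not isinstance(v, list):
        return ()                      # a scalar has zero dimensions
    subdims = [matrix_dims(x) for x in v]
    if any(d is None for d in subdims) or len(set(subdims)) > 1:
        return None                    # ragged rows or mixed nesting depth
    return (len(v),) + (subdims[0] if subdims else ())

print(matrix_dims([[1, 2], [3, 4]]))   # a proper 2x2 matrix
print(matrix_dims([1, [2, 3]]))        # scalar next to a list: no dimensions
print(matrix_dims([[1, 2], [3]]))      # ragged rows: no dimensions
```

A json array produced from a Postgres array always passes this check; an arbitrary json array frequently will not, which is the asymmetry described above.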


cheers

andrew




Re: [GENERAL] [PERFORM] pg bouncer issue what does sv_used column means

2015-06-12 Thread Andrew Dunstan


Please do not cross-post on the PostgreSQL lists. Pick the most 
appropriate list to post to and just post there.


cheers

andrew




Re: [GENERAL] JSON does not support infinite date values

2015-02-26 Thread Andrew Dunstan


On 02/26/2015 11:03 AM, Tim Smith wrote:

FYI although I remain a +1 on KISS and emitting infinity, for
those of you still yearning after a standards-based implementation,
there is a StackOverflow post which hints at sections 3.5 and 3.7 of
ISO8601:2004.

Unfortunately I can't find a link to an ISO8601:2004 text, so you'll
have to make do with the SO quoted extracts instead
http://stackoverflow.com/questions/11408249/how-do-you-represent-forever-infinitely-in-the-future-in-iso8601




If you want to do that then store that in your date/timestamp data and 
we'll output it. But we're not going to silently convert infinity to 
anything else:


   andrew=# select to_json('9-12-31'::timestamptz);
             to_json
   --------------------------
    "9-12-31T00:00:00-05:00"


cheers

andrew




Re: [GENERAL] JSON does not support infinite date values

2015-02-26 Thread Andrew Dunstan


On 02/26/2015 07:02 AM, Andres Freund wrote:

Hi,

On 2015-02-26 11:55:20 +, Tim Smith wrote:

As far as I'm aware, JSON has no data types as such, and so why is
Postgres (9.4.1) attempting to impose its own nonsense constraints ?

"impose its own nonsense constraints" - breathe slowly in, and out, in,
and out.

It looks to me like ab14a73a6ca5cc4750f0e00a48bdc25a2293034a copied too
much code from xml.c - including a comment about XSD... Andrew, was that
intentional?




Possibly too much was copied, I don't recall a reason offhand for 
excluding infinity. I'm not opposed to changing it (jsonb will have the 
same issue). We do allow infinity (and NaN etc) when converting numerics 
to json, so perhaps doing it for dates and timestamps too would be more 
consistent.


cheers

andrew




Re: [GENERAL] JSON does not support infinite date values

2015-02-26 Thread Andrew Dunstan


On 02/26/2015 10:16 AM, Tom Lane wrote:

Andres Freund <and...@2ndquadrant.com> writes:

On 2015-02-26 11:55:20 +, Tim Smith wrote:

As far as I'm aware, JSON has no data types as such, and so why is
Postgres (9.4.1) attempting to impose its own nonsense constraints ?

"impose its own nonsense constraints" - breathe slowly in, and out, in,
and out.
It looks to me like ab14a73a6ca5cc4750f0e00a48bdc25a2293034a copied too
much code from xml.c - including a comment about XSD... Andrew, was that
intentional?

Not wanting to put words in Andrew's mouth, but I thought the point of
those changes was that timestamps emitted into JSON should be formatted
per some ISO standard or other, and said standard (almost certainly)
doesn't know what infinity is.

At the same time, there is definitely no such requirement in the JSON spec
itself, so at least the error message is quoting the wrong authority.





Well, we could say that we'll use ISO 8601 format for finite dates and 
times, and 'infinity' otherwise. Then if you want to be able to 
interpret them as ISO 8601 format it will be up to you to ensure that 
there are no infinite values being converted.
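
From the consumer's side, that rule looks like the following sketch (the document shape and field name are invented):

```python
import json
from datetime import datetime

doc = json.loads('{"expires": "infinity"}')   # valid JSON: just a string

# An ISO-8601-only consumer must special-case the two infinities itself:
raw = doc["expires"]
if raw in ("infinity", "-infinity"):
    expires = None    # or datetime.max -- whatever "never" means downstream
else:
    expires = datetime.fromisoformat(raw)
print(expires)
```

All finite values still parse as ISO 8601; only the sentinel strings need the extra branch.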


cheers

andrew




Re: [GENERAL] JSON does not support infinite date values

2015-02-26 Thread Andrew Dunstan


On 02/26/2015 10:38 AM, Tom Lane wrote:


Yeah, I think so.  The sequence 'infinity'::timestamp to JSON to
ISO-8601-only consumer is going to fail no matter what; there is no
need for us to force a failure at the first step.  Especially when
doing so excludes other, perfectly useful use-cases.

So +1 for removing the error and emitting infinity suitably quoted.
Andrew, will you do that?




Yeah.

cheers

andrew




Re: [HACKERS] [GENERAL] ON_ERROR_ROLLBACK

2014-12-30 Thread Andrew Dunstan


On 12/30/2014 09:20 AM, Tom Lane wrote:

Bernd Helmle <maili...@oopsware.de> writes:

--On 29. Dezember 2014 12:55:11 -0500 Tom Lane <t...@sss.pgh.pa.us> wrote:

Given the lack of previous complaints, this probably isn't backpatching
material, but it sure seems like a bit of attention to consistency
would be warranted here.

Now that i read it i remember a client complaining about this some time
ago. I forgot about it, but i think there's value in it to backpatch.

Hm.  Last night I wrote the attached draft patch, which I was intending
to apply to HEAD only.  The argument against back-patching is basically
that this might change the interpretation of scripts that had been
accepted silently before.  For example
\set ECHO_HIDDEN NoExec
will now select noexec mode whereas before you silently got on mode.
In one light this is certainly a bug fix, but in another it's just
definitional instability.

If we'd gotten a field bug report we might well have chosen to back-patch,
though, and perhaps your client's complaint counts as that.

Opinions anyone?

regards, tom lane


I got caught by this with ON_ERROR_ROLLBACK on 9.3 just this afternoon 
before remembering this thread. So there's a field report :-)


+0.75 for backpatching (It's hard to imagine someone relying on the bad 
behaviour, but you never know).


cheers

andrew





[GENERAL] Re: [HACKERS] COPY TO returning empty result with parallel ALTER TABLE

2014-11-04 Thread Andrew Dunstan


On 11/04/2014 01:51 PM, Tom Lane wrote:

Bernd Helmle <maili...@oopsware.de> writes:

--On 3. November 2014 18:15:04 +0100 Sven Wegener
<sven.wege...@stealer.net> wrote:

I've check git master and 9.x and all show the same behaviour. I came up
with the patch below, which is against curent git master. The patch
modifies the COPY TO code to create a new snapshot, after acquiring the
necessary locks on the source tables, so that it sees any modification
commited by other backends.

Well, i have the feeling that there's nothing wrong with it. The ALTER
TABLE command has rewritten all tuples with its own XID, thus the current
snapshot does not see these tuples anymore. I suppose that in
SERIALIZABLE or REPEATABLE READ transaction isolation your proposal still
doesn't return the tuples you'd like to see.

Not sure.  The OP's point is that in a SELECT, you do get unsurprising
results, because a SELECT will acquire its execution snapshot after it's
gotten AccessShareLock on the table.  Arguably COPY should behave likewise.
Or to be even more concrete, COPY (SELECT * FROM tab) TO ... probably
already acts like he wants, so why isn't plain COPY equivalent to that?


Yes, that seems like an outright bug.

cheers

andrew




Re: [GENERAL] Postgres as In-Memory Database?

2013-11-19 Thread Andrew Dunstan


On 11/17/2013 07:02 PM, Stefan Keller wrote:
2013/11/18 Andreas Brandl <m...@3.141592654.de> wrote:

 What is your use-case?

It's geospatial data from OpenStreetMap stored in a schema optimized 
for PostGIS extension (produced by osm2pgsql).


BTW: Having said (to Martijn) that using Postgres is probably more 
efficient, than programming an in-memory database in a decent 
language: OpenStreetMap has a very, very large Node table which is 
heavily used by other tables (like ways) - and becomes rather slow in 
Postgres. Since it's of fixed length I'm looking at 
file_fixed_length_record_fdw extension [1][2] (which is in-memory) to 
get the best of both worlds.


--Stefan

[1] 
http://wiki.postgresql.org/wiki/Foreign_data_wrappers#file_fixed_length_record_fdw

[2] https://github.com/adunstan/file_fixed_length_record_fdw



First. please don't top-post on the PostgreSQL lists. See 
http://idallen.com/topposting.html


Second, what the heck makes you think that this is in any sense 
in-memory? You can process a multi-terabyte fixed length file. It's not 
held in memory.


cheers

andrew





Re: [GENERAL] [HACKERS] PLJava for Postgres 9.2.

2013-05-16 Thread Andrew Dunstan


On 05/16/2013 05:59 PM, Paul Hammond wrote:

Hi all,

I've downloaded PLJava, the latest version, which doesn't seem to have 
a binary distribution at all for 9.2, so I'm trying to build it from 
the source for Postgres 9.2. I have the DB itself installed on Windows 
7 64 bit as a binary install. I've had to do a fair bit of hacking 
with the makefiles on cygwin to get PLJava to build, but I have 
succeeded in compiling the Java and JNI code, the pljava_all and 
deploy_all targets effectively.



Cygwin is not a recommended build platform for native Windows builds. 
See the docs for the recommended ways to build Postgres.





But I'm coming unstuck at the next target where it's doing the target 
c_all. It's trying to find the following makefile in the Postgres dist:


my postgres installation dir/lib/pgxs/src/makefiles/pgxs.mk: No such 
file or directory


What do I need to do to obtain the required files, and does anybody 
know why, given that Postgres 9.2 has been out for some time and 9.3 is 
in beta, no prebuilt PL/Java binaries exist for 9.2?



Because nobody has built them?


FYI, PL/Java is not maintained by the PostgreSQL project.


cheers

andrew




[GENERAL] Re: [HACKERS] Namespace of array of user defined types is confused by the parser in insert?

2012-04-24 Thread Andrew Dunstan



On 04/24/2012 05:12 AM, Krzysztof Nienartowicz wrote:


These types are qualified when created - the error does not happen on
creation - there are two types in two different namespaces - it
happens only on insert where it is not possible to qualify the type's
namespace.
It looks like a bug in the planner to me.



If it is please present a self-contained case demonstrating the bug, 
preferably using psql rather than JDBC. Was all this done with the same 
user / search path throughout the session (no connection pooling, for 
example)? A complete log of the session (with log_statements, 
log_connections and log_disconnections all turned on) might also help.


cheers

andrew





[GENERAL] Re: [HACKERS] Namespace of array of user defined types is confused by the parser in insert?

2012-04-23 Thread Andrew Dunstan


[redirected to pgsql-general]


On 04/23/2012 09:42 AM, Krzysztof Nienartowicz wrote:

Hello,
Sorry for re-posting - I initially posted this in pgsql.sql - probably
this group is more appropriate.



pgsql-general probably would be best. -hackers is for discussion of 
internals and development, not for usage questions.



[types have namespaces]


Is there any way of avoid this error different than having a single
type defined for all schemas?
Any hints appreciated..



Probably your best bet is to put the types explicitly in the public 
namespace when they are created, instead of relying on the search path 
that happens to be in force at the time:


   create type public.foo as ( ...);


Then, assuming that public is in your search path they will be picked up 
properly when used. Alternatively, you can namespace qualify them when used:


   create type public.bar as (f1 public.foo[], ...);



cheers

andrew



Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries

2011-02-08 Thread Andrew Dunstan



On 02/07/2011 06:38 AM, Thom Brown wrote:

On 7 February 2011 09:04, Itagaki Takahiro <itagaki.takah...@gmail.com> wrote:

On Fri, Feb 4, 2011 at 21:32, Thom Brown <t...@linux.com> wrote:

The issue is that generate_series will not return if the series hits
either the upper or lower boundary during increment, or goes beyond
it.  The attached patch fixes this behaviour, but should probably be
done a better way.  The first 3 examples above will not return.

There are same bug in int8 and timestamp[tz] versions.
We also need fix for them.
=# SELECT x FROM generate_series(9223372036854775807::int8,
9223372036854775807::int8) AS a(x);
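
The non-returning behaviour comes from the increment overflowing: when the step pushes the counter past the int8 maximum, it wraps around on typical two's-complement hardware (formally undefined behaviour in C), so a naive `current <= stop` loop condition never goes false. A sketch of that wraparound, simulated in Python (whose ints are otherwise unbounded):

```python
INT64_MAX = 2**63 - 1

def wrap64(v):
    # Simulate signed 64-bit two's-complement overflow.
    return ((v + 2**63) % 2**64) - 2**63

current, stop = INT64_MAX, INT64_MAX
# generate_series yields `current`, then increments; the naive increment:
nxt = wrap64(current + 1)
print(nxt)            # wraps to the int64 minimum
print(nxt <= stop)    # still True, so the loop never terminates
```

The fix is to check for the boundary before incrementing, rather than relying on the post-increment comparison.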

Yes, of course, int8 functions are separate.  I attach an updated
patch, although I still think there's a better way of doing this.


=# SELECT x FROM generate_series('infinity'::timestamp, 'infinity', '1
sec') AS a(x);
=# SELECT x FROM generate_series('infinity'::timestamptz, 'infinity',
'1 sec') AS a(x);

I'm not sure how this should be handled.  Should there just be a check
for either kind of infinity and return an error if that's the case?  I
didn't find anything wrong with using timestamp boundaries:

postgres=# SELECT x FROM generate_series('1 Jan 4713 BC
00:00:00'::timestamp, '1 Jan 4713 BC 00:00:05'::timestamp, '1 sec') AS
a(x);
            x
------------------------
  4713-01-01 00:00:00 BC
  4713-01-01 00:00:01 BC
  4713-01-01 00:00:02 BC
  4713-01-01 00:00:03 BC
  4713-01-01 00:00:04 BC
  4713-01-01 00:00:05 BC
(6 rows)

Although whether this demonstrates a true timestamp boundary, I'm not sure.


postgres=# SELECT x FROM generate_series(1, 9, -1) AS a(x);
postgres=# SELECT x FROM generate_series(1, 9, 3) AS a(x);

They work as expected in 9.1dev.

Those 2 were to demonstrate that the changes don't affect existing
functionality.  My previous patch proposal (v2) caused these to return
unexpected output.



Isn't this all really a bug fix that should be backpatched, rather than 
a commitfest item?


cheers

andrew



Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries

2011-02-08 Thread Andrew Dunstan



On 02/08/2011 08:19 PM, Itagaki Takahiro wrote:

On Wed, Feb 9, 2011 at 10:17, Andrew Dunstan <and...@dunslane.net> wrote:

Isn't this all really a bug fix that should be backpatched, rather than a
commitfest item?

Sure, but we don't have any bug trackers...



Quite right, but the commitfest manager isn't meant to be a substitute 
for one. Bug fixes aren't subject to the same restrictions of feature 
changes.


cheers

andrew



Re: [GENERAL] [HACKERS] Retiring from the PostgreSQL core team

2010-05-13 Thread Andrew Dunstan



Jan Wieck wrote:

To whom it may concern,

this is to inform the PostgreSQL community of my retirement from my
PostgreSQL core team position.

Over the past years I have not been able to dedicate as much time to
PostgreSQL as everyone would have liked. The main reason for that was
that I was swamped with other work and private matters and simply didn't
have time. I did follow the mailing lists but did not participate much.



Your good humor and technical brilliance have been sorely missed.



Looking at my publicly visible involvement over the last two years or
so, there is little that would justify me being on the core team today.
I was not involved in the release process, in patch reviewing,
organizing and have contributed little.

However, in contrast to other previous core team members, I do not plan
to disappear. Very much to the contrary. I am right now picking up some
things that have long been on my TODO wish list and Afilias is doubling
down on the commitment to PostgreSQL and Slony. We can and should talk
about that stuff next week at PGCon in Ottawa. I will also stay in close
contact with the remaining core team members, many of whom have become
very good friends over the past 15 years.

The entire core team, me included, hoped that it wouldn't come to this
and that I could have returned to active duty earlier. Things in my
little sub universe didn't change as fast as we all hoped and we all
think it is best now that I focus on getting back to speed and do some
serious hacking.



We hope for many more good things from you yet!



I hope to see many of you in Ottawa.





You'll certainly see me!

Best wishes and many thanks for your good work over so many years.

andrew



Re: [GENERAL] Installing PL/pgSQL by default

2009-12-03 Thread Andrew Dunstan



Tom Lane wrote:

Bruce Momjian <br...@momjian.us> writes:
  

One problem is that because system oids are used, it isn't possible to
drop the language:
I assume we still want to allow the language to be uninstalled, for
security purposes.



Yes.  That behavior is not acceptable.  Why aren't you just adding
a CREATE LANGUAGE call in one of the initdb scripts?


  


Before we go too far with this, I'd like to know how we will handle the 
problems outlined here: 
http://archives.postgresql.org/pgsql-hackers/2008-02/msg00916.php


cheers

andrew



Re: [HACKERS] [GENERAL] Updating column on row update

2009-11-23 Thread Andrew Dunstan



Tom Lane wrote:

Thom Brown <thombr...@gmail.com> writes:
  

As for having plpgsql installed by default, are there any security
implications?



Well, that's pretty much exactly the question --- are there?  It would
certainly make it easier for someone to exploit any other security
weakness they might find.  I believe plain SQL plus SQL functions is
Turing-complete, but that doesn't mean it's easy or fast to write loops
etc in it.


  


That's a bit harder argument to sustain now we have recursive queries, ISTM.

cheers

andrew



Re: [HACKERS] [GENERAL] Updating column on row update

2009-11-22 Thread Andrew Dunstan



Tom Lane wrote:

[ thinks for awhile... ]  Actually, CREATE LANGUAGE is unique among
creation commands in that the common cases have no parameters, at least
not since we added pg_pltemplate.  So you could imagine defining CINE
for a language as disallowing any parameters and having these semantics:
* language not present - create from template
* language present, matches template - OK, do nothing
* language present, does not match template - report error
This would meet the objection of not being sure what the state is
after successful execution of the command.  It doesn't scale to any
other object type, but is it worth doing for this one type?

  




I seriously doubt it. The only reason I could see for such a thing would 
be to make it orthogonal with other CINE commands.


Part of the motivation for allowing inline blocks was to allow for 
conditional logic. So you can do things like:


   DO $$
   begin
       if not exists (select 1 from pg_tables where schemaname = 'foo'
                      and tablename = 'bar') then
           create table foo.bar (x int, y text);
       end if;
   end;
   $$;


It's a bit more verbose (maybe someone can streamline it) but it does 
give you CINE (for whatever flavor of CINE you want), as well as lots 
more complex possibilities than we can conceivably build into SQL.


cheers

andrew




Re: [GENERAL] [HACKERS] New shapshot RPMs (Sep 7 2008) are ready for testing

2008-09-07 Thread Andrew Dunstan



Devrim GÜNDÜZ wrote:

Hi,

I just released new RPM sets, which is based on today's CVS snapshot
(Sep 7, 12:00AM PDT).

These packages *do* require a dump/reload, even from previous 8.4
packages, since I now enabled --enable-integer-datetimes in PGDG RPMs by
default (and IIRC there is a catversion update in recent commits, too
lazy to check it now :) ).


  


Hasn't integer-datetimes been the default for a while? Of course, a 
catversion bump will force a dump/reload regardless of that.


cheers

andrew



Re: [GENERAL] [HACKERS] New shapshot RPMs (Sep 7 2008) are ready for testing

2008-09-07 Thread Andrew Dunstan



Joshua D. Drake wrote:

Andrew Dunstan wrote:




Hasn't integer-datetimes been the default for a while? Of course, a 
catversion bump will force a dump/reload regardless of that.


Unfortunately not. It is the default on some versions of linux such as 
Debian/Ubuntu.





The point I was making is that for 8.4, unless you specifically 
configure with --disable-integer-datetimes, it is enabled by default on 
any platform that can support it. We committed that change on 30 March 
here: http://archives.postgresql.org/pgsql-committers/2008-03/msg00550.php


cheers

andrew



Re: [GENERAL] [HACKERS] Errors with run_build.pl - 8.3RC2

2008-01-22 Thread Andrew Dunstan




cinu wrote:
Hi All, 


I was running the run_Build.pl script that is specific
to Buildfarm and encountered errors. I am listing out
the names of the logfiles and the errors that I have
seen.
Can anyone give me some clarity on these errors?
Even though these errors are existing, at the end the
latest version is getting  downloaded and when I do a
regression testing it goes through. Can anyone give me
clarity on why these errors are occurring?

The logfiles with the errors are listed below, these
errors are for the version 8.3RC2:


  

[snip]

Any suggestions regarding the same is appreciated.


  


Errors in the logfiles are expected. Many of these errors are there by 
design - the regression tests include many tests for error conditions. 
For example, look at the corresponding logs for a green run here: 
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=dungbeetle&dt=2008-01-21%2012:44:01


If the buildfarm script gets through to the end without complaining, 
then it succeeded.


Please note that in general questions regarding use of the buildfarm 
client belong on its mailing list, not on the pgsql lists. For more 
information about the list see here: 
http://pgfoundry.org/mailman/listinfo/pgbuildfarm-members.


cheers

andrew

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] postgres8.3beta encodding problem?

2007-12-18 Thread Andrew Dunstan



Martijn van Oosterhout wrote:

On Tue, Dec 18, 2007 at 10:35:39AM -0500, Tom Lane wrote:
  

Martijn van Oosterhout [EMAIL PROTECTED] writes:


Ok, but that doesn't apply in this case, his database appears to be
LATIN1 and this character is valid for that encoding...
  

You know what, I think the test in the code is backwards.

is_mb = pg_encoding_max_length(encoding) > 1;

if ((is_mb && (cvalue > 255)) || (!is_mb && (cvalue > 127)))




Yes.



It does seem to be a bit weird. For single character encodings anything
up to 255 is OK, well, sort of. It depends on what you want chr() to do
(oh no, not this discussion again). If you subscribe to the idea that
it should use unicode code points then the test is completely bogus,
since whether or not the character is valid has nothing to with whether
the encoding is multibyte or not.
  


We are certainly not going to revisit that discussion at this stage. It 
was thrashed out months ago.

If you want the output of the chr() to (logically) depend on the encoding
then the test makes more sense, but then it's inverted. Single-byte
encodings are by definition defined to 255 characters. And multibyte
encodings (other than UTF-8 I suppose) can only see the ASCII subset.
  


Right. There is a simple thinko on my part in the line of code Tom 
pointed to, which needs to be fixed.
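For illustration, this is the direction the check takes once the thinko is fixed, per Martijn's description: single-byte encodings accept code values up to 255, while multibyte encodings (leaving aside any Unicode code-point interpretation) accept only the ASCII subset. A Python sketch with a stand-in for pg_encoding_max_length(), not the actual C code:

```python
def chr_arg_ok(cvalue: int, max_char_len: int) -> bool:
    """Validity check for chr()'s argument with the test un-inverted.

    max_char_len stands in for pg_encoding_max_length(): 1 for
    single-byte encodings such as LATIN1, greater than 1 for
    multibyte encodings.
    """
    is_mb = max_char_len > 1
    if is_mb:
        # Multibyte encodings can only safely accept the ASCII subset here.
        return cvalue <= 127
    # Single-byte encodings define characters up to 255.
    return cvalue <= 255

# 0xA9 (the copyright sign) is valid in LATIN1 but rejected for a
# multibyte encoding under this check.
print(chr_arg_ok(0xA9, 1), chr_arg_ok(0xA9, 4))
```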


cheers

andrew



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-28 Thread Andrew Dunstan



Greg Sabino Mullane wrote:
Just as a followup, I reported this as a bug and it is 
being looked at and discussed:


http://rt.perl.org/rt3//Public/Bug/Display.html?id=47576

Appears there is no easy resolution yet.


  


We might be able to do something with the suggested workaround. I will 
see what I can do, unless you have already tried.


cheers

andrew



Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-28 Thread Andrew Dunstan



Andrew Dunstan wrote:



Greg Sabino Mullane wrote:
Just as a followup, I reported this as a bug and it is being looked 
at and discussed:


http://rt.perl.org/rt3//Public/Bug/Display.html?id=47576

Appears there is no easy resolution yet.


  


We might be able to do something with the suggested workaround. I will 
see what I can do, unless you have already tried.





OK, I have a fairly ugly manual workaround, that I don't yet understand, 
but seems to work for me.


In your session, run the following code before you do anything else:

CREATE OR REPLACE FUNCTION test(text) RETURNS bool LANGUAGE plperl as $$
return shift =~ /\xa9/i ? 'true' : 'false';
$$;
SELECT test('a');
DROP FUNCTION test(text);

After that we seem to be good to go with any old UTF8 chars.

I'm looking at automating this so the workaround can be hidden, but I'd 
rather understand it first.


(Core guys: If we can hold RC1 for a bit while I get this fixed that 
would be good.)


cheers

andrew



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-13 Thread Andrew Dunstan



Greg Sabino Mullane wrote:
Ugh, in testing I see some nastiness here without any explicit 
require. It looks like there's an implicit require if the text 
contains certain chars.



Exactly.

  
Looks like it's going to be very hard, unless someone has some 
brilliant insight I'm missing :-(



The only way I see around it is to do:

$PLContainer->permit('require');
...
$PLContainer->reval('use utf8;');
...
$PLContainer->deny('require');

Not ideal. 


I tried something like that briefly and it failed. The trouble is, I 
think, that since the engine tries a require it fails on the op test 
before it even looks to see if the module is already loaded. If you have 
made something work then please show me, no matter how grotty.


Part of me says we do this because something like //i 
shouldn't suddenly fail just because you added an accented 
character. The other part of me says to just have people use plperlu.
At the very least, we should probably mention it in the docs as 
a gotcha.


  


I think we should search harder for a solution, but I don't have time 
right now. If you want to submit a warning for the docs in a patch we 
can get that in.


cheers

andrew

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-12 Thread Andrew Dunstan




Greg Sabino Mullane wrote:


Yes, we might want to consider making utf8 come pre-loaded for plperl. There 
is no direct or easy way to do it (we don't have finer-grained control than 
the 'require' opcode), but we could probably dial back restrictions, 
'use' it, and then reset the Safe container to its defaults. Not sure what 
other problems that may cause, however. CCing to hackers for discussion 
there.



  


UTF8 is automatically on for strings passed to plperl if the db encoding 
is UTF8. That includes the source text. Please be more precise about 
what you want.


BTW, the perl docs say this about the utf8 pragma:

  Do not use this pragma for anything else than telling Perl that your
  script is written in UTF-8.

There should be no need to do that - we will have done it for you. So 
any attempt to use the utf8 pragma in plperl code is probably broken anyway.


cheers

andrew







Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-12 Thread Andrew Dunstan



Andrew Dunstan wrote:




Greg Sabino Mullane wrote:


Yes, we might want to consider making utf8 come pre-loaded for 
plperl. There is no direct or easy way to do it (we don't have 
finer-grained control than the 'require' opcode), but we could 
probably dial back restrictions, 'use' it, and then reset the Safe 
container to its defaults. Not sure what other problems that may 
cause, however. CCing to hackers for discussion there.



  


UTF8 is automatically on for strings passed to plperl if the db 
encoding is UTF8. That includes the source text. Please be more 
precise about what you want.


BTW, the perl docs say this about the utf8 pragma:

  Do not use this pragma for anything else than telling Perl that 
your

  script is written in UTF-8.

There should be no need to do that - we will have done it for you. So 
any attempt to use the utf8 pragma in plperl code is probably broken 
anyway.





Ugh, in testing I see some nastiness here without any explicit require. 
It looks like there's an implicit require if the text contains certain 
chars. I'll see what I can do to fix the bug, although I'm not sure if 
it's possible.


cheers

andrew

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-12 Thread Andrew Dunstan



Andrew Dunstan wrote:



Ugh, in testing I see some nastiness here without any explicit 
require. It looks like there's an implicit require if the text 
contains certain chars. I'll see what I can do to fix the bug, 
although I'm not sure if it's possible.





Looks like it's going to be very hard, unless someone has some brilliant 
insight I'm missing :-(


Maybe we need to consult the perl coders.

cheers

andrew



[GENERAL] Re: [HACKERS] SOS, help me please, one problem towards the postgresql developement on windows

2007-05-01 Thread Andrew Dunstan


[removing -hackers, as this question really doesn't seem to belong there]

shieldy wrote:

thankyou very much.
but the method, you said, is adding an alias name, so it can not work.
and as i need to add many functions like this, so the best way is to 
compile the whole postgresql. even though i did, it did not work, so i 
am puzzled, can i add a function directly in the source file, and 
compile it, can it work??
BTW: I have just searched the source for adding the built-in function, and 
found it needs to add a declaration in the include geo_decls.h, and add 
the function in the geo_ops.c. can it not be enough??



 



Why on earth are you not just creating your functions as a loadable C 
module? Unless you are doing something strange there seems little need 
for you to be taking the approach you are taking. The contrib directory 
has some examples of how to do this. PostgreSQL is designed to be 
extensible, but you are apparently ignoring the extensibility features.


cheers

andrew



Re: [GENERAL] [HACKERS] Kill a Long Running Query

2007-04-25 Thread Andrew Dunstan

Mageshwaran wrote:

Hi ,
Any body tell me how to kill a long running query in postgresql, is 
there any statement to kill a query, and also tell me how to log slow 
queries to a log file.





First, please do not cross-post like this. Pick the correct list and use it.

Second, this query definitely does not belong on the -hackers list.

Third, please find a way of posting to lists that does not include a 
huge disclaimer and advertisements. If that is added by your company's 
mail server, you should look at using some other method of posting such 
as gmail.


Fourth, please read our excellent documentation. It contains the answers 
to your questions, I believe.


cheers

andrew



[GENERAL] Re: [HACKERS] 5 Weeks till feature freeze or (do you know where your patch is?)

2007-02-23 Thread Andrew Dunstan

Joshua D. Drake wrote:

Andrew Dunstan: Something with COPY? Andrew?

  


The only thing I can think of is to remove the support for ancient COPY 
syntax from psql's \copy, as suggested here: 
http://archives.postgresql.org/pgsql-hackers/2007-02/msg01078.php


That's hardly a feature - more a matter of tidying up.



Neil Conway: pgmemcache
Josh Drake: pgmemcache
  



what does this refer to?


cheers

andrew



Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-16 Thread Andrew Dunstan

Tom Lane wrote:

Magnus Hagander [EMAIL PROTECTED] writes:
  

And actually, when I look at the API docs, our case now seems to be
documented. Or am I misreading our situation. I have:



  

If you call CreateFile on a file that is pending deletion as a result
of a previous call to DeleteFile, the function fails. The operating
system delays file deletion until all handles to the file are closed.
GetLastError returns ERROR_ACCESS_DENIED.



We are not calling CreateFile ... we're just trying to open the thing.




see src/port/open.c - pgwin32_open() calls CreateFile().

cheers

andrew

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-11 Thread Andrew Dunstan

Richard Troy wrote:

On Thu, 11 Jan 2007, Tom Lane wrote:

...snip...
  

(You know, of course, that my opinion is that no sane person would run a
production database on Windows in the first place.  So the data-loss
risk to me seems less of a problem than the unexpected-failures problem.
It's not like there aren't a ton of other data-loss scenarios in that OS
that we can't do anything about...)





PLEASE OH PLEASE document every f-ing one of them! (And I don't mean
document Windows issues as comments in the source code. Best would be in
the official documentation/on a web page.) On occasion, I could *really*
use such a list! (If such already exists, please point me at it!)

Thing is, Tom, not everybody has the same level of information you have on
the subject...


  



Please don't. At least not on the PostgreSQL web site nor in the docs. 
And no, I don't run my production servers on Windows either.


For good or ill, we made a decision years ago to do a proper Windows 
port. I think that it's actually worked out reasonably well. All 
operating systems have warts. Not long ago I tended to advise people not 
to run mission critical Postgresql on Linux unless they were *very* 
careful, due to the over-commit issue.


In fact, I don't trust any OS. I use dumps and backups and replication 
to protect myself from them all.


In the present instance, the data loss risk is largely theoretical, as I 
understand it, as we don't expect a genuine EACCESS error.


cheers

andrew



Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI Installer

2006-01-30 Thread Andrew Dunstan



Devrim GUNDUZ wrote:


OTOH, excluding Synaptic that I hate to use, FC / RH does not have a GUI
RPM interface for the repositories. So our installer will help them a
lot. Also, our installer will have an option to download and install the
prebuilt binaries from PostgreSQL FTP site (and possible other sites) 

 




There's yumex ... http://fedoranews.org/tchung/yumex/

cheers

andrew



Re: [HACKERS] [GENERAL] New project launched : PostgreSQL GUI

2006-01-30 Thread Andrew Dunstan



Marc G. Fournier wrote:



More seriously, I know under FreeBSD, one of the first things that 
gets done after installing is to customize the kernel to get rid of 
all the 'cruft' part of the generic kernel, I take it that this isn't 
something that ppl do with Linux?




The Linux kernel has loadable modules, so it's much less of an issue. 
For example, I just installed the Cisco VPN s/w on my FC4 box.  I didn't 
have to rebuild the kernel, all I have to do is to load the kernel 
module that puts a wedge in the IP stack.


The parts of the kernel that are optional are almost all loadable modules.

Some people do build static kernels. That makes sense when you have 
tightly controlled hardware and software requirements. I mostly don't 
bother.


cheers

andrew



Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Andrew Dunstan

Vlad wrote:
i.e. the following perl code won't work correctly with DBD::Pg 1.40+
$dbh->do("SET search_path TO one");
my $sth1 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth1->execute("one");
$dbh->do("set search_path to two");
my $sth2 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth2->execute("two");

in the last call $sth1 prepared query will be actually executed, i.e.
one.test table used, not two.test as a programmer would expect!
 

Correctness seems to be in the eye of the beholder.
It does what I as a programmer would expect. The behaviour you 
previously saw was an unfortunate byproduct of the fact that up to now 
DBD::Pg has emulated proper prepared statements, whereas now it uses 
them for real. Any application that relies on that broken byproduct is 
simply erroneous, IMNSHO.

If you really need this, then as previously discussed on list, there is 
a way to turn off use of server-side prepared statements.

cheers
andrew
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Andrew Dunstan

Andrew Dunstan wrote:

Vlad wrote:
i.e. the following perl code won't work correctly with DBD::Pg 1.40+
$dbh->do("SET search_path TO one");
my $sth1 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth1->execute("one");
$dbh->do("set search_path to two");
my $sth2 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth2->execute("two");
in the last call $sth1 prepared query will be actually executed, i.e.
one.test table used, not two.test as a programmer would expect!
 

Correctness seems to be in the eye of the beholder.
It does what I as a programmer would expect. The behaviour you 
previously saw was an unfortunate byproduct of the fact that up to now 
DBD::Pg has emulated proper prepared statements, whereas now it uses 
them for real. Any application that relies on that broken byproduct is 
simply erroneous, IMNSHO.

If you really need this, then as previously discussed on list, there 
is a way to turn off use of server-side prepared statements.


Oops. I missed that the code used prepare_cached() rather than just 
prepare().

I am not sure this is reasonably fixable. Invalidating the cache is not 
a pleasant solution - the query might not be affected by the change in 
search path at all. I'd be inclined to say that this is just a 
limitation of prepare_cached() which should be documented.
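The underlying pitfall is generic to any client-side statement cache keyed only on SQL text: session state such as search_path is not part of the cache key, so a cached handle can silently refer to stale state. A minimal Python sketch of the mechanism (a hypothetical cache, not DBD::Pg's actual implementation):

```python
class NaiveStatementCache:
    """Statement cache keyed only by SQL text (illustrative)."""

    def __init__(self):
        self._cache = {}
        self.search_path = "one"  # stands in for per-session state

    def prepare_cached(self, sql):
        # search_path is not part of the cache key, so a statement
        # prepared under an earlier search_path is silently reused.
        if sql not in self._cache:
            self._cache[sql] = {"sql": sql, "search_path": self.search_path}
        return self._cache[sql]

db = NaiveStatementCache()
sth1 = db.prepare_cached("SELECT * FROM test WHERE item = ?")
db.search_path = "two"
sth2 = db.prepare_cached("SELECT * FROM test WHERE item = ?")
# sth2 is the same cached statement, still bound to search_path "one"
```

Keying the cache on (search_path, sql) would avoid the reuse, at the cost of preparing the same text once per schema, which is roughly the trade-off a documented limitation leaves to the application.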

cheers
andrew


Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-04 Thread Andrew Dunstan

Tom Lane wrote:
Marc G. Fournier [EMAIL PROTECTED] writes:
 

On Fri, 1 Apr 2005, Joshua D. Drake wrote:
   

Are we interested in having plPHP in core?
 

 

Is there a reason why it can no longer operate as a standalone language 
out of pgfoundry, like pl/java and pl/perl?
   


I have said this before. Let me say it again and please take note. I did 
not start the plperlng project on pgfoundry as an alternative to the 
core plperl. It is a developers sandbox, and it was always the intention 
to feed the work back to the core, as indeed we did for the 8.0 release. 
Frankly, if I had thought that there would be so much wilful 
misunderstanding of the intention I would not have done it. So please 
stop with this assertion that plperl runs from pgfoundry.

I am really at a loss to understand this push to get PLs out of the 
core. Whose interest do you think it will serve? We just advertised the 
upgrade to plperl as a major selling point of the 8.0 release. The 
"someone might do it differently or better" argument is a cop-out. If 
you're in the management group your responsibility is to make sensible 
choices.

Lots of software acquires standard packages over time. Example: perl, 
which has an extremely well publicised and well-known extension system 
(CPAN) that has had for years a high resolution timer extension package 
available. From the 5.8 release that package has become part of the 
standard distribution. That doesn't stop anyone from developing a better 
or alternative hires timer.

If we had a very much larger postgres development community then it 
might make sense to foster some diversity among PL implementations. We 
don't, so it doesn't, IMNSHO.

PLs are sufficiently tightly tied to the core that it's probably 
easier to maintain them as part of our core CVS than otherwise.
(Ask Joe Conway about PL/R.  Thomas Hallgren is probably not that
happy about maintaining pl/java out of core, either.  And pl/perl
*is* in core.)
 

And we need the core support. I appreciate having the support and help 
of Tom, Joe, Bruce and others. I have little doubt Joshua Drake feels 
the same way.

I'm thinking that a pl/PHP is much more interesting for the long term
than, say, pl/tcl (mind you, I am a Tcl partisan from way back, but
I see that many people are not so enlightened).  Barring any licensing
problems I think this is something to pursue.
 

Yes, I regard it as an abomination unto man and god, but others want it. 
:-) If there are no license or build issues I'm in favor. Quite apart 
from anything else it might help grab some market share from all those 
apps built on php+mysql

One last thing: one of the enhancements in the wind for buildfarm is to 
run the PL tests. This will *only* be done for PLs that are in the core 
- I am not going to get into buildfarm having to run cvs update against 
more than one source. So if you want that to happen, keep the PLs where 
they are (and take on pl/php if possible). I'd also love to have pl/ruby 
- is that another one that is inhibited by license issues?

cheers
andrew
 

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-04 Thread Andrew Dunstan

Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
 

... If there are no license or build issues I'm in favor.
   

Peter has pointed out that the problem of circular dependencies is a
showstopper for integrating plPHP.  The build order has to be
Postgres
PHP (since its existing DB support requires Postgres to build)
plPHP
so putting #1 and #3 into the same package is a no go.  Which is too
bad, but I see no good way around it.
	
 

Oh. I didn't see that - I assumed that it had been fixed. (My email has 
been AWOL for 48 hours).

cheers
andrew
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-04 Thread Andrew Dunstan

Robert Treat wrote:
On Monday 04 April 2005 12:01, Tom Lane wrote:
 

Andrew Dunstan [EMAIL PROTECTED] writes:
   

... If there are no license or build issues I'm in favor.
 

Peter has pointed out that the problem of circular dependencies is a
showstopper for integrating plPHP.  The build order has to be
Postgres
PHP (since its existing DB support requires Postgres to build)
plPHP
so putting #1 and #3 into the same package is a no go.  Which is too
bad, but I see no good way around it.
   

AFAICT Peter's claim is false.  You can install plphp in the order of PHP, 
PostgreSQL, plPHP, which is the same for all of the other pl's.

You don't need postgresql installed before php any more than you need it 
installed for perl (although you do need postgresql installed to compile some 
of the perl  php db interfaces, but that is all after the fact.)
 


I am told that the difference is that PHP gives you a choice of 
statically or dynamically linked db support. By contrast, in Perl, for 
example, DBD::Pg is always built dynamically (AFAIK). Your assessment 
appears to be true for the (very common) case where PHP's client side db 
support is dynamically linked.

cheers
andrew
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] [HACKERS] Adding Reply-To: listname to Lists

2004-11-28 Thread Andrew Dunstan
Doug McNaught said:
 Marc G. Fournier [EMAIL PROTECTED] writes:

 No, the poster will still be included as part of the headers ... what
 happens, at least under Pine, is that I am prompted whther I want to
 honor the reply-to, if I hit 'y', then the other headers *are* strip'd
and the mail is sent right back to the list ...

 I'm in the Reply-To considered harmful camp.  I also don't see any
 real evidence that the current setup is causing problems.



And the historical document referred to can be found here:

http://www.unicom.com/pw/reply-to-harmful.html

and an opposing view here:

http://www.metasystema.net/essays/reply-to.mhtml


cheers

andrew





Re: [GENERAL] [HACKERS] Remove MySQL Tools from Source?

2004-04-18 Thread Andrew Dunstan
Christopher Kings-Lynne said:
 But you would have to assign the copyright to them 

 If someone is going to make money from my code, I prefer it to be me,
 or  at least that everyone has a chance to do so rather than just one
 company.

 Well, then for the same reason we should write a Perl script that
 connects to MySQl and dumps in PGSql format.

 I think it's silly to try and read a MySQL dump and convert it - let's
 just dump straight from the source.

 Josh - I'm kind of keen to make this happen...


You might want to check out the DBIx::DB_Schema module at
http://search.cpan.org/~ivan/DBIx-DBSchema-0.23/DBSchema.pm

cheers

andrew



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] [HACKERS] Remove MySQL Tools from Source?

2004-04-18 Thread Andrew Dunstan


Shachar Shemesh wrote:

Tom Lane wrote:

These tools are a not insignificant part of our Plan for World
Domination ;-) so it would be good if somebody stepped up to the
plate and volunteered to take care of 'em.  Anybody?
 

Which brings me to another question

I have a bunch of perl scripts, as well as one user-defined type, for 
porting from SQL Server. Where should I place these?

Inside the PG source seems wrong. Then again, gborg does not seem 
to be accepting new projects at the moment. I can put them on 
sourceforge/berlios etc, and ask for a link, if you like.




The replacement for gborg should be available within a few days.

cheers

andrew



Re: [GENERAL] [HACKERS] Remove MySQL Tools from Source?

2004-04-18 Thread Andrew Dunstan


Andrew Dunstan wrote:

Christopher Kings-Lynne said:
 

But you would have to assign the copyright to them 

If someone is going to make money from my code, I prefer it to be me,
or  at least that everyone has a chance to do so rather than just one
company.
 

Well, then for the same reason we should write a Perl script that
connects to MySQl and dumps in PGSql format.
I think it's silly to try and read a MySQL dump and convert it - let's
just dump straight from the source.
Josh - I'm kind of keen to make this happen...

   

You might want to check out the DBIx::DB_Schema module at
http://search.cpan.org/~ivan/DBIx-DBSchema-0.23/DBSchema.pm
And this also looks cool - I just came across it searching for something 
else:

http://sqlfairy.sourceforge.net/

cheers

andrew

 



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] [HACKERS] Remove MySQL Tools from Source?

2004-04-17 Thread Andrew Dunstan


Jan Wieck wrote:

Christopher Kings-Lynne wrote:

... on projects.postgresql.org, or similar.They really aren't 
doing any good in /contrib.

I've already set up a category conversion tools on pgFoundry, and 
my idea was one project per target system.


I reckon that by far the best way to do a mysql2pgsql converter is to 
just modify mysqldump C source code to output in postgresql format! 


... and contribute it to MySQL :-)

But you would have to assign the copyright to them 

If someone is going to make money from my code, I prefer it to be me, or 
at least that everyone has a chance to do so rather than just one company.

cheers

andrew
