Re: [BUGS] BUG #8293: There are no methods to convert json scalar text to text in v9.3 beta2

2013-08-02 Thread Andrew Dunstan


On 08/02/2013 01:04 PM, Bruce Momjian wrote:

On Wed, Jul 10, 2013 at 07:07:54PM +, jaroslav.pota...@gmail.com wrote:

The following bug has been logged on the website:

Bug reference:  8293
Logged by:  Yaroslav Potapov
Email address:  jaroslav.pota...@gmail.com
PostgreSQL version: Unsupported/Unknown
Operating system:   All
Description:

SELECT '"a\"b"'::json::text


returns the text value '"a\"b"',
but in my opinion it should return 'a"b'.

I see you didn't get a reply, so let me try.  I am no JSON expert, but I
think what is happening is that the system stores "a\"b" because that is
what a JSON/Javascript interpreter would need to understand that value.
It would convert "a\"b" to a"b.  If we just stored a"b, the interpreter
would throw an error on input.



Well, yes, although the shorter answer is simply that we would not be 
storing legal JSON, which is defined by a standard, not by the 
requirements of interpreters.



There is no specific cast to text for json. The cast therefore calls the 
type's output function, which of course delivers the json string. To do 
as the OP suggests would require us to treat JSON scalar strings as 
special, since we would certainly not want to de-escape any JSON that
wasn't just a scalar string. For example, removing quotes or backslashes
in the following would be a major error:


   select '{"\"a": "b\"c"}'::json::text;

IOW, this isn't a bug in my view.

What we should possibly provide is a function to de-escape JSON scalar 
strings explicitly. It would be a simple extension to write, 
particularly for 9.3 where the JSON parser is hookable. (Or it could 
easily be added as a core function, of course.)
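
For illustration, here is a minimal sketch (mine, not part of the original
message) of the behaviour in question. The ->> operator is the 9.3-era way to
get a de-escaped scalar out of a containing object; a function doing the same
for a bare scalar string is what is being proposed above:

   -- The cast just invokes json's output function, so the stored JSON text
   -- comes back verbatim, quotes and backslash escapes included:
   select '"a\"b"'::json::text;           -- "a\"b"

   -- De-escaping already happens when a value is extracted as text from a
   -- containing structure, e.g. with the ->> operator:
   select '{"k": "a\"b"}'::json ->> 'k';  -- a"b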


cheers

andrew








Re: [BUGS] BUG #8271: Configure warning: sys/ucred.h: present but cannot be compiled

2013-07-25 Thread Andrew Dunstan


On 07/25/2013 09:48 AM, Tom Lane wrote:

Andres Freund  writes:

Before that commit the check for cmsgcred, which includes sys/ucred.h,
happened to include params.h... Patch attached, missing the configure
update (I don't have a compatible autoconf on my laptop), to keep
the diff minimal.

Could somebody apply the fix (including regenerating /configure)?

The proposed patch seems a bit overcomplicated --- isn't the real
problem that I changed the ordering of the header probes in
be4585b1c27ac5dbdd0d61740d18f7ad9a00e268?  I think I just alphabetized
them in a fit of neatnik-ism, not realizing that there were order
dependencies on some platforms.





It looks to me like you didn't reorder anything; you added a test for
sys/ucred.h.


I haven't seen the proposed patch, though.

cheers

andrew




Re: [BUGS] BUG #8271: Configure warning: sys/ucred.h: present but cannot be compiled

2013-07-01 Thread Andrew Dunstan


On 07/01/2013 05:35 PM, Peter Eisentraut wrote:

On 7/1/13 9:19 AM, Tom Lane wrote:

AFAICT, the result in this case would be that the script comes to the
wrong conclusion about whether ucred.h is available.  Wouldn't that
result in a build failure, or at least missing features?  IOW, don't
we need to fix this test anyway?

The test needs to be fixed, but with a newer Autoconf version we would
(probably) have been alerted about that by a build failure rather than
someone scanning build logs.


I take it you mean a configure failure would occur with a later Autoconf.

cheers

andrew




Re: [BUGS] BUG #8271: Configure warning: sys/ucred.h: present but cannot be compiled

2013-06-30 Thread Andrew Dunstan


On 06/30/2013 11:07 AM, Andres Freund wrote:

On 2013-06-30 10:17:50 -0400, Andrew Dunstan wrote:

On 06/30/2013 09:49 AM, Tom Lane wrote:

Andrew Dunstan  writes:

On 2013-06-30 15:17:20 +0200, Andres Freund wrote:

Andrew: Could we perhaps check for the "Report this to" bit in the
buildfarm?

I'm not sure what you're asking here.

I think he's wishing that if configure prints something like

configure: WARNING: sys/ucred.h: present but cannot be compiled
configure: WARNING: sys/ucred.h: check for missing prerequisite headers?
configure: WARNING: sys/ucred.h: see the Autoconf documentation
configure: WARNING: sys/ucred.h: section "Present But Cannot Be Compiled"
configure: WARNING: sys/ucred.h: proceeding with the preprocessor's result
configure: WARNING: sys/ucred.h: in the future, the compiler will take precedence
configure: WARNING: ##  ##
configure: WARNING: ## Report this to pgsql-bugs@postgresql.org ##
configure: WARNING: ##  ##

that that ought to be treated as a failure not a success.  I'm not
entirely sure that I agree, but it's an arguable position.

Exactly. That we presumably had this warning showing up for more than 2
years seems to indicate we should think about doing something different.


Oh. Well, if that's a failure then it's up to configure to treat it as one.
The buildfarm doesn't second-guess the exit status of the various steps, and
it doesn't report warnings - if it did we'd be flooded.

I guess we don't want to do that because it would probably hurt people
building in unusual environments where some variants of this very well
can show up without stopping pg from being built. Many people hitting such
problems will have no difficulty fixing a minor compilation error, but
fixing configure.in and installing the correct autoconf version is a
higher barrier.
We could add a --strict-mode or so to configure, but AFAIR the handling
of that warning is buried in autoconf itself, making this harder. So
I thought adding some grep like thing to the buildfarm might be the
easiest solution.




But that *would* be second-guessing configure's exit status.

I don't understand the reference to autoconf - nobody building Postgres,
including buildfarm members, needs autoconf installed at all. Only
developers and committers need it, and then only when configure.in is
changed.


cheers

andrew




Re: [BUGS] BUG #8271: Configure warning: sys/ucred.h: present but cannot be compiled

2013-06-30 Thread Andrew Dunstan


On 06/30/2013 09:49 AM, Tom Lane wrote:

Andrew Dunstan  writes:

On 2013-06-30 15:17:20 +0200, Andres Freund wrote:

Andrew: Could we perhaps check for the "Report this to" bit in the
buildfarm?

I'm not sure what you're asking here.

I think he's wishing that if configure prints something like

configure: WARNING: sys/ucred.h: present but cannot be compiled
configure: WARNING: sys/ucred.h: check for missing prerequisite headers?
configure: WARNING: sys/ucred.h: see the Autoconf documentation
configure: WARNING: sys/ucred.h: section "Present But Cannot Be Compiled"
configure: WARNING: sys/ucred.h: proceeding with the preprocessor's result
configure: WARNING: sys/ucred.h: in the future, the compiler will take precedence
configure: WARNING: ##  ##
configure: WARNING: ## Report this to pgsql-bugs@postgresql.org ##
configure: WARNING: ##  ##

that that ought to be treated as a failure not a success.  I'm not
entirely sure that I agree, but it's an arguable position.



Oh. Well, if that's a failure then it's up to configure to treat it as 
one. The buildfarm doesn't second-guess the exit status of the various 
steps, and it doesn't report warnings - if it did we'd be flooded.


cheers

andrew




Re: [BUGS] BUG #8271: Configure warning: sys/ucred.h: present but cannot be compiled

2013-06-30 Thread Andrew Dunstan


On 06/30/2013 09:20 AM, Andres Freund wrote:

On 2013-06-30 15:17:20 +0200, Andres Freund wrote:

Andrew: Could we perhaps check for the "Report this to" bit in the
buildfarm?

FWIW: I checked that there are no other reports on HEAD atm.




I'm not sure what you're asking here. Where exactly do you think
buildfarm failures should be reported? There are four mailing lists that
get buildfarm status reports:


 *  gets a summary of every single reported build
 *  gets a summary of every build that fails
 *  gets a summary of every build that results in a status change
 *  gets a summary of every build that results in a status change to or
   from green (a.k.a. OK)

These are available in digest form.

What we could possibly add is a feature to email a committer about a
commit included in the changeset of a failing build. The main trick
would be to avoid flooding the committers, so that a given commit would
only trigger one notification. Magnus has suggested something like this
previously, but I haven't looked at it much - I can look again. It might
not be too hard.


cheers

andrew







Re: [HACKERS] [BUGS] BUG #7656: PL/Perl SPI_freetuptable() segfault

2012-11-13 Thread Andrew Dunstan


On 11/13/2012 12:17 PM, Tom Lane wrote:

pgm...@joh.to writes:

I have a reproducible segmentation fault in PL/Perl.  I have yet to narrow
down the test case to something sensible, but I do have a backtrace:
219 while (context->firstchild != NULL)
(gdb) bt
#0  0x000104e90782 in MemoryContextDeleteChildren (context=0x102bd)
at mcxt.c:219
#1  0x000104e906a8 in MemoryContextDelete (context=0x102bd) at
mcxt.c:174
#2  0x000104bbefb5 in SPI_freetuptable (tuptable=0x7f9ae4289230) at
spi.c:1003
#3  0x00011ec9928b in plperl_spi_execute_fetch_result
(tuptable=0x7f9ae4289230, processed=1, status=-6) at plperl.c:2900
#4  0x00011ec98f27 in plperl_spi_exec (query=0x7f9ae4155f80
"0x7f9ae3e3fe50", limit=-439796840) at plperl.c:2821
#5  0x00011ec9b5f7 in XS__spi_exec_query (my_perl=0x7f9ae40cce00,
cv=0x7f9ae4148e90) at SPI.c:69
While trying to narrow down the test case I noticed what the problem was: I
was calling spi_execute_query() instead of spi_execute_prepared().

Hm.  It looks like SPI_execute failed as expected (note the status
passed to plperl_spi_execute_fetch_result is -6 which is
SPI_ERROR_ARGUMENT), but it did not reset SPI_tuptable, which led to
plperl_spi_execute_fetch_result trying to call SPI_freetuptable on what
was probably an already-deleted tuple table.

One theory we could adopt on this is that this is
plperl_spi_execute_fetch_result's fault and it shouldn't be trying to
free a tuple table unless status > 0.

Another theory we could adopt is that SPI functions that are capable of
setting SPI_tuptable ought to clear it at start, to ensure that they
return it as null on failure.

The latter seems like a "nicer" fix but I'm afraid it might have
unexpected side-effects.  It would certainly be a lot more invasive.



These aren't mutually exclusive, though, are they? It seems reasonable 
to do the minimal fix for the stable branches (looks like it's just a 
matter of moving the call up a couple of lines in plperl.c) and make the 
nicer fix just for the development branch.
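
For reference, a hypothetical sketch (names and values invented, not taken
from the report) of the kind of misuse described: a plan handle from
spi_prepare() handed to spi_exec_query(), which expects SQL text and a row
limit, so the underlying SPI_execute() fails (SPI_ERROR_ARGUMENT in the
reported case) instead of running the prepared plan:

   create or replace function bug7656_sketch() returns void language plperl as $f$
       my $plan = spi_prepare('select $1::int', 'int4');
       # Intended: spi_exec_prepared($plan, 42);
       # Mistake described in the report: the plan handle is treated as a
       # query string and the extra argument as a row limit, so the call
       # fails rather than executing the prepared plan.
       spi_exec_query($plan, 42);
   $f$;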


cheers

andrew






Re: [HACKERS] Re: [BUGS] 9.2beta1 regression: pg_restore --data-only does not set sequence values any more

2012-05-25 Thread Andrew Dunstan



On 05/21/2012 02:59 PM, Andrew Dunstan wrote:



On 05/16/2012 10:23 AM, Andrew Dunstan wrote:



On Wed, May 16, 2012 at 9:08 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:


Martin Pitt <mp...@debian.org> writes:
> while packaging 9.2 beta 1 for Debian/Ubuntu the postgresql-common
> test suite noticed a regression: It seems that pg_restore --data-only
> now skips the current value of sequences, so that in the upgraded
> database the sequence counter is back to the default.

I believe this is a consequence of commit
a4cd6abcc901c1a8009c62a27f78696717bb8fe1, which introduced the entirely
false assumption that --schema-only and --data-only have something to
do with the order that entries appear in the archive ...



Darn, will investigate.




[cc -hackers]

Well, the trouble is that we have these pesky SECTION_NONE entries for 
things like comments, security labels and ACLs that need to be dumped 
in the right section, so we can't totally ignore the order. But we 
could (and probably should) ignore the order for making decisions 
about everything BUT those entries.


So, here's a revised plan:

--section=data will dump exactly TABLE DATA, SEQUENCE SET or BLOBS 
entries
--section=pre-data will dump SECTION_PRE_DATA items (other than 
SEQUENCE SET) plus any immediately following SECTION_NONE items.

--section=post-data will dump everything else.






It turns out there were some infelicities with pg_dump as well as with 
pg_restore.


I think the attached patch does the right thing. I'll keep testing - 
I'll be happier if other people bang on it too.


cheers

andrew
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***
*** 2341,2354  _tocEntryRequired(TocEntry *te, RestoreOptions *ropt, bool include_acls)
  	if (!ropt->createDB && strcmp(te->desc, "DATABASE") == 0)
  		return 0;
  
! 	/* skip (all but) post data section as required */
! 	/* table data is filtered if necessary lower down */
  	if (ropt->dumpSections != DUMP_UNSECTIONED)
  	{
! 		if (!(ropt->dumpSections & DUMP_POST_DATA) && te->inPostData)
! 			return 0;
! 		if (!(ropt->dumpSections & DUMP_PRE_DATA) && ! te->inPostData && strcmp(te->desc, "TABLE DATA") != 0)
  			return 0;
  	}
  
  
--- 2341,2365 
  	if (!ropt->createDB && strcmp(te->desc, "DATABASE") == 0)
  		return 0;
  
! 	/* 
! 	 * Skip pre and post data section as required 
! 	 * Data is filtered if necessary lower down 
! 	 * Sequence set operations are in the pre data section for parallel
! 	 * processing purposes, but part of the data section for sectioning
! 	 * purposes.
! 	 * SECTION_NONE items are filtered according to where they are 
! 	 * positioned in the list of TOC entries.
! 	 */
  	if (ropt->dumpSections != DUMP_UNSECTIONED)
  	{
! 		if (!(ropt->dumpSections & DUMP_POST_DATA) &&  /* post data skip */
! 			((te->section == SECTION_NONE && te->inPostData) || 
! 			  te->section == SECTION_POST_DATA))
  			return 0;
+ 		if (!(ropt->dumpSections & DUMP_PRE_DATA) &&  /* pre data skip */
+ 			((te->section == SECTION_NONE && ! te->inPostData) || 
+ 			 (te->section == SECTION_PRE_DATA && strcmp(te->desc, "SEQUENCE SET") != 0)))
+ 			return 0;			
  	}
  
  
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
***
*** 7096,7101  dumpDumpableObject(Archive *fout, DumpableObject *dobj)
--- 7096,7103 
  
  	switch (dobj->objType)
  	{
+ 		case DO_TABLE:
+ 			break; /* has its own controls */
  		case DO_INDEX:
  		case DO_TRIGGER:
  		case DO_CONSTRAINT:
***
*** 12075,12081  dumpTable(Archive *fout, TableInfo *tbinfo)
  
  		if (tbinfo->relkind == RELKIND_SEQUENCE)
  			dumpSequence(fout, tbinfo);
! 		else if (!dataOnly)
  			dumpTableSchema(fout, tbinfo);
  
  		/* Handle the ACL here */
--- 12077,12083 
  
  		if (tbinfo->relkind == RELKIND_SEQUENCE)
  			dumpSequence(fout, tbinfo);
! 		else if (dumpSections & DUMP_PRE_DATA)
  			dumpTableSchema(fout, tbinfo);
  
  		/* Handle the ACL here */
***
*** 13291,13297  dumpSequence(Archive *fout, TableInfo *tbinfo)
  	 *
  	 * Add a 'SETVAL(seq, last_val, iscalled)' as part of a "data" dump.
  	 */
! 	if (!dataOnly)
  	{
  		/*
  		 * DROP must be fully qualified in case same name appears in
--- 13293,13299 
  	 *
  	 * Add a 'SETVAL(seq, last_val, iscalled)' as part of a "data" dump.
  	 */
! 	if (dumpSections & DUMP_PRE_DATA)
  	{
  		/*
  		 * DROP must be fully qualified in case same name appears in
***
*** 13412,13418  dumpSequence(Archive *fout, TableInfo *tbinfo)
  	 tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId);
  	}
  
! 	if (!sc

Re: [BUGS] 9.2beta1 regression: pg_restore --data-only does not set sequence values any more

2012-05-21 Thread Andrew Dunstan



On 05/16/2012 10:23 AM, Andrew Dunstan wrote:



On Wed, May 16, 2012 at 9:08 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:


Martin Pitt <mp...@debian.org> writes:
> while packaging 9.2 beta 1 for Debian/Ubuntu the postgresql-common
> test suite noticed a regression: It seems that pg_restore --data-only
> now skips the current value of sequences, so that in the upgraded
> database the sequence counter is back to the default.

I believe this is a consequence of commit
a4cd6abcc901c1a8009c62a27f78696717bb8fe1, which introduced the entirely
false assumption that --schema-only and --data-only have something to
do with the order that entries appear in the archive ...



Darn, will investigate.




[cc -hackers]

Well, the trouble is that we have these pesky SECTION_NONE entries for 
things like comments, security labels and ACLs that need to be dumped in 
the right section, so we can't totally ignore the order. But we could 
(and probably should) ignore the order for making decisions about 
everything BUT those entries.


So, here's a revised plan:

--section=data will dump exactly TABLE DATA, SEQUENCE SET or BLOBS 
entries
--section=pre-data will dump SECTION_PRE_DATA items (other than 
SEQUENCE SET) plus any immediately following SECTION_NONE items.

--section=post-data will dump everything else.

Comments?


cheers

andrew



Re: [BUGS] 9.2beta1 regression: pg_restore --data-only does not set sequence values any more

2012-05-16 Thread Andrew Dunstan
On Wed, May 16, 2012 at 9:08 AM, Tom Lane  wrote:

> Martin Pitt  writes:
> > while packaging 9.2 beta 1 for Debian/Ubuntu the postgresql-common
> > test suite noticed a regression: It seems that pg_restore --data-only
> > now skips the current value of sequences, so that in the upgraded
> > database the sequence counter is back to the default.
>
> I believe this is a consequence of commit
> a4cd6abcc901c1a8009c62a27f78696717bb8fe1, which introduced the entirely
> false assumption that --schema-only and --data-only have something to
> do with the order that entries appear in the archive ...
>
>
>

Darn, will investigate.

cheers

andrew


Re: [BUGS] [HACKERS] COPY .... WITH (FORMAT binary) causes syntax error at or near "binary"

2011-07-05 Thread Andrew Dunstan



On 07/05/2011 11:23 AM, Robert Haas wrote:


Yeah.  In particular, it conflicts with the ancient copy syntax which
we still support for backwards compatibility with versions < 7.3.  We
can fix the immediate problem with something like the attached.

(a) Should we do that?


yes.


(b) Should we back-patch it to 9.1 and 9.0?


yes.


(c) Should we consider removing compatibility with the ancient copy
syntax in 9.2, and de-reserving that keyword?  (Given that the
workaround is this simple, I'm inclined to say "no", but could be
persuaded otherwise.)





I'm inclined to say yes, but mainly because it's just old cruft. I don't
expect to be able, say, to load a pre-7.3 dump into a modern Postgres.
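
For context, a hedged sketch of the conflict (table name hypothetical; this
reflects the 9.0/9.1 behaviour described above, where BINARY is still a
keyword because of the pre-7.3 syntax). Spelling the option value as a string
literal is one way around it, since the generic option syntax accepts that:

   -- This was what failed with a syntax error at or near "binary":
   --   COPY mytable TO STDOUT WITH (FORMAT binary);
   -- Quoting the format name sidesteps the keyword conflict:
   COPY mytable TO STDOUT WITH (FORMAT 'binary');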


cheers

andrew



Re: [BUGS] auto-explain does not work with JSON & csvlog

2010-07-18 Thread Andrew Dunstan



Tom Lane wrote:

Anyway, you'll get the same "not safe" bleat for any message
logged during early postmaster startup.

Maybe we should just drop the "not safe" message.  It's not conveying
anything very helpful, I think.  The useful bit of the behavior is to
shove the original message out to stderr, which it's doing already.


  


I thought we agreed back in November to stop the bleating. Maybe you 
thought I'd remove it and I thought you would ;-).


cheers

andrew



Re: [HACKERS] [BUGS] Invalid YAML output from EXPLAIN

2010-06-07 Thread Andrew Dunstan



Robert Haas wrote:

On Mon, Jun 7, 2010 at 10:37 AM, Greg Sabino Mullane  wrote:
  

Tom Lane wrote:
I don't think the above would be particularly hard to implement myself,
but if it becomes a really big deal, we can certainly punt by simply
quoting anything containing an indicator (the special characters above).
It will still be 100% valid YAML, just with some excess quoting for the
very rare case when a value contains one of the special characters.



Since you're the main advocate of this feature, I think you should
implement it rather than leaving it to Tom or me.
  


Or anyone else :-)


The reason why I was initially skeptical of adding a YAML output
format is that JSON is a subset of YAML.  Therefore, the JSON output
format ought to be perfectly sufficient for anyone using a YAML
parser.  
  


There is some debate on this point, IIRC.
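
For reference, a small example of the two formats in question (9.0-era
EXPLAIN syntax); the point under debate is whether the JSON output, as an
(arguable) subset of YAML, would already have served YAML consumers:

   explain (format json) select 1;
   explain (format yaml) select 1;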

cheers

andrew



Re: [BUGS] Re: BUG #5065: pg_ctl start fails as administrator, with "could not locate matching postgres executable"

2009-10-21 Thread Andrew Dunstan



Magnus Hagander wrote:

From a quick look, it looks fine to me. I don't have time to do a
complete check right now, but I'll do that as soon as I can and then
commit it - unless people feel it's more urgent than maybe a week
worst case, in which case someone else has to pick it up :-)


  


I'd rather wait till you can check it.

cheers

andrew



Re: [BUGS] Re: BUG #5065: pg_ctl start fails as administrator, with "could not locate matching postgres executable"

2009-10-19 Thread Andrew Dunstan



Dave Page wrote:

On Fri, Oct 16, 2009 at 7:03 PM, Jesse Morris  wrote:
  

-Original Message-
From: Dave Page [mailto:dp...@pgadmin.org]
Sent: Friday, October 16, 2009 2:14 AM
To: Jesse Morris
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] Re: BUG #5065: pg_ctl start fails as administrator,
with "could not locate matching postgres executable"

The patch:

--begin patch--


:-(. Unfortunately inlining the patch in the email has munged it
beyond usability. Can you resend it as an attachment please?
  

Oops!  Re-sent, as an attachment.



Thanks. I've had a play with this, and it seems to work fine in 8.4.1
- at least, it doesn't seem to cause any regression that I can see
when testing in Vista or XP. I cannot reproduce the problem since I
wrote the original fix though, so I cannot confirm that this fixes any
new cases; we'll have to take your word for that :-)

The code around this has changed a little on -head. I don't have any
more spare cycles at the moment - are you able to produce an updated
patch for 8.5?

Andrew/Magnus; we do still see occasional failures of this nature, so
I believe there is still an issue here. Can we look at getting this
backpatched for 8.3.whatever and 8.4.2, assuming it looks good to you
as well?

  


It looks OK to me (modulo the incorrect changing of "its" to "it's" in a 
comment - whoever did that was trying to make it consistent, but 
unfortunately made it consistently wrong).


However, I'd like a bit more comment added on just why doing this is 
safe. Would it still be safe if someone granted some dangerous privilege 
directly to the Administrator user, if that's possible?


cheers

andrew



Re: [HACKERS] Re: [BUGS] BUG #4796: Recovery followed by backup creates unrecoverable WAL-file

2009-05-15 Thread Andrew Dunstan



Simon Riggs wrote:

On Fri, 2009-05-15 at 10:17 -0400, Andrew Dunstan wrote:

  
This whole area is unfortunately way too fragile. We need some way of
managing these facilities that hides a lot of these details and is
therefore less likely to produce shot feet, IMNSHO. I get very nervous
every time I have to touch it.



I think it is complex, though that is because we now support a huge
number of use cases and options, to the benefit of many users. In fact,
more than I would like, but this is a group project.

Not sure why you say it's fragile; there have been very few bugs
considering the wide user base and those that have occurred have had
fixes submitted for them quickly. Yes, we require you to actually read
the docs, rather than open up psql and play, but this is business
critical stuff.

Realistically, we have more developers on this part of the code now than
any other. That's one reason for all the debate.

No problem in receiving feedback, just want to be able to understand it
sufficiently well to be able to enhance it.

  


I don't mean that it has bugs. I mean that it's far too easy to get it 
wrong and far too hard to get it right. I have reduced my uses to a 
couple of cases where I have worked out, with some trial and error, 
recipes that I follow. If I find these facilities complex to use, and I 
make virtually 100% of my living working with Postgres, what are more 
ordinary users going to say? That's why I think we need at the very 
least some tools for supporting the most common use cases, and hiding 
the messy details.


And no, I haven't even begun to think of what such tools might look like.

cheers

andrew





Re: [HACKERS] Re: [BUGS] BUG #4796: Recovery followed by backup creates unrecoverable WAL-file

2009-05-15 Thread Andrew Dunstan



Simon Riggs wrote:

On Fri, 2009-05-15 at 22:56 +0900, Fujii Masao wrote:

  

OK, I probably understood your point. The timeline history files whose
timeline ID is larger than that of an oldest backup must not be deleted
from the archive. On the other hand, the smaller or equal one can be
deleted. Not all history files are necessary. So, if we don't keep older
backup, we probably can delete all files in the archive before
pg_start_backup().
Is my understanding right?



Heikki is right in one sense: if you do pg_start_backup() then for
*that* backup you do not need earlier files. 


However, as you have pointed out, if you have *multiple* backups then
deleting history files may cause problems with an earlier backup.

It's standard practice to have >1 backup, so there is potential for
error, and at a minimum we must document that.


Rather than explaining the problem and the rules by which we can work
out exactly which history files to keep, I think it is safer to say that
we must keep all history files.

  


This whole area is unfortunately way too fragile. We need some way of 
managing these facilities that hides a lot of these details and is 
therefore less likely to produce shot feet, IMNSHO. I get very nervous 
every time I have to touch it.


cheers

andrew



Re: [BUGS] plperl & sort

2008-11-04 Thread Andrew Dunstan



Alex Hunsaker wrote:

On Tue, Nov 4, 2008 at 15:02, Alex Hunsaker <[EMAIL PROTECTED]> wrote:
  

On Tue, Nov 4, 2008 at 14:43, Andrew Dunstan <[EMAIL PROTECTED]> wrote:


But by all means if you can come up with a robust way of allowing
  

the more traditional way of calling sort routines, send it in.
  

Well, it's not just sort, it's anything that uses main::, right?



Err, no, you're right, it's only builtins that use main::, sort being the
only one I know of off the top of my head... it's a shame
PLContainer->share('$main::a'); does not seem to work...
  



$a and $b are magical *package* variables. See "perldoc perlvar". This 
has nothing whatever to do with main::


cheers

andrew



Re: [BUGS] plperl & sort

2008-11-04 Thread Andrew Dunstan



Alex Hunsaker wrote:

On Tue, Nov 4, 2008 at 12:43, Alex Hunsaker <[EMAIL PROTECTED]> wrote:
  

It has something to do with anon subs, not sure what...



It has to do with us returning the anonymous sub inside of the Safe
and then calling the function outside of the Safe (or at least in a
different namespace).

We do something equivalent to this:
my $func_ptr = $safe->reval('sub { ... }');
$func_ptr->();

because Safe makes its own namespace. From perldoc Safe:

    The "root" of the namespace (i.e. "main::") is changed to a
    different package and code evaluated in the compartment cannot
    refer to variables outside this namespace, even with run-time
    glob lookups and other tricks.

I only see one way to "fix" this which is to do something groddy like
share a global variable between the safe and the real interpreter.
Something like:

my $_pl_sub;
sub call_pl_sub
{
    return $_pl_sub;
}

$safe->share(qw(call_pl_sub));

my $sub = $safe->reval('sub { ... }');

$_pl_sub = $sub;
$safe->reval('call_pl_sub();');

Note I tried just sharing $_pl_sub and doing
$safe->reval('$_pl_sub->()'); but I just get 'Undefined subroutine
&main::'

Should I work up a patch? Assuming someone can confirm this?

  


OK, the first thing to note is that there is an easy workaround, which 
is to use a sort routine that doesn't need $a/$b. Example:


   create or replace function mysort() returns text language plperl as $f$
   my $sfunc = sub ($$) { $_[0] <=> $_[1] };
   my @vals = (5,3,4,2,7);
   return join(' ', sort $sfunc @vals);
   $f$;
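
A quick usage check of the workaround (the output comment is what I'd expect,
not taken from the original message):

   select mysort();   -- 2 3 4 5 7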

We need to document that, and given that this exists I think we don't 
need to backpatch old versions.


Beyond that, we need to be very careful that any "solution" doesn't upset
the moderately fragile security of trusted plperl, and I'm going to look
fairly skeptically at anything that changes the way we set up and call
functions. But by all means, if you can come up with a robust way of
allowing the more traditional way of calling sort routines, send it in.
Sharing globals between the Safe and non-Safe worlds is not a solution -
we removed an instance of that not long ago for security reasons.


cheers

andrew




Re: [BUGS] [HACKERS] 0x1A in control file on Windows

2008-09-24 Thread Andrew Dunstan



Tom Lane wrote:

The point being that the config files are opened with AllocateFile(),
which in turn calls fopen(). It doesn't use open(). The proposal was
only to make all *open()* calls do it binary. I was under the impression
that on Unix, that's what open() did, so we should behave the same?



That seems just weird.  I do not think there's any correlation between
whether we use open or fopen and whether the file is text or binary.
Even if it happens to be true right now, depending on it would be
fragile.


  


I agree. If you really want something like that you should invent 
OpenConfigFile() or some such. But it hardly seems worth it.


cheers

andrew



Re: [BUGS] [HACKERS] 0x1A in control file on Windows

2008-09-23 Thread Andrew Dunstan



Tom Lane wrote:

Bruce Momjian <[EMAIL PROTECTED]> writes:
  

Tom Lane wrote:


Well, why is that a bug?  If the platform is so silly as to define text
files that way, who are we to argue?
  


  

The problem is that our pg_controldata might have binary values that
contain 0x1a, which the operating system will interpret as
end-of-file.



pg_controldata is certainly already being read as binary. 


Umm, no, it is in the backend I believe but not in the utilities. Hence 
the original bug report. We need to add the binary flag in 
pg_controldata.c and pg_resetxlog.c.



The discussion here is about *text* files, particularly configuration
files.  Why should we not adhere to the platform standard about
what a text file is?

If you need a positive reason why this might be a bad idea, consider the
idea that someone is examining postgresql.conf with a text editor that
stops reading at control-Z.  He might not be able to see items that the
postmaster is treating as valid.


  


Yes, exactly right. We certainly can't just open everything in binary 
mode. Magnus did say that all the current config files are opened in 
text mode as far as he could see.


cheers

andrew



Re: [BUGS] [HACKERS] 0x1A in control file on Windows

2008-09-19 Thread Andrew Dunstan



Magnus Hagander wrote:

I had a chat with Heikki about this, and the proper way to fix it.

Is there actually any reason not to *always* open our files with
O_BINARY? That seems like it would mimic what Unix does, which is
what we expect, no?

If that is so, then I propose we do that for 8.4, and just backpatch the
O_BINARY flag to these two locations for 8.3 and 8.2. Thoughts?


  


ISTR there are a few places where we want CRLF translation (config files?)

I'd be fairly conservative about making changes like this.

cheers

andrew



Re: [HACKERS] [BUGS] possible bug windows setup

2008-02-07 Thread Andrew Dunstan



Magnus Hagander wrote:


I have a patch working for me, I've sent it over to Gevik for testing in
his environment. Attached here if somebody else wants to play.

  


Looks OK.

cheers

andrew



Re: [HACKERS] [BUGS] possible bug windows setup

2008-02-07 Thread Andrew Dunstan



Magnus Hagander wrote:

On Wed, Feb 06, 2008 at 02:59:31PM +0100, Gevik Babakhani wrote:
  

I might be very wrong, but when I try to install 8.3 on Windows with NLS
options selected, no share/locale files are installed. Could someone please
test or confirm this?



Yes, it's broken. It seems the change in Install.pm rev 1.20 to use
File::Find instead of external dir broke this, and was never tested at all
:-(

I know I build without NLS enabled, and it seems so does everybody else who
regularly builds the msvc stuff.

There is no testing at all of the NLS stuff in the regression tests.
Perhaps it would be a good idea to do that? It could be something as simple as
launching psql in a way that generates a syntax error and making sure it
matches a proper translation.


(and yes, I'm working on a patch for the actual issue)


  


Oops, my bad!

I think if we just change the forward slashes to backslashes at the top
of that loop it should work, but I haven't had time to test.


cheers

andrew




Re: [HACKERS] [BUGS] BUG #3799: csvlog skips some logs

2007-12-10 Thread Andrew Dunstan



Alvaro Herrera wrote:

Tom Lane wrote:
  

Andrew Dunstan <[EMAIL PROTECTED]> writes:


Tom Lane wrote:
  

Well, if we want to cram all that stuff in there, how shall we do it?
It seems wrong to put all those lines into one text field, but I'm
not sure I want to add six more text fields to the CSV format
either.  Thoughts?

Really? Six? In any case, would that be so bad? It would mean six extra 
commas per line in the log file, and nothing much in the log table 
unless there were content in those fields.
  

Yeah --- the lines output in the plain-stderr case that are not covered
in the other are

DETAIL
HINT
QUERY       (this is an internally-generated query that failed)
CONTEXT     (think "stack trace")
LOCATION    (reference to code file/line reporting the error)
STATEMENT   (user query that led to the error)



Here is a patch to do this.  It emits all of these as separate columns,
which are output empty if they are not present.  Of course, the commas
are emitted all the time.
  


Thanks. I will look at it in detail later today.

I'm not sure I understand what this comment, which I noticed on a very brief
glance, is about:


 /* assume no newlines in funcname or filename... */

If it's about what to quote, we need to quote anything that might contain a 
newline, quote or comma. Filenames certainly come into that category.

cheers

andrew






Re: [HACKERS] [BUGS] BUG #3799: csvlog skips some logs

2007-12-09 Thread Andrew Dunstan



Alvaro Herrera wrote:

Andrew Dunstan wrote:

  
OK, works for me. I'll try to look at it after I have attended to the 
Windows build issues. My plate is pretty full right now, though.



FYI I'm having a look at it now.

  


Great. Thanks.

cheers

andrew



Re: [HACKERS] [BUGS] BUG #3799: csvlog skips some logs

2007-12-08 Thread Andrew Dunstan



Tom Lane wrote:

Andrew Dunstan <[EMAIL PROTECTED]> writes:
  

Tom Lane wrote:


One issue here is that CONTEXT is potentially multiple lines.  I'm not
sure that there is much we can do about that, especially not at the last
minute.  If we had some time to rewrite internal APIs it might be fun to
think about emitting that as array of text not just text, but I fear
it's much too late to consider that now.
  


  
I'm not sure that putting all this into a single extra field would be so 
wrong. As for an array of text, that doesn't seem very portable. I don't 
think we should assume that Postgres is the only intended program 
destination of CSV logs.



Well, I don't see that "{some text,more text,yet more text}" is going
to be harder to cram into the average CSV-reader than "some text
more text
yet more text".  However, in most cases split_to_array on newlines
would be a good enough way of deconstructing the field in Postgres,
so maybe it's not worth worrying about.

Anyway, I think that we should just make the CSV fields be the same as
the existing divisions in the textual log format, which seem to have
stood up well enough in use since 7.4 or whenever we put that scheme in.


  


OK, works for me. I'll try to look at it after I have attended to the 
Windows build issues. My plate is pretty full right now, though.
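
As an aside, Tom's point above about deconstructing a multi-line CONTEXT field
inside Postgres could look something like this (table and column names
hypothetical, and regexp_split_to_array standing in for the split function he
alludes to):

   select regexp_split_to_array(context, E'\n')
     from postgres_log
    where context is not null;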


cheers

andrew



Re: [HACKERS] [BUGS] BUG #3799: csvlog skips some logs

2007-12-08 Thread Andrew Dunstan



Tom Lane wrote:

Andrew Dunstan <[EMAIL PROTECTED]> writes:
  

Tom Lane wrote:


Well, if we want to cram all that stuff in there, how shall we do it?
It seems wrong to put all those lines into one text field, but I'm
not sure I want to add six more text fields to the CSV format
either.  Thoughts?
  


  
Really? Six? In any case, would that be so bad? It would mean six extra 
commas per line in the log file, and nothing much in the log table 
unless there were content in those fields.



Yeah --- the lines output in the plain-stderr case that are not covered
in the other are

DETAIL
HINT
QUERY       (this is an internally-generated query that failed)
CONTEXT     (think "stack trace")
LOCATION    (reference to code file/line reporting the error)
STATEMENT   (user query that led to the error)

One issue here is that CONTEXT is potentially multiple lines.  I'm not
sure that there is much we can do about that, especially not at the last
minute.  If we had some time to rewrite internal APIs it might be fun to
think about emitting that as array of text not just text, but I fear
it's much too late to consider that now.
  


I'm not sure that putting all this into a single extra field would be so 
wrong. As for an array of text, that doesn't seem very portable. I don't 
think we should assume that Postgres is the only intended program 
destination of CSV logs.



Another thing that I notice is that the CSV code emulates a couple of
not-very-orthogonal behaviors of send_message_to_server_log():
substituting "missing error text" for a NULL error field, and emitting
cursor pos as a tack-on to the error text instead of a separate field.
I think both of those are less than defensible.  So if you're willing
to entertain redefining the CSV column set at this late date, I'd
propose throwing in a seventh new field too: CURSORPOS.
  


Seems like over-egging the pudding to me somewhat, but OK if we decide 
to go with a whole bunch of fields.



Another thing: for stderr output, we have various log verbosity options
that determine whether these additional fields get printed.  Should
those options also function in the CSV-output case, or are we just going
to do our best to exhaust disk space as fast as possible all the time?


  


I think for sanity's sake we need one (maximal) format. I'm not so 
concerned about disk space. It's not like this is the only logging 
option available.
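
To make the trade-off concrete: a fixed, maximal column set is what lets the
log be loaded straight into a table whose definition matches it. A hedged
sketch (the column list is illustrative only, not the final set being decided
in this thread):

   -- create table postgres_log (log_time timestamptz, user_name text, ...,
   --     detail text, hint text, context text, statement text, ...);
   copy postgres_log from '/path/to/server_log.csv' with csv;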



cheers

andrew



Re: [HACKERS] [BUGS] BUG #3799: csvlog skips some logs

2007-12-06 Thread Andrew Dunstan



Tom Lane wrote:

Andrew Dunstan <[EMAIL PROTECTED]> writes:
  
I can't see any very good reason for text logs to have different 
content from CSV logs.



Well, if we want to cram all that stuff in there, how shall we do it?
It seems wrong to put all those lines into one text field, but I'm
not sure I want to add six more text fields to the CSV format
either.  Thoughts?


  


Really? Six? In any case, would that be so bad? It would mean six extra 
commas per line in the log file, and nothing much in the log table 
unless there were content in those fields.


cheers

andrew



Re: [HACKERS] [BUGS] BUG #3799: csvlog skips some logs

2007-12-06 Thread Andrew Dunstan



Tom Lane wrote:

"depesz" <[EMAIL PROTECTED]> writes:
  

Description:        csvlog skips some logs



The point here is that CSV-format log output doesn't include the
DETAIL, HINT, or context (QUERY/STATEMENT/CONTEXT) lines that
you might get with normal output.

I suppose this was intentional in order to keep the CSV output
format manageable, but I have to wonder whether it's really a
good idea.  I can see the argument that you probably don't need
to log HINTs, but the other stuff might be important.  Particularly
the STATEMENT.

Comments?


  


I don't recall any such conscious intention - not sure about others
whose fingers have been in the pie. More likely it's just an oversight. In
general, I'd say that the log content should be independent of the
format. I can't see any very good reason for text logs to have different 
content from CSV logs.


cheers

andrew



Re: [BUGS] BUG #3415: plperl spi_exec_prepared variable undef value confusion

2007-06-28 Thread Andrew Dunstan



Tom Lane wrote:

"Matt" <[EMAIL PROTECTED]> writes:
  

Description:        plperl spi_exec_prepared variable undef value confusion



[ pokes at it ... ]  Some of the places in plperl.c that are checking for
undef values use code like

if (SvOK(val) && SvTYPE(val) != SVt_NULL)

and some just test the SvTYPE part.  It looks to me like the SvOK test
is essential --- in fact I'm not sure the SvTYPE test is even bringing
anything to the party.  Any perl-extension gurus around here?


  


The perlapi docs explicitly state that one should always use SvOK() to 
check for undef. IIRC some SvOK() tests were added in some places where 
it was found to be necessary, and the old tests kept out of an abundance 
of caution, but a little googling suggests that you are correct.


cheers

andrew



Re: [HACKERS] Re: [BUGS] BUG #3242: FATAL: could not unlock semaphore: error code 298

2007-04-20 Thread Andrew Dunstan

Magnus Hagander wrote:


The effective max count on Unixen is typically in the thousands,
and I'd suggest the same on Windows unless there's some efficiency
reason to keep it small (in which case, maybe ten would do).



AFAIK there's no problem with huge numbers (it takes an int32, and the
documentation says nothing about a limit - I'm sure it's just a 32-bit
counter in the kernel). I'll give that a shot.

  


Linux manpage suggests local max is 32767, so that's probably a good 
value to try.


cheers

andrew



Re: [HACKERS] [BUGS] BUG #2873: Function that returns an empty set

2007-01-09 Thread Andrew Dunstan
Tom Lane wrote:

> This is closely related to the discussion a couple weeks ago about how
> a LEFT JOIN could produce nulls in an output column that was labeled as
> having a non-null-domain type.  We haven't figured out what is a sane
> behavior for that case, either.  I'm beginning to think that domains
> constrained not null are just fundamentally a bad idea.
>

I think we just expect left joins to produce nulls regardless of
constraints on the underlying cols, don't we? Concluding that not null in
domains is bad seems a bit drastic.
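
A minimal illustration of the case under discussion (names hypothetical; the
outer join has to produce a NULL for t2.v even though the column's domain says
NOT NULL):

   create domain dnn as int not null;
   create table t1 (id int);
   create table t2 (id int, v dnn);
   insert into t1 values (1);
   select t1.id, t2.v from t1 left join t2 on t1.id = t2.id;  -- v is NULL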

cheers

andrew




Re: [HACKERS] [BUGS] BUG #2683: spi_exec_query in plperl returns

2006-10-15 Thread Andrew Dunstan
Tom Lane wrote:
> I wrote:
>> It looks to me like basically everywhere in plperl.c that does newSVpv()
>> should follow it with
>>
>> #if PERL_BCDVERSION >= 0x5006000L
>> if (GetDatabaseEncoding() == PG_UTF8)
>> SvUTF8_on(sv);
>> #endif
>
> Experimentation proved that this was insufficient to fix Vitali's
> problem --- the string he's unhappy about is actually a hash key entry,
> and there's no documented way to mark the second argument of hv_store()
> as being a UTF-8 string.  Some digging in the Perl source code found
> that since at least Perl 5.8.0, hv_fetch and hv_store recognize a
> negative key length as meaning a UTF-8 key (ick!!), so I used that hack.
> I am not sure there is any reasonable fix available in Perl 5.6.x.
>
> Attached patch applied to HEAD, but I'm not going to risk back-patching
> it without some field testing.
>

Hmm. That negative pointer hack is mighty ugly.

I am also wondering, now that it's been raised, if we need to issue a "use
utf8;" in the startup code, so that literals in the code get the right
encoding.

cheers

andrew





Re: [PATCHES] [BUGS] BUG #2221: Bad delimiters allowed in COPY ...

2006-02-01 Thread Andrew Dunstan
David Fetter said:
> On Tue, Jan 31, 2006 at 08:03:41PM -0500, Bruce Momjian wrote:
>> Uh, couldn't the delimiter be a backslash in CSV mode?
>
> I don't think so.  Folks?

Using backslash as a delimiter in CSV would be odd, to say the least. As an
escape char it is occasionally used, but not as a delimiter in my
experience. Maybe we should apply the "be liberal in what you accept" rule,
but I think this would be stretching it.
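
For concreteness, a hedged sketch of what the checks being discussed reject on
current servers (table name hypothetical; exact error messages vary by
version):

   copy mytab from stdin with delimiter E'\n';  -- newline delimiter: rejected
   copy mytab from stdin with delimiter E'\\';  -- backslash in text mode: rejected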

>
> Anyhow, if there are different sets, I could do something like:
>
> #define BADCHARS "\r\n\\"
> #define BADCHARS_CSV "\r\n"
>
> and then check for csv_mode, etc.
>
>>  + #define BADCHARS "\r\n\\"
>>
>> Also, should we disable DELIMITER and NULL from sharing characters?
>
> That's on about line 916, post-patch:
>
>/* Don't allow the delimiter to appear in the null string. */
>if (strchr(cstate->null_print, cstate->delim[0]) != NULL)
>ereport(ERROR,
>(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
>errmsg("COPY delimiter must not appear in the NULL
>specification")));
>
> I suppose that a different error code might be The Right Thing™ here.
>

ERRCODE_WHAT_WERE_YOU_THINKING ?

cheers

andrew





Re: [PATCHES] [BUGS] BUG #2221: Bad delimiters allowed in COPY ...

2006-01-30 Thread Andrew Dunstan



David Fetter wrote:

 
+ 	/* Disallow BADCHARS characters */
+ 	if (strcspn(cstate->delim, BADCHARS) != 1)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				 errmsg("COPY delimiter cannot be \"%#02x\"",
+ 						*cstate->delim)));
+ 






Is  ERRCODE_FEATURE_NOT_SUPPORTED the right errcode? This isn't a 
missing feature; we are performing a sanity check here. We can 
reasonably expect never to support CR, LF or \ as the text delimiter. 
Maybe ERRCODE_INVALID_PARAMETER_VALUE ? Or maybe we need a new one.


Also, I would probably make the format %#.02x so the result would look 
like 0x0d (for a CR).


(I bet David never thought there would be so much fuss over a handful of
lines of code.)


cheers

andrew



Re: [BUGS] [pgsql-hackers-win32] Initdb failing for no apparent reason in

2005-01-09 Thread Andrew Dunstan

Steve McWilliams wrote:
Never mind, I found out what this was.  It turned out that the customer
machine in question had particularly heavy security settings, so the
enetaware account did not have permission to write into the directory
where it was trying to create PGDATA.  Once I widened the settings on the
parent directory it worked fine.  Kind of odd that initdb.exe just
fails silently when this is the case, however.
 

Very odd, since initdb calls chmod to fix the directory permissions if
it already exists, and creates it with those same permissions if it
doesn't (in both cases the permissions are 0700). If that isn't enough on
Windows, perhaps someone can tell us what is.

cheers
andrew


Re: [pgsql-hackers-win32] [BUGS] More SSL questions..

2005-01-04 Thread Andrew Dunstan

Matthew T. O'Connor wrote:
Tom Lane wrote:

If someone can whip up and test a WIN32 version of this, I'll take care
of the rest.
 

I can't do the coding, but I took a quick look at msdn and I think 
this is relevant:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/shellcc/platform/shell/reference/functions/shgetfolderpath.asp 

HRESULT SHGetFolderPath( HWND hwndOwner,
    int nFolder,
    HANDLE hToken,
    DWORD dwFlags,
    LPTSTR pszPath
);
Also, for nFolder, it looks like we want to use a value of
CSIDL_PROFILE (0x0028).

Er, I don't think so. MSDN says:
CSIDL_PROFILE (0x0028)
   Version 5.0. The user's profile folder. A typical path is 
C:\Documents and Settings\username. Applications should not create files 
or folders at this level; they should put their data under the locations 
referred to by CSIDL_APPDATA or CSIDL_LOCAL_APPDATA.

I think CSIDL_APPDATA is probably the way to go, but one of the heavy
Windows hitters will know better than I do.

cheers
andrew



Re: [pgsql-hackers-win32] [BUGS] postgresql 8.0b1 Win32 observations

2004-08-19 Thread Andrew Dunstan

Bruce Momjian wrote:
[EMAIL PROTECTED] wrote:
 

Hi Folks,
I installed postgresql 8.0b1 a few days back.
I am listing the issues which I came across;
these may be already known issues or may not
be bugs.
1. Automatic conversion to NTFS.
Many NT-based Windows installations may not
have formatted the drive with NTFS; I think
the installer may assist or at least suggest
that converting from FAT --> NTFS is a snap.
   

OK, installer issue?  I don't think we should have the postmaster
complain in its logs, right?
 

 

I've seen contrary opinions, but I don't see how anybody could 
contemplate running *any* server on FAT. But then, people use Notepad to 
write programs, too.

The installer should complain, but I guess that's it.
cheers
andrew