Re: [HACKERS] Add regression tests for DISCARD

2013-07-06 Thread Robins Tharakan
Thanks Fabrizio.

Although parallel_schedule was missed only for this specific patch, it seems I
missed serial_schedule completely (in all patches), so thanks for pointing this
out. Robert subsequently noticed the serial_schedule omission as well.

Please find attached an updated patch covering serial_schedule /
parallel_schedule, as well as the role name change per Robert's feedback on the
CREATE OPERATOR thread.
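
(For reference, the schedule change amounts to a one-line entry per file — a
minimal sketch, assuming the new test file is src/test/regress/sql/discard.sql:

test: discard

added as its own line in src/test/regress/serial_schedule, and either appended
to an existing parallel group or given its own "test:" line in
src/test/regress/parallel_schedule.)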


--
Robins Tharakan


On 2 July 2013 09:32, Fabrízio de Royes Mello wrote:

> On Mon, Jul 1, 2013 at 5:59 PM, Robins Tharakan wrote:
>
>>
>> Thanks Marko for pointing out about guc.sql.
>>
>> Please find attached a patch to move DISCARD related tests from guc.sql
>> to discard.sql. It adds an extra test for a DISCARD PLANS line, although I
>> am not sure how to validate that it's working.
>>
>> Personally, I wouldn't call this a great patch, since most of the tests
>> were already running, although in a generic script. The separation of
>> DISCARD related tests to another file is arguably good for the long-term
>> though.
>>
>
> Robins,
>
> You must add this new test case called "discard" to
> src/test/regress/parallel_schedule and src/test/regress/serial_schedule,
> because if we do "make check" the new "discard" test case is not executed.
>
> Regards,
>
> --
> Fabrízio de Royes Mello
> Consultoria/Coaching PostgreSQL
> >> Blog about IT: http://fabriziomello.blogspot.com
> >> LinkedIn profile: http://br.linkedin.com/in/fabriziomello
> >> Twitter: http://twitter.com/fabriziomello
>


regress_discard_v5.patch
Description: Binary data



Re: [HACKERS] bool abuse in path_encode()

2013-07-06 Thread Tom Lane
Peter Eisentraut  writes:
> The function path_encode() in geo_ops.c takes an argument "bool closed",
> to indicate an open or closed path --- but you can also pass -1 to
> indicate "neither".

Egad :-(.

> I propose to change this to an enum to make the
> code clearer and less weird.

Please change the "default" cases to be "case PATH_NONE", then.

regards, tom lane




Re: [HACKERS] Add regression tests for ROLE (USER)

2013-07-06 Thread Robins Tharakan
> However, before it can get committed, I think this set of tests needs
> streamlining.  It does seem to me valuable, but I think it's wasteful
> in terms of runtime to create so many roles, do just one thing with
> them, and then drop them.  I recommend consolidating some of the
> tests.  For example:
>
> Generally, I think that the tests which return a syntax error are of
> limited value and should probably be dropped.  That is unlikely to get
> broken by accident.  If the syntax error is being thrown by something
> outside of bison proper, that's probably worth testing.  But I think
> that testing random syntax variations is a pretty low-value
> proposition.
>
>
Thanks Robert.

The idea behind being repetitive was to make the tests simpler for the next
person to follow, but I guess this example was clearly overkill. Point taken; I
will correct it and follow up with an updated patch.

However, I am not sure I understand the other aspect you mention correctly. Do
you wish that syntax errors not be tested? If so, you are probably referring to
tests such as the one below, and it may get difficult at times to decide which
tests to include and which to leave out. Can I assume that all statements
producing an error message containing 'syntax error' are to be skipped? That
would be an easy rule for me to apply when judging what you'd consider
important.

+ALTER ROLE regress_rol_rol18 WITH ENCRYPTED UNENCRYPTED PASSWORD 'abc';
+ERROR:  syntax error at or near "UNENCRYPTED"
+LINE 1: ALTER ROLE regress_rol_rol18 WITH ENCRYPTED UNENCRYPTED PASS...

Personally, I think all tests are important, syntactical or otherwise, unless
there is a clear understanding that aiming for 100% code coverage isn't the
goal. It's possible that not all code is reachable (and therefore testable),
but the target generally remains 100%.

Do let me know your view on this second point, so that I can remove these
tests if so required.

Robins Tharakan


[HACKERS] bool abuse in path_encode()

2013-07-06 Thread Peter Eisentraut
The function path_encode() in geo_ops.c takes an argument "bool closed",
to indicate an open or closed path --- but you can also pass -1 to
indicate "neither".  I propose to change this to an enum to make the
code clearer and less weird.

diff --git a/src/backend/utils/adt/geo_ops.c b/src/backend/utils/adt/geo_ops.c
index ad18cf0..7bfea01 100644
--- a/src/backend/utils/adt/geo_ops.c
+++ b/src/backend/utils/adt/geo_ops.c
@@ -32,6 +32,8 @@
  * Internal routines
  */
 
+enum path_delim { PATH_NONE, PATH_OPEN, PATH_CLOSED };
+
 static int	point_inside(Point *p, int npts, Point *plist);
 static int	lseg_crossing(double x, double y, double px, double py);
 static BOX *box_construct(double x1, double x2, double y1, double y2);
@@ -57,7 +59,7 @@
 static int	pair_encode(float8 x, float8 y, char *str);
 static int	pair_count(char *s, char delim);
 static int	path_decode(int opentype, int npts, char *str, int *isopen, char **ss, Point *p);
-static char *path_encode(bool closed, int npts, Point *pt);
+static char *path_encode(enum path_delim path_delim, int npts, Point *pt);
 static void statlseg_construct(LSEG *lseg, Point *pt1, Point *pt2);
 static double box_ar(BOX *box);
 static void box_cn(Point *center, BOX *box);
@@ -280,7 +282,7 @@
 }	/* path_decode() */
 
 static char *
-path_encode(bool closed, int npts, Point *pt)
+path_encode(enum path_delim path_delim, int npts, Point *pt)
 {
 	int			size = npts * (P_MAXLEN + 3) + 2;
 	char	   *result;
@@ -296,12 +298,12 @@
 	result = palloc(size);
 
 	cp = result;
-	switch (closed)
+	switch (path_delim)
 	{
-		case TRUE:
+		case PATH_CLOSED:
 			*cp++ = LDELIM;
 			break;
-		case FALSE:
+		case PATH_OPEN:
 			*cp++ = LDELIM_EP;
 			break;
 		default:
@@ -322,12 +324,12 @@
 		pt++;
 	}
 	cp--;
-	switch (closed)
+	switch (path_delim)
 	{
-		case TRUE:
+		case PATH_CLOSED:
 			*cp++ = RDELIM;
 			break;
-		case FALSE:
+		case PATH_OPEN:
 			*cp++ = RDELIM_EP;
 			break;
 		default:
@@ -415,7 +417,7 @@
 {
 	BOX		   *box = PG_GETARG_BOX_P(0);
 
-	PG_RETURN_CSTRING(path_encode(-1, 2, &(box->high)));
+	PG_RETURN_CSTRING(path_encode(PATH_NONE, 2, &(box->high)));
 }
 
 /*
@@ -1018,7 +1020,7 @@
 	{
 	}
 
-	return path_encode(TRUE, 2, (Point *) &(ls->p[0]));
+	return path_encode(PATH_CLOSED, 2, (Point *) &(ls->p[0]));
 #else
 	ereport(ERROR,
 			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@@ -1441,7 +1443,7 @@
 {
 	PATH	   *path = PG_GETARG_PATH_P(0);
 
-	PG_RETURN_CSTRING(path_encode(path->closed, path->npts, path->p));
+	PG_RETURN_CSTRING(path_encode(path->closed ? PATH_CLOSED : PATH_OPEN, path->npts, path->p));
 }
 
 /*
@@ -1823,7 +1825,7 @@
 {
 	Point	   *pt = PG_GETARG_POINT_P(0);
 
-	PG_RETURN_CSTRING(path_encode(-1, 1, pt));
+	PG_RETURN_CSTRING(path_encode(PATH_NONE, 1, pt));
 }
 
 /*
@@ -2051,7 +2053,7 @@
 {
 	LSEG	   *ls = PG_GETARG_LSEG_P(0);
 
-	PG_RETURN_CSTRING(path_encode(FALSE, 2, (Point *) &(ls->p[0])));
+	PG_RETURN_CSTRING(path_encode(PATH_OPEN, 2, (Point *) &(ls->p[0])));
 }
 
 /*
@@ -3494,7 +3496,7 @@
 {
 	POLYGON*poly = PG_GETARG_POLYGON_P(0);
 
-	PG_RETURN_CSTRING(path_encode(TRUE, poly->npts, poly->p));
+	PG_RETURN_CSTRING(path_encode(PATH_CLOSED, poly->npts, poly->p));
 }
 
 /*



[HACKERS] How to implement Gin method?

2013-07-06 Thread Kenji uno
Hi.

I want to try GIN and learn how to program its support functions.

Please teach me about these functions extractValue, extractQuery and consistent.
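
(A rough orientation while an answer is pending — the signatures below are
paraphrased from the GIN "Extensibility" chapter of the documentation, so
treat this as a sketch rather than gospel:

/* split an indexed item into the keys GIN will store for it */
Datum *extractValue(Datum itemValue, int32 *nkeys, bool **nullFlags);

/* split a query into the keys to search for, plus a search mode */
Datum *extractQuery(Datum query, int32 *nkeys, StrategyNumber n,
                    bool **pmatch, Pointer **extra_data,
                    bool **nullFlags, int32 *searchMode);

/* given which query keys matched (check[]), decide whether the indexed item
 * satisfies the query; set *recheck if the heap tuple must be re-verified */
bool consistent(bool check[], StrategyNumber n, Datum query, int32 nkeys,
                Pointer extra_data[], bool *recheck,
                Datum queryKeys[], bool nullFlags[]);

extractValue runs when an item is indexed, extractQuery at the start of an
index scan, and consistent once per candidate item to combine the per-key
match flags into a yes/no answer.)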

I have posted a question at Stack Overflow.

http://stackoverflow.com/questions/17489703/postgresql-how-to-implement-gin

Please help with my question.

Thanks

Kenji uno

Sent from Windows Phone

Re: [HACKERS] [PATCH] Add transforms feature

2013-07-06 Thread Dimitri Fontaine
Josh Berkus  writes:
> (c) I'd like to do a future feature which supports "install all
> transforms" functionality, which would be helped by having them in their
> own directory.

I think we should install required extensions automatically when they
are available. Also, we will need better tooling to deal with extension
dependencies, see my patch for that from some time ago.

  https://commitfest.postgresql.org/action/patch_view?id=727

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Review: extension template

2013-07-06 Thread Dimitri Fontaine
Hi,

Thanks a lot for your detailed review!

Markus Wanner  writes:
> Initially, I was confused about what the patch is supposed to achieve.
> The 'template' naming certainly contributed to that confusion. My mental

Yes, I did share this viewpoint over the naming of the feature, but Tom
insisted that we already have this kind of template for text search.

> The major distinguishing factor is not the 'template' character of
> extensions "installed" that way, but the storage place of its
> control data: filesystem vs system catalog. I'd either recommend
> appropriate renaming to reflect that fact and to avoid confusing users;
> or follow the "template" model better and decouple the extension from
> its template - with implications on extensions requiring additional
> binary code. Thinking of it, I kind of like that approach...

Could you go into more detail about your ideas here? I don't understand
what you're suggesting.

> Compiling pg_upgrade_support in contrib fails:
>
>> $SRC/contrib/pg_upgrade_support/pg_upgrade_support.c:193:8:
>>   error: too few arguments to function ‘InsertExtensionTuple’

I don't have that problem here. Will try to reproduce early next week.

> As originally mentioned by Heikki, if the tplscript doesn't parse, the
> error message is just "syntax error at or near". That matches the
> behavior of extensions installed on the file system. However, given this
> adds a second way, a hint about where this error actually comes from is
> mandatory, IMO.

Will have a look at what it takes to implement support for better error
messages. May I suggest implementing that later, though? I think this is an
improvement over the current system that will be complicated to get right, and
I don't want it to swamp the current patch. After all, this patch is already in
its 3rd development cycle…

> Trying to re-create a pre-existing template properly throws 'template
> for extension "$NAME" version "$VERSION" already exists'. However, if
> the extension is already enabled for that database, it instead says:
> "extension "$NAME" already exists". I can see why that's fine if you
> assume a strong binding between the "instantiation" and the "template".

The idea here is to protect against mixing the file-based extension
installation mechanism with the catalog-based one. I can see now that the
already installed extension could have been installed using a template
in the first place, so that message now seems strange.

I still think that we shouldn't allow creating a template for an
extension that is already installed, though. Do you have any suggestions
for a better error message?

> However, it's possible to enable an extension and then rename its
> template. The binding (as in pg_depend) is still there, but the above
> error (in that case "extension $OLD_NAME already existing") certainly
> doesn't make sense. One can argue whether or not an extension with a
> different name is still the same extension at all...

When renaming a template, the check against existing extensions of the
same name is made against the new name of the template, so I don't see
what you say here in the code. Do you have a test case?

> Trying to alter a nonexistent or file-system-stored extension doesn't
> throw an error, but silently succeeds. Especially in the latter case,
> that's utterly misleading. Please fix.

Fixed in my GitHub branch; I'm not producing a new patch version just yet, as I
think we need to settle on the new error messages first and I'm running short
of inspiration tonight.

> That started to get me worried about the effects of a mixed
> installation, but I quickly figured it's not possible to have a full
> version on disk and then add an incremental upgrade via the system
> catalogs. I think that's a fair limitation, as mixed installations would
> pose their own set of issues. On the other hand, isn't ease of upgrades
> a selling point for this feature?

The main issue to fix when you want to have that feature, which I want
to have, is how to define a sane pg_dump policy around the thing. As we
couldn't define that in the last rounds of reviews, we decided not to
allow the case yet.

I think it's a fair point that we want to get there eventually, and that,
like the superuser-only limitation, it's something for a future patch rather
than this one.

> In any case, the error message could again be more specific:
>
>   (having extension 'pex' version '0.9' on the file-system)
>
>   # CREATE TEMPLATE FOR EXTENSION pex VERSION '1.0' ...
>   ERROR: extension "pex" already available
>
>   [ This one could mention it exists on the file-system. ]
>
>   # CREATE TEMPLATE FOR EXTENSION pex FROM '1.9' TO '1.10' AS ...
>   ERROR: no template for extension "pex"
>
> This last error is technically correct only if you consider file system
> extensions to not be templates. In any case, there is *something*
> relevant called "pex" on the file-system, that prevents creation of the
> template in the system catalogs. The error message shoul

Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-06 Thread Michael Meskes
On Fri, Jul 05, 2013 at 08:08:06AM -0400, Andrew Dunstan wrote:
> This looks incomplete. Surely just escaping backslashes alone is not
> enough. I suspect at least the " char and any chars below 0x20
> should be quoted also.

The " char I just added; however, my tests did not bring up any problems with
chars below 0x20. If anybody sees another character breaking ecpg, please tell
me and I'll fix it.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-06 Thread Szymon Guz
On 6 July 2013 17:58, Claudio Freire  wrote:

> Look at that:
>
>   return x
>   $$ LANGUAGE plpythonu;
>   SELECT * FROM test_type_conversion_numeric(100);
> ! INFO:  (Decimal('100'), 'Decimal')
>   CONTEXT:  PL/Python function "test_type_conversion_numeric"
>test_type_conversion_numeric
>   --
> --- 219,225 
>   return x
>   $$ LANGUAGE plpythonu;
>   SELECT * FROM test_type_conversion_numeric(100);
> ! INFO:  (Decimal("100"), 'Decimal')
>   CONTEXT:  PL/Python function "test_type_conversion_numeric"
>test_type_conversion_numeric
>   --
>
> " instead of '
>
> All the more reason to use as_tuple
>
>
>
> On Sat, Jul 6, 2013 at 9:16 AM, Andrew Dunstan 
> wrote:
> >
> > On 07/06/2013 01:52 AM, Claudio Freire wrote:
> >>
> >> On Sat, Jul 6, 2013 at 2:39 AM, Tom Lane  wrote:
> >>>
> >>> Peter Eisentraut  writes:
> 
>  PL/Python: Convert numeric to Decimal
> >>>
> >>> Assorted buildfarm members don't like this patch.
> >>
> >>
> >> Do you have failure details?
> >>
> >> This is probably an attempt to operate decimals vs floats.
> >>
> >> Ie: Decimal('3.0') > 0 works, but Decimal('3.0') > 1.3 doesn't
> >> (decimal is explicitly forbidden from operating on floats, some design
> >> decision that can only be disabled in 3.3).
> >>
> >>
> >
> >
> > Instead of speculating, you can actually see for yourself. The dashboard
> is
> > at  Pick one of the
> > machines failing at PLCheck-C and click its 'Details' link. Then scroll
> down
> > a bit and you'll see what is failing.
> >
> > cheers
> >
> > andrew
> >
>
>
Hi,
I've modified the tests to check the numeric->decimal conversion some other
way. They now check conversion to float/int and to string, and also tuple
values.

I've checked that with decimal and cdecimal on Python 2.7 and 3.3. The
outputs are the same regardless of the Python and decimal versions.
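
(Roughly the shape of such a check — a sketch with illustrative names, not the
actual regression-test function:

CREATE FUNCTION test_numeric_roundtrip(x numeric) RETURNS numeric AS $$
# str()/float()/as_tuple() fields avoid repr() differences between the
# pure-Python decimal module and cdecimal (e.g. the ' vs " quoting above)
plpy.info((str(x), float(x), x.as_tuple().sign, x.__class__.__name__))
return x
$$ LANGUAGE plpythonu;

SELECT * FROM test_numeric_roundtrip(100);

)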

thanks,
Szymon


fix_plpython_decimal_tests.patch
Description: Binary data



Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-06 Thread Joshua Berkus


- Original Message -
> * Josh Berkus (j...@agliodbs.com) wrote:
> > Is there anyone else on the committer list with similar circumstances?
> 
> I'll just flip it around and offer to be publically flogged whenever I'm
> not helping out with a commitfest. :)  Perhaps this should be more
> "opt-in" than "opt-out", wrt committers anyway.

Can we flog you even if you *are* helping?  I just wanna see the YouTube video, 
either way.  ;-)



Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-06 Thread Stephen Frost
* Josh Berkus (j...@agliodbs.com) wrote:
> Is there anyone else on the committer list with similar circumstances?

I'll just flip it around and offer to be publically flogged whenever I'm
not helping out with a commitfest. :)  Perhaps this should be more
"opt-in" than "opt-out", wrt committers anyway.

Thanks,

Stephen




Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-07-06 Thread Dimitri Fontaine
Jaime Casanova  writes:
> not sure if you're wrong. but at the very least, you miss a
> heap_freetuple(oldtup) there, because get_catalog_object_by_oid()

Well, oldtup can be either a syscache copy or a heap tuple. I've been
looking at other call sites and they don't free their tuple either. So
I'm leaving it at that for now.

> no, that code is not unchanged because function
> get_ext_ver_list_from_catalog() comes from your patch.

Yes. Here's the relevant hunk:

***
*** 997,1003  get_nearest_unprocessed_vertex(List *evi_list)
   * the versions that can be reached in one step from that version.
   */
  static List *
! get_ext_ver_list(ExtensionControlFile *control)
  {
List   *evi_list = NIL;
int extnamelen = strlen(control->name);
--- 1093,1099 
   * the versions that can be reached in one step from that version.
   */
  static List *
! get_ext_ver_list_from_files(ExtensionControl *control)
  {
List   *evi_list = NIL;
int extnamelen = strlen(control->name);
***

So the content of the function has not changed. I'm not opposed to
trying to fix it, I just don't think it would be wise to do so as part
of the extension templates patch, as I do believe that the problem
should manifest itself in head too: it's the same code under a new
function's name.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Run-time posix_fallocate failures

2013-07-06 Thread Tom Lane
Jeff Davis  writes:
> On Sat, 2013-07-06 at 10:33 -0400, Tom Lane wrote:
>> I think you'd better rejigger that patch so that it falls through to the
>> old implementation if posix_fallocate() fails.

> Do you mean "fails at all" or "fails with EINVAL"?

I'd be inclined to ignore the specific error code and just do it the
hard way for any failure.  Less code, and I see no particular advantage
to being pickier than that.
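
(A minimal sketch of that shape — not the actual patch; the function name is
hypothetical and real error handling is elided:

#include <fcntl.h>
#include <unistd.h>

/*
 * Try posix_fallocate() first; on *any* failure (it returns an errno-style
 * code rather than setting errno), fall back to writing zero-filled blocks,
 * the way the pre-269e780 code did.  HAVE_POSIX_FALLOCATE stands in for the
 * usual configure-time guard.
 */
static int
zero_fill_file(int fd, off_t size)
{
	char		zbuf[8192] = {0};
	off_t		written = 0;

#ifdef HAVE_POSIX_FALLOCATE
	if (posix_fallocate(fd, 0, size) == 0)
		return 0;
#endif

	while (written < size)
	{
		size_t		chunk = (size - written) < (off_t) sizeof(zbuf)
					? (size_t) (size - written) : sizeof(zbuf);
		ssize_t		rc = write(fd, zbuf, chunk);

		if (rc < 0)
			return -1;
		written += rc;
	}
	return 0;
}

)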

regards, tom lane




Re: [HACKERS] planner not choosing fastest estimated plan

2013-07-06 Thread Jeff Janes
On Sat, Jul 6, 2013 at 10:52 AM, Pavel Stehule  wrote:
> Hello
>
> pls, can you send EXPLAIN ANALYZE result?

Sure.  I've put them on depesz.  (I don't know how useful they will be,
as I'm worried about the internal inconsistency rather than the actual
run time):

enable_seqscan=on
http://explain.depesz.com/s/8GF

enable_seqscan=off
http://explain.depesz.com/s/51V

Cheers,

Jeff




Re: [HACKERS] [PATCH] Fix conversion for Decimal arguments in plpython functions

2013-07-06 Thread Szymon Guz
On 28 June 2013 22:29, Claudio Freire  wrote:

> On Fri, Jun 28, 2013 at 5:14 PM, Steve Singer  wrote:
> > On 06/27/2013 05:04 AM, Szymon Guz wrote:
> >>
> >> On 27 June 2013 05:21, Steve Singer  >> > wrote:
> >>
> >> On 06/26/2013 04:47 PM, Szymon Guz wrote:
> >>
> >>
> >>
> >>
> >>
> >>
> >> Hi Steve,
> >> thanks for the changes.
> >>
> >> You're idea about common code for decimal and cdecimal is good, however
> >> not good enough. I like the idea of common code for decimal and
> cdecimal.
> >> But we need class name, not the value.
> >>
> >> I've changed the code from str(x) to x.__class__.__name__ so the
> function
> >> prints class name (which is Decimal for both packages), not the value.
> We
> >> need to have the class name check. The value is returned by the
> function and
> >> is a couple of lines lower in the file.
> >>
> >> patch is attached.
> >>
> >
> > I think the value is more important than the name; I want the tests to
> > make sure that the conversion is actually converting properly.  With your
> > method of getting the class name without the module we can have both.
> >
> > The attached patch should print the value and the class name but not the
> > module name.
>
>
> Why not forego checking of the type, and instead check the interface?
>
> plpy.info(x.as_tuple())
>
> Should do.
>
> >>> d  = decimal.Decimal((0,(3,1,4),-2))
> >>> d.as_tuple()
> DecimalTuple(sign=0, digits=(3, 1, 4), exponent=-2)
> >>> d.as_tuple() == (0,(3,1,4),-2)
> True
> >>> d = decimal.Decimal("3.14")
> >>> d.as_tuple()
> DecimalTuple(sign=0, digits=(3, 1, 4), exponent=-2)
> >>> d.as_tuple() == (0,(3,1,4),-2)
> True
> >>>
>

Yea, however decimal and cdecimal have different outputs:

For decimal:

! INFO:  DecimalTuple(sign=1, digits=(1, 0, 0), exponent=0)

for cdecimal:

! INFO:  DecimalTuple(sign=1, digits=(1, 0, 0), exponent=0L)


Re: [HACKERS] Run-time posix_fallocate failures

2013-07-06 Thread Jeff Davis
On Sat, 2013-07-06 at 10:33 -0400, Tom Lane wrote:
> Apparently OmniOS has a version of posix_fallocate that doesn't actually
> work, or at least doesn't work on the specific filesystem rover_firefly
> is using.  The Single Unix Spec says that that is the correct error to
> return if the filesystem doesn't provide support:

Thank you.

> I think you'd better rejigger that patch so that it falls through to the
> old implementation if posix_fallocate() fails.

Do you mean "fails at all" or "fails with EINVAL"?

Regards,
Jeff Davis






Re: [HACKERS] planner not choosing fastest estimated plan

2013-07-06 Thread Pavel Stehule
Hello

pls, can you send EXPLAIN ANALYZE result?

Regards

Pavel Stehule

2013/7/6 Jeff Janes :
> I have a weird case where the planner doesn't choose the plan that it
> itself believes to be the fastest plan.  If I disable seqscan, it then
> chooses a faster plan (faster in estimate and faster in reality) than
> the one chosen when all options were open to it.  I can't figure out
> how this can be anything other than a bug.  The *collapse_limit
> parameters are not restraining things.
>
> I've created a dummy self-contained test case that is a simple
> self-join of a partitioned table, with a function-based index.
>
> If I analyze the tables after the function-based indexes are in place,
> then the problems goes away.  And that is the production solution.
> But still, a bug is a bug, even if there is a work around.
>
> This is using the default configuration with LANG=C in 9.2, 9.3, and 9.4.
>
> It was introduced in commit 5b7b5518d0ea56c422a19787, "Revise
> parameterized-path mechanism to fix assorted issues"
>
> I've tried compiling under OPTIMIZER_DEBUG, but the output did not
> mean anything to me.  It looks like the only RELOPTINFO corresponding
> to the join, "RELOPTINFO (1 2)" only lists the HashJoin when
> enable_seqscan=on, and only contains NestLoop when enable_seqscan=off.
>  I don't know why it doesn't list both in both cases and then choose
> the faster.
>
>
>
> create table foo1 as select lpad(g::text,7,'0') as val1, g as num1
> from generate_series(1,10) g;
> create table foo2 as select lpad(g::text,7,'0') as val1, g as num1
> from generate_series(11,100) g;
> alter table foo2 inherit foo1;
> create index on foo1(num1 );
> create index on foo2(num1 );
> analyze foo1; analyze foo2;
> create language plperl;
> CREATE OR REPLACE FUNCTION perlupper(text)
>  RETURNS text
>  LANGUAGE plperl
>  IMMUTABLE COST 1000
> AS $function$
>   return uc($_[0]);
> $function$;
> create index on foo1 (perlupper(val1));
> create index on foo2 (perlupper(val1));
>
>
> jjanes=# explain select a.num1 from foo1 a, foo1 b where
> perlupper(a.val1)=perlupper(b.val1) and b.num1 = 987845;
>QUERY PLAN
> 
>  Hash Join  (cost=32789.00..538040.65 rows=1 width=4)
>Hash Cond: (perlupper(b.val1) = perlupper(a.val1))
>->  Append  (cost=0.00..16.64 rows=2 width=8)
>  ->  Index Scan using foo1_num1_idx on foo1 b
> (cost=0.00..8.28 rows=1 width=8)
>Index Cond: (num1 = 987845)
>  ->  Index Scan using foo2_num1_idx on foo2 b
> (cost=0.00..8.37 rows=1 width=8)
>Index Cond: (num1 = 987845)
>->  Hash  (cost=15406.00..15406.00 rows=100 width=12)
>  ->  Append  (cost=0.00..15406.00 rows=100 width=12)
>->  Seq Scan on foo1 a  (cost=0.00..1541.00 rows=10 
> width=12)
>->  Seq Scan on foo2 a  (cost=0.00..13865.00
> rows=90 width=12)
>
> jjanes=# set enable_seqscan TO off;
> jjanes=# explain select a.num1 from foo1 a, foo1 b where
> perlupper(a.val1)=perlupper(b.val1) and b.num1 = 987845;
>QUERY PLAN
> -
>  Nested Loop  (cost=17.14..60438.19 rows=1 width=4)
>->  Append  (cost=0.00..16.64 rows=2 width=8)
>  ->  Index Scan using foo1_num1_idx on foo1 b
> (cost=0.00..8.28 rows=1 width=8)
>Index Cond: (num1 = 987845)
>  ->  Index Scan using foo2_num1_idx on foo2 b
> (cost=0.00..8.37 rows=1 width=8)
>Index Cond: (num1 = 987845)
>->  Append  (cost=17.14..30160.77 rows=5000 width=12)
>  ->  Bitmap Heap Scan on foo1 a  (cost=17.14..3022.65 rows=500 
> width=12)
>Recheck Cond: (perlupper(val1) = perlupper(b.val1))
>->  Bitmap Index Scan on foo1_perlupper_idx
> (cost=0.00..17.01 rows=500 width=0)
>  Index Cond: (perlupper(val1) = perlupper(b.val1))
>  ->  Bitmap Heap Scan on foo2 a  (cost=92.22..27138.12
> rows=4500 width=12)
>Recheck Cond: (perlupper(val1) = perlupper(b.val1))
>->  Bitmap Index Scan on foo2_perlupper_idx
> (cost=0.00..91.10 rows=4500 width=0)
>  Index Cond: (perlupper(val1) = perlupper(b.val1))
>
> Cheers,
>
> Jeff
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers




[HACKERS] planner not choosing fastest estimated plan

2013-07-06 Thread Jeff Janes
I have a weird case where the planner doesn't choose the plan that it
itself believes to be the fastest plan.  If I disable seqscan, it then
chooses a faster plan (faster in estimate and faster in reality) than
the one chosen when all options were open to it.  I can't figure out
how this can be anything other than a bug.  The *collapse_limit
parameters are not restraining things.

I've created a dummy self-contained test case that is a simple
self-join of a partitioned table, with a function-based index.

If I analyze the tables after the function-based indexes are in place,
then the problems goes away.  And that is the production solution.
But still, a bug is a bug, even if there is a work around.

This is using the default configuration with LANG=C in 9.2, 9.3, and 9.4.

It was introduced in commit 5b7b5518d0ea56c422a19787, "Revise
parameterized-path mechanism to fix assorted issues"

I've tried compiling under OPTIMIZER_DEBUG, but the output did not
mean anything to me.  It looks like the only RELOPTINFO corresponding
to the join, "RELOPTINFO (1 2)" only lists the HashJoin when
enable_seqscan=on, and only contains NestLoop when enable_seqscan=off.
 I don't know why it doesn't list both in both cases and then choose
the faster.



create table foo1 as select lpad(g::text,7,'0') as val1, g as num1
from generate_series(1,10) g;
create table foo2 as select lpad(g::text,7,'0') as val1, g as num1
from generate_series(11,100) g;
alter table foo2 inherit foo1;
create index on foo1(num1 );
create index on foo2(num1 );
analyze foo1; analyze foo2;
create language plperl;
CREATE OR REPLACE FUNCTION perlupper(text)
 RETURNS text
 LANGUAGE plperl
 IMMUTABLE COST 1000
AS $function$
  return uc($_[0]);
$function$;
create index on foo1 (perlupper(val1));
create index on foo2 (perlupper(val1));


jjanes=# explain select a.num1 from foo1 a, foo1 b where
perlupper(a.val1)=perlupper(b.val1) and b.num1 = 987845;
   QUERY PLAN

 Hash Join  (cost=32789.00..538040.65 rows=1 width=4)
   Hash Cond: (perlupper(b.val1) = perlupper(a.val1))
   ->  Append  (cost=0.00..16.64 rows=2 width=8)
 ->  Index Scan using foo1_num1_idx on foo1 b
(cost=0.00..8.28 rows=1 width=8)
   Index Cond: (num1 = 987845)
 ->  Index Scan using foo2_num1_idx on foo2 b
(cost=0.00..8.37 rows=1 width=8)
   Index Cond: (num1 = 987845)
   ->  Hash  (cost=15406.00..15406.00 rows=100 width=12)
 ->  Append  (cost=0.00..15406.00 rows=100 width=12)
   ->  Seq Scan on foo1 a  (cost=0.00..1541.00 rows=10 width=12)
   ->  Seq Scan on foo2 a  (cost=0.00..13865.00
rows=90 width=12)

jjanes=# set enable_seqscan TO off;
jjanes=# explain select a.num1 from foo1 a, foo1 b where
perlupper(a.val1)=perlupper(b.val1) and b.num1 = 987845;
   QUERY PLAN
-
 Nested Loop  (cost=17.14..60438.19 rows=1 width=4)
   ->  Append  (cost=0.00..16.64 rows=2 width=8)
 ->  Index Scan using foo1_num1_idx on foo1 b
(cost=0.00..8.28 rows=1 width=8)
   Index Cond: (num1 = 987845)
 ->  Index Scan using foo2_num1_idx on foo2 b
(cost=0.00..8.37 rows=1 width=8)
   Index Cond: (num1 = 987845)
   ->  Append  (cost=17.14..30160.77 rows=5000 width=12)
 ->  Bitmap Heap Scan on foo1 a  (cost=17.14..3022.65 rows=500 width=12)
   Recheck Cond: (perlupper(val1) = perlupper(b.val1))
   ->  Bitmap Index Scan on foo1_perlupper_idx
(cost=0.00..17.01 rows=500 width=0)
 Index Cond: (perlupper(val1) = perlupper(b.val1))
 ->  Bitmap Heap Scan on foo2 a  (cost=92.22..27138.12
rows=4500 width=12)
   Recheck Cond: (perlupper(val1) = perlupper(b.val1))
   ->  Bitmap Index Scan on foo2_perlupper_idx
(cost=0.00..91.10 rows=4500 width=0)
 Index Cond: (perlupper(val1) = perlupper(b.val1))

Cheers,

Jeff




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-06 Thread Josh Berkus
On 07/05/2013 02:34 PM, Jeff Janes wrote:
> On Wed, Jul 3, 2013 at 12:03 PM, Christopher Browne wrote:
> 
>> On Wed, Jul 3, 2013 at 2:24 PM, Cédric Villemain 
>> wrote:
>>> Others rules appeared, like the 5 days limit.
>>>
>>
> The limit was previously 4 days (at least according to
> http://wiki.postgresql.org/wiki/RRReviewers) but I think that that was
> honored almost exclusively in the breach.  I don't have a sense for how the
> 5 day limit is working.  If it is working, great.  If not, I would advocate
> lengthening it--having a limit specified but not generally held to is
> counterproductive.  I know that I, and at least one other potential
> reviewer, won't ask to be assigned a random patch because we have no
> confidence we can do an adequate review of a random patch within a
> contiguous 5 day window.

Well, keep in mind that the "reviewer" blank on the commitfest page is
solely a reservation to prevent duplicate effort.  There is absolutely
nothing preventing someone from doing and submitting a review without
putting their name down -- and several have, this CF.

The idea of the 5-day limit is that this often happens:
1. reviewer takes patch because they want to review it
2. something comes up in their life outside the postgres project, and
they don't get time to review it
3. because their name is on the patch as "reviewer", others don't pick
it up for review
4. it gets to 3 days before CF end, and nobody has reviewed the patch

While there are certainly cases where a reviewer takes more than 5 days
actively to review a patch, these are the minority.  Most of the time, a
reviewer who doesn't review a patch within 5 days doesn't get around to
reviewing it that commitfest, period.

If you are actively reviewing for more than 5 days, simply send me/Mike
an email saying so, before the 5 days are up.  Or post something on
-hackers in the patch thread.

Previously, we dealt with this by sending out an "are you reviewing this
patch" email from the CFM to the reviewer after 5 days passed.  The
problem with that is that a reviewer who was too busy to review was
usually too busy to answer, and as a result the whole cycle of "he's not
gonna review that" took around 10 days on average before we knew we
could remove the name.  10 days is 1/3 of the total commitfest --
unacceptably long.  Thus the approach we're trying this CF.

Again, this is an area where better automation could really help.
Reviewers could get automated reminders at 4 and 5 days, and ack those
reminders to extend the review period.

> On the other hand, I could always just go to the open commitfest (rather
> than the in progress one), pick something at random myself, and have up to
> 3 months to review it.  I just don't do have the discipline to do that, at
> least not often.

Please do!

>>> To me it outlines that some are abusing the CF app and pushing their
>>> useless patches (not still ready or complete, WIP, ...
>>
>>
>> Seems to me that "useless" overstates things, but it does seem fair to
>> say that some patches are not sufficiently well prepared to be efficiently
>> added into Postgres.

Unfortunately, the only time we guarantee that a patch or even a spec
proposal will get a hearing and discussion is the CF.  Thus people who
really want to get agreement on a prototype, spec, proposal or API are
gonna submit it to the CF, so that they get some useful feedback.  Most
of the time, someone posting a WIP, API, spec, SQL syntax or feature
concept outside of a CF gets little or no useful criticism/suggestions
on it.

If we don't want WIP/RFC patches in the CF, then we need to provide some
other way to guarantee that these incomplete patches will get feedback.
 I'd be in favor of having something -- I think more authors would get
better feedback early on in development -- but I have no idea what it
would be.

Other uncommittable patches submitted to the CF are there because the
submitter is sending in a first-time patch.  It's very important for
training up the new submitter that their patch get the full
review-returned-with-feedback cycle.  That's how they become better
patch authors in the future.  Personally, I think this is one of the
most valuable aspects of the CF process.

>> Well, if the project is hampered by not being able to get *all* the
>> changes that people imagine that they want to put in, then we have a
>> real problem of needing a sort of "triage" to determine which changes
>> will be accepted, and which will not.
>>
>> Perhaps we need an extra status in the CommitFest application, namely
>> one that characterizes:
>>Insufficiently Important To Warrant Review

Gods forbid.  We might as well have the tag "Stupid Idiot Patch" and be
done with it.  And people accuse *me* of being submitter-hostile.

> I don't think that this would really be an improvement.  Someone still has
> to spend enough time looking at the patch to make the decision that it
> falls into one of those categories.  Having spent sufficient time to 

Re: [HACKERS] GIN improvements part 3: ordering in index

2013-07-06 Thread Tomas Vondra
Hi,

this is a follow-up to the message I posted to the thread about
additional info in GIN.

I've applied all three patches (ginaddinfo7.patch, gin_fast_scan.4.patch
and gin_ordering.4.patch) onto commit b8fd1a09. I ended up with two
definitions of ‘cmpEntries’ in ginget.c, but I suppose this is due to
the split of the patch into multiple pieces. The definitions are exactly the
same, so I've commented out the second one.

After applying fast scan the queries fail with 'buffer is not owned by
resource owner Portal' errors; the ordering patch causes segmentation
faults when loading the data.

Loading the data is basically a bunch of INSERT statements into
"messages" table, with a GIN index on the message body. So the table and
index are defined like this:

CREATE TABLE messages (

    id                SERIAL PRIMARY KEY,
    parent_id         INT REFERENCES messages(id),
    thread_id         INT,
    level             INT,
    hash_id           VARCHAR(32) NOT NULL UNIQUE,

    list              VARCHAR(32) NOT NULL REFERENCES lists(id),
    message_id        VARCHAR(200),
    in_reply_to       TEXT[],
    refs              TEXT[],
    sent              TIMESTAMP,

    subject           TEXT,
    author            TEXT,

    body_plain        TEXT,

    body_tsvector     tsvector,
    subject_tsvector  tsvector,

    headers           HSTORE,
    raw_message       TEXT
);

CREATE INDEX message_body_idx on messages using gin(body_tsvector);

I've observed about three failure scenarios:

1) autovacuum runs VACUUM on the 'messages' table and fails, killing
   all the connections, with this message in the server log

LOG:  server process (PID 16611) was terminated by signal
  11: Segmentation fault
DETAIL:  Failed process was running: autovacuum: ANALYZE
 public.messages


2) manual run of VACUUM on the table, with about the same result and
   this output on the console (and the same segfault in the server log)

archie=# vacuum messages;
WARNING:  relation "messages" page 6226 is uninitialized --- fixing
WARNING:  relation "messages" page 6227 is uninitialized --- fixing
WARNING:  relation "messages" page 6228 is uninitialized --- fixing
WARNING:  relation "messages" page 6229 is uninitialized --- fixing
WARNING:  relation "messages" page 6230 is uninitialized --- fixing
WARNING:  relation "messages" page 6231 is uninitialized --- fixing
WARNING:  relation "messages" page 6232 is uninitialized --- fixing
WARNING:  relation "messages" page 6233 is uninitialized --- fixing
The connection to the server was lost. Attempting reset: Failed.


3) disabled autovacuum, the load fails (always at exactly the same
   place) - I have collected a backtrace from gdb (after recompiling
   with disabled optimization), see the attachment.

All three scenarios might actually be caused by the same bug, as I've
checked the backtrace for the VACUUM and it fails at exactly the same
place as the third case.

regards
Tomas
Program received signal SIGSEGV, Segmentation fault.
0x0047517d in ginDataPageLeafReadItemPointer (ptr=0x15f00ae , iptr=0x7fff1781d340, addInfoIsNull=0x7fff1781d2d7 "") 
at ../../../../src/include/access/gin_private.h:866
866 v = *ptr;
(gdb) bt
#0  0x0047517d in ginDataPageLeafReadItemPointer (ptr=0x15f00ae 
, iptr=0x7fff1781d340, 
addInfoIsNull=0x7fff1781d2d7 "") at 
../../../../src/include/access/gin_private.h:866
#1  0x004752ae in ginDataPageLeafRead (ptr=0x15f00ae , attnum=1, iptr=0x7fff1781d340, addInfo=0x7fff1781d348, 
addInfoIsNull=0x7fff1781d347 "", ginstate=0x7fff1781d9c0)
at ../../../../src/include/access/gin_private.h:916
#2  0x004779d6 in dataSplitPageLeaf (btree=0x8b97580, lbuf=33601, 
rbuf=33602, off=1018, prdata=0x7fff1781d428) at gindatapage.c:954
#3  0x00478d0d in dataSplitPage (btree=0x8b97580, lbuf=33601, 
rbuf=33602, off=1018, prdata=0x7fff1781d428) at gindatapage.c:1262
#4  0x0047a056 in ginInsertValue (btree=0x8b97580, stack=0x8b98748, 
buildStats=0x0) at ginbtree.c:385
#5  0x004793d1 in ginInsertItemPointers (ginstate=0x7fff1781d9c0, 
attnum=1, gdi=0x8b97580, items=0x43c6726, addInfo=0x3a41b08, 
addInfoIsNull=0x8b97b31 "", nitem=1795, buildStats=0x0) at gindatapage.c:1414
#6  0x0046f7b2 in buildFreshLeafTuple (ginstate=0x7fff1781d9c0, 
attnum=1, key=49986944, category=0 '\000', items=0x43c4f50, addInfo=0x3a3fb40, 
addInfoIsNull=0x8b97738 "", nitem=2812, buildStats=0x0) at gininsert.c:418
#7  0x0046faf0 in ginEntryInsert (ginstate=0x7fff1781d9c0, attnum=1, 
key=49986944, category=0 '\000', items=0x43c4f50, addInfo=0x3a3fb40, 
addInfoIsNull=0x8b97738 "", nitem=2812, buildStats=0x0) at gininsert.c:512
#8  0x004859c3 in ginInsertCleanup (ginstate=0x7fff1781d9c0, 
vac_delay=0 '\000', stats=0x0) at ginfast.c:960
#9  0x0048425a in ginHeapTupleFastInsert (ginstate=0x7fff1781d9c0, 
collector=0x7fff1781d9a0) at ginfast.c:440
#10 0

Re: [HACKERS] MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

2013-07-06 Thread Stephen Frost
Jeff,

* Jeff Janes (jeff.ja...@gmail.com) wrote:
> I was going to add another item to make nodeHash.c use the new huge
> allocator, but after looking at it just now it was not clear to me that it
> even has such a limitation.  nbatch is limited by MaxAllocSize, but
> nbuckets doesn't seem to be.

nodeHash.c:ExecHashTableCreate() allocates ->buckets using:

palloc(nbuckets * sizeof(HashJoinTuple)) 

(where HashJoinTuple is actually just a pointer), and reallocates same
in ExecHashTableReset().  That limits the current implementation to only
about 134M buckets, no?
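
(The arithmetic, assuming 64-bit pointers: MaxAllocSize is 0x3fffffff bytes,
i.e. just under 1 GB, and sizeof(HashJoinTuple) is 8, so nbuckets tops out at
0x3fffffff / 8 = 134,217,727 — the ~134M figure above.)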

Now, what I was really suggesting wasn't so much changing those specific
calls; my point was really that there's a ton of stuff in the HashJoin
code that uses 32bit integers for things which, these days, might be too
small (nbuckets being one example, imv).  There's a lot of code there
though and you'd have to really consider which things make sense to have
as int64's.

Thanks,

Stephen




Re: [HACKERS] GIN improvements part2: fast scan

2013-07-06 Thread Tomas Vondra
Hi,

this is a follow-up to the message I posted to the thread about
additional info in GIN.

I've applied both ginaddinfo.7.patch and gin_fast_scan.4.patch on commit
b8fd1a09, but I'm observing a lot of failures like this:

STATEMENT:  SELECT id FROM messages WHERE body_tsvector @@
plainto_tsquery('english', 'email free') LIMIT 100
ERROR:  buffer 238068 is not owned by resource owner Portal

There's a GIN index on messages(body_tsvector). I haven't dug into why
it fails like this.

Tomas




Re: [HACKERS] GIN improvements part 1: additional information

2013-07-06 Thread Tomas Vondra
Hi,

I've done a fair amount of testing by loading pgsql-general archives
into a database and running a bunch of simple ts queries that use a GIN
index.

I've tested this as well as the two other patches, but as I was able to
get meaningful results only from this patch, I'll post the results here
and info about segfaults and other observed errors to the other threads.

First of all - update the commitfest page whenever you submit a new
patch version, please. I've spent two or three hours testing and
debugging patches linked from those pages only to find out that there
are newer versions. I should have checked that initially, but let's keep
that updated.

I wasn't able to apply the patches to the current head, so I've used
b8fd1a09 (from 17/06) as a base commit.

The following table shows these metrics:

 * data load
- how long it took to import ~200k messages from the pgsql-general list archive
- includes a lot of time spent in Python (parsing), checking FKs ...
- so unless this is significantly higher, it's probably OK

 * index size
- size of the main GIN index on message body

 * 1/2/3-word(s)
- number of queries in the form

  SELECT id FROM messages
   WHERE body_tsvector @@ plainto_tsquery('english', 'w1 w2')
   LIMIT 100

(executed over 60 seconds, and 'per second' speed)

All the scripts are available at https://bitbucket.org/tvondra/archie

Now, the results:

no patches:
data load:  710 s
index size: 545 MB
1 word:  37500 (630/s)
2 words: 49800 (800/s)
3 words: 4 (660/s)

additional info (ginaddinfo.7.patch):
data load:  693 s
index size: 448 MB
1 word: 135000 (2250/s)
2 words: 85000 (1430/s)
3 words: 54000 ( 900/s)

additional info + fast scan (gin_fast_scan.4.patch):
data load:  720 s
index size: 455 MB
1 word: FAIL
2 words: FAIL
3 words: FAIL

additional info + fast scan + ordering (gin_ordering.4.patch):
data load:  FAIL
index size: N/A
1 word: N/A
2 words: N/A
3 words: N/A

So the speedup after adding info into GIN seems very promising, although
I don't quite understand why searching for two words is so much slower.
Also the index size seems to decrease significantly.

After applying 'fast scan', things started to break down, so I wasn't
able to run the queries, and then even the load failed consistently.

I'll post the info into the appropriate threads.

Tomas




[HACKERS] A mailing-list based bug tracker

2013-07-06 Thread Michael Alan Dorman
I'm just a user, not a contributor, but I've read -hackers long enough
to remember the many discussions over the years about bug trackers, and
the consensus always seemed to come back to "if it doesn't fit our
mailing-list-based workflow, it's going to fail."

I happened upon mention of a new bug tracker written by someone working
on Xen, where the requirements seem to be pretty similar, and thought it
worth bringing to you all's attention.

I make no representations about how mature the code is, how robust it
is, etc., but it seems like it might at least be a starting point for
something that could serve the PostgreSQL community in the way it desires.

http://www.hellion.org.uk/blog/posts/emesinae-yet-another-bug-tracker/

Mike.




Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-06 Thread Claudio Freire
Look at that:

  return x
  $$ LANGUAGE plpythonu;
  SELECT * FROM test_type_conversion_numeric(100);
! INFO:  (Decimal('100'), 'Decimal')
  CONTEXT:  PL/Python function "test_type_conversion_numeric"
   test_type_conversion_numeric
  --
--- 219,225 
  return x
  $$ LANGUAGE plpythonu;
  SELECT * FROM test_type_conversion_numeric(100);
! INFO:  (Decimal("100"), 'Decimal')
  CONTEXT:  PL/Python function "test_type_conversion_numeric"
   test_type_conversion_numeric
  --

" instead of '

All the more reason to use as_tuple



On Sat, Jul 6, 2013 at 9:16 AM, Andrew Dunstan  wrote:
>
> On 07/06/2013 01:52 AM, Claudio Freire wrote:
>>
>> On Sat, Jul 6, 2013 at 2:39 AM, Tom Lane  wrote:
>>>
>>> Peter Eisentraut  writes:

 PL/Python: Convert numeric to Decimal
>>>
>>> Assorted buildfarm members don't like this patch.
>>
>>
>> Do you have failure details?
>>
>> This is probably an attempt to operate decimals vs floats.
>>
>> Ie: Decimal('3.0') > 0 works, but Decimal('3.0') > 1.3 doesn't
>> (decimal is explicitly forbidden from operating on floats, some design
>> decision that can only be disabled in 3.3).
>>
>>
>
>
> Instead of speculating, you can actually see for yourself. The dashboard is
> at  Pick one of the
> machines failing at PLCheck-C and click its 'Details' link. Then scroll down
> a bit and you'll see what is failing.
>
> cheers
>
> andrew
>




Re: [HACKERS] Have REFRESH MATERIALIZED VIEW run as the MV owner

2013-07-06 Thread Noah Misch
On Fri, Jul 05, 2013 at 11:18:50PM -0700, Hitoshi Harada wrote:
> On Fri, Jul 5, 2013 at 9:45 AM, Noah Misch  wrote:
> > REFRESH MATERIALIZED VIEW should temporarily switch the current user ID to 
> > the
> > MV owner.  REINDEX and VACUUM do so to let privileged users safely maintain
> > objects owned by others, and REFRESH MATERIALIZED VIEW belongs in that class
> > of commands.
> 
> I was trying to understand why this is safe for a while.  REINDEX and
> VACUUM make sense to me because they never contain side-effect as far
> as I know, but MV can contain some volatile functions which could have
> some unintended operation that shouldn't be invoked by no one but the
> owner.  For example, if the function creates a permanent table per
> call and doesn't clean it up, but later some other maintenance
> operation is supposed to clean it up, and the owner schedules REFRESH
> and maintenance once a day.  A non-owner user now can refresh it so
> many times until the disk gets full.

I'm not proposing to expand the set of people *permitted* to refresh the MV.
That's still limited to the owning role (including other roles acquiring that
role by membership) and superusers.  My goal is to make it safe for a
superuser to refresh any MV, much like we've made it safe for a superuser to
REINDEX any index.
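
(A concrete, hypothetical illustration of the intent — the table, view, and
role names below are made up:

CREATE ROLE app_owner;
CREATE TABLE orders (amount numeric);
CREATE MATERIALIZED VIEW daily_totals AS
    SELECT count(*) AS n, sum(amount) AS total FROM orders;
ALTER MATERIALIZED VIEW daily_totals OWNER TO app_owner;

-- later, as a superuser doing routine maintenance:
REFRESH MATERIALIZED VIEW daily_totals;

With the proposed change, the view's defining query would run with the current
user ID temporarily switched to app_owner, so any volatile functions it calls
execute with the owner's privileges rather than the superuser's — analogous to
how REINDEX and VACUUM already behave.)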

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




[HACKERS] Run-time posix_fallocate failures

2013-07-06 Thread Tom Lane
Buildfarm member "rover_firefly" has been failing like this since
269e780 went in:

creating template1 database in 
/export/home/postgres/HEAD/pgsql.47842/src/test/regress/./tmp_check/data/base/1 
... FATAL:  could not allocate space for file "pg_xlog/xlogtemp.60072" using 
posix_fallocate: Invalid argument

Apparently OmniOS has a version of posix_fallocate that doesn't actually
work, or at least doesn't work on the specific filesystem rover_firefly
is using.  The Single Unix Spec says that that is the correct error to
return if the filesystem doesn't provide support:

[EINVAL]
The len argument is less than zero, or the offset argument is less than 
zero, or the underlying file system does not support this operation.

I think you'd better rejigger that patch so that it falls through to the
old implementation if posix_fallocate() fails.

regards, tom lane




Re: [HACKERS] Add regression tests for COLLATE

2013-07-06 Thread Andrew Dunstan


On 07/06/2013 09:12 AM, Robert Haas wrote:

On Fri, Jul 5, 2013 at 6:56 PM, Andres Freund  wrote:

On 2013-07-03 14:17:20 -0400, Robert Haas wrote:

I agree.  I think it'd be a good idea to get the buildfarm to run the
existing collate.utf8.linux test regularly on platforms where it
passes, but this particular approach is valuable mostly because
(supposedly) it was going to work everywhere.  However, it doesn't.

Hm. What about extending the existing resultmap logic to make that
possible in general? E.g. don't run the test if a file is mapped to an
empty expected file.

Not a bad thought.



Or maybe just invent a magic result file name such as "none" or 
"do_not_run".


I'm not keen to have us build up a large patchwork of regression tests 
that run on some platforms and not on others, though.


cheers

andrew




Re: [HACKERS] Add regression tests for COLLATE

2013-07-06 Thread Robert Haas
On Fri, Jul 5, 2013 at 6:56 PM, Andres Freund  wrote:
> On 2013-07-03 14:17:20 -0400, Robert Haas wrote:
>> I agree.  I think it'd be a good idea to get the buildfarm to run the
>> existing collate.utf8.linux test regularly on platforms where it
>> passes, but this particular approach is valuable mostly because
>> (supposedly) it was going to work everywhere.  However, it doesn't.
>
> Hm. What about extending the existing resultmap logic to make that
> possible in general? E.g. don't run the test if a file is mapped to an
> empty expected file.

Not a bad thought.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] MVCC catalog access

2013-07-06 Thread Robert Haas
On Fri, Jul 5, 2013 at 11:27 AM, Andres Freund  wrote:
> Hi Robert,
>
> On 2013-07-02 09:31:23 -0400, Robert Haas wrote:
>> I have a few ideas for getting rid of the remaining uses of
>> SnapshotNow that I'd like to throw out there:
>
> Is your current plan to get rid of SnapshotNow entirely? I am wondering
> because the changeset extraction needs to care, and how the proper fix
> for dealing with CatalogSnapshotData looks depends on it...

I would like to do that, but I haven't quite figured out how to get
rid of the last few instances, per discussion upthread.  I do plan to
spend some more time on it, but likely not this week.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-06 Thread Andrew Dunstan


On 07/06/2013 01:52 AM, Claudio Freire wrote:

On Sat, Jul 6, 2013 at 2:39 AM, Tom Lane  wrote:

Peter Eisentraut  writes:

PL/Python: Convert numeric to Decimal

Assorted buildfarm members don't like this patch.


Do you have failure details?

This is probably an attempt to operate decimals vs floats.

Ie: Decimal('3.0') > 0 works, but Decimal('3.0') > 1.3 doesn't
(decimal is explicitly forbidden from operating on floats, some design
decision that can only be disabled in 3.3).





Instead of speculating, you can actually see for yourself. The dashboard 
is at  Pick one of 
the machines failing at PLCheck-C and click its 'Details' link. Then 
scroll down a bit and you'll see what is failing.


cheers

andrew





Re: FILTER for aggregates [was Re: [HACKERS] Department of Redundancy Department: makeNode(FuncCall) division]

2013-07-06 Thread Dean Rasheed
On 5 July 2013 18:23, David Fetter  wrote:
> Please find attached changes based on the above.
>

This looks good. The grammar changes are smaller and neater now on top
of the makeFuncCall() patch.

Overall I think this patch offers useful additional functionality, in
compliance with the SQL spec, which should be handy to simplify
complex grouping queries.
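
(For readers who haven't seen the feature, an illustrative query — the table
and column names are made up, not taken from the patch or its tests:

SELECT count(*)                                      AS all_orders,
       count(*)    FILTER (WHERE status = 'shipped') AS shipped_orders,
       sum(amount) FILTER (WHERE status = 'shipped') AS shipped_total
FROM orders;

Each aggregate only sees the rows satisfying its FILTER clause, which
previously required CASE expressions inside the aggregates or separate
subqueries.)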

There's a minor typo in syntax.sgml: "for each input row, each row
matching same." --- fix attached.

I think this is ready for committer.

Regards,
Dean


filter_008b.diff
Description: Binary data



[HACKERS] copy & pasted include guard in attoptcache.h

2013-07-06 Thread Andres Freund
Hi,

attoptcache.h currently uses #ifndef SPCCACHE_H. The attached patch
fixes that.
A quick
$ grep -r '\#ifndef' src/include/|grep _H|awk '{print $2}'|sort|uniq -cd
doesn't show any further duplicated ones.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
>From 9e71d51d750a0097795b3cf295be7ed84b7a4d53 Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Sat, 6 Jul 2013 12:09:32 +0200
Subject: [PATCH] Fix copy & pasted include guard name in attoptcache.h

---
 src/include/utils/attoptcache.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/include/utils/attoptcache.h b/src/include/utils/attoptcache.h
index e1c4ab4..133a075 100644
--- a/src/include/utils/attoptcache.h
+++ b/src/include/utils/attoptcache.h
@@ -10,8 +10,8 @@
  *
  *-
  */
-#ifndef SPCCACHE_H
-#define SPCCACHE_H
+#ifndef ATTOPTCACHE_H
+#define ATTOPTCACHE_H
 
 /*
  * Attribute options.
@@ -25,4 +25,4 @@ typedef struct AttributeOpts
 
 AttributeOpts *get_attribute_options(Oid spcid, int attnum);
 
-#endif   /* SPCCACHE_H */
+#endif   /* ATTOPTCACHE_H */
-- 
1.8.3.251.g1462b67




Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-06 Thread Michael Meskes
On Fri, Jul 05, 2013 at 09:41:26AM -0400, Tom Lane wrote:
> Um ... 9.3 is a separate branch now, please fix it there also.

Done. Seems I missed a new branch - yet again. Sorry and thanks for pointing it
out to me.

michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL

