Re: [HACKERS] question about large object

2008-11-06 Thread Volkan YAZICI
On Thu, 6 Nov 2008, "xie jiong" <[EMAIL PROTECTED]> writes:
> What does pageno mean? What does a "page" of a large object refer to?
> Does this "page" (pageno) refer to a "chunk" (chunk number) of the LOB,
> as opposed to a real data page? (Or is one data page used to store one
> chunk of the LOB?)

Have you checked the explanation[1] in the documentation?


Regards.

[1] http://www.postgresql.org/docs/current/static/catalog-pg-largeobject.html

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lisp as a procedural language?

2008-10-18 Thread Volkan YAZICI
"M. Edward (Ed) Borasky" <[EMAIL PROTECTED]> writes:
> Someone at the PostgreSQL West conference last weekend expressed an
> interest in a Lisp procedural language. The only two Lisp environments
> I've found so far that aren't GPL are Steel Bank Common Lisp (MIT,
> http://sbcl.sourceforge.net) and XLispStat (BSD,
> http://www.stat.uiowa.edu/~luke/xls/xlsinfo/xlsinfo.html). SBCL is a
> very active project, but I'm not sure about XLispStat.

Have you seen PL/scheme[1]?


Regards.

[1] http://plscheme.projects.postgresql.org/



Re: [HACKERS] PostgreSQL future ideas

2008-09-20 Thread Volkan YAZICI
On Fri, 19 Sep 2008, "Gevik Babakhani" <[EMAIL PROTECTED]> writes:
> Has there been any idea to port PG to a more modern programming language
> like C++? Of course there are some minor obstacles like a new OO design,
> this being a gigantic task to perform and rewriting almost everything etc...
> I am very interested to hear your opinion.

This topic has been discussed many times in the past; see the mailing
list archives. If you have any _alternative_ ideas beyond the previous
discussions, I think the developers would appreciate hearing them.


Regards.



Re: [HACKERS] Keeping creation time of objects

2008-09-09 Thread Volkan YAZICI
On Tue, 9 Sep 2008, David Fetter <[EMAIL PROTECTED]> writes:
>> AFAICS, PostgreSQL is not keeping info about when a table, database,
>> sequence, etc was created.  We cannot get that info even from OS,
>> since CLUSTER or VACUUM FULL may change the metadata of
>> corresponding relfilenode.
>
> When people aren't keeping track of their DDL, that is very strictly a
> process problem on their end.  When people are shooting themselves in
> the foot, it's a great disservice to market Kevlar shoes to them.

Word. At the company I currently work for, we store the database schema
in a VCS repository with minor and major version tags, and a
current_foo_soft_version() function returns the revision of the deployed
database schema. If a company's workflow has no control over database
schema changes, even the most logging-feature-rich PostgreSQL release
will provide insignificant benefit compared to the mess that needs to be
fixed.


Regards.



Re: [HACKERS] Verbosity of Function Return Type Checks

2008-09-08 Thread Volkan YAZICI
On Mon, 8 Sep 2008, Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> Modified as you suggested. BTW, will there be a similar i18n scenario
>> for "dropped column" you mentioned below?
>
> Yes, you need _() around those too.

For this purpose, I introduced a dropped_column_type variable in the
validate_tupdesc_compat() function:

  const char dropped_column_type[] = _("n/a (dropped column)");

And used it in the following fashion:

  OidIsValid(returned->attrs[i]->atttypid)
  ? format_type_be(returned->attrs[i]->atttypid)
  : dropped_column_type

>> Done with format_type_be() usage.
>
> BTW you forgot to remove the quotes around the type names (I know I told
> you to add them but Tom gave the reasons why it's not needed) :-)

Done.

> Those are minor problems that are easily fixed.  However there's a
> larger issue that Tom mentioned earlier and I concur, which is that the
> caller is forming the primary error message and passing it down.  It
> gets a bit silly if you consider the ways the messages end up worded:
>
>errmsg("returned record type does not match expected record type"));
>errdetail("Returned type \"%s\" does not match expected type \"%s\" in 
> column \"%s\".",
>---> this is the case where it's OK
>
>errmsg("wrong record type supplied in RETURN NEXT"));
>errdetail("Returned type \"%s\" does not match expected type \"%s\" in 
> column \"%s\".",
>--> this is strange
>
>errmsg("returned tuple structure does not match table of trigger event"));
>errdetail("Returned type \"%s\" does not match expected type \"%s\" in 
> column \"%s\".",
>--> this is not OK

What we're trying to do is avoid code duplication while checking the
returned tuple types against the expected tuple types. For this purpose
we implemented a generic function (validate_tupdesc_compat) to handle
all possible cases. But, IMHO, it's important to remember that no
generic function can satisfy every case equally well.

> I've been thinking how to pass down the context information without
> feeding the complete string, but I don't find a way without doing
> message construction. Construction is to be avoided because it's a
> pain for translators.
>
> Maybe we should just use something generic like errmsg("mismatching
> record type") and have the caller pass two strings specifying what's
> the "returned" tuple and what's the "expected", but I don't see how
> ...  (BTW this is worth fixing, because every case seems to have
> appeared independently without much thought as to other callsites with
> the same pattern.)

I considered the subject under the same constraints but couldn't come up
with a better solution. Nevertheless, I'm open to any suggestions.


Regards.

Index: src/pl/plpgsql/src/pl_exec.c
===
RCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.219
diff -c -r1.219 pl_exec.c
*** src/pl/plpgsql/src/pl_exec.c	1 Sep 2008 22:30:33 -	1.219
--- src/pl/plpgsql/src/pl_exec.c	9 Sep 2008 05:48:57 -
***
*** 188,194 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static bool compatible_tupdesc(TupleDesc td1, TupleDesc td2);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
--- 188,195 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static void validate_tupdesc_compat(TupleDesc expected, TupleDesc returned,
! 	const char *msg);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
***
*** 384,394 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	if (estate.rettupdesc == NULL ||
! 		!compatible_tupdesc(estate.rettupdesc, tupdesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
!  errmsg("returned record type does not match expected record type")));
  	break;
  case TYPEFUNC_RECORD:
  
--- 385,392 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	validate_tupdesc_compat(tupdesc, estate.rettupdesc,
! 			gettext_noop("returned record type does not match expected record type"));
  	break;
  case TYPEFUNC_RECORD:
  
***
*** 705,715 
  		rettup = NULL;
  	else
  	{
! 		if (!compatible_tupdesc(estate.rettupdesc,
! trigdata->tg_relation->rd_att))
! 			ereport(ERROR,
! 	(errcode(ERRCODE_DATATYPE_MISMATCH),
! 	 errmsg("returned tuple structure does not match table of trigger event")));
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_co

Re: [HACKERS] Verbosity of Function Return Type Checks

2008-09-05 Thread Volkan YAZICI
On Fri, 05 Sep 2008, Tom Lane <[EMAIL PROTECTED]> writes:
> I think the best way is to use
>
>   subroutine(..., gettext_noop("special error message here"))
>
> at the call sites, and then
>
>   errmsg("%s", _(msg))
>
> when throwing the error.  gettext_noop() is needed to have the string
> be put into the message catalog, but it doesn't represent any run-time
> effort.  The _() macro is what actually makes the translation lookup
> happen.  We don't want to incur that cost in the normal no-error case,
> which is why you shouldn't just do _() at the call site and pass an
> already-translated string to the subroutine.

Modified as you suggested. BTW, will there be a similar i18n scenario
for "dropped column" you mentioned below?

>>   if (td1->attrs[i]->atttypid &&
>>   td2->attrs[i]->atttypid &&
>>   td1->attrs[i]->atttypid != td2->attrs[i]->atttypid)
>
>> expression to fix this?
>
> No, that's weakening the compatibility check.  (There's a separate issue
> here of teaching plpgsql to actually cope nicely with rowtypes
> containing dropped columns, but that's well beyond the scope of this
> patch.)
>
> What I'm on about is protecting format_type_be() from being passed
> a zero and then failing.  Perhaps it would be good enough to do
> something like
>
>   OidIsValid(td1->attrs[i]->atttypid) ?
>  format_type_with_typemod(td1->attrs[i]->atttypid,
>   td1->attrs[i]->atttypmod) :
>  "dropped column"
>
> while throwing the error.
>
> BTW, since what's actually being looked at is just the type OID and not
> the typmod, it seems inappropriate to me to show the typmod in the
> error.  I'd go with just format_type_be(td1->attrs[i]->atttypid) here
> I think.

Done with format_type_be() usage.

BTW, Alvaro fixed my string concatenations, which results in lines
exceeding 80 characters in width, but I'd like to ask once more whether
you're sure about this. IMHO, PostgreSQL is also famous for the quality
and readability of its source code -- something I'm quite proud of as a
user, kudos to the developers -- and I think it'd be better to stick to
the 80-character width convention as much as possible.


Regards.

Index: src/pl/plpgsql/src/pl_exec.c
===
RCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.219
diff -c -r1.219 pl_exec.c
*** src/pl/plpgsql/src/pl_exec.c	1 Sep 2008 22:30:33 -	1.219
--- src/pl/plpgsql/src/pl_exec.c	5 Sep 2008 18:19:50 -
***
*** 188,194 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static bool compatible_tupdesc(TupleDesc td1, TupleDesc td2);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
--- 188,195 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static void validate_tupdesc_compat(TupleDesc expected, TupleDesc returned,
! 	const char *msg);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
***
*** 384,394 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	if (estate.rettupdesc == NULL ||
! 		!compatible_tupdesc(estate.rettupdesc, tupdesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
!  errmsg("returned record type does not match expected record type")));
  	break;
  case TYPEFUNC_RECORD:
  
--- 385,392 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	validate_tupdesc_compat(tupdesc, estate.rettupdesc,
! 			gettext_noop("returned record type does not match expected record type"));
  	break;
  case TYPEFUNC_RECORD:
  
***
*** 705,715 
  		rettup = NULL;
  	else
  	{
! 		if (!compatible_tupdesc(estate.rettupdesc,
! trigdata->tg_relation->rd_att))
! 			ereport(ERROR,
! 	(errcode(ERRCODE_DATATYPE_MISMATCH),
! 	 errmsg("returned tuple structure does not match table of trigger event")));
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
  	}
--- 703,711 
  		rettup = NULL;
  	else
  	{
! 		validate_tupdesc_compat(trigdata->tg_relation->rd_att,
! estate.rettupdesc,
! gettext_noop("returned tuple structure does not match table of trigger event"));
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
  	}
***
*** 2199,2209 
  		  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
  		  

Re: [HACKERS] Verbosity of Function Return Type Checks

2008-09-05 Thread Volkan YAZICI
On Fri, 5 Sep 2008, Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Please use the patch I posted yesterday, as it had all the issues I
> found fixed.  There were other changes in that patch too.

My bad. The patch is modified per Tom's suggestions[1][2]. (All 115
tests passed against CVS tip.)


Regards.

[1] "char *msg" is replaced with "const char *msg".

[2] "errmsg(msg)" is replaced with 'errmsg("%s", msg)'.

Index: src/pl/plpgsql/src/pl_exec.c
===
RCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.219
diff -c -r1.219 pl_exec.c
*** src/pl/plpgsql/src/pl_exec.c	1 Sep 2008 22:30:33 -	1.219
--- src/pl/plpgsql/src/pl_exec.c	5 Sep 2008 13:47:07 -
***
*** 188,194 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static bool compatible_tupdesc(TupleDesc td1, TupleDesc td2);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
--- 188,195 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static void validate_tupdesc_compat(TupleDesc expected, TupleDesc returned,
! 	const char *msg);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
***
*** 384,394 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	if (estate.rettupdesc == NULL ||
! 		!compatible_tupdesc(estate.rettupdesc, tupdesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
!  errmsg("returned record type does not match expected record type")));
  	break;
  case TYPEFUNC_RECORD:
  
--- 385,392 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	validate_tupdesc_compat(tupdesc, estate.rettupdesc,
! 			"returned record type does not match expected record type");
  	break;
  case TYPEFUNC_RECORD:
  
***
*** 705,715 
  		rettup = NULL;
  	else
  	{
! 		if (!compatible_tupdesc(estate.rettupdesc,
! trigdata->tg_relation->rd_att))
! 			ereport(ERROR,
! 	(errcode(ERRCODE_DATATYPE_MISMATCH),
! 	 errmsg("returned tuple structure does not match table of trigger event")));
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
  	}
--- 703,711 
  		rettup = NULL;
  	else
  	{
! 		validate_tupdesc_compat(trigdata->tg_relation->rd_att,
! estate.rettupdesc,
! "returned tuple structure does not match table of trigger event");
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
  	}
***
*** 2199,2209 
  		  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
  		   errmsg("record \"%s\" is not assigned yet",
    rec->refname),
! 		   errdetail("The tuple structure of a not-yet-assigned record is indeterminate.")));
! 	if (!compatible_tupdesc(tupdesc, rec->tupdesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! 		errmsg("wrong record type supplied in RETURN NEXT")));
  	tuple = rec->tup;
  }
  break;
--- 2195,2204 
  		  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
  		   errmsg("record \"%s\" is not assigned yet",
    rec->refname),
! 		   errdetail("The tuple structure of a not-yet-assigned"
! 	 " record is indeterminate.")));
! 	validate_tupdesc_compat(tupdesc, rec->tupdesc,
! 		"wrong record type supplied in RETURN NEXT");
  	tuple = rec->tup;
  }
  break;
***
*** 2309,2318 
  		   stmt->params);
  	}
  
! 	if (!compatible_tupdesc(estate->rettupdesc, portal->tupDesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! 		  errmsg("structure of query does not match function result type")));
  
  	while (true)
  	{
--- 2304,2311 
  		   stmt->params);
  	}
  
! 	validate_tupdesc_compat(estate->rettupdesc, portal->tupDesc,
! 			"structure of query does not match function result type");
  
  	while (true)
  	{
***
*** 5145,5167 
  }
  
  /*
!  * Check two tupledescs have matching number and types of attributes
   */
! static bool
! compatible_tupdesc(TupleDesc td1, TupleDesc td2)
  {
! 	int			i;
  
! 	if (td1->natts != td2->natts)
! 		return false;
  
! 	for (i = 0; i < td1->natts; i++)
! 	{
! 		if (td1->attrs[i]->atttypid != td2->attrs[i]->atttypid)
! 			return false;
! 	}
  
! 	return true;
  }
  
  /* --
--- 5138,5174 
  }
  
  /*
!

Re: [HACKERS] Verbosity of Function Return Type Checks

2008-09-04 Thread Volkan YAZICI
On Thu, 04 Sep 2008, Tom Lane <[EMAIL PROTECTED]> writes:
> This is not ready to go: you've lost the ability to localize most of
> the error message strings.

How can I make this available? What's your suggestion?

> Also, "char *msg" should be "const char *msg"

Done.

> if you're going to pass literal constants to it, and this gives me the
> willies even though the passed-in strings are supposedly all fixed:
>   errmsg(msg),
> Use
>   errmsg("%s", msg),
> to be safe.

Done.

> Actually, the entire concept of varying the main message to suit the
> context exactly, while the detail messages are *not* changing, seems
> pretty bogus...

I share your concerns but couldn't come up with a better approach. I'd
be happy to hear your suggestions.

> Another problem with it is it's likely going to fail completely on
> dropped columns (which will have atttypid = 0).

Is it OK if I replace the

  if (td1->attrs[i]->atttypid != td2->attrs[i]->atttypid)

line in validate_tupdesc_compat with the

  if (td1->attrs[i]->atttypid &&
      td2->attrs[i]->atttypid &&
      td1->attrs[i]->atttypid != td2->attrs[i]->atttypid)

expression to fix this?


Regards.

Index: src/pl/plpgsql/src/pl_exec.c
===
RCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.219
diff -c -r1.219 pl_exec.c
*** src/pl/plpgsql/src/pl_exec.c	1 Sep 2008 22:30:33 -	1.219
--- src/pl/plpgsql/src/pl_exec.c	5 Sep 2008 06:13:18 -
***
*** 188,194 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static bool compatible_tupdesc(TupleDesc td1, TupleDesc td2);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
--- 188,195 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static void validate_tupdesc_compat(TupleDesc td1, TupleDesc td2,
! 	const char *msg);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
***
*** 384,394 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	if (estate.rettupdesc == NULL ||
! 		!compatible_tupdesc(estate.rettupdesc, tupdesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
!  errmsg("returned record type does not match expected record type")));
  	break;
  case TYPEFUNC_RECORD:
  
--- 385,393 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	validate_tupdesc_compat(tupdesc, estate.rettupdesc,
! 			"returned record type does not "
! 			"match expected record type");
  	break;
  case TYPEFUNC_RECORD:
  
***
*** 705,715 
  		rettup = NULL;
  	else
  	{
! 		if (!compatible_tupdesc(estate.rettupdesc,
! trigdata->tg_relation->rd_att))
! 			ereport(ERROR,
! 	(errcode(ERRCODE_DATATYPE_MISMATCH),
! 	 errmsg("returned tuple structure does not match table of trigger event")));
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
  	}
--- 704,713 
  		rettup = NULL;
  	else
  	{
! 		validate_tupdesc_compat(trigdata->tg_relation->rd_att,
! estate.rettupdesc,
! "returned tuple structure does not match "
! "table of trigger event");
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
  	}
***
*** 2199,2209 
  		  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
  		   errmsg("record \"%s\" is not assigned yet",
    rec->refname),
! 		   errdetail("The tuple structure of a not-yet-assigned record is indeterminate.")));
! 	if (!compatible_tupdesc(tupdesc, rec->tupdesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! 		errmsg("wrong record type supplied in RETURN NEXT")));
  	tuple = rec->tup;
  }
  break;
--- 2197,2207 
  		  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
  		   errmsg("record \"%s\" is not assigned yet",
    rec->refname),
! 		   errdetail("The tuple structure of a not-yet-assigned"
! 	 " record is indeterminate.")));
! 	validate_tupdesc_compat(rec->tupdesc, tupdesc,
! 		"wrong record type supplied in "
! 			"RETURN NEXT");
  	tuple = rec->tup;
  }
  break;
***
*** 2309,2318 
  		   stmt->params);
  	}
  
! 	if (!compatible_tupdesc(estate->rettupdesc, portal->tupDesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_D

Re: [HACKERS] Verbosity of Function Return Type Checks

2008-09-04 Thread Volkan YAZICI
On Thu, 4 Sep 2008, Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Cool, thanks.  I had a look and you had some of the expected vs.
> returned reversed.

I'll be happy to fix the reversed ones if you can report them in more
detail.


Regards.



Re: [HACKERS] Verbosity of Function Return Type Checks

2008-08-09 Thread Volkan YAZICI
[Please ignore the previous reply.]

On Fri, 8 Aug 2008, Alvaro Herrera <[EMAIL PROTECTED]> writes:
> I think this is a good idea, but the new error messages need more work.
> Have a look at the message style guidelines please,
> http://www.postgresql.org/docs/8.3/static/error-style-guide.html

Right. Done -- I hope.

> Particularly I think you need to keep the original errmsg() and add the
> new messages as errdetail().

Made callers pass the related error message as a string parameter, and
appended the required details using errdetail().

> (I notice that there's the slight problem
> that the error messages are different for the different callers.)

The above-mentioned change should address this issue too.

> Also, please use context diffs.

Done.


Regards.

Index: src/pl/plpgsql/src/pl_exec.c
===
RCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.216
diff -c -r1.216 pl_exec.c
*** src/pl/plpgsql/src/pl_exec.c	16 May 2008 18:34:51 -	1.216
--- src/pl/plpgsql/src/pl_exec.c	9 Aug 2008 10:10:32 -
***
*** 190,196 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static bool compatible_tupdesc(TupleDesc td1, TupleDesc td2);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
--- 190,196 
  	   Oid reqtype, int32 reqtypmod,
  	   bool isnull);
  static void exec_init_tuple_store(PLpgSQL_execstate *estate);
! static void validate_tupdesc_compat(TupleDesc td1, TupleDesc td2, char *msg);
  static void exec_set_found(PLpgSQL_execstate *estate, bool state);
  static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
  static void free_var(PLpgSQL_var *var);
***
*** 386,396 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	if (estate.rettupdesc == NULL ||
! 		!compatible_tupdesc(estate.rettupdesc, tupdesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
!  errmsg("returned record type does not match expected record type")));
  	break;
  case TYPEFUNC_RECORD:
  
--- 386,394 
  			{
  case TYPEFUNC_COMPOSITE:
  	/* got the expected result rowtype, now check it */
! 	validate_tupdesc_compat(tupdesc, estate.rettupdesc,
! 			"returned record type does not "
! 			"match expected record type");
  	break;
  case TYPEFUNC_RECORD:
  
***
*** 707,717 
  		rettup = NULL;
  	else
  	{
! 		if (!compatible_tupdesc(estate.rettupdesc,
! trigdata->tg_relation->rd_att))
! 			ereport(ERROR,
! 	(errcode(ERRCODE_DATATYPE_MISMATCH),
! 	 errmsg("returned tuple structure does not match table of trigger event")));
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
  	}
--- 705,714 
  		rettup = NULL;
  	else
  	{
! 		validate_tupdesc_compat(trigdata->tg_relation->rd_att,
! estate.rettupdesc,
! "returned tuple structure does not match "
! "table of trigger event");
  		/* Copy tuple to upper executor memory */
  		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
  	}
***
*** 2201,2211 
  		  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
  		   errmsg("record \"%s\" is not assigned yet",
    rec->refname),
! 		   errdetail("The tuple structure of a not-yet-assigned record is indeterminate.")));
! 	if (!compatible_tupdesc(tupdesc, rec->tupdesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! 		errmsg("wrong record type supplied in RETURN NEXT")));
  	tuple = rec->tup;
  }
  break;
--- 2198,2208 
  		  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
  		   errmsg("record \"%s\" is not assigned yet",
    rec->refname),
! 		   errdetail("The tuple structure of a not-yet-assigned"
! 	 " record is indeterminate.")));
! 	validate_tupdesc_compat(rec->tupdesc, tupdesc,
! 		"wrong record type supplied in "
! 			"RETURN NEXT");
  	tuple = rec->tup;
  }
  break;
***
*** 2311,2320 
  		   stmt->params);
  	}
  
! 	if (!compatible_tupdesc(estate->rettupdesc, portal->tupDesc))
! 		ereport(ERROR,
! (errcode(ERRCODE_DATATYPE_MISMATCH),
! 		  errmsg("structure of query does not match function result type")));
  
  	while (true)
  	{
--- 2308,2316 
  		   stmt->params);
  	}
  
! 	validate_tupdesc_compat(portal->tupDesc, estate->rettupdesc,
! 			"structure of query does not match function "
! 			"result type");
  
  	while (true)
  	{
***
*** 5138,5160 
  }
  
  /*
!  * Check two tupledescs have matchi

Re: [HACKERS] Verbosity of Function Return Type Checks

2008-08-09 Thread Volkan YAZICI
On Fri, 8 Aug 2008, Alvaro Herrera <[EMAIL PROTECTED]> writes:
> I think this is a good idea, but the new error messages need more work.
> Have a look at the message style guidelines please,
> http://www.postgresql.org/docs/8.3/static/error-style-guide.html

Right. Done -- I hope.

> Particularly I think you need to keep the original errmsg() and add the
> new messages as errdetail().  (I notice that there's the slight problem
> that the error messages are different for the different callers.)

Done.

> Also, please use context diffs.

Done.


Regards.

Index: src/pl/plpgsql/src/pl_exec.c
===
RCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.216
diff -r1.216 pl_exec.c
193c193
< static bool compatible_tupdesc(TupleDesc td1, TupleDesc td2);
---
> static void validate_tupdesc_compat(TupleDesc td1, TupleDesc td2);
389,393c389
< 	if (estate.rettupdesc == NULL ||
< 		!compatible_tupdesc(estate.rettupdesc, tupdesc))
< 		ereport(ERROR,
< (errcode(ERRCODE_DATATYPE_MISMATCH),
<  errmsg("returned record type does not match expected record type")));
---
> 	validate_tupdesc_compat(tupdesc, estate.rettupdesc);
710,714c706,707
< 		if (!compatible_tupdesc(estate.rettupdesc,
< trigdata->tg_relation->rd_att))
< 			ereport(ERROR,
< 	(errcode(ERRCODE_DATATYPE_MISMATCH),
< 	 errmsg("returned tuple structure does not match table of trigger event")));
---
> 		validate_tupdesc_compat(trigdata->tg_relation->rd_att,
> estate.rettupdesc);
2204,2208c2197,2199
< 		   errdetail("The tuple structure of a not-yet-assigned record is indeterminate.")));
< 	if (!compatible_tupdesc(tupdesc, rec->tupdesc))
< 		ereport(ERROR,
< (errcode(ERRCODE_DATATYPE_MISMATCH),
< 		errmsg("wrong record type supplied in RETURN NEXT")));
---
> 		   errdetail("The tuple structure of a not-yet-assigned"
> 	 " record is indeterminate.")));
> 	validate_tupdesc_compat(rec->tupdesc, tupdesc);
2314,2317c2305
< 	if (!compatible_tupdesc(estate->rettupdesc, portal->tupDesc))
< 		ereport(ERROR,
< (errcode(ERRCODE_DATATYPE_MISMATCH),
< 		  errmsg("structure of query does not match function result type")));
---
> 	validate_tupdesc_compat(portal->tupDesc, estate->rettupdesc);
5141c5129,5130
<  * Check two tupledescs have matching number and types of attributes
---
>  * Validates compatibility of supplied TupleDesc couple by checking # and type
>  * of available arguments.
5143,5144c5132,5133
< static bool
< compatible_tupdesc(TupleDesc td1, TupleDesc td2)
---
> static void
> validate_tupdesc_compat(TupleDesc td1, TupleDesc td2)
5146c5135,5141
< 	int			i;
---
> 	int i;
> 
> 	if (!td1 || !td2)
> 		ereport(ERROR,
> (errcode(ERRCODE_DATATYPE_MISMATCH),
>  errmsg("returned record type does not match expected "
> 		"record type")));
5149c5144,5150
< 		return false;
---
> 		ereport(ERROR,
> (errcode(ERRCODE_DATATYPE_MISMATCH),
>  errmsg("returned record type does not match expected "
> 		"record type"),
>  errdetail("Number of returned columns (%d) does not match "
> 		   "expected column count (%d).",
> 		   td1->natts, td2->natts)));
5152d5152
< 	{
5154,5157c5154,5164
< 			return false;
< 	}
< 
< 	return true;
---
> 			ereport(ERROR,
> 	(errcode(ERRCODE_DATATYPE_MISMATCH),
> 	 errmsg("returned record type does not match expected "
> 			"record type"),
> 	 errdetail("Returned record type (%s) does not match "
> 			   "expected record type (%s) in column %d (%s).",
> 			   format_type_with_typemod(td1->attrs[i]->atttypid,
> 		td1->attrs[i]->atttypmod),
> 			   format_type_with_typemod(td2->attrs[i]->atttypid,
> 		td2->attrs[i]->atttypmod),
> 			   (1+i), NameStr(td2->attrs[i]->attname;



[HACKERS] Verbosity of Function Return Type Checks

2008-08-08 Thread Volkan YAZICI
Hi,

Yesterday I needed to fiddle with PostgreSQL internals to debug a
PL/pgSQL procedure returning a set of records. I've attached the patch I
used to increase the verbosity of the error messages related to function
return type checks. I'd appreciate it if a developer could commit this
patch (or a similar one) to core.


Regards.

Index: src/pl/plpgsql/src/pl_exec.c
===
RCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.216
diff -u -r1.216 pl_exec.c
--- src/pl/plpgsql/src/pl_exec.c	16 May 2008 18:34:51 -	1.216
+++ src/pl/plpgsql/src/pl_exec.c	8 Aug 2008 11:52:02 -
@@ -190,7 +190,7 @@
 	   Oid reqtype, int32 reqtypmod,
 	   bool isnull);
 static void exec_init_tuple_store(PLpgSQL_execstate *estate);
-static bool compatible_tupdesc(TupleDesc td1, TupleDesc td2);
+static void validate_tupdesc_compat(TupleDesc td1, TupleDesc td2);
 static void exec_set_found(PLpgSQL_execstate *estate, bool state);
 static void plpgsql_create_econtext(PLpgSQL_execstate *estate);
 static void free_var(PLpgSQL_var *var);
@@ -386,11 +386,12 @@
 			{
 case TYPEFUNC_COMPOSITE:
 	/* got the expected result rowtype, now check it */
-	if (estate.rettupdesc == NULL ||
-		!compatible_tupdesc(estate.rettupdesc, tupdesc))
+	if (!estate.rettupdesc)
 		ereport(ERROR,
 (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("returned record type does not match expected record type")));
+ errmsg("returned record type does not match "
+		"expected record type")));
+	validate_tupdesc_compat(tupdesc, estate.rettupdesc);
 	break;
 case TYPEFUNC_RECORD:
 
@@ -707,11 +708,8 @@
 		rettup = NULL;
 	else
 	{
-		if (!compatible_tupdesc(estate.rettupdesc,
-trigdata->tg_relation->rd_att))
-			ereport(ERROR,
-	(errcode(ERRCODE_DATATYPE_MISMATCH),
-	 errmsg("returned tuple structure does not match table of trigger event")));
+		validate_tupdesc_compat(trigdata->tg_relation->rd_att,
+estate.rettupdesc);
 		/* Copy tuple to upper executor memory */
 		rettup = SPI_copytuple((HeapTuple) DatumGetPointer(estate.retval));
 	}
@@ -2202,10 +2200,7 @@
 		   errmsg("record \"%s\" is not assigned yet",
   rec->refname),
 		   errdetail("The tuple structure of a not-yet-assigned record is indeterminate.")));
-	if (!compatible_tupdesc(tupdesc, rec->tupdesc))
-		ereport(ERROR,
-(errcode(ERRCODE_DATATYPE_MISMATCH),
-		errmsg("wrong record type supplied in RETURN NEXT")));
+	validate_tupdesc_compat(rec->tupdesc, tupdesc);
 	tuple = rec->tup;
 }
 break;
@@ -2311,10 +2306,7 @@
 		   stmt->params);
 	}
 
-	if (!compatible_tupdesc(estate->rettupdesc, portal->tupDesc))
-		ereport(ERROR,
-(errcode(ERRCODE_DATATYPE_MISMATCH),
-		  errmsg("structure of query does not match function result type")));
+	validate_tupdesc_compat(portal->tupDesc, estate->rettupdesc);
 
 	while (true)
 	{
@@ -5138,23 +5130,32 @@
 }
 
 /*
- * Check two tupledescs have matching number and types of attributes
+ * Validates compatibility of supplied TupleDesc's by checking # and type of
+ * available arguments.
  */
-static bool
-compatible_tupdesc(TupleDesc td1, TupleDesc td2)
+static void
+validate_tupdesc_compat(TupleDesc td1, TupleDesc td2)
 {
-	int			i;
+	int i;
 
 	if (td1->natts != td2->natts)
-		return false;
+		ereport(ERROR,
+(errcode(ERRCODE_DATATYPE_MISMATCH),
+ errmsg("Number of returned columns (%d) does not match "
+		"expected column count (%d).",
+	td1->natts, td2->natts)));
 
 	for (i = 0; i < td1->natts; i++)
-	{
 		if (td1->attrs[i]->atttypid != td2->attrs[i]->atttypid)
-			return false;
-	}
-
-	return true;
+			ereport(ERROR,
+	(errcode(ERRCODE_DATATYPE_MISMATCH),
+	 errmsg("Returned record type (%s) does not match "
+			"expected record type (%s) in column %d (%s).",
+			format_type_with_typemod(td1->attrs[i]->atttypid,
+ td1->attrs[i]->atttypmod),
+			format_type_with_typemod(td2->attrs[i]->atttypid,
+	 td2->attrs[i]->atttypmod),
+			(1+i), NameStr(td2->attrs[i]->attname))));
 }
 
 /* --



[HACKERS] Concurrent Restores

2008-07-03 Thread Volkan YAZICI
Hi,

[I've searched archives for the subject, but couldn't find a related
discussion. If there is any, sorry for duplication.]

We're migrating nearly a dozen MSSQL servers of size ~100GiB per
cluster. For this purpose, we dump MSSQL data to COPY files using a Java
program. We have database schemas for PostgreSQL which are equivalent to
their corresponding ones on the MSSQL side. The problem is, while we're
creating primary key, foreign key and index relations, I'm manually
partitioning the related SQL files into separate files so the work can
run on multiple CPUs. One can argue that concurrent processes will
generate more disk I/O in this scheme and cause an I/O bottleneck
instead. But as far as I have monitored the system statistics during
concurrent restoration, in our situation the operation is CPU bound, not
disk I/O bound. (Thanks, SAN!)

pg_dump is capable of dumping objects with respect to their dependency
relations. It'd be really awesome if pg_dump could also parallelize
primary key, foreign key and index creation queries into separate
files. Would such a thing be possible? Comments?


Regards.



Re: [HACKERS] Text <-> C string

2008-03-19 Thread Volkan YAZICI
On Wed, 19 Mar 2008, Sam Mason <[EMAIL PROTECTED]> writes:
> ...
>   char * str = cstring_of_text(src_text);
> ...
>
> I think I got my original inspiration for doing it this way around from
> the Caml language.

A similar style is also used in Common Lisp for class accessors:

  char *s = cstring_of(text);
  text *t = text_of(cstring);

But I'd vote for the TextPGetCString style Tom suggested, for eye-habit
compatibility with the rest of the code.
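For concreteness, the two accessors being discussed could be sketched
roughly like this (a hedged sketch assuming a detoasted varlena value
and the usual VARSIZE/VARDATA/SET_VARSIZE macros; the names are
placeholders for whichever style wins, not a committed API):

```c
/* Sketch only: assumes the input text value is already detoasted. */
static char *
text_to_cstring_sketch(const text *t)
{
	size_t		len = VARSIZE(t) - VARHDRSZ;
	char	   *s = palloc(len + 1);

	memcpy(s, VARDATA(t), len);
	s[len] = '\0';
	return s;
}

static text *
cstring_to_text_sketch(const char *s)
{
	size_t		len = strlen(s);
	text	   *t = palloc(len + VARHDRSZ);

	SET_VARSIZE(t, len + VARHDRSZ);
	memcpy(VARDATA(t), s, len);
	return t;
}
```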


Regards.



Re: [HACKERS] Multiple SRF right after SELECT

2008-03-19 Thread Volkan YAZICI
On Wed, 19 Mar 2008, "Nikolay Samokhvalov" <[EMAIL PROTECTED]> writes:
> I wonder, if the following is correct and provides expected result:
>
> test=# select generate_series(1, 2), generate_series(1, 4);
>  generate_series | generate_series
> -+-
>1 |   1
>2 |   2
>1 |   3
>2 |   4
> (4 rows)
>
>
>  1. Is it correct at all to use SRF in select list, w/o explicit FROM?
> Why then we do not allow using subselects that return multiple rows?
> I'd rather expect that these two things work in similar manner.
>  2. Why the query above provides 4 rows, not 2*4=8? Actually, that's
> interesting -- I can use this query to find l.c.m. But it's defenetely
> not that I'd expect before my try...

From PL/scheme sources:

/*
 * There're 2 ways to return from an SRF:
 *
 * 1. Value-per-call Mode
 *You return each tuple one by one via the SRF_RETURN_NEXT() macro.
 *But the PG_RETURN_DATUM() calls in the macro make it quite
 *impractical. OTOH, this method gives the opportunity to call SRFs in
 *a fashion like "SELECT mysrf();"
 *
 * 2. Materialize Mode
 *In this mode, you collect all tuples in a single set and return
 *that set. When compared to the previous method, it's not possible to
 *use an SRF in materialize mode like "SELECT my_materialized_srf();",
 *instead, you need to access it as a simple table: "SELECT * FROM
 *my_materialized_srf();".
 *
 * ...
 */

So I conclude that generate_series() is written as an SRF in
value-per-call mode. (Also, you may want to check the Returning Sets[1]
chapter of the PostgreSQL manual.)

[1] 
http://www.postgresql.org/docs/current/static/xfunc-c.html#XFUNC-C-RETURN-SET


Regards.



[HACKERS] BK-Tree Implementation on top of GiST

2007-10-28 Thread Volkan YAZICI
Hi,

In a company's address search framework, we need to deal with
queries including potential spelling errors. After applying some
external address space constraints (e.g. matching first letters,
word length, etc.) we still end up with a huge data set to filter
through Levenshtein-like distance metrics.

Sequentially scanning a record set with roughly 100,000 entries
through some sort of distance metric is not something we'd want in
production. For this purpose, I plan to implement BK-trees[1] on top
of GiST, which will (technically) reduce lookup complexity from O(n)
to O(log n). As far as I'm concerned, such work will be worth the
time it takes when compared to the overhead reduction it will bring.

[1] Some approaches to best-match file searching
http://portal.acm.org/citation.cfm?id=362003.362025

Anyway, I have some experience with the source code of the intarray
module. Does anybody have suggestions/warnings/comments about such a
project? Would the PostgreSQL team welcome such a patch to be
integrated into the fuzzystrmatch module, or should I create my own
project at pgfoundry?

BTW, as you'd imagine, the implementation won't be anything specific
to Levenshtein. Any distance metric on any kind of data will be able
to benefit from BK-trees.


Regards.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] pg_get_domaindef()

2006-10-26 Thread Volkan YAZICI
On Oct 26 05:27, Volkan YAZICI wrote:
> On Oct 26 03:33, FAST PostgreSQL wrote:
> > I couldn't find the CONSTRAINT name ('testconstraint' in this case) being 
> > stored in the system catalog. Any idea where I can find it?
> 
> AFAIK, it is passed to the related procedure via a DomainIOData struct
> that fcinfo->flinfo->fn_extra points to. (See domain_in() in
> backend/utils/adt/domains.c)

Ah, please excuse my wrong answer. See the GetDomainConstraints()
function in the same file.


Regards.



Re: [HACKERS] pg_get_domaindef()

2006-10-26 Thread Volkan YAZICI
On Oct 26 03:33, FAST PostgreSQL wrote:
> I couldn't find the CONSTRAINT name ('testconstraint' in this case) being 
> stored in the system catalog. Any idea where I can find it?

AFAIK, it is passed to the related procedure via a DomainIOData struct
that fcinfo->flinfo->fn_extra points to. (See domain_in() in
backend/utils/adt/domains.c)


Regards.



Re: [HACKERS] estimated_count() implementation

2006-10-22 Thread Volkan YAZICI
On Oct 21 05:09, Michael Fuhr wrote:
> I hadn't noticed the TODO item but about a year ago I posted a
> cursor_plan_rows() function and asked for comments.

Ah! I didn't see this.

> The only reply was from Tom, who said, "Given how far off it
> frequently is, I can't believe that any of the people who ask for the
> feature would find this a satisfactory answer :-("

AFAIU, cursor_plan_rows() has some serious limitations, like requiring
a portal to be executed against. I was planning to make
estimated_count() work for nodeAgg and custom calls too - as count()
does.

But OTOH, Tom's complaints look like they still apply to my
estimated_count() too. Does this TODO need a little bit more
clarification, or can we count it as a redundant one?


Regards.



[HACKERS] estimated_count() implementation

2006-10-21 Thread Volkan YAZICI
Hi,

I'm trying to implement the estimated_count() function that's mentioned
in the TODO list. First of all, I wanted to learn whether this TODO
item is still valid. I looked at the related -hackers discussions; does
anybody want to say anything more related to the implementation?

Also I have some questions. I'd appreciate it if somebody would answer
any of the questions below to help me find my way.

1. I'm planning to use the same method as ExplainOneQuery() does in
   backend/commands/explain.c. (Using the Plan->plan_rows that will be
   returned from the planner(query, isCursor, cursorOptions, params)
   function.) Is this the way to go, or should I look for another
   method to aggregate the estimated row count?

2. I've been also considering getting called from a nodeAgg. In such a
   case, it shouldn't be a problem for me to use same way as above to
   retrieve Query, ParamListInfo and TupOutputState. Right?

3. I was looking at int8inc() and backend/executor/nodeAgg.c and
   couldn't find anything specific to the count() aggregate. Am I
   looking at the right place? For instance, for my case, I won't
   need any transition function call. How should I modify nodeAgg.c
   to skip transfn calls for estimated_count()?

4. Any related question I missed.


Regards.



[HACKERS] TupleDesc for a Nested Record

2006-10-10 Thread Volkan YAZICI
Hi,

While returning from a function call, a PL can easily infer the
returned HeapTuple's TupleDesc from fcinfo. But what if the function
returns a record type? Then we must create our own TupleDesc (or
AttInMetadata) for the related attribute (and then create the
HeapTuple). So far everything is OK, but how can I infer the data
types in the nested record? This isn't supplied by fcinfo. What would
you suggest in such a situation?


Regards.



Re: [HACKERS] Storing MemoryContext Pointers

2006-10-06 Thread Volkan YAZICI
On Oct 05 03:34, Tom Lane wrote:
> Volkan YAZICI <[EMAIL PROTECTED]> writes:
> > When I allocate a new memory context via
> 
> >   oldmcxt = AllocSetContextCreate(TopMemoryContext, ...)
> >   persistent_mcxt = CurrentMemoryContext;
> 
> ITYM
> 
> persistent_mcxt = AllocSetContextCreate(TopMemoryContext, ...)

Oops! Right. (I was looking at some MemoryContextSwitchTo() code while
typing the above lines.)

> because the other doesn't do what you think...
> 
> > How can I store the persistent_mcxt in a persistent place that I'll be
> > able to reach it in my next getting invoked?
> 
> Make it a static variable.

I had thought of some kind of fcinfo->flinfo->fn_extra trick but... a
static variable is fair enough too.
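A minimal sketch of the static-variable approach Tom suggests
(illustrative names; assumes the usual five-argument
AllocSetContextCreate() of this era):

```c
/* Sketch only: a module-private long-lived context, created lazily
 * under TopMemoryContext on first use and reused across calls. */
static MemoryContext persistent_mcxt = NULL;

static MemoryContext
get_persistent_mcxt(void)
{
	if (persistent_mcxt == NULL)
		persistent_mcxt = AllocSetContextCreate(TopMemoryContext,
												"my persistent data",
												ALLOCSET_DEFAULT_MINSIZE,
												ALLOCSET_DEFAULT_INITSIZE,
												ALLOCSET_DEFAULT_MAXSIZE);
	return persistent_mcxt;
}

/* Usage: allocations made while switched in survive across calls. */
MemoryContext oldcxt = MemoryContextSwitchTo(get_persistent_mcxt());
/* ... palloc() long-lived data here ... */
MemoryContextSwitchTo(oldcxt);
```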

Thanks so much for the answer (also for your reply to the
caching-related post).


Regards.



[HACKERS] Storing MemoryContext Pointers

2006-10-05 Thread Volkan YAZICI
Hi,

When I allocate a new memory context via

  oldmcxt = AllocSetContextCreate(TopMemoryContext, ...)
  persistent_mcxt = CurrentMemoryContext;

How can I store persistent_mcxt in a persistent place so that I'll be
able to reach it the next time I'm invoked? Is that possible? If not,
how can I reach my previously created persistent data in my next
invocation?


Regards.



[HACKERS] Input Function (domain_in) Call

2006-08-18 Thread Volkan YAZICI
Hi,

I was using OidInputFunctionCall() to cast a basic type into a domain
type. But when I saw

/*
 * As above, for I/O functions identified by OID.  These are only to be
 * used in seldom-executed code paths.  They are not only slow but leak
 * memory.
 */
Datum
OidInputFunctionCall(Oid functionId, char *str,
 Oid typioparam, int32 typmod)

comment in backend/utils/fmgr/fmgr.c, I started to reconsider my
decision. Is this the right way to use the domain_in() function? Which
way would you suggest? Or is there a totally different way to
accomplish the basic-type-to-domain transition?
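In case it helps, the fmgr comment seems to point toward doing the
lookup once and caching the FmgrInfo, then going through
InputFunctionCall() instead of the by-OID convenience wrapper each
time. A rough sketch with error handling omitted (static variables
used for brevity; in a real handler, fn_extra would be a better home):

```c
/* Sketch only: look the type's input function up once, cache the
 * FmgrInfo, and call it through InputFunctionCall() afterwards. */
static FmgrInfo domain_in_flinfo;
static Oid		domain_typioparam;
static bool		domain_in_ready = false;

static Datum
call_domain_in(Oid domain_oid, char *str, int32 typmod)
{
	if (!domain_in_ready)
	{
		Oid			typinput;

		getTypeInputInfo(domain_oid, &typinput, &domain_typioparam);
		fmgr_info(typinput, &domain_in_flinfo);
		domain_in_ready = true;
	}
	return InputFunctionCall(&domain_in_flinfo, str,
							 domain_typioparam, typmod);
}
```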


Regards.



Re: [HACKERS] "cache reference leak" and "problem in alloc set" warnings

2006-08-18 Thread Volkan YAZICI
On Aug 17 10:38, Tom Lane wrote:
> Volkan YAZICI <[EMAIL PROTECTED]> writes:
> > I'm still bitten by a single "write past chunk" error while returning a
> > record in PL/scheme:
> 
> >   WARNING:  problem in alloc set ExprContext: detected write past chunk
> >   end in block 0x84a0598, chunk 0x84a0c84
> 
> The actual bug, almost certainly, is that you're miscomputing the space
> needed for a variable-size palloc request.  But tracking that down will
> be hard until you find out which chunk it is. 

Looks like my palloc() math was correct. I had just missed the special
handling of the attnulls array passed to heap_formtuple(). It should
have been

  attnulls[i] = (isnull) ? 'n' : ' ';

> Do you have a sequence that will make the problem happen consistently at
> the same address?  If so, you can use a gdb watchpoint to find out where
> the write-past-end is happening.  Or use a conditional breakpoint in
> AllocSetAlloc to try to identify where the chunk is handed out.

Yeah! That's exactly it. After setting a watchpoint ("watch
*0x84a0c84"), in the first "where" call, the erroneous line is right
in front of me!

> Another possibility is to set a breakpoint where the warning is emitted
> and take a look at the contents of the chunk to see if you can identify
> it; that wouldn't require knowing the target chunk address in advance.
> 
> BTW, if I recall that code correctly, the "chunk address" in the message
> is probably the address of the start of the overhead data for the chunk,
> not the usable-space start address that is passed back by palloc.

Thanks so much for your kind help. All the methods mentioned are
applicable across the whole of software development. Thanks again.


Regards.



Re: [HACKERS] "cache reference leak" and "problem in alloc set" warnings

2006-08-17 Thread Volkan YAZICI
On Aug 16 04:20, Volkan YAZICI wrote:
> On Aug 16 03:09, Volkan YAZICI wrote:
> > WARNING:  problem in alloc set ExprContext: detected write past chunk
> > end in block 0x8462f00, chunk 0x84634c8
> > WARNING:  cache reference leak: cache pg_type (34), tuple 2/7 has
> > count 1
> 
> Excuse me for bugging the list. I've solved the problem. I should place
> a ReleaseSysCache() call just after every SearchSysCache() call.

Looks like this only solves catalog-search-related allocation issues.
I'm still bitten by a single "write past chunk" error while returning a
record in PL/scheme:

  WARNING:  problem in alloc set ExprContext: detected write past chunk
  end in block 0x84a0598, chunk 0x84a0c84

First, I thought that it was because of clobbering a memory chunk that
doesn't belong to me. But when I place a

  { char *tmp = palloc(32); printf("-> %p\n", tmp); pfree(tmp); }

line at the entrance and exit of the PL handler, the output bounds
don't include the 0x84a0598 chunk above. Even the addresses of the heap
tuples I created are far away from the address in the error message.

I don't have any clue about the problematic section of the code,
although I know that it occurs when returning a record. I'd very much
appreciate it if somebody could help me figure out how to debug (or
even solve) the problem.


Regards.

P.S. Here's the related source code, in case anyone wants to take a look:
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/~checkout~/plscheme/plscheme/plscheme-8.2.c?rev=1.3&content-type=text/plain



Re: [HACKERS] libpq Describe Extension [WAS: Bytea and perl]

2006-08-16 Thread Volkan YAZICI
On Aug 16 11:37, Tom Lane wrote:
> Volkan YAZICI <[EMAIL PROTECTED]> writes:
> > On Aug 11 12:51, Greg Sabino Mullane wrote:
> >> Prepared statements are not visible nor survivable outside of your
> >> session, so this doesn't really make sense. If your application needs
> >> the information, it can get it at prepare time.
> 
> > What about persistent connections? Actually, I can give lots of corner
> > cases to support my idea but they're not that often used. I think, as
> > long as we'll break compatibility, placing Describe facility in the
> > PQprepare() is not the way to go.
> 
> I think this viewpoint has pretty much carried the day, so the
> PQdescribe functions should remain separate.  However, it still seems
> to me that it'd be a shame if PQdescribePrepared couldn't return the
> statement's output column types, seeing that the backend is going to
> pass that info to us anyway.

I think you have a misunderstanding about the patch I previously sent.
When you issue a PQdescribePrepared() call, the PGresult returned by
the first PQgetResult() call will have the input parameter types of the
prepared statement, and the PGresult returned by the second
PQgetResult() call will hold the statement's output column types.

> So I propose storing the parameter type
> info in a new section of a PGresult struct, and adding new PGresult
> accessor functions PQnparams, PQparamtype (or maybe PQptype to follow
> the existing PQftype precedent more closely) to fetch the parameter type
> info.  The existing functions PQnfields etc will fetch output-column
> info.  Aside from being more functional, this definition maintains the
> principle of least surprise, in that the interpretation of a PGresult
> from Describe isn't fundamentally different from a PGresult from a
> regular query.

Another possibility can be like this:

PGresult *PQdescribePrepared(PGconn *conn,
 const char *stmt,
 Oid **argtypes);

A PQdescribePrepared() call will immediately return a PGresult
(previously, we were just returning a boolean value showing the send
status of the command) that holds the statement's output column types,
and argtypes will be altered to point to an Oid array that has the
input parameter type information. (By assigning a NULL value to
argtypes, the user decides whether or not to receive the input
parameter types.)

> We also need async versions PQsendDescribePrepared and
> PQsendDescribePortal, as I mentioned before.

If you decide on the method to use, I volunteer to modify the existing
patch. Waiting for your comments.
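If it helps the decision, here is how the separate-function variant
would look from the client side. The accessor names below
(PQnparams/PQparamtype) are the ones proposed in this thread, not an
existing API, so treat this purely as a sketch:

```c
/* Hypothetical client-side usage under the proposed API. */
PGresult   *res = PQdescribePrepared(conn, "mystmt");

if (PQresultStatus(res) == PGRES_COMMAND_OK)
{
	int			i;

	for (i = 0; i < PQnparams(res); i++)
		printf("param %d: type oid %u\n", i, PQparamtype(res, i));
	for (i = 0; i < PQnfields(res); i++)
		printf("column %s: type oid %u\n",
			   PQfname(res, i), PQftype(res, i));
}
PQclear(res);
```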


Regards.



Re: [HACKERS] "cache reference leak" and "problem in alloc set" warnings

2006-08-16 Thread Volkan YAZICI
On Aug 16 03:09, Volkan YAZICI wrote:
> WARNING:  problem in alloc set ExprContext: detected write past chunk
> end in block 0x8462f00, chunk 0x84634c8
> WARNING:  cache reference leak: cache pg_type (34), tuple 2/7 has
> count 1

Excuse me for bugging the list. I've solved the problem. I should place
a ReleaseSysCache() call just after every SearchSysCache() call.


Regards.



[HACKERS] "cache reference leak" and "problem in alloc set" warnings

2006-08-16 Thread Volkan YAZICI
Hi,

I've been trying to implement INOUT/OUT functionality in PL/scheme.
When I return a record type tuple, postmaster complains with the
warnings below:

WARNING:  problem in alloc set ExprContext: detected write past chunk
end in block 0x8462f00, chunk 0x84634c8
WARNING:  cache reference leak: cache pg_type (34), tuple 2/7 has
count 1

I found a related thread in the ml archives where Joe Conway fixed a
similar problem in one of his patches, but I couldn't figure out how he
did it. Can somebody help me figure out the reasons for the above
warnings and how I can fix them?


Regards.

P.S. Also here's the backtrace of the stack just before the warnings
 are dumped. Yeah, it's a little bit useless 'cause there's nearly
 only one way to reach these errors but... I thought it could give
 an overview to hackers who take a quick look.

Breakpoint 2, AllocSetCheck (context=0x845ff58) at aset.c:1155
1155elog(WARNING, "problem in alloc set %s: 
detected write past c
(gdb) where
#0  AllocSetCheck (context=0x845ff58) at aset.c:1155
#1  0x0829b728 in AllocSetReset (context=0x845ff58) at aset.c:407
#2  0x0829c958 in MemoryContextReset (context=0x845ff58) at mcxt.c:129
#3  0x0817dce5 in ExecResult (node=0x84a0754) at nodeResult.c:113
#4  0x0816b423 in ExecProcNode (node=0x84a0754) at execProcnode.c:334
#5  0x081698fb in ExecutePlan (estate=0x84a05bc, planstate=0x84a0754, 
operation=CMD_SELECT,
numberTuples=0, direction=138818820, dest=0x84102ec) at execMain.c:1145
#6  0x0816888b in ExecutorRun (queryDesc=0x842c680, 
direction=ForwardScanDirection, count=138818820)
at execMain.c:223
#7  0x08204a08 in PortalRunSelect (portal=0x842eae4, forward=1 '\001', count=0, 
dest=0x84102ec)
at pquery.c:803
#8  0x08204762 in PortalRun (portal=0x842eae4, count=2147483647, 
dest=0x84102ec, altdest=0x84102ec,
completionTag=0xbfc23cb0 "") at pquery.c:655
#9  0x082001e5 in exec_simple_query (query_string=0x840f91c "SELECT 
in_out_t_2(13, true);")
at postgres.c:1004
#10 0x08202de5 in PostgresMain (argc=4, argv=0x83bd7fc, username=0x83bd7d4 
"vy") at postgres.c:3184
#11 0x081d6b54 in BackendRun (port=0x83d21a8) at postmaster.c:2853
#12 0x081d636f in BackendStartup (port=0x83d21a8) at postmaster.c:2490
#13 0x081d455e in ServerLoop () at postmaster.c:1203
#14 0x081d39ca in PostmasterMain (argc=3, argv=0x83bb888) at postmaster.c:955
#15 0x0818d404 in main (argc=3, argv=0x83bb888) at main.c:187

Breakpoint 1, PrintCatCacheLeakWarning (tuple=0xb5ef7dbc) at catcache.c:1808
1808Assert(ct->ct_magic == CT_MAGIC);
(gdb) where
#0  PrintCatCacheLeakWarning (tuple=0xb5ef7dbc) at catcache.c:1808
#1  0x0829e927 in ResourceOwnerReleaseInternal (owner=0x83da800,
phase=RESOURCE_RELEASE_AFTER_LOCKS, isCommit=1 '\001', isTopLevel=0 '\0') 
at resowner.c:273
#2  0x0829e64c in ResourceOwnerRelease (owner=0x83da800, 
phase=RESOURCE_RELEASE_AFTER_LOCKS,
isCommit=1 '\001', isTopLevel=0 '\0') at resowner.c:165
#3  0x0829dd8e in PortalDrop (portal=0x842eae4, isTopCommit=0 '\0') at 
portalmem.c:358
#4  0x082001f9 in exec_simple_query (query_string=0x840f91c "SELECT 
in_out_t_2(13, true);")
at postgres.c:1012
#5  0x08202de5 in PostgresMain (argc=4, argv=0x83bd7fc, username=0x83bd7d4 
"vy") at postgres.c:3184
#6  0x081d6b54 in BackendRun (port=0x83d21a8) at postmaster.c:2853
#7  0x081d636f in BackendStartup (port=0x83d21a8) at postmaster.c:2490
#8  0x081d455e in ServerLoop () at postmaster.c:1203
#9  0x081d39ca in PostmasterMain (argc=3, argv=0x83bb888) at postmaster.c:955   

#10 0x0818d404 in main (argc=3, argv=0x83bb888) at main.c:187



Re: [HACKERS] libpq Describe Extension [WAS: Bytea and perl]

2006-08-11 Thread Volkan YAZICI
On Aug 11 12:51, Greg Sabino Mullane wrote:
> > think it would be quite handy to be able to gather information about
> > a prepared stmt in later phases of an application. For instance one
> > might need to get the parameter and row types of a prepared query
> > that he/she isn't created.
> 
> Prepared statements are not visible nor survivable outside of your
> session, so this doesn't really make sense. If your application needs
> the information, it can get it at prepare time.

What about persistent connections? Actually, I can give lots of corner
cases to support my idea, but they're not that often used. I think, as
long as we'll break compatibility anyway, placing the Describe facility
in PQprepare() is not the way to go.

> >> Anyone have a need to get the result type info during PQprepare?
> 
> > I don't think so. And if one would ever need such an information, can
> > reach it quite easily via PQdescribePrepared().
> 
> That's a good point, however, along with your other arguments. :) I
> could live with either way.

I'm just disinclined to break the current PQprepare() or to introduce
new PGresult-processor functions for a feature that (IMHO) needs its
own function. But the general use case is the main factor that'll say
the last word.


Regards.



[HACKERS] Difference Between record and rowtype

2006-08-11 Thread Volkan YAZICI
Hi,

What's the difference between a record (which is a pseudo 'p' type) and
a rowtype (which is a complex 'c' type)?


Regards.



Re: [HACKERS] libpq Describe Extension [WAS: Bytea and perl]

2006-08-11 Thread Volkan YAZICI
On Aug 10 11:35, Tom Lane wrote:
> Volkan YAZICI <[EMAIL PROTECTED]> writes:
> > [ patch to add PQdescribePrepared and PQdescribePortal ]
> 
> After looking this over, I don't see the point of PQdescribePortal,
> at least not without adding other functionality to libpq.  There is
> no functionality currently exposed by libpq that allows creating a
> portal (that is, sending a Bind message) without also executing the
> portal.  And the execution always returns the portal description.
> So I don't see why you'd use it.
> 
> PQdescribePrepared is useful though, as it plugs the feature omission
> mentioned in the description of PQprepare, namely, you can't find out
> what datatype was inferred for a parameter that you didn't specify a
> type for.
> 
> My inclination is to add PQdescribePrepared, but leave out
> PQdescribePortal until such time as we decide to add functions
> to libpq that support separate Bind and Execute operations.
> (That might be never, seeing that no one's gotten around to it
> since 7.4...)

My intention while implementing PQdescribePortal() was to gather
information about a portal created by an explicit DECLARE ... CURSOR
query. In cases where connections are persistently established with
some pool mechanism, it can be handy to be able to learn the row
descriptions that will be returned from an existing portal.

> The patch is missing an asynchronous version of PQdescribePrepared.
> I'm not real sure what to call it --- the naming conventions we've
> used in libpq are not as orthogonal as one could wish.
> PQsendDescribePrepared is the best I can manage; anyone have a better
> idea?
> 
> Also, we could take a completely different tack, which is to not invent
> new functions but instead fold this functionality into PQprepare and
> PQsendPrepare.  What Volkan's done with this patch is to define the
> successful result of PQdescribePrepared as being a PGresult in which
> only the number of columns and their datatypes (PQnfields and PQftype)
> are meaningful.  We could perfectly well use that convention in the
> PGresults returned by PQprepare/PQsendPrepare.  The upside of this
> method is that it wouldn't require an extra network round trip to get
> the information (because we'd just include the Describe Statement
> request in the original Prepare packet).  The downside is that we'd
> always spend the time to perform Describe Statement, even if the
> application doesn't need it.  However I'd expect that time to be
> pretty minimal in comparison to the other costs of a Prepare.
> 
> I'm leaning slightly to the fold-it-into-PQprepare way, but am by
> no means set on that. Comments anyone?

IMHO, learning the inferred types just after a PQprepare() call is not
the only use case of Describe messages for prepared queries. I think it
would be quite handy to be able to gather information about a prepared
statement in later phases of an application. For instance, one might
need to get the parameter and row types of a prepared query that he/she
hasn't created. If we placed the Describe message facility into
PQprepare(), we would just lose that functionality of the feature.

OTOH, moving Describe data processing into PQprepare() is fairly
conventional for introducing new functionality while keeping the API
consistent and without raising any compatibility problems. But AFAICS,
that's not possible without giving up one of the features of Describe
messages for prepared statements: parameter type information or row
type information. Because, if we placed the Describe facility into
PQprepare(), the client would have to issue two distinct PQgetResult()
calls: one for parameter types and another one for row types.

On Aug 10 12:31, Tom Lane wrote:
> So another theory about how this ought to work is that PQprepare's
> result PGresult ought to carry the column name/type info where PQfname
> and PQftype can get them, and then we'd have to have two new
> PGresult-inspection functions to pull out the separately stored
> parameter-datatype info.

Yes, that's another feasible approach to the solution. But this one too
has its own PITAs, like the one mentioned above.

> This seems much cleaner than overloading the meaning of PQftype, but
> OTOH it's yet a few more cycles added to the execution cost of
> PQprepare.

I think placing the Describe facility into PQprepare() will just
obfuscate the problem. In every approach where we tried to fold
Describe into PQprepare(), we either needed to introduce new functions
or broke compatibility with existing versions. ISTM that giving the
Describe features their own functions is the only clean solution I
could come up with.

> Anyone have a need to get the result type info during PQprepare?

I don't think so. And if one ever needs that information, it can be
reached quite easily via PQdescribePrepared().


Regards.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] libpq Describe Extension [WAS: Bytea and perl]

2006-06-24 Thread Volkan YAZICI
On Jun 16 08:21, Tom Lane wrote:
> Bruce Momjian  writes:
> > Volkan YAZICI wrote:
> >> The problem is, AFAICS, it's not possible to distinguish between a tuple
> >> returning query (T, ..., C, Z or T, E) and a description of a portal (T,
> >> Z). Therefore, I've created a global flag (parsing_row_desc) which is
> >> turned on when we receive a 'T' and turned off if we receive a 'C' or
> >> 'E'. It's a kind of ugly method but the only solution I could come up
> >> with.
> 
> > The problem with this solution is that it is not thread-safe.  Perhaps
> > you can use a per-PGconn boolean?

I've replaced the static flag with a conn->queryclass value using
PGQueryClass, as Tom suggested. Also updated the patch to be compatible
with the exports.txt stuff.

> The whole thing sounds like brute force to me.  Shouldn't you be adding
> states to enum PGQueryClass, if you need to track what sort of Describe
> you're doing?

I totally agree that the style followed is ugly. But IMHO the recursive
parsing of received data in pqParseInputN() is the main problem behind
this. I think it will only get harder every time somebody tries to add
another message-parsing capability to that loop. For instance, isn't
the pollution of PGQueryClass with state variables (like
PGQUERY_PREPARE or PGQUERY_DESCRIBE) one proof of this?

While playing with the pqParseInputN loops, I feel like I'm coding Lisp
recursions in C syntax; it's quite ridiculous.


Regards.
Index: src/backend/tcop/postgres.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/tcop/postgres.c,v
retrieving revision 1.489
diff -c -r1.489 postgres.c
*** src/backend/tcop/postgres.c 20 Jun 2006 22:52:00 -  1.489
--- src/backend/tcop/postgres.c 24 Jun 2006 11:31:10 -
***
*** 1853,1858 
--- 1853,1859 
  static void
  exec_describe_statement_message(const char *stmt_name)
  {
+   MemoryContext   oldContext;
PreparedStatement *pstmt;
TupleDesc   tupdesc;
ListCell   *l;
***
*** 1865,1871 
start_xact_command();
  
/* Switch back to message context */
!   MemoryContextSwitchTo(MessageContext);
  
/* Find prepared statement */
if (stmt_name[0] != '\0')
--- 1866,1872 
start_xact_command();
  
/* Switch back to message context */
!   oldContext = MemoryContextSwitchTo(MessageContext);
  
/* Find prepared statement */
if (stmt_name[0] != '\0')
***
*** 1923,1929 
--- 1924,1933 
  NULL);
else
pq_putemptymessage('n');/* NoData */
+   
+   MemoryContextSwitchTo(oldContext);
  
+   finish_xact_command();
  }
  
  /*
***
*** 1934,1939 
--- 1938,1944 
  static void
  exec_describe_portal_message(const char *portal_name)
  {
+   MemoryContext   oldContext;
Portal  portal;
  
/*
***
*** 1943,1949 
start_xact_command();
  
/* Switch back to message context */
!   MemoryContextSwitchTo(MessageContext);
  
portal = GetPortalByName(portal_name);
if (!PortalIsValid(portal))
--- 1948,1954 
start_xact_command();
  
/* Switch back to message context */
!   oldContext = MemoryContextSwitchTo(MessageContext);
  
portal = GetPortalByName(portal_name);
if (!PortalIsValid(portal))
***
*** 1975,1980 
--- 1980,1989 
  
portal->formats);
else
pq_putemptymessage('n');/* NoData */
+   
+   MemoryContextSwitchTo(oldContext);
+ 
+   finish_xact_command();
  }
  
  
Index: src/interfaces/libpq/exports.txt
===
RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/exports.txt,v
retrieving revision 1.11
diff -c -r1.11 exports.txt
*** src/interfaces/libpq/exports.txt28 May 2006 22:42:05 -  1.11
--- src/interfaces/libpq/exports.txt24 Jun 2006 11:31:10 -
***
*** 130,132 
--- 130,134 
  PQencryptPassword 128
  PQisthreadsafe129
  enlargePQExpBuffer130
+ PQdescribePrepared131
+ PQdescribePortal  132
Index: src/interfaces/libpq/fe-exec.c
===
RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v
retrieving revision 1.186
diff -c -r1.186 fe-exec.c
*** src/interfaces/libpq/fe-exec.c  28 May 2006 21:13:54 -  1.186
--- src/interfaces/libpq/fe-exec.c  24 Jun 2006 11:31:12 -

Re: [HACKERS] CSV mode option for pg_dump

2006-06-13 Thread Volkan YAZICI
On Jun 13 10:20, Bruce Momjian wrote:
> 
> Good point.  The number of CSV options would be hard to support for
> pg_dump.  Any thoughts from anyone on how to do that cleanly?  Could we
> just support the default behavior?

IMHO, it might be better if we supported a syntax like

  pg_dump --csv=opt0,para0:opt2,opt3

This would save us from pg_dump parameter pollution a little bit.

Furthermore, I think the CSV dump format could be maintained better as
an external project. (pgFoundry?) That way, the main developers could
concentrate on core problems while other users/developers could
contribute to the CSV code easily. And if a user ever wants CSV
functionality in pg_dump, he/she would just pass the --csv parameter
(with the above syntax) and pg_dump would make a suitable dlopen() call
for the related (CSV) module. Anyway, this is just an idea for
modularity; the main thing I'm trying to underline is giving pg_dump a
module facility for problems like this.


Regards.



Re: [HACKERS] problem with PQsendQuery/PQgetResult and COPY FROM statement

2006-05-21 Thread Volkan YAZICI
On May 19 11:51, [EMAIL PROTECTED] wrote:
>   if (PQsendQuery(conn, "COPY test FROM STDIN") > 0) {
> retrieve(conn, 20);

Shouldn't you be send()'ing instead of retrieve()'ing? COPY tbl FROM
STDIN requests data from the client; it copies from stdin to tbl.


Regards.



Re: [HACKERS] intarray internals

2006-05-11 Thread Volkan YAZICI
Hi,

First, thanks so much for your reply.

On May 10 04:01, Teodor Sigaev wrote:
> > Again, in g_int_decompress(), I couldn't figure out the functionality of
> > below lines:
> 
> gist__int_ops use rangeset compression technique, read about in "THE 
> RD-TREE: AN INDEX STRUCTURE FOR SETS", Joseph M. Hellerstein,
> http://www.sai.msu.su/~megera/postgres/gist/papers/rd-tree.ps

Thanks so much for the papers. I read the related section (and will
read the whole thing today or tomorrow).

> * intarray_union.patch.0 - doesn't applied, but make small optimization to 
> reduce number non-unique values. I don't believe that one pass through 
> array with a lot of ifs will be faster than two pass with simple ifs. Did 
> you some tests?

IMHO, the only significant improvement in my proposal for _int_union()
is that this method visits the arrays only once (at the extra price of
2x the condition checks), while the current one makes a second pass
over the arrays to remove duplicates (with smaller condition checks).

You may be right; maybe it isn't worth worrying about. The improvement
(if there is any) will probably only be visible for very long arrays.
(Sorry, no tests for this proposal.)

> * intarray_same.patch.0 - move SORT as you suggest, but don't touch 
> algorithm.
>  1) if (A[0] == B[0] && A[1] == B[1] && ...)
> 
>  2) if (A[0] == B[0] && A[  N] == B[  N] &&
> A[1] == B[1] && A[N-1] == B[N-1] &&
> ...)
> 
>   Why are you sure that second a much faster? Did you make tests? Number of 
> comparisons is the same...

Yep, both algorithms have O(n) comparisons in their worst cases. But
for general purposes, AFAICS, the second one will perform better. For
instance, consider the examples below:

 [Best case for 2nd algo.]
 Input: 1, 2, 3, ..., 6, 7, *9
 1st algo.: O(n)
 2nd algo.: O(1)

 [Worst case for 2nd algo.]
 Input: 1, 2, 3, 4, *4, 6, 7, 8, 9
 1st algo.: O(n/2)
 2nd algo.: O(n)

But as you can see, because our arrays are sorted, any missing (or
additional) element in the target array produces padding at the end of
the array --- assuming that the arrays generally don't hold duplicate
values. Therefore, comparing the tail elements first performs better
because of the unmatched values caused by the padding.

Hope I managed to explain what I'm trying to say. Actually, IIRC, I saw
this method (both the hacks for small-sized arrays and the tail-first
comparison of a sorted array) in another FOSS project's source code ---
probably PHP, but I'm not sure.

As for testing: with suitable inputs there is quite a noticeable
performance improvement.

> * intarray_sort.patch.0 - doesn't applied. isort() is very often called for 
> already sorted and unique arrays (which comes from index), so it should be 
> fast as possible for sorted arrays.

Uh, sorry. I missed that point.

> As I remember ordered array is a worst 
> case for qsort(). May be, it will be better choice to use mergesort.

I'll investigate alternative methods to sort already sorted arrays.


Regards.



[HACKERS] TODO: Update pg_dump and psql to use the new COPY libpq API

2006-04-16 Thread Volkan YAZICI
IIRC, the goal of the "Update pg_dump and psql to use the new COPY
libpq API" TODO item is already achieved in the CVS tip.


Regards.



Re: [HACKERS] libpq Describe Extension [WAS: Bytea and perl]

2006-04-15 Thread Volkan YAZICI
Hi,

[Sending this message to -hackers first, for discussion of the
extension and the implementation that follows.]

On Apr 01 09:39, Volkan YAZICI wrote:
> I've prepared a patch for the Describe <-> ParameterDescription
> messaging which is available via current extended query protocol.

Here's a written-from-scratch patch for the above-mentioned extension.
It adds PQdescribePrepared() and PQdescribePortal() functions to
libpq. The new functions work as follows:

1. Issue a PQdescribePrepared() call.
2. First PQgetResult() will return a PGresult with input parameter
   types of the prepared statement. (You can use PQftype() on this
   PGresult to extract information.)
3. Second PQgetResult() will return another PGresult which holds the
   column information for the will be returned tuples. (All PQf*()
   functions can be used on this result.)

(A PQdescribePortal() call will just skip the 2nd step in the above
list.)

The patch passes the regression tests, and there are two examples
attached for PQdescribePrepared() and PQdescribePortal() usage.

Regarding the implementation: it needed some hacking in the
pqParseInput3() code to make it understand whether a received message
is a response to a Describe ('D') request or to another tuple-returning
query. To summarize the problem, there are two possible forms of a 'D'
response:

 1. Description of a prepared statement: t, T, Z
 2. Description of a portal: T, Z

The problem is, AFAICS, it's not possible to distinguish between a
tuple-returning query (T, ..., C, Z or T, E) and a description of a
portal (T, Z). Therefore, I've created a global flag (parsing_row_desc)
which is turned on when we receive a 'T' and turned off when we receive
a 'C' or an 'E'. It's a kind of ugly method, but the only solution I
could come up with.


Regards.
Index: src/backend/tcop/postgres.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/tcop/postgres.c,v
retrieving revision 1.483
diff -c -r1.483 postgres.c
*** src/backend/tcop/postgres.c 4 Apr 2006 19:35:35 -   1.483
--- src/backend/tcop/postgres.c 15 Apr 2006 07:39:49 -
***
*** 1870,1875 
--- 1870,1876 
  static void
  exec_describe_statement_message(const char *stmt_name)
  {
+   MemoryContext   oldContext;
PreparedStatement *pstmt;
TupleDesc   tupdesc;
ListCell   *l;
***
*** 1882,1888 
start_xact_command();
  
/* Switch back to message context */
!   MemoryContextSwitchTo(MessageContext);
  
/* Find prepared statement */
if (stmt_name[0] != '\0')
--- 1883,1889 
start_xact_command();
  
/* Switch back to message context */
!   oldContext = MemoryContextSwitchTo(MessageContext);
  
/* Find prepared statement */
if (stmt_name[0] != '\0')
***
*** 1940,1946 
--- 1941,1950 
  NULL);
else
pq_putemptymessage('n');/* NoData */
+   
+   MemoryContextSwitchTo(oldContext);
  
+   finish_xact_command();
  }
  
  /*
***
*** 1951,1956 
--- 1955,1961 
  static void
  exec_describe_portal_message(const char *portal_name)
  {
+   MemoryContext   oldContext;
Portal  portal;
  
/*
***
*** 1960,1966 
start_xact_command();
  
/* Switch back to message context */
!   MemoryContextSwitchTo(MessageContext);
  
portal = GetPortalByName(portal_name);
if (!PortalIsValid(portal))
--- 1965,1971 
start_xact_command();
  
/* Switch back to message context */
!   oldContext = MemoryContextSwitchTo(MessageContext);
  
portal = GetPortalByName(portal_name);
if (!PortalIsValid(portal))
***
*** 1992,1997 
--- 1997,2006 
  
portal->formats);
else
pq_putemptymessage('n');/* NoData */
+   
+   MemoryContextSwitchTo(oldContext);
+ 
+   finish_xact_command();
  }
  
  
Index: src/interfaces/libpq/fe-exec.c
===
RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v
retrieving revision 1.182
diff -c -r1.182 fe-exec.c
*** src/interfaces/libpq/fe-exec.c  14 Mar 2006 22:48:23 -  1.182
--- src/interfaces/libpq/fe-exec.c  15 Apr 2006 07:39:58 -
***
*** 55,60 
--- 55,62 
  static void parseInput(PGconn *conn);
  static bool PQexecStart(PGconn *conn);
  static PGresult *PQexecFinish(PGconn *conn);
+ static int pqDescribe(PGconn *conn, const char desc_type,
+ const char *desc_target);
  
  
  /* 

[HACKERS] libpq Describe Extension [WAS: Bytea and perl]

2006-04-01 Thread Volkan YAZICI
Hi,

On Mar 25 08:47, John DeSoi wrote:
> I have not looked at libpq in any detail, but it should have access  
> to the type of all the parameters in the prepared statement. The  
> Describe (F) statement in the frontend/backend protocol identifies  
> the type of each parameter.

I've prepared a patch for the Describe <-> ParameterDescription
messaging which is available via current extended query protocol.

Usage (and implementation) is explained in the documentation-related
part of the patch. (I also tried to place informative comments in the
code.)

But I have a problem with the ereport() error calls caused by erroneous
target_type entries. After an error in
exec_describe_statement_message() (or exec_describe_portal_message())
it leaves the block via the ereport() call, and the client side stalls
in the PGASYNC_BUSY state while the backend stalls in PostgresMain()
calling ReadCommand(). To summarize, an error-returning pqDescribe()
call causes both sides to stall.

I'd appreciate hearing your thoughts about the patch and the above
problem.


Regards.
? src/interfaces/libpq/.deps
? src/interfaces/libpq/cscope.out
? src/interfaces/libpq/libpq.so.4.2
Index: doc/src/sgml/libpq.sgml
===
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/libpq.sgml,v
retrieving revision 1.206
diff -c -r1.206 libpq.sgml
*** doc/src/sgml/libpq.sgml 10 Mar 2006 19:10:48 -  1.206
--- doc/src/sgml/libpq.sgml 1 Apr 2006 18:27:41 -
***
*** 2045,2053 
  
  
  
! PQprintPQprint
  
  
Prints out all the rows and,  optionally,  the
column names  to  the specified output stream.
  
--- 2045,2081 
  
  
  
! 
PQdescPreparedPQdescPrepared
! 
PQdescPortalPQdescPortal
  
  
+   Describe a prepared statement or portal.
+   int PQdescPrepared(PGconn *conn, const char *stmt);
+   int PQdescPortal(PGconn *conn, const char *portal);
+ 
+ 
+   These two functions are used to describe a prepared statement or portal.
+   Functions return 1 on success and 0 on failure. (An appropriate error
+   message will be placed in an error situation.) NULL values in
+   the stmt and portal parameters will be 
treated
+   as an empty string.
+ 
+ 
+   After a PQdescPrepared or PQdescPortal 
function
+   call, issuing a PQgetResult will place backend's response
+   into a PGresult structure - that'll be
+   extractable via PQftype.
+ 
+ 
+   These functions are available within extended query protocol which is
+   available in version 3.0 and above.
+ 
+ 
+ 
+ 
+ 
+ PQprintPQprint
+ 
Prints out all the rows and,  optionally,  the
column names  to  the specified output stream.
  
Index: src/backend/tcop/postgres.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/tcop/postgres.c,v
retrieving revision 1.482
diff -c -r1.482 postgres.c
*** src/backend/tcop/postgres.c 14 Mar 2006 22:48:21 -  1.482
--- src/backend/tcop/postgres.c 1 Apr 2006 18:28:08 -
***
*** 3391,3396 
--- 3391,3398 
   
describe_type)));
break;
}
+   
+   send_ready_for_query = true;
}
break;
  
Index: src/interfaces/libpq/fe-exec.c
===
RCS file: /projects/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v
retrieving revision 1.182
diff -c -r1.182 fe-exec.c
*** src/interfaces/libpq/fe-exec.c  14 Mar 2006 22:48:23 -  1.182
--- src/interfaces/libpq/fe-exec.c  1 Apr 2006 18:28:15 -
***
*** 55,60 
--- 55,62 
  static void parseInput(PGconn *conn);
  static bool PQexecStart(PGconn *conn);
  static PGresult *PQexecFinish(PGconn *conn);
+ static int pqDescribe(PGconn *conn, const char desc_type,
+ const char *desc_target);
  
  
  /* 
***
*** 2281,2286 
--- 2283,2400 
return 0;
  }
  
+ 
+ /*
+  * pqDescribe - Describe given prepared statement or portal.
+  *
+  * Available options for target_type are
+  *   's' to describe a prepared statement; or
+  *   'p' to describe a portal.
+  * Returns 1 on success and 0 on failure.
+  *
+  * By issuing a PQgetResult(), response from the server will be placed
+  * in an empty PGresult that will be extractable via PQftype().
+  */
+ static int
+ pqDescribe(PGconn *conn, const char desc_type, const char *desc_target)
+ {
+   int ret;
+ 
+   /* This isn't gonna work on a 2.0 server. */
+   if (PG_PROTOCOL_MAJOR(conn->pversion) < 3)
+   {
+   printfPQExpBuffer(&conn->errorMessage,
+

Re: [HACKERS] 8.2 hold queue [MB Chars' Case Conversion]

2006-03-09 Thread Volkan YAZICI
Hi,

On Mar 08 04:08, Bruce Momjian wrote:
> I have applied all the patches in the patch queue, and am starting to
> look at the patches_hold queue, which are patches submitted after the
> feature freeze.

Related to the

  BUG #1931: ILIKE and LIKE fails on Turkish locale
  Case Conversion Fix for MB Chars

posts, I wanted to report that the case conversion routines in the
current CVS tip are working without any problem.

I tested the upper() and lower() functions and the ILIKE and ~*
operators with Turkish characters, and they all succeeded.


Regards.



[HACKERS] FATAL: failed to initialize TimeZone to "UNKNOWN"

2005-12-28 Thread Volkan YAZICI
Hi,

While trying to initdb using the CVS tip, it dumps the error output
below:

  ...
  creating configuration files ... ok
  creating template1 database in pgd/base/1 ...
  FATAL:  failed to initialize TimeZone to "UNKNOWN"
  child process exited with exit code 1
  ...

I edited some files, but I don't think they're related to this problem.
The edited files are like.c, oracle_compat.c and the pg_mb2wchar
function in mbutils.c. (Actually, IIRC, the pg_mb2wchar function is
used nowhere in the code.)

Attached is a gdb output (tz_error.gdb) for initdb. Also, here's an
"strace -s 256" output for initdb:
http://www.students.itu.edu.tr/~yazicivo/tz_error.strace
(I'm not so experienced in debugging; if you need any further
information, just let me know.)

I'd appreciate any kind of help/idea/suggestion.


Regards.
$ gdb usr/bin/initdb
...
(gdb) set args -D pgd
(gdb) b bootstrap_template1
Breakpoint 1 at 0x804aef0: file initdb.c, line 1341.
(gdb) run
Starting program: /home/knt/farm/fake-root/usr/bin/initdb -D pgd
The files belonging to this database system will be owned by user "knt".
This user must also own the server process.

The database cluster will be initialized with locale tr_TR.
The default database encoding has accordingly been set to LATIN5.

creating directory pgd ... ok
creating directory pgd/global ... ok
creating directory pgd/pg_xlog ... ok
creating directory pgd/pg_xlog/archive_status ... ok
creating directory pgd/pg_clog ... ok
creating directory pgd/pg_subtrans ... ok
creating directory pgd/pg_twophase ... ok
creating directory pgd/pg_multixact/members ... ok
creating directory pgd/pg_multixact/offsets ... ok
creating directory pgd/base ... ok
creating directory pgd/base/1 ... ok
creating directory pgd/pg_tblspc ... ok
selecting default max_connections ... 10
selecting default shared_buffers/max_fsm_pages ... 50/2
creating configuration files ... ok

Breakpoint 1, bootstrap_template1 (short_version=0x8056868 "8.2") at 
initdb.c:1341
1341char   *talkargs = "";
(gdb) n
1345printf(_("creating template1 database in %s/base/1 ... "), 
pg_data);
(gdb)
1346fflush(stdout);
(gdb) 
creating template1 database in pgd/base/1 ... 1348  if (debug)
(gdb) 
1351bki_lines = readfile(bki_file);
(gdb) 
1355snprintf(headerline, sizeof(headerline), "# PostgreSQL %s\n",
(gdb) 
1358if (strcmp(headerline, *bki_lines) != 0)
(gdb) 
1368bki_lines = replace_token(bki_lines, "POSTGRES", 
effective_user);
(gdb) 
1370bki_lines = replace_token(bki_lines, "ENCODING", encodingid);
(gdb) 
1378snprintf(cmd, sizeof(cmd), "LC_COLLATE=%s", lc_collate);
(gdb) 
1379putenv(xstrdup(cmd));
(gdb) 
1381snprintf(cmd, sizeof(cmd), "LC_CTYPE=%s", lc_ctype);
(gdb) 
1382putenv(xstrdup(cmd));
(gdb) 
1384unsetenv("LC_ALL");
(gdb) 
1387unsetenv("PGCLIENTENCODING");
(gdb) 
1389snprintf(cmd, sizeof(cmd),
(gdb) 
1393PG_CMD_OPEN;
(gdb) 
1395for (line = bki_lines; *line != NULL; line++)
(gdb) FATAL:  failed to initialize TimeZone to "UNKNOWN"

1397PG_CMD_PUTS(*line);
(gdb) 

Program received signal SIGPIPE, Broken pipe.
0xb7d3e95e in write () from /lib/tls/libc.so.6
(gdb)



[HACKERS] Case Conversion Functions

2005-12-26 Thread Volkan YAZICI
Hi,

There are lots of places in the code which use either pg_tolower() or
plain tolower() - without being aware of MB characters - or some
home-grown reimplementations of pg_tolower(). (Actually, AFAIK, the
whole MB case conversion is broken in -rHEAD.)

For instance, consider the backend/utils/adt/{like.c, like_match.c}
files. Some lines of iwchareq() are a duplication of pg_tolower().

Another example:
  backend/parser/scansup.c
  152 else if (ch >= 0x80 && isupper(ch))
  153 ch = tolower(ch);

Is this intended behaviour, or are these waiting for somebody to clean
them up?


Regards.



Re: [HACKERS] number of loaded/unloaded COPY rows

2005-12-17 Thread Volkan YAZICI
On Dec 16 08:47, Bruce Momjian wrote:
> I think an int64 is the proper solution. If int64 isn't really
> 64-bits, the code should still work though.

(I'd prefer uint64 over int64.) In src/include/c.h, one way or another
a type will be assigned for uint64, so there won't be any problem on
either 64-bit or non-64-bit architectures.

I've attached the updated patch. This one uses uint64, and
UINT64_FORMAT when printing the uint64 value into a string.

I used char[20+1] as the buffer size to store the uint64 value's string
representation. (Because, AFAIK, the maximum decimal digit length of a
[u]int64 is 20: 2^64 - 1 = 18446744073709551615.) In this context, when
I looked at the existing usages (for instance in
backend/commands/sequence.c), the value is stored in a char[100]
buffer. Maybe we should define a constant in pg_config.h like
INT64_PRINT_LEN. This would define a standard behaviour, together with
INT64_FORMAT, for using integers inside strings.

For instance:
  char buf[INT64_PRINT_LEN+1];
  snprintf(buf, sizeof(buf), INT64_FORMAT, var);

> In fact we have this TODO, which is related:
> 
>   * Change LIMIT/OFFSET and FETCH/MOVE to use int8
> 
> This requires the same type of change.
> 
> I have added this TODO:
> 
>   * Allow the count returned by SELECT, etc to be to represent
> as an int64 to allow a higher range of values
>
> This requires a change to es_processed, I think.

I think so. es_processed is defined as uint32; it should be uint64 too.

I tried to prepare a patch for the es_processed issue. But when I
looked further into the code, I found that there are lots of mixed
usages of "uint32" and "long" for row-count-related tracking.
(Moreover, as you can see from the patch, there's a problem with the
ULLONG_MAX usage in there.)

I'm aware of the patch's unusability, but I just tried to underline
some (IMHO) problems.

Last minute edit: Proposal: Maybe we can define a (./configure
controlled) type like pg_int (with bounds like PG_INT_MAX) to use in
counter-related code.

- * -

AFAIK, there are two ways to implement a counter:

1. Using the integer types supplied by the compiler, like the uint64 we
   discussed above.
   Pros: All mathematical operations are handled by the compiler.
   Cons: The implementation is bounded by the system architecture.

2. Using arrays to hold numeric values, as we did in the
   implementation of the SQL numeric types.
   Pros: Value lengths are bounded only by available memory.
   Cons: Mathematical operations have to be handled in software.
 Therefore, this causes a small performance overhead compared
 to the previous implementation.

I'm not sure we can afford the second implementation (from the
performance point of view) for the COPY command's counter. But IMHO it
is agreeable for the SELECT/INSERT/UPDATE/DELETE operations' counters.
OTOH, this way we'd get a proper counting method without any (logical)
bounds.

What's your opinion? If you agree, I'll try to use the second
implementation for all counters except COPY's.


Regards.

-- 
"We are the middle children of history, raised by television to believe
that someday we'll be millionaires and movie stars and rock stars, but
we won't. And we're just learning this fact," Tyler said. "So don't
fuck with us."
Index: src/backend/commands/portalcmds.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/commands/portalcmds.c,v
retrieving revision 1.44
diff -u -r1.44 portalcmds.c
--- src/backend/commands/portalcmds.c   3 Nov 2005 17:11:35 -   1.44
+++ src/backend/commands/portalcmds.c   17 Dec 2005 11:44:29 -
@@ -395,7 +395,7 @@
 
if (!portal->atEnd)
{
-   longstore_pos;
+   uint64  store_pos;
 
if (portal->posOverflow)/* oops, cannot trust 
portalPos */
ereport(ERROR,
Index: src/backend/tcop/pquery.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/tcop/pquery.c,v
retrieving revision 1.98
diff -u -r1.98 pquery.c
--- src/backend/tcop/pquery.c   22 Nov 2005 18:17:21 -  1.98
+++ src/backend/tcop/pquery.c   17 Dec 2005 11:44:31 -
@@ -179,6 +179,7 @@
if (completionTag)
{
Oid lastOid;
+   charbuf[21];
 
switch (operation)
{
@@ -190,16 +191,22 @@
lastOid = queryDesc->estate->es_lastoid;
else
lastOid = InvalidOid;
+   snprintf(buf, sizeof(buf), UINT64_FORMAT,
+
queryDesc->estate->es_processed);
snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
-  "INSERT %u %u", lastOid, 
queryDesc->es

[HACKERS] number of loaded/unloaded COPY rows

2005-12-12 Thread Volkan YAZICI
I prepared a patch for the "Have COPY return the number of rows
loaded/unloaded?" TODO. (Sorry for disturbing the list with such a
simple topic, but per a warning from Bruce Momjian, I'm sending my
proposal to -hackers first.)

I used the "append related information to the commandTag" method, which
is used for the INSERT/UPDATE/DELETE/FETCH commands too. Furthermore, I
edited libpq to make PQcmdTuples() interpret the affected row count
from the cmdStatus value for the COPY command. (The changes don't cause
any compatibility problems for the API and seem to work with triggers
too.)

One of the problems with this approach is that it keeps the number of
processed rows in a uint32 variable. This puts an internal limit on the
COPY count, even though COPY can process billions of rows. I couldn't
find a solution for this. (Maybe two uint32s could be used to store the
row count.) But the other processed-row counters (like
INSERT/UPDATE's) use uint32 too.

What are your suggestions and comments?


Regards.
Index: src/backend/commands/copy.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/commands/copy.c,v
retrieving revision 1.255
diff -u -r1.255 copy.c
--- src/backend/commands/copy.c 22 Nov 2005 18:17:08 -  1.255
+++ src/backend/commands/copy.c 12 Dec 2005 17:18:44 -
@@ -102,6 +102,7 @@
int client_encoding;/* remote side's 
character encoding */
boolneed_transcoding;   /* client encoding diff 
from server? */
boolclient_only_encoding;   /* encoding not valid on 
server? */
+   uint32  processed;  /* # of tuples 
processed */
 
/* parameters from the COPY command */
Relationrel;/* relation to copy to or from 
*/
@@ -646,7 +647,7 @@
  * Do not allow the copy if user doesn't have proper permission to access
  * the table.
  */
-void
+uint32
 DoCopy(const CopyStmt *stmt)
 {
CopyState   cstate;
@@ -660,6 +661,7 @@
AclMode required_access = (is_from ? ACL_INSERT : ACL_SELECT);
AclResult   aclresult;
ListCell   *option;
+   uint32  processed;
 
/* Allocate workspace and zero all fields */
cstate = (CopyStateData *) palloc0(sizeof(CopyStateData));
@@ -935,7 +937,7 @@
initStringInfo(&cstate->line_buf);
cstate->line_buf_converted = false;
cstate->raw_buf = (char *) palloc(RAW_BUF_SIZE + 1);
-   cstate->raw_buf_index = cstate->raw_buf_len = 0;
+   cstate->raw_buf_index = cstate->raw_buf_len = cstate->processed = 0;
 
/* Set up encoding conversion info */
cstate->client_encoding = pg_get_client_encoding();
@@ -1080,7 +1082,10 @@
pfree(cstate->attribute_buf.data);
pfree(cstate->line_buf.data);
pfree(cstate->raw_buf);
+
+   processed = cstate->processed;
pfree(cstate);
+   return processed;
 }
 
 
@@ -1310,6 +1315,8 @@
 
VARSIZE(outputbytes) - VARHDRSZ);
}
}
+
+   cstate->processed++;
}
 
CopySendEndOfRow(cstate);
@@ -1916,6 +1923,8 @@
 
/* AFTER ROW INSERT Triggers */
ExecARInsertTriggers(estate, resultRelInfo, tuple);
+
+   cstate->processed++;
}
}
 
Index: src/backend/tcop/utility.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/tcop/utility.c,v
retrieving revision 1.250
diff -u -r1.250 utility.c
--- src/backend/tcop/utility.c  29 Nov 2005 01:25:49 -  1.250
+++ src/backend/tcop/utility.c  12 Dec 2005 17:18:45 -
@@ -640,7 +640,12 @@
break;
 
case T_CopyStmt:
-   DoCopy((CopyStmt *) parsetree);
+   {
+   uint32  processed = DoCopy((CopyStmt *) 
parsetree);
+   
+   snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
+"COPY %u", processed);
+   }
break;
 
case T_PrepareStmt:
Index: src/include/commands/copy.h
===
RCS file: /projects/cvsroot/pgsql/src/include/commands/copy.h,v
retrieving revision 1.25
diff -u -r1.25 copy.h
--- src/include/commands/copy.h 31 Dec 2004 22:03:28 -  1.25
+++ src/include/commands/copy.h 12 Dec 2005 17:19:07 -
@@ -17,6 +17,6 @@
 #include "nodes/parsenodes.h"
 
 
-extern void DoCopy(const CopyStmt *stmt);
+extern uint32 DoCopy(const CopyStmt *stmt);
 
 #endif   /* COPY_H */
Index: src/interfaces/libpq/fe-exec.c
===

Re: [HACKERS] int to inet conversion [or Re: inet to bigint?]

2005-12-10 Thread Volkan YAZICI
On Dec 08 04:36, Kai wrote:
> After working regularly with inet values in sql, it would be nice to be able
> to do this:
> 
>   => select '192.168.1.1'::inet + 1 as result;
>  result
>   -
>192.168.1.2
>   (1 row)

You may take a look at the ip4r[1] project too. For a full list of its
capabilities (like the +/- operators), here[2] is the related SQL file.

[1] http://pgfoundry.org/projects/ip4r/
[2] 
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/~checkout~/ip4r/ip4r/ip4r.sql.in?rev=1.4&content-type=text/plain


Regards.

-- 
"We are the middle children of history, raised by television to believe
that someday we'll be millionaires and movie stars and rock stars, but
we won't. And we're just learning this fact," Tyler said. "So don't
fuck with us."

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] PQfnumber() Fix Proposal

2005-11-27 Thread Volkan YAZICI
Related TODO paragraph:
  «Prevent PQfnumber() from lowercasing the unquoted column name.
   PQfnumber() should never have been doing lowercasing, but historically
   it has, so we need a way to prevent it.»


PQfnumber() Fix Proposal

In the current version of PQfnumber(), if the user doesn't put quotes around
the requested column name, libpq lowercases both the requested column name
and the actual column names, then compares them one by one. Here's the
proposal:

[1] If the user puts quotes around the requested column name, no lowercasing
is done in libpq while comparing column names. (Current behaviour for
quoted column names.)

[2] If the user doesn't use any quotes, the API will assume the requested
column name is lowercased. After that, it will compare the given input with
the "lowercased column names"(*).


(*) How can we transmit lowercased column names from the server side without
causing any compatibility problems?

When we look at fe-protocol3.c, getRowDescriptions() splits the data
(conn->workBuffer) this way:

Offset:  0         4     6        10     12          16     18
         +---------+-----+--------+------+-----------+------+------
         |         |     |        |      |           |      | ...
         +---------+-----+--------+------+-----------+------+------

[ 0: 3] tableid
[ 4: 5] columnid
[ 6: 9] typid
[10:11] typlen
[12:15] atttypmod
[16:17] format
[18:  ] name

Here's my proposal without breaking the current implementation:

[18:  ] name + '\0' + lowercased_name

("'\0' + lowercased_name" part will be added on the server side while
building column descriptions.)

The current libpq functions compare given column names using a
strcmp(attDescs[i].name, input) call, so the new implementation won't cause
any compatibility issues. The new getRowDescriptions() will be aware of the
hidden "lowercased column name" in the retrieved column description data and
extract it into another attDescs field, say lowername. Therefore,
PQfnumber() can use attDescs[i].lowername when making a case-insensitive
comparison.


Waiting for your comments.
Regards.



[HACKERS] Character Conversions Handling

2005-10-18 Thread Volkan YAZICI
Hi,

I'm trying to understand the scheme behind backend/utils/adt/like.c for
downcasing letters [1]. When I look at the other tolower() implementations,
there are lots of them spread around (in interfaces/libpq, backend/regex,
backend/utils/adt/like, etc.). For example, despite there being a
pg_wc_tolower() function in regc_locale.c, the same thing is done manually
in iwchareq() of like.c.

I'd appreciate it if somebody could point me to the places where I should
start looking to understand character handling with different encodings.
Also, I wonder why we don't use any btowc/mbsrtowcs/wctomb-like functions.
Is this for portability across compilers?

[1] iwchareq() uses pg_mb2wchar_with_len(), which selects the right
mb2wchar function from pg_wchar_table. Looking at backend/mb/wchar.c,
there are some other locale-specific mblen and mb2wchar routines. For
example, EUC_KR is handled with the pg_euc2wchar_with_len() function, but
LATIN5 is handled with the pg_latin12wchar_with_len() function. Would we
write a new function for latin5, like pg_latin52wchar_with_len(), if we
encountered a new problem with latin5?

Regards.



Re: [HACKERS] How TODO prevent PQfnumber() from lowercasing?

2005-10-14 Thread Volkan YAZICI
Hi,

On 10/13/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Really, PQfnumber shouldn't do any case folding at all; that's not in
> its charter if you ask me.  The problem is how to get there from here
> without too much compatibility pain.  Maybe invent a new routine that
> does it right and then deprecate the existing one?

Related to the ILIKE case (which requires lowercasing too), I've been
trying to implement a patch for MatchTextIC() in
backend/utils/adt/like_match.c and got stuck at the same point as the
PQfnumber() lowercasing (which is another bogus implementation).

As far as I understand, it's hard to implement a case-processing routine
that handles both multi-byte and plain ASCII chars; the wchar_t and char
types make comparisons really messy. After looking at some MySQL source
code, I suggest a new approach to string handling: if PostgreSQL is
compiled with the --enable-mb parameter, use wchar_t instead of char in
every string operation. I'm aware of the huge implementation effort this
would require, but IMHO things would then be in the right place: a string
is either MB or ASCII, not both.

Any opinions?



Re: [HACKERS] How TODO prevent PQfnumber() from lowercasing?

2005-10-12 Thread Volkan YAZICI
On 10/12/05, Bruce Momjian  wrote:
> The question mark means we are not sure how to deal with it.  I think
> your idea of using quotes to preserve case is a good one.

I think the related TODO was added for the gotcha noted in the PQfnumber()
comments in fe-exec.c: «Downcasing in the frontend might follow different
locale rules than downcasing in the backend.»

Column names returned from the backend were lowercased by the server one
way or another. Furthermore, PQfnumber() downcases unquoted strings on the
client side and then performs the comparison using the results returned
from the backend. To sum up, at the moment I can't see any possible
solution for this TODO. (I'd appreciate hearing your suggestions on the
case.) An unfixable situation?



[HACKERS] How TODO prevent PQfnumber() from lowercasing?

2005-10-09 Thread Volkan YAZICI
Hi,

How do you suggest we "Prevent libpq's PQfnumber() from lowercasing the
column name" (which is listed as a TODO item)? If the column name has
quotes around it, we just remove the quotes and compare with the related
column name; otherwise, we lowercase the column name and then compare.

I couldn't get the idea behind this TODO. Can somebody explain it a
little bit more?

Regards.



Re: [HACKERS] pg_get_prepared?

2005-07-15 Thread Volkan YAZICI
On 7/15/05, Christopher Kings-Lynne <[EMAIL PROTECTED]> wrote:
> > You're mentioning about PHP PostgreSQL API, right?
> 
> No, I'm talking about a PostgreSQL backend function.

Sorry, when you use a name like pg_get_prepared() (which matches the naming
of the PHP PostgreSQL API functions), I thought it was for PHP too.

> The use case is when you want to prepare a query, but only if it's not
> already prepared on that connection.

As Neil Conway explained on IRC, this feature can be useful for connection
pools, e.g. to avoid re-preparing a statement if a previous client using
the connection already prepared it.

> Same way that the EXECUTE command checks.

If it's done in the backend, then there's no problem; I was wondering about
the PHP side of it. Anyway, it isn't a PHP function.

Regards.



Re: [HACKERS] pg_get_prepared?

2005-07-15 Thread Volkan YAZICI
Hi,

On 7/15/05, Christopher Kings-Lynne <[EMAIL PROTECTED]> wrote:
> Would it be useful to have a pg_get_prepared(name) function that returns
> true or false depending on whether or not there is a prepared query of
> that name?

(You're referring to the PHP PostgreSQL API, right?) I can't see a proper
use case for this. Can you give some examples?

Furthermore, I wonder what steps you'd follow to achieve this. How would
you check whether a prepared statement exists?

Regards.



Re: [HACKERS] 8.02 rpm error

2005-05-19 Thread Volkan YAZICI
Hi,

On 5/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> 8.0.2 and up should provide/require libpq.so.4 and so on.  Apparently
> there is something broken with this set of RPMs.

For further discussion, see:
http://lists.pgfoundry.org/pipermail/pgsqlrpms-hackers/2005-April/000197.html
