Re: [HACKERS] issues/experience with building postgres on Windows

2008-07-24 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Martin Zaun wrote:
 - issues locating the 14 required software packages:
 - no luck getting Bison 1.875 or 2.2 Windows binaries

 bison 1.875 is available here: 
http://sourceforge.net/project/showfiles.php?group_id=23617&package_id=22822

To the best of my knowledge, any bison version >= 1.875 works
fine; you need not insist on finding exactly those two releases.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [WIP] collation support revisited (phase 1)

2008-07-24 Thread Martijn van Oosterhout
On Tue, Jul 22, 2008 at 04:32:39PM +0200, Zdenek Kotala wrote:
 Oh, so you're thinking of a charset as a sort of check constraint. If
 your locale is Turkish and you have a column marked charset ASCII then
 storing lower('HI') results in an error.
 
 Yeah, if you use the strcoll function it fails when an illegal character is found.
 See
 http://www.opengroup.org/onlinepubs/009695399/functions/strcoll.html

Weird; at least in glibc and ICU it can't happen, but perhaps there are
other implementations where it can...

 Collation cannot be defined on any character. There is no relation between
 Latin and Chinese characters. Collation makes sense when you are able to
 specify <, =, > operators.

There is no standardised relation. However, if your collation library
decides to define all Chinese characters after all Latin characters,
they will have defined a collation that will work for all strings with
any characters... which is basically the approach glibc/ICU take.

I think it is kind of pathetic that the standard says strcoll can set
errno but reserves no return value to indicate an error. I wonder how many
platforms actually use that feature.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Please line up in a tree and maintain the heap invariant while 
 boarding. Thank you for flying nlogn airlines.




Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Peter Eisentraut
On Wednesday, 23 July 2008, Marko Kreen wrote:
 And the idea to turn pgfoundry into CPAN
 is pointless.  A user may be willing to throw random modules into his
 random perl script, but not into his whole db architecture.

Based on what reasoning?



Re: [HACKERS] Postgres-R: internal messaging

2008-07-24 Thread Markus Wanner

Hi,

Tom Lane wrote:

I hope you're not expecting the contents of shared memory to still be
trustworthy after a backend crash.


Hm.. that's a good point.

So I either need to bullet-proof the imessages with checksums or some
such. I'm not sure that's doable reliably, not to speak of performance.


Thus it might be better to just restart the replication manager as well. 
Note that this means leaving the replication group temporarily and going 
through node recovery to apply remote transactions it has missed in 
between. This sounds expensive, but it's certainly the safer way to do 
it. And as such backend crashes are Expected Not To Happen(tm) on 
production systems, that's probably good enough.



If the manager is working strictly
from its own local memory, then it would be reasonable to operate
as above.


That's not the case... :-(

Thanks for your excellent guidance.

Regards

Markus




Re: [HACKERS] pltcl_*mod commands are broken on Solaris 10

2008-07-24 Thread Peter Eisentraut
On Wednesday, 23 July 2008, Zdenek Kotala wrote:
 Is it fixed only on head or do you plan to backported to older branch as
 well?

I don't see a need to backport this.  The only difference is that now you will 
get an error if no tclsh is found.  The call "configure TCLSH=..." is the 
same in all versions.



Re: [HACKERS] PostgreSQL extensions packaging

2008-07-24 Thread Dimitri Fontaine
Hi,

On Thursday, 24 July 2008, Tom Dunstan wrote:
 I guess that means you missed both the original discussion at
 http://archives.postgresql.org/pgsql-hackers/2008-04/msg00132.php and
 my initial patch in that direction and subsequent discussion at
 http://archives.postgresql.org/pgsql-hackers/2008-04/msg00132.php then

Thanks for the links, I've read a little down there now :)

 There were two core components to my idea of modules/packages:

First reading makes me think your proposal is all about having a user-visible 
management of modules, which in my proposal are a part of packages, and a 
much needed one.
So it seems to me both proposals are complementary, in that I didn't go in any 
detail about how to manage this module part of a package declaration, and it 
looks like your work is all about this.

Where we're trying to solve the same issue(s) is on the OS level packaging.

  - Separation of installation at an OS level (RPM/yum, deb/dpkg, MSI
 installer etc) and installation into a database. The intention was a)
 to standardize package installation generally so that users didn't
 have to read n different sets of installation instructions for n
 different packages, and b) so that a db owner could install into their
 own database any module that had been installed on the system, even if
 that might include e.g. C functions that they otherwise would not be
 able to install without being a superuser.

I'm proposing that PostgreSQL includes a source-level package management 
toolset, and the OS distributions take advantage of it to release binary 
packages easy to install, as it's done now with "make && make install" (using 
PGXS) at the PG level.

As you're saying, OS install means the same thing as PGXS make install, that 
is having the .so and .sql files at the right place and in the right format. 
So even if PostgreSQL was to propose a source level integration with pg_pkg 
add package, distributions would still be left with the binary packaging 
work.

As for the database-level installation, I think this is best done by 
PostgreSQL itself this time; I'd much prefer the distributions not to bother 
with "pg_pkg install package database".
Of course, debian wrapper scripts would certainly repackage this in order for 
the user to choose which cluster to target here.

 - Have dependency tracking so that pg_dump could emit e.g. LOAD
 MODULE foo; rather than all the different instructions to recreate
 the module.

That could be what the module section of create package means internally. I 
don't foresee a need for separating module only management stuff out of 
package, but I'm all ears :)

 So the proposed installation procedure would be more along the lines of:

 yum install postgresql-module-postgis
 echo "load module postgis" | psql mydb

Agreed, but with those little differences:
 - PostgreSQL provides pg_pkg add to distribution to ease binary packaging
 - apt-get install postgresql-module-8.3-prefix
 - and either 
$ pg_pkg install prefix mydb
   or
$ psql -c "INSTALL PACKAGE prefix" mydb

 My intention was to use whatever native package manager was
 appropriate for your distro rather than trying to recreate CPAN,
 although some people in the original discussion wanted to go down that
 route.

I know nothing about CPAN, but I hope offering tools for packagers to ease 
their work is a good idea. Plus this allows for PostgreSQL-project-approved 
extensions: core-level quality, reviewed code, at an easy-to-grasp place.
And it allows advanced users, who compile their PostgreSQL themselves, to still 
benefit from a level of OS integration for packages.

 The patch that I provided didn't do any of the dependency stuff yet -
 I had been investigating various ways to do it automagically, although
 I haven't worked on it for a little while. It may be that the straight
 forward explicit declaration that you have here is a better way to do
 it.

It seems to me that your patch would certainly be a step towards implementing 
my idea of a package.

 I didn't have versioning and interdependencies between modules yet,
 although it's an obvious extension to the idea.

And a much-needed one. As soon as we have a SQL-level object for modules, 
with oids in the catalog and all, we surely are able to add entries in 
pg_depend about this?

  A package can also host variables, whose visibility is
  package-global: any SQL in the package can refer directly to package
  variables.

 That was way out of scope for my more modest suggestion - I certainly
 wasn't going to change pl/pgsql semantics. For example, how do those
 variables behave upon a transaction rollback?

No idea yet, I just saw that Oracle packages host package level global 
variables, and I guess it would work the same as a SET [LOCAL] GUC, except 
you could only see the variable from objects within the package.

 This turns into recreating CPAN. I like the idea of a blessed set of
 packages, but would rather not require all postgresql users to have a
 

Re: [HACKERS] Concurrent VACUUM and ANALYZE

2008-07-24 Thread Jeroen Vermeulen

Jonah H. Harris wrote:

On Mon, Jul 21, 2008 at 10:19 PM, Tom Lane [EMAIL PROTECTED] wrote:

Jonah H. Harris [EMAIL PROTECTED] writes:

The case I'm looking at is a large table which requires a lazy vacuum,
and a zero vacuum cost delay would cause too much I/O.  Yet, this
table has enough insert/delete activity during a vacuum, that it
requires a fairly frequent analysis to maintain proper plans.  I
patched as mentioned above and didn't run across any unexpected
issues; the only one expected was that mentioned by Alvaro.

I don't find this a compelling argument, at least not without proof that
the various vacuum-improvement projects already on the radar screen
(DSM-driven vacuum, etc) aren't going to fix your problem.


Is DSM going to be in 8.4?  The last I had heard, DSM+related
improvements weren't close to being guaranteed for this release.  If
it doesn't make it, waiting another year and a half for something
easily fixed would be fairly unacceptable.  Should I provide a patch
in the event that DSM doesn't make it?


For the immediate term, would it make sense for the ANALYZE to give up 
and simply return if a VACUUM was in progress?


At least that way a client that sees performance degrade quickly between 
vacuums can run the occasional preventative analyze without blocking 
completely on auto-vacuums.



Jeroen




Re: [HACKERS] issues/experience with building postgres on Windows

2008-07-24 Thread Martin Zaun

Tom Lane wrote:

Andrew Dunstan [EMAIL PROTECTED] writes:

Martin Zaun wrote:

- issues locating the 14 required software packages:
- no luck getting Bison 1.875 or 2.2 Windows binaries


bison 1.875 is available here: 
http://sourceforge.net/project/showfiles.php?group_id=23617&package_id=22822


To the best of my knowledge, any bison version >= 1.875 works
fine; you need not insist on finding exactly those two releases.


On my machine, using GnuWin32's Bison 2.1 resulted in compile errors
while cygwin's 2.2 is fine.  I saw the details discussed
on -hackers some time ago.

Thanks,
Martin



Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Marko Kreen
On 7/24/08, Peter Eisentraut [EMAIL PROTECTED] wrote:
 On Wednesday, 23 July 2008, Marko Kreen wrote:
  And the idea to turn pgfoundry into CPAN
   is pointless.  A user may be willing to throw random modules into his
   random perl script, but not into his whole db architecture.

 Based on what reasoning?

Based on my own behaviour.

-- 
marko



[HACKERS] Additional psql requirements

2008-07-24 Thread Simon Riggs
I have some suggestions for additional psql features. I'm not planning
to work on them myself; I'm just proposing them so that others can pick
them up if they agree.

* default values for substitution values
Need a command to set the default value of a substitution variable, so
that it takes a specific value if *not* explicitly set on the command
line (or any time prior to setting the default).
e.g. \default

* access to version number
e.g. special variables called 
PGMINORVERSION (= 8.3.3)
PGVERSION (= 8.3)

* simple mechanism for conditional execution
Similar to #IFDEF
e.g. 
\ifdef (PGVERSION >= 8.3) labelname
...
\label labelname
(..or simply \endif, in which case I'd like \else as well please)

* ability to set substitution variables from command execution
Allow the result of an SQL command to be placed in a sub variable
e.g. \eval fooinfo 'select info from foo;'

* an option to echo an error statement to STDERR
-a echoes the SQL statement to STDOUT, so if you separate them you can't
match up which SQL had which error

The purpose of all of them is to enhance our ability to produce install
scripts for various things. Common uses would include

* setting a default schema, yet allowing overrides to that
* installing languages if not already installed
* checking whether other components are already correctly installed
* installing things based upon the encoding of the database
* skipping certain features if version incorrect
* conditional execution whether superuser or not

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




[HACKERS] Uncopied parameters on CREATE TABLE LIKE

2008-07-24 Thread ITAGAKI Takahiro
CREATE TABLE LIKE is useful to create a new partition from a template
table. We can use 3 options (INCLUDING DEFAULTS, CONSTRAINTS and INDEXES)
to copy more parameters from the template, but there are still some
uncopied parameters:

  1. column storage parameters (toast options)
  2. reloptions on table (on index is copied already)
  3. comments

I'd like to modify CREATE TABLE LIKE to copy those parameters. Especially
1 and 2 are important when we use CREATE TABLE LIKE for partitioning, because
all of the child tables should have the same physical properties.

I'm thinking of modifying 1 and 2 to be copied by default from a template to
a new table, because reloptions of indexes are already copied automatically.
I'm not sure about 3, but as of now I plan to add an INCLUDING COMMENTS
option to CREATE TABLE LIKE.

Comments welcome.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center




Re: [HACKERS] Uncopied parameters on CREATE TABLE LIKE

2008-07-24 Thread Simon Riggs

On Thu, 2008-07-24 at 19:09 +0900, ITAGAKI Takahiro wrote:
 CREATE TABLE LIKE is useful to create a new partition from a template
 table. We can use 3 options (INCLUDING DEFAULTS, CONSTRAINTS and INDEXES)
 to copy more parameters from the template, but there are still some
 uncopied parameters:
 
   1. column storage parameters (toast options)
   2. reloptions on table (on index is copied already)

Sounds good

   3. comments

Not sure about that. If comments are just user text, then probably not.
I'm guessing you might want that for something else?

I would prefer it if you had a plan to introduce user-definable
parameters, similar to custom_variable_classes. Perhaps call this
custom_table_options. So when we load a table and it has an option we
don't recognise, we ignore it if it is one of the custom_table_options.

custom_table_options will help us define special behaviours for
datatypes, indexes, replication etc that relate to the specific role and
purpose of individual tables.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




Re: [HACKERS] issues/experience with building postgres on Windows

2008-07-24 Thread Hiroshi Saito

Hi.

Sorry for the very late reaction.

I'm working in a VC++2005 and MinGW environment, because official
support requires it; and since I don't have many resources, the VC++2008
environment is restricted.

Therefore, many suggestions can't be acted on now :-(

- Original Message - 
From: Martin Zaun [EMAIL PROTECTED]



(snip)

  - locating a OSSP-UUID binary package for Windows
(.dll but no uuid.h with Hiroshi Saito's binary package?)


Ahh, sorry. Although I coordinated with Ralf-san, the official central figure of 
OSSP,
I did not synchronize with the release timing of PostgreSQL... So it was 
limited
to guidance on my web page.

(snip)

   - ossp-uuid
 Downloaded source from http://www.ossp.org/pkg/lib/uuid/.

 Problem: from where to get a ossp-uuid Windows binary?


Yeah, I will propose to Ralf-san.

(snip)

8) Getting a Windows OSSP-UUID library

   Google found me a postgres hackers email thread with this link
 http://winpg.jp/~saito/pg_work/OSSP_win32/
   to 1.6.2 uuid-ossp.dll mingw-compiled by Hiroshi Saito. Thanks!

   Problem: from where to get the also needed uuid.h?

   I've seen that Hiroshi Saito has worked on a patch for building the
   uuid-ossp package on Windows.  I didn't succeed in building
   ossp-uuid using mingw on Windows and took the shortcut of using
   cygwin to generate the uuid.h.

   But that hack didn't work, I'm getting linkage errors:

uuid.lib(uuid_str.obj) : error LNK2019: unresolved external symbol _va_copy referenced in 
function _uuid_str_vasprintf

.\Release\uuid-ossp\uuid-ossp.dll : fatal error LNK1120: 1 unresolved externals

   == Where can I find a complete ossp-uuid package for Windows?


Um, I don't understand Cygwin...
Please try this define in config.h:
---
/* Predefined possible va_copy() implementation (id: ASS) */
#define __VA_COPY_USE_ASS(d, s) do { (d) = (s); } while (0)
---

Regards,
Hiroshi Saito 





Re: [HACKERS] Uncopied parameters on CREATE TABLE LIKE

2008-07-24 Thread Simon Riggs

On Thu, 2008-07-24 at 10:30 -0400, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  I would prefer it if you had a plan to introduce user-definable
  parameters, similar to custom_variable_classes. Perhaps call this
  custom_table_options. So when we load a table and it has an option we
  don't recognise, we ignore it if it is one of the custom_table_options.
 
  custom_table_options will help us define special behaviours for
  datatypes, indexes, replication etc that relate to the specific role and
  purpose of individual tables.
 
 GUC parameters that silently alter the semantics of SQL statements
 should be introduced only with great trepidation, not just because
 someone thought them up one day. 

I agree. I don't really want to alter semantics.

  What is the real use-case for
 this bit of complication?  

Reloptions are additional performance options.

 Given the very short list of supported
 reloptions right now, why would you imagine that there will ever
 be such a thing as installation-local reloptions?

There's a ton of ways to introduce installation-local code, and we
support custom_variable_classes to support that. We just need some
additional flexibility at object level also.

It's already possible via comments, so why not make it official?

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




Re: [HACKERS] Review: DTrace probes (merged version) ver_03

2008-07-24 Thread Zdenek Kotala
I performed a review and prepared my own patch which contains only the probes 
without any issues. I suggest committing this patch, because the rest of the patch 
is independent and can be committed in the next commit fest after rework.


I found following issues:

1) SLRU probes.

I think it is good to have probes there but they needs polish. See my comments
http://reviewdemo.postgresql.org/r/25/


2) XLOG probes

I think the placement of probes after the merge is confusing. It needs cleanup.


3) Executor probes

I would like to see a use case for them.

4) smgr probes

I prefer to have these probes in smgr instead of md. The reason why Robert put 
them into md is that it returns the number of written/read bytes, but it is always 
BLCKSZ, which could be returned from smgr directly. The only difference is
when an error occurs during write/read and not all data are written/read.

It needs discussion.

5) autovacuum start probes

I would like to see start/stop probes for the other process types as well. It was 
discussed, but there was no comment from the author(s).


6) idle transaction

See my comments
http://reviewdemo.postgresql.org/r/25/

7) query-rewrite is missing


8) mark dirty and BM_HINT... flag

I removed these because I don't see any use case for them. It would be nice to 
provide some dtrace script or describe the basic ideas.




That's all. Zdenek


Index: src/backend/access/transam/clog.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/access/transam/clog.c,v
retrieving revision 1.46
diff -c -r1.46 clog.c
*** src/backend/access/transam/clog.c   1 Jan 2008 19:45:46 -   1.46
--- src/backend/access/transam/clog.c   24 Jul 2008 14:19:23 -
***
*** 36,41 
--- 36,42 
  #include "access/slru.h"
  #include "access/transam.h"
  #include "postmaster/bgwriter.h"
+ #include "pg_trace.h"
  
  /*
   * Defines for CLOG page sizes.  A page is the same BLCKSZ as is used
***
*** 323,329 
--- 324,332 
  CheckPointCLOG(void)
  {
/* Flush dirty CLOG pages to disk */
+   TRACE_POSTGRESQL_CLOG_CHECKPOINT_START();
SimpleLruFlush(ClogCtl, true);
+   TRACE_POSTGRESQL_CLOG_CHECKPOINT_DONE();
  }
  
  
Index: src/backend/access/transam/multixact.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/access/transam/multixact.c,v
retrieving revision 1.27
diff -c -r1.27 multixact.c
*** src/backend/access/transam/multixact.c  1 Jan 2008 19:45:46 -   
1.27
--- src/backend/access/transam/multixact.c  24 Jul 2008 14:19:24 -
***
*** 57,62 
--- 57,63 
  #include "storage/lmgr.h"
  #include "utils/memutils.h"
  #include "storage/procarray.h"
+ #include "pg_trace.h"
  
  
  /*
***
*** 1526,1531 
--- 1527,1534 
  void
  CheckPointMultiXact(void)
  {
+   TRACE_POSTGRESQL_MULTIXACT_CHECKPOINT_START();
+ 
/* Flush dirty MultiXact pages to disk */
SimpleLruFlush(MultiXactOffsetCtl, true);
SimpleLruFlush(MultiXactMemberCtl, true);
***
*** 1540,1545 
--- 1543,1550 
 */
if (!InRecovery)
TruncateMultiXact();
+ 
+   TRACE_POSTGRESQL_MULTIXACT_CHECKPOINT_DONE();
  }
  
  /*
Index: src/backend/access/transam/subtrans.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/access/transam/subtrans.c,v
retrieving revision 1.22
diff -c -r1.22 subtrans.c
*** src/backend/access/transam/subtrans.c   26 Mar 2008 18:48:59 -  
1.22
--- src/backend/access/transam/subtrans.c   24 Jul 2008 14:19:24 -
***
*** 32,37 
--- 32,38 
  #include "access/subtrans.h"
  #include "access/transam.h"
  #include "utils/snapmgr.h"
+ #include "pg_trace.h"
  
  
  /*
***
*** 281,287 
--- 282,290 
 * it merely to improve the odds that writing of dirty pages is done by
 * the checkpoint process and not by backends.
 */
+   TRACE_POSTGRESQL_SUBTRANS_CHECKPOINT_START();
SimpleLruFlush(SubTransCtl, true);
+   TRACE_POSTGRESQL_SUBTRANS_CHECKPOINT_DONE();
  }
  
  
Index: src/backend/access/transam/twophase.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/access/transam/twophase.c,v
retrieving revision 1.43
diff -c -r1.43 twophase.c
*** src/backend/access/transam/twophase.c   19 May 2008 18:16:26 -  
1.43
--- src/backend/access/transam/twophase.c   24 Jul 2008 14:19:24 -
***
*** 57,62 
--- 57,63 
  #include "storage/smgr.h"
  #include "utils/builtins.h"
  #include "utils/memutils.h"
+ #include "pg_trace.h"
  
  
  /*
***
*** 1387,1392 
--- 1388,1396 
 */
if (max_prepared_xacts = 0)
return; /* nothing to do */
+ 
+   TRACE_POSTGRESQL_TWOPHASE_CHECKPOINT_START();
+ 
xids = 

Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Hannu Krosing
On Tue, 2008-07-22 at 17:24 -0400, Tom Lane wrote:
 In the case of plproxy, I think an integrated solution is pronounced
 SQL-MED, and likewise plproxy in its present form doesn't move us
 toward that goal.

While pl/proxy can be tweaked into a way of achieving the functionality of
SQL-MED (SQL/MED provides extensions to SQL that define foreign-data
wrappers and datalink types to allow SQL to manage external data), that
is in no way more than a tiny piece of pl/proxy's possible
functionality.

As I see it, pl/proxy extends postgresql in yet another orthogonal
way of being extensible, doing it in a well-defined but minimalist
way.

 An important point here is that acceptance of a feature into core (or
 even contrib) puts us on the hook to worry about upward compatibility
 for it, maybe not forever but for a long time into the future.

In some weird way, accepting any bigger piece of code into the core
often comes with its maintainer, thus providing scalability on the
maintenance front ;)

At least I'm sure that Marko will carry the main burden of maintaining
pl/proxy - maybe not forever but for a long time into the future.

 I don't think I want to buy into that for either of these as presently
 constituted --- they don't match up with what I think the long-term
 goals ought to be in these areas.

pl/proxy provides one way (often called Sharding) of achieving
essentially unlimited scalability for a frequently occurring real-world
class of data management problems, while interfering minimally with
postgresql's internals. The unlimited part is especially true if
pl/proxy is used together with pg/bouncer.

I'm pretty sure that there is no general golden-bullet solution for
achieving this, and thus I can't see how pl/proxy can conflict with any
long-term goals in these areas, for any value of these areas.

pl/proxy also has some other possible uses, like doing callbacks in
independent transactions, simple remote calls, simple RO load balancing,
etc.

--
Hannu






Re: [HACKERS] Uncopied parameters on CREATE TABLE LIKE

2008-07-24 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 On Thu, 2008-07-24 at 10:30 -0400, Tom Lane wrote:
 Given the very short list of supported
 reloptions right now, why would you imagine that there will ever
 be such a thing as installation-local reloptions?

 There's a ton of ways to introduce installation-local code, and we
 support custom_variable_classes to support that. We just need some
 additional flexibility at object level also.

Anyone who's capable of introducing a new reloption is also capable of
modifying reloptions.c to accept it.  There is a very specific technical
reason for the existence of custom_variable_classes, which is that the
postmaster will flat out refuse to boot if you have a bogus variable
in postgresql.conf, and the code that might want to accept such a
variable might not have been loaded yet.  That problem doesn't apply to
reloptions.  It's already the case that we ignore bogus values in an
already-stored reloption, and I see no reason to accept a value during
CREATE or ALTER TABLE that we don't currently believe is OK.

Now, if you're suggesting we need a plugin hook somewhere in or around
default_reloptions, that's possibly reasonable; but a GUC like you're
suggesting seems quite pointless.

regards, tom lane



Re: [HACKERS] [PATCHES] GIN improvements

2008-07-24 Thread Teodor Sigaev

* shiftList() holds an exclusive lock on metapage throughout its run,
which means that it's impossible for two of them to run concurrently.
So why bother with concurrent deletion detection?
Because the metapage is locked immediately before the shiftList call; while the 
metapage is unlocked, another process could lock the metapage and execute 
shiftList. So when shiftList starts, it should check for an already-deleted page. If 
shiftList sees an already-deleted page then it doesn't do anything and reports that to 
the caller.



* shiftList does LockBufferForCleanup, which means that it can be blocked
for an indefinitely long time by a concurrent scan, and since it's holding
exclusive lock on metapage no new scans or insertions can start meanwhile.
This is not only horrid from a performance standpoint but it very probably
can result in deadlocks --- which will be deadlocks on LWLocks and thus
not even detected by the system.
Oops, I see a possible scenario: UPDATE tbl SET gin_indexed_field = ... WHERE 
gin_indexed_field ... with a concurrent shiftList. Will fix. Thank you.


Nevertheless, shiftList should be fast in the typical scenario: it doesn't do 
complicated work but just marks as deleted the pages which were already read before.



* GIN index scans release lock and pin on one pending-list page before
acquiring pin and lock on the next, which means there's a race condition:
shiftList could visit and delete the next page before we get to it,
because there's a window where we're holding no buffer lock at all.

Agree, will fix.


* It seems also possible that once a list page has been marked
GIN_DELETED, it could be re-used for some other purpose before a
scan-in-flight reaches it -- reused either as a regular index page or as a
Impossible - because deletion runs from the head of the list, and so does the scan. But 
deletion locks the metapage and locks pages for cleanup. So, a scan may start only 
from a not-yet-deleted page and will go through the list ahead of the deletion process.




* There is a bigger race condition, which is that after a scan has
returned a tuple from a pending page, vacuum could move the index entry
into the main index structure, and then that same scan could return that
same index entry a second time.  This is a no-no, and I don't see any easy
fix.
Hmm, isn't that allowed for indexes? At least GiST has had this behaviour since 
its birth.




--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [HACKERS] Concurrent VACUUM and ANALYZE

2008-07-24 Thread Simon Riggs

On Mon, 2008-07-21 at 17:50 -0400, Jonah H. Harris wrote:
 Currently, one cannot perform a concurrent VACUUM and ANALYZE.  This
 is a significant problem for tables which are not only large and have
 designated cost-delays, but which are also heavily inserted into and
 deleted from.

I suspect this is not the root problem, but one solution to it.

If the stats need such frequent updating, then the code that handles the
stats probably needs extension/improvement to avoid such sensitivities.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




Re: [HACKERS] [patch] plproxy v2

2008-07-24 Thread Hannu Krosing
On Tue, 2008-07-22 at 11:25 -0400, Tom Lane wrote:
 Marko Kreen [EMAIL PROTECTED] writes:
  And user can execute only pre-determines queries/functions on system2.
 
 If that were actually the case then the security issue wouldn't loom
 quite so large, but the dynamic_query example in the plproxy regression
 tests provides a perfect example of how to ruin your security.

The idea is to allow the pl/proxy user access only to the needed
functions, and to nothing else, on the remote db side.

dynamic_query ruins your security if your pl/proxy remote user has too
many privileges.

  Do you still see a big hole?
 
 Truck-sized, at least.
 
 The complaint here is not that it's impossible to use plproxy securely;
 the complaint is that it's so very easy to use it insecurely.

You mean easy in the sense that it is very easy to always run your OS as root?

On Unix this is fixed by stating it as a bad idea in the docs (and numerous
books); on Windows you have a privileged checkbox when creating new
users.

---
Hannu





Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Joshua D. Drake
On Thu, 2008-07-24 at 18:38 +0300, Hannu Krosing wrote:
 On Tue, 2008-07-22 at 17:24 -0400, Tom Lane wrote:
  In the case of plproxy, I think an integrated solution is pronounced
  SQL-MED, and likewise plproxy in its present form doesn't move us
  toward that goal.

 I'm pretty sure that there is no general golden-bullet solution for
 achieving this, and thus I can't see how pl/proxy can conflict with any
 long-term goals in these areas, for any value of these areas.
 
 pl/proxy also has some other possible uses, like doing callbacks in
 independent transactions, simple remote calls, simple RO load balancing,
 etc.

Hannu,

These are all excellent points but I think the real problem here is:

There is nothing that requires pl/proxy to be in core.

Everyone already agrees pl/proxy is very cool technology for PostgreSQL.

I used to make a lot of arguments about pushing things into core; I was a
big fanboy of getting Tsearch2 into core. Looking back, and getting older
and wiser (no laughing :P), I realize that it's almost kind of silly to
keep pushing this stuff into core.

Lots of people talk about the legitimacy of a package, or some sort of
artificial endorsement that gets created by being in core. Some of it is
personal: it is a big feeling of pride to have a piece of code accepted
into core. It is also a way to beef up the resume and, yes, generally a way
to deal with more ignorant hosting shops that won't install external
modules.

However this is not a core problem. It is not a hacker problem. It is
an education and advocacy problem. We don't need pl/proxy in core. What
pl/proxy needs is a solid project of its own, with good documentation
and community members.


Sincerely,

Joshua D. Drake


-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] Review: DTrace probes (merged version) ver_03

2008-07-24 Thread Theo Schlossnagle


On Jul 24, 2008, at 11:11 AM, Zdenek Kotala wrote:

I performed a review and prepared my own patch which contains only the
probes without any issues. I suggest committing this patch, because the
rest of the patch is independent and can be committed in the next commit
fest after rework.


I found following issues:

1) SLRU probes.

I think it is good to have probes there, but they need polish. See
my comments:

http://reviewdemo.postgresql.org/r/25/


The slru's are quite useful and general enough to use easily.  I used  
them to verify the metered checkpointing stuff:


http://lethargy.org/~jesus/archives/112-Probing-for-Success.html


2) XLOG probes

I think the placement of the probes after the merge is confusing. It needs
cleanup.



3) Executor probes

I would like to see any use case for them.


I added them with two thoughts (and knowing that they cost nothing).
(1) You can trace them to assist in debugging an explain plan and to
better understand the flow of the execution engine.  This is not a
compelling reason, but a reason nonetheless.
(2) You can trace an existing long-running query for which you do not
have the original plan (it may have changed) and make an educated guess
at the plan chosen at the time of execution.



4) smgr probes

I prefer to have these probes in smgr instead of md. The reason why
Robert put them into md is that it returns the number of written/read
bytes, but it is always BLCKSZ, which could be returned from smgr
directly. The only difference is when an error occurs during a write/read
and not all the data are written/read.

It needs discussion.

5) autovacuum start probes

I would like to see start/stop probes for other process types as well. It
was discussed, but there was no comment from the author(s).


6) idle transaction

See my comments
http://reviewdemo.postgresql.org/r/25/

7) query-rewrite probes are missing


8) mark dirty and BM_HINT... flag

I removed these because I don't see any use case for them. It would be
nice to provide some dtrace script or describe the basic ideas.



Perhaps I misunderstood what mark dirty does, but here was my thinking:

Because of the background writer, it is difficult to understand which  
postgres process (and thus query) induced disk writes.  Marking a page  
as dirty is a good indication that a query will be causing I/O and you  
can measure calls to mark dirty per query as a telling metric.


Perhaps I misunderstood, but I have a very serious problem in that I
can't reliably track write I/O back to a postgresql process ID, as the
bgwriter and the kernel are flushing those dirty blocks to disk while
the process isn't running.  In my (albeit naive) tests, the mark-dirty
probe gave me quite expected results for correlating query execution to
the disk I/O it induced.
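
As a rough illustration of the metric described above, a D script along
these lines could aggregate mark-dirty events per backend PID (a
hypothetical sketch: the probe name buffer-mark-dirty is assumed from the
patch under review, not an existing released PostgreSQL probe):

```d
/* Hypothetical sketch: count mark-dirty events per backend PID so that
   write I/O can be attributed back to the query that induced it, even
   though the bgwriter does the actual flushing later. */
postgresql*:::buffer-mark-dirty
{
        @dirtied[pid] = count();
}

/* Print the per-PID counts every 10 seconds. */
tick-10s
{
        printa(@dirtied);
}
```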


--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/




Re: [HACKERS] Extending grant insert on tables to sequences

2008-07-24 Thread Jaime Casanova
Sorry for the delay in answering, but I was busy with 2 projects and a talk...

On Sat, Jul 12, 2008 at 3:50 PM, Tom Lane [EMAIL PROTECTED] wrote:
 I think it's probably reasonable as long as we keep the implicitly
 granted rights as narrow as possible.  INSERT on the parent table
 would normally be hard to use correctly if you can't nextval() the
 sequence, so automatically allowing nextval() seems pretty reasonable.
 I think the case for granting anything more than that is weak ---
 even without considering backwards-compatibility arguments.


OK. At least two more reviewers raised questions about the SELECT
permission... my point was that if we keep the INSERT and UPDATE
permissions then we should keep SELECT as well... but allowing *only*
INSERT seems enough to me...
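
For reference, the case under discussion looks roughly like this (a
hypothetical sketch with invented names, showing the intended behaviour
rather than the final semantics):

```sql
-- A table with a serial column owns the sequence t_id_seq.
CREATE TABLE t (id serial PRIMARY KEY, val text);
GRANT INSERT ON t TO app_user;

-- Today app_user additionally needs the following, or the INSERT fails
-- when the column default calls nextval('t_id_seq'):
GRANT USAGE ON SEQUENCE t_id_seq TO app_user;

-- The patch would make GRANT INSERT on t implicitly allow nextval()
-- on the owned sequence.
```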

 A fairly important practical problem is whether this will keep pg_dump
 from correctly reproducing the state of a database.  Assume that someone
 did revoke the implicitly-granted rights on the sequence --- would a
 dump and reload correctly preserve that state?  It'd depend on the order
 in which pg_dump issued the GRANTs, and I'm not at all sure pg_dump
 could be relied on to get that right.  (Even if we fixed it to account
 for the issue today, what of older dump scripts?)


Good point! A simple test makes me think it does; I will try some
complex cases to be sure (actually I think there shouldn't be a problem
here).

 Another issue is the interaction with the planned column-level GRANT
 feature.


Although that is a feature we want, it is a WIP one... do we stop patches
because they might conflict with a project we don't know will be applied
soon?

In any case, I will review that patch to see where we are on it and
try to make those two compatible...


 I thought for a bit about abandoning the proposed implementation and
 instead having nextval/currval check at runtime: IOW, if the check for
 ACL_USAGE on the sequence fails, look to see if the sequence is owned
 and if so look to see if the user has ACL_INSERT on the parent table.
 (This seems a bit slow but maybe it wouldn't be a problem, or maybe we
 could arrange to cache the lookup results.)  This would avoid the
 action at a distance behavior in GRANT and thereby cure both of
 the problems mentioned above.  However, it would mean that it'd be
 impossible to grant INSERT without effectively granting sequence USAGE
 --- revoking USAGE on the sequence wouldn't stop anything.  Plus, \z on
 the sequence would fail to tell you about those implicitly held rights.

Seems a bit hackish... do we want this? Comments?

I will work on this patch over the next few days...

-- 
regards,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Guayaquil - Ecuador
Cel. (593) 87171157



Re: [HACKERS] Concurrent VACUUM and ANALYZE

2008-07-24 Thread Jonah H. Harris
On Thu, Jul 24, 2008 at 9:06 AM, Simon Riggs [EMAIL PROTECTED] wrote:
 I suspect this is not the root problem, but one solution to it.

Agreed.  It is not the root problem.  However, until DSM is fully
implemented and working, not having the ability to gather statistics
during long vacuums is problematic.  Of course, you can try and
optimize vacuum by minimizing relation sizes using partitioning, but
that doesn't work in all cases.

 If the stats need such frequent updating, then the code that handles the
 stats probably needs extension/improvement to avoid such sensitivities.

Agreed, the optimizer's sensitivity to statistics in certain query
conditions is the root problem.

-- 
Jonah H. Harris, Senior DBA
myYearbook.com



Re: [HACKERS] [PATCHES] GIN improvements

2008-07-24 Thread Tom Lane
Teodor Sigaev [EMAIL PROTECTED] writes:
 * There is a bigger race condition, which is that after a scan has
 returned a tuple from a pending page, vacuum could move the index entry
 into the main index structure, and then that same scan could return that
 same index entry a second time.  This is a no-no, and I don't see any easy
 fix.

 Hmm, isn't it allowed for indexes? At least GiST has this behaviour from its 
 birth date.

Really?  Then GiST needs to be fixed too.  Otherwise you risk having
queries return the same row twice.  A bitmap indexscan plan would mask
such an index bug ... but a plain indexscan won't.

regards, tom lane



Re: [HACKERS] Extending grant insert on tables to sequences

2008-07-24 Thread Tom Lane
Jaime Casanova [EMAIL PROTECTED] writes:
 Another issue is the interaction with the planned column-level GRANT
 feature.

 Although that is a feature we want, is a WIP one... do we stop patches
 because it can conflict with a project we don't know will be applied
 soon?

Well, considering that that one is implementing a feature required by
SQL spec, your feature will lose any tug-of-war ;-).  So yeah, you
ought to consider how to make yours play nice when (not if) that
happens.

regards, tom lane



Re: [HACKERS] [PATCHES] GIN improvements

2008-07-24 Thread Tom Lane
I wrote:
 Really?  Then GiST needs to be fixed too.  Otherwise you risk having
 queries return the same row twice.  A bitmap indexscan plan would mask
 such an index bug ... but a plain indexscan won't.

BTW, there's another issue I forgot about yesterday, which is that
the planner assumes that all index AMs work correctly for backwards
scan.  The place where the rubber meets the road here is that
if you DECLARE SCROLL CURSOR for a plan implemented as a plain
indexscan, then FETCH BACKWARDS is supposed to reliably generate
results consistent with previous FETCH FORWARDS, to wit the same
tuples in the reverse order.

We can assume that the query is using an MVCC snapshot, which means
that at the index level it's okay for the index to return newly-inserted
entries that weren't returned in the previous forward scan, or to not
return entries that were removed meanwhile by VACUUM.  But re-ordering
live tuples is bad news.

The idea of copying the pending-tuples list into local scan state would
make this work as expected as far as the proposed patch goes, but I'm
wondering whether the behavior isn't completely broken anyway by
operations such as page splits.  Do we need to change the planner to
assume that this only works nicely for btree?

regards, tom lane



Re: [HACKERS] Uncopied parameters on CREATE TABLE LIKE

2008-07-24 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 I would prefer it if you had a plan to introduce user definable
 parameters, similar to custom_variable_classes. Perhaps call this
 custom_table_options. So when we load a table and it has an option we
 don't recognise we ignore it if it is one of the custom_table_options.

 custom_table_options will help us define special behaviours for
 datatypes, indexes, replication etc that relate to the specific role and
 purpose of individual tables.

GUC parameters that silently alter the semantics of SQL statements
should be introduced only with great trepidation, not just because
someone thought them up one day.  What is the real use-case for
this bit of complication?  Given the very short list of supported
reloptions right now, why would you imagine that there will ever
be such a thing as installation-local reloptions?

regards, tom lane



Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Hannu Krosing
On Wed, 2008-07-23 at 12:38 -0400, Tom Lane wrote:
 Greg Sabino Mullane [EMAIL PROTECTED] writes:
  Code outside of core, is, in reality, less reviewed, less likely to work
  well with recent PG versions, and more likely to cause problems. It's also
  less likely to be found by people, less likely to be used by people, and
  less likely to be included by distros. Not to say that everything should get
  shoved into core, of course, but there are strong arguments for both sides.
 
 These are all true statements, of course, but ISTM they should be looked
 on as problems to be solved.  Pushing stuff into core instead of solving
 these problems is not a scalable long-term answer.

And being in core in no way guarantees reviews and updates if stuff
changes in the backend, as long as the regression tests pass - as proof,
take a look at the pl/python ugliness: it has not been updated in any
major way since it was first written, and so does not make use of any
newer ways of writing PLs.

I am currently working on getting this fixed, looking, ironically,
largely at the pl/proxy code to do so.

I was away from the net for the last 3 weeks (I climbed Mt. Elbrus), but
I'll get my patches brushed up in 2-3 weeks to bring pl/python on par
with the other PLs.

OTOH, until we have a solid foundation for believing that we can move all
(or at least most) PLs out of core, I'd like pl/proxy to be in core,
at least in the being-in-the-core-CVS sense.

--
Hannu





Re: [HACKERS] [DOCS] [ADMIN] shared_buffers and shmmax

2008-07-24 Thread Greg Sabino Mullane

 shared_buffers is in disk block size, typically 8K

 The table the OP is looking at (table 17.2 in the 8.3 docs) predates
 the ability to specify shared_buffers in KB or MB instead of
 number-of-buffers.  I agree it's not entirely obvious that what it
 means is multiply your setting in KB/MB by 8400/8192.  Anybody have
 an idea how to clarify things?

Bite the bullet and start showing the buffer settings as a pure number of bytes
everywhere, and get rid of the confusing '8kB' unit in pg_settings? Things like
this don't help our cause:

test=# show shared_buffers;
 shared_buffers
----------------
 24MB
(1 row)

test=# set temp_buffers = '24MB';
SET

test=# show temp_buffers;
 temp_buffers
--------------
 3072

test=# select name, setting from pg_settings where name ~ 'buffers';
      name      | setting
----------------+---------
 shared_buffers | 3072
 temp_buffers   | 3072
 wal_buffers    | 8

test=# show wal_buffers;
 wal_buffers
-------------
 64kB
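
The arithmetic behind that confusion is simple enough to sketch
(illustrative Python, not PostgreSQL code; BLCKSZ is assumed at its
default of 8192 bytes):

```python
# Sketch of the unit arithmetic discussed above: several *_buffers
# settings are stored internally as a count of 8 kB blocks, which is
# why a '24MB' setting shows up as 3072 in pg_settings.
BLCKSZ = 8192  # default PostgreSQL block size in bytes

def buffers_to_bytes(n_buffers, block_size=BLCKSZ):
    """Convert a pg_settings buffer count to bytes."""
    return n_buffers * block_size

def pretty(n_bytes):
    """Rough analogue of pg_size_pretty() for whole kB/MB values."""
    if n_bytes % (1024 * 1024) == 0:
        return "%dMB" % (n_bytes // (1024 * 1024))
    return "%dkB" % (n_bytes // 1024)

print(pretty(buffers_to_bytes(3072)))  # shared_buffers = 3072 blocks -> 24MB
print(pretty(buffers_to_bytes(8)))     # wal_buffers = 8 blocks -> 64kB
```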


--
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation
PGP Key: 0x14964AC8 200807241351
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [HACKERS] [DOCS] [ADMIN] shared_buffers and shmmax

2008-07-24 Thread Joshua D. Drake
On Thu, 2008-07-24 at 17:54 +, Greg Sabino Mullane wrote:
 
  shared_buffers is in disk block size, typically 8K
 
  The table the OP is looking at (table 17.2 in the 8.3 docs) predates
  the ability to specify shared_buffers in KB or MB instead of
  number-of-buffers.  I agree it's not entirely obvious that what it
  means is multiply your setting in KB/MB by 8400/8192.  Anybody have
  an idea how to clarify things?
 
 Bite the bullet and start showing the buffer settings as a pure number of 
 bytes
 everywhere, and get rid of the confusing '8kB' unit in pg_settings? 

+1

We have helper functions like pg_size_pretty() to resolve the other
issues.

Joshua D. Drake

-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






[HACKERS] So, what's the base dn in an LDAP URL again?

2008-07-24 Thread Tom Lane
The fine manual claims that the base dn part of an LDAP URL
is meaningful:

The server will bind to the distinguished name specified as base
dn using the user name supplied by the client. If prefix and
suffix is specified, it will be prepended and appended to the
user name before the bind.

But looking at CheckLDAPAuth() just now, it doesn't do anything at all
with the basedn part of the string.  Seems to me this is either a code
bug or a docs bug.

regards, tom lane



Re: [HACKERS] [PATCHES] GIN improvements

2008-07-24 Thread Teodor Sigaev

operations such as page splits.  Do we need to change the planner to
assume that this only works nicely for btree?


It seems to me that direction (backward or forward) has meaning only for
indexes with amcanorder = true. With amcanorder = false, the results will
come back in arbitrary order for either direction.


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [HACKERS] [PATCHES] GIN improvements

2008-07-24 Thread Tom Lane
Teodor Sigaev [EMAIL PROTECTED] writes:
 operations such as page splits.  Do we need to change the planner to
 assume that this only works nicely for btree?

 It seems to me that direction (backward or forward) has meaning only for
 indexes with amcanorder = true. With amcanorder = false, the results will
 come back in arbitrary order for either direction.

Well, no; amcanorder specifies that the index can return results that
are sorted according to some externally meaningful ordering.  The
question at hand is just whether the results of a single indexscan
are self-consistent.  That's a property that can reasonably be expected
to hold regardless of amcanorder; it does hold for hash indexes for
instance.  (In the case of hash we have to forbid splitting a bucket
that's actively being scanned in order to make it true.)

regards, tom lane



Re: [PATCHES] [HACKERS] WITH RECUSIVE patches 0723

2008-07-24 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 Reviewers, please let me know if you find problems with the
 patches. If none, I would like to commit this weekend.

Given that everyone who has tested this has found a different way to
crash it, and that the frequency of crash reports shows no signs of
slowing down, I have to think that committing it is premature.

I tried to look through the patch just now and failed to make any
sense of it, because of the complete absence of documentation.
Two unexplained examples added to the SELECT reference page don't
do it for me.  I want to see an explanation of exactly what behaviors
are intended to be provided (and, in view of the long TODO list that
was posted awhile back, what isn't provided).  And there needs to be
more than zero internal documentation.  A README file, or perhaps
a very long file header comment, needs to be provided to explain
what's supposed to happen, when, and where when processing a recursive
query.  (For comparison look at the README.HOT file that was created
to explain the HOT patch --- something at about that level of detail
would help this patch a lot.  Or consider adding a section to
chapter 43 in the SGML docs.)

We really can't accept a patch that is so poorly documented as to
be unreviewable.  Unreviewable also means it'll be unmaintainable
going forward.

regards, tom lane



Re: [HACKERS] [PATCHES] GIN improvements

2008-07-24 Thread Teodor Sigaev

queries return the same row twice.  A bitmap indexscan plan would mask
such an index bug ... but a plain indexscan won't.


Fuh. :(  Well. Will fix.

GiST:
 - GiST already supports both scan directions in theory, but a page split may
change the order between forward and backward scans (the user-defined pageSplit
doesn't preserve the order of tuples). Holding off splits until the end of a
scan would produce an unacceptable concurrency level.
 - GiST can return the same itempointer twice. It's fixable by storing the
content of the current page in memory instead of just keeping the page pinned.
Will do (backpatches too).

GIN:
 - GIN doesn't support the backward scan direction and will not support it in
the near future.
 - Right now GIN doesn't return the same itempointer twice, but with the
current fast_insert patch it might. So, I suppose, to fix that it's enough
just to remember the itempointers returned from the pending list and use them
as a filter for the results from the regular structure. Will do.
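
The pending-list filter proposed in the last point can be sketched as
follows (an illustrative Python model, not GIN code; itempointers are
modelled as (block, offset) tuples):

```python
# Illustration of the proposed fix: remember the itempointers already
# returned from the pending list, and suppress them when they show up
# again in results from the regular index structure.
def scan_without_duplicates(pending_results, regular_results):
    seen = set()
    for tid in pending_results:        # scan the pending list first
        seen.add(tid)
        yield tid
    for tid in regular_results:        # then the main index structure
        if tid not in seen:            # filter already-returned tuples
            yield tid

# (0, 1) was moved from the pending list into the main structure by a
# concurrent cleanup, so it appears in both inputs but is returned once.
pending = [(0, 1), (0, 2)]
regular = [(0, 1), (1, 7)]
print(list(scan_without_duplicates(pending, regular)))
# -> [(0, 1), (0, 2), (1, 7)]
```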

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Hannu Krosing
On Thu, 2008-07-24 at 09:06 -0700, Joshua D. Drake wrote:
 On Thu, 2008-07-24 at 18:38 +0300, Hannu Krosing wrote:
  On Tue, 2008-07-22 at 17:24 -0400, Tom Lane wrote:
   In the case of plproxy, I think an integrated solution is pronounced
   SQL-MED, and likewise plproxy in its present form doesn't move us
   toward that goal.
 
  I'm pretty sure that there is no general golden-bullet solution for
  achieving this, and thus I can't see how pl/proxy can conflict with any
  long-term goals in these areas, for any value of these areas.
  
  pl/proxy also has some other possible uses, like doing callbacks in
  independent transactions, simple remote calls, simple RO load balancing,
  etc.
 
 Hannu,
 
 These are all excellent points but I think the real problem here is:
 
 There is nothing that requires pl/proxy to be in core.

AFAIK, there is nothing that requires pl/perl, pl/tcl or pl/python to be
in core either.

Actually, I think that being an independent language / postgresql
extension tool makes pl/proxy _more_ fit to be in core than the external
language adapters.

And it would be nice if some well-maintained sample language (pl/sh or
even pl/dummy), serving as a sample of the latest ways to make use of
the PL/function support in the core pg code, were included in core as
well.

With some slight restructuring (separating the PL glue from the actual
caching/execution), pl/proxy could serve in this space as well.

 Everyone already agrees pl/proxy is very cool technology for PostgreSQL.
 
 I used to make a lot of arguments about pushing things into core, I was
 big fanboy of getting Tsearch2 into core. Looking back and getting older
 and wiser (no laughing :P) I realize that its almost kind of silly to
 keep pushing this stuff into core.

Not silly at all.

Tsearch in core seems a wise choice, as well as _some_ implementation of
multiple locales. 

 Lots of people talk about legitimacy of the package or some sort of
 artificial endorsement that gets created by being in core. Some of it is
 personal, it is a big feeling of pride to have a piece of code accepted
 to core. 

Usually it is also a way of making the _core_ better / more functional.

 It is also a way to beef up the resume and yes generally a way
 to deal with more ignorant hosting shops that won't install external
 modules.
 
 However this is not a core problem. It is not a hacker problem. It is
 and education and advocacy problem. We don't need pl/proxy in core. What
 pl/proxy needs is a solid project of its own, with good documentation,
 and community members.

As mentioned in another mail, we don't _need_ the other PLs (except
maybe pl/pgsql) to be in core either.

And it is an additional bonus for consultants, if we keep some of the
best parts separate ;)

-
Hannu


PS. Thinking more about it, I don't even understand what it means for a
PL to be in core ;)

Are they under src/pl just because there is no contrib/pl?

Does pushing something into core give the impression of trying to get
rid of the responsibility for managing that piece of code?

--
Hannu





Re: [HACKERS] [PATCHES] GIN improvements

2008-07-24 Thread Tom Lane
Teodor Sigaev [EMAIL PROTECTED] writes:
   - GiST already supports both scan directions in theory, but a page split
  may change the order between forward and backward scans (the user-defined
  pageSplit doesn't preserve the order of tuples). Holding off splits until
  the end of a scan would produce an unacceptable concurrency level.

   - GIN doesn't support the backward scan direction and will not support it
  in the near future.

Okay.  I'll see about fixing the planner to not assume that GIST or GIN
indexscans are scrollable.

The cleanest way to do this is to introduce a new bool column in pg_am
rather than hard-wiring assumptions about which AMs can do it.  However
(a) that's not back-patchable and (b) it'll create a merge conflict with
your patch, if you're still going to add a new AM function column.
I think that aminsertcleanup per se isn't needed, but if we want an
amanalyze there'd still be a conflict.  Where are we on that?

regards, tom lane



Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Tom Lane
Hannu Krosing [EMAIL PROTECTED] writes:
 On Thu, 2008-07-24 at 09:06 -0700, Joshua D. Drake wrote:
 These are all excellent points but I think the real problem here is:
 There is nothing that requires pl/proxy to be in core.

 AFAIK, there is nothing that requires pl/perl, pl/tcl or pl/python to be
 in core either.

True, but I think it's a good idea to have at least one such in core,
as a prototype to help us track the issues associated with loading a
large third-party library along with a PL.  The fact that we have three
is historical, but on the other hand I believe we've seen distinct
issues crop up from each one, so maybe only one isn't enough either.

 Actually, I think that being an independent language / postgresql
 extension tool, pl/proxy is _more_ fit to be in core than external
 language adapters.

It teaches us nothing about connecting to outside code, though.

 And it would be nice, if some well-maintained sample language (pl/sh or
 even pl/dummy) which serves as a sample of latest ways to make use of
 pl/function support in core pg code would be included in core as well.

And why do you think the above three don't serve that purpose?  Or even
more to the point, how likely is it that an unused dummy language
would be well-maintained?

regards, tom lane



Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Robert Haas
On Thu, Jul 24, 2008 at 4:37 PM, Tom Lane [EMAIL PROTECTED] wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
 On Thu, 2008-07-24 at 09:06 -0700, Joshua D. Drake wrote:
 These are all excellent points but I think the real problem here is:
 There is nothing that requires pl/proxy to be in core.

 AFAIK, there is nothing that requires pl/perl, pl/tcl or pl/python to be
 in core either.

 True, but I think it's a good idea to have at least one such in core,
 as a prototype to help us track the issues associated with loading a
 large third-party library along with a PL.  The fact that we have three
 is historical, but on the other hand I believe we've seen distinct
 issues crop up from each one, so maybe only one isn't enough either.

ISTM that if you're willing to admit, even with caveats, that PL/perl,
PL/tcl, or PL/python doesn't need to be in core, then excluding anything
else from core on the basis that it doesn't need to be there is silly.
The extent to which a feature is useful to a large number of users (or
not), and the extent to which it complicates maintenance of the code
base (or not), seem like much more important issues.

...Robert



[HACKERS] pg_dump vs data-only dumps vs --disable-triggers

2008-07-24 Thread Tom Lane
There's some fairly squirrely logic in pg_dump/pg_restore that tries to
detect whether it's doing a data-only operation, ie, no schema
information is to be dumped or restored.  The reason it wants to
know this is to decide whether to enable the --disable-triggers
code.  However, since --disable-triggers is off by default and has
to be manually requested, I'm not sure why we've got all this extra
complexity in there.  (Actually, I'm sure the reason is that that
code predates the existence of the --disable-triggers switch, but
anyway...)

Simon's patch to split up --schema-only into two switches has broken
this logic, but I'm inclined to just rip it out rather than trying
to fix it.  If the user says --disable-triggers, he should get
trigger disable commands around the data part of the dump, no matter
what he said or didn't say about schema dumping.

Objections?

regards, tom lane



Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Tom Lane
Robert Haas [EMAIL PROTECTED] writes:
 ISTM that if you're willing to admit, even with caveats, that
 PL/perl, PL/tcl, or PL/python doesn't need to be in core, then
 excluding anything else from core on the basis that it doesn't need to
 be there is silly.

You are merely setting up a straw man, as no one has suggested such a
policy.  Any specific decision of this type is going to involve a
combination of factors, and that's only one.

regards, tom lane



Re: [HACKERS] pg_dump vs data-only dumps vs --disable-triggers

2008-07-24 Thread Andrew Dunstan



Tom Lane wrote:

If the user says --disable-triggers, he should get
trigger disable commands around the data part of the dump, no matter
what he said or didn't say about schema dumping.

  


Right. They seem like orthogonal issues.

cheers

andrew



Re: [PATCHES] [HACKERS] WITH RECURSIVE patches 0723

2008-07-24 Thread David Fetter
On Thu, Jul 24, 2008 at 01:55:37PM +0900, Tatsuo Ishii wrote:
  Program received signal SIGSEGV, Segmentation fault.
 
 Thanks for the report. Here is the new patches from Yoshiyuki.

Thanks for the patch :)

Now, I get a different problem, this time with the following code
intended to materialize paths on the fly and summarize down to a
certain depth in a tree:

CREATE TABLE tree(
id INTEGER PRIMARY KEY,
parent_id INTEGER REFERENCES tree(id)
);

INSERT INTO tree
VALUES (1, NULL), (2, 1), (3,1), (4,2), (5,2), (6,2), (7,3), (8,3),
   (9,4), (10,4), (11,7), (12,7), (13,7), (14, 9), (15,11), (16,11);

WITH RECURSIVE t(id, path) AS (
VALUES(1,ARRAY[NULL::integer])
UNION ALL
SELECT tree.id, t.path || tree.id
FROM tree JOIN t ON (tree.parent_id = t.id)
)
SELECT
t1.id, count(t2.*)
FROM
t t1
JOIN
t t2
ON (
t1.path[1:2] = t2.path[1:2]
AND
array_upper(t1.path,1) = 2
AND
array_upper(t2.path,1) > 2
)
GROUP BY t1.id;
ERROR: unrecognized node type: 203

Please apply the attached patch to help out with tab
completion in psql.

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
***************
*** 613,621 **** psql_completion(char *text, int start, int end)
  		"COMMENT", "COMMIT", "COPY", "CREATE", "DEALLOCATE", "DECLARE",
  		"DELETE FROM", "DISCARD", "DROP", "END", "EXECUTE", "EXPLAIN", "FETCH",
  		"GRANT", "INSERT", "LISTEN", "LOAD", "LOCK", "MOVE", "NOTIFY", "PREPARE",
! 		"REASSIGN", "REINDEX", "RELEASE", "RESET", "REVOKE", "ROLLBACK",
  		"SAVEPOINT", "SELECT", "SET", "SHOW", "START", "TRUNCATE", "UNLISTEN",
! 		"UPDATE", "VACUUM", "VALUES", NULL
  	};
  
  	static const char *const backslash_commands[] = {
--- 613,621 ----
  		"COMMENT", "COMMIT", "COPY", "CREATE", "DEALLOCATE", "DECLARE",
  		"DELETE FROM", "DISCARD", "DROP", "END", "EXECUTE", "EXPLAIN", "FETCH",
  		"GRANT", "INSERT", "LISTEN", "LOAD", "LOCK", "MOVE", "NOTIFY", "PREPARE",
! 		"REASSIGN", "RECURSIVE", "REINDEX", "RELEASE", "RESET", "REVOKE", "ROLLBACK",
  		"SAVEPOINT", "SELECT", "SET", "SHOW", "START", "TRUNCATE", "UNLISTEN",
! 		"UPDATE", "VACUUM", "VALUES", "WITH", NULL
  	};
  
  	static const char *const backslash_commands[] = {
***************
*** 2044,2049 **** psql_completion(char *text, int start, int end)
--- 2044,2058 ----
  			  pg_strcasecmp(prev2_wd, "ANALYZE") == 0))
  		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, NULL);
  
+ /* WITH [RECURSIVE] */
+ 	else if (pg_strcasecmp(prev_wd, "WITH") == 0)
+ 	{
+ 		static const char *const list_WITH[] =
+ 		{"RECURSIVE", NULL};
+ 
+ 		COMPLETE_WITH_LIST(list_WITH);
+ 	}
+ 
  /* ANALYZE */
  	/* If the previous word is ANALYZE, produce list of tables */
  	else if (pg_strcasecmp(prev_wd, "ANALYZE") == 0)



Re: [PATCHES] [HACKERS] WITH RECURSIVE patches 0723

2008-07-24 Thread Tatsuo Ishii
 Now, I get a different problem, this time with the following code
 intended to materialize paths on the fly and summarize down to a
 certain depth in a tree:
 
 CREATE TABLE tree(
 id INTEGER PRIMARY KEY,
 parent_id INTEGER REFERENCES tree(id)
 );
 
 INSERT INTO tree
 VALUES (1, NULL), (2, 1), (3,1), (4,2), (5,2), (6,2), (7,3), (8,3),
(9,4), (10,4), (11,7), (12,7), (13,7), (14, 9), (15,11), (16,11);
 
 WITH RECURSIVE t(id, path) AS (
 VALUES(1,ARRAY[NULL::integer])
 UNION ALL
 SELECT tree.id, t.path || tree.id
 FROM tree JOIN t ON (tree.parent_id = t.id)
 )
 SELECT
 t1.id, count(t2.*)
 FROM
 t t1
 JOIN
 t t2
 ON (
 t1.path[1:2] = t2.path[1:2]
 AND
 array_upper(t1.path,1) = 2
 AND
 array_upper(t2.path,1) > 2
 )
 GROUP BY t1.id;
 ERROR: unrecognized node type: 203

Thanks for the report. We will look into this.

 Please apply the attached patch to help out with tab
 completion in psql.

Ok, it will appear in the next patches.
--
Tatsuo Ishii
SRA OSS, Inc. Japan



Re: [HACKERS] pg_dump vs data-only dumps vs --disable-triggers

2008-07-24 Thread Stephen Frost
* Tom Lane ([EMAIL PROTECTED]) wrote:
 Simon's patch to split up --schema-only into two switches has broken
 this logic, but I'm inclined to just rip it out rather than trying
 to fix it.  If the user says --disable-triggers, he should get
 trigger disable commands around the data part of the dump, no matter
 what he said or didn't say about schema dumping.

Agreed entirely.  I was suspicious of that bit of code but, honestly,
wasn't quite sure what the original intent had been.  Sorry I haven't
finished my review of his latest; he went and changed all the macro
names. ;)  If you're taking care of it then I won't spend any further
time on it.  I'm pretty comfortable with it now, honestly, provided the
renaming didn't change anything and my last set of comments were
included.

Thanks,

Stephen




Re: [HACKERS] Additional psql requirements

2008-07-24 Thread ITAGAKI Takahiro

Simon Riggs [EMAIL PROTECTED] wrote:

 * access to version number
 * simple mechanism for conditional execution
 * ability to set substitution variables from command execution
 * conditional execution whether superuser or not

Can we use pgScript for such flow control?
http://pgscript.projects.postgresql.org/INDEX.html

I'm not sure whether pgScript can be used in pgAdmin already, but if we
support it in both psql and pgAdmin, the scripting syntax will become a
de facto standard, because they are the two most widely used interfaces
to postgres. I think it is not good to add another dialect that can be
used only in psql.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] Do we really want to migrate plproxy and citext into PG core distribution?

2008-07-24 Thread Josh Tolley
On Thu, Jul 24, 2008 at 2:37 PM, Tom Lane [EMAIL PROTECTED] wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
 And it would be nice, if some well-maintained sample language (pl/sh or
 even pl/dummy) which serves as a sample of latest ways to make use of
 pl/function support in core pg code would be included in core as well.

 And why do you think the above three don't serve that purpose?  Or even
 more to the point, how likely is it that an unused dummy language
 would be well-maintained?

For whatever it's worth, I'm in the middle of writing a PL
(PL/LOLCODE, specifically), and have found helpful examples of how to
do stuff in PL/pgSQL, PL/Perl, *and* pl/proxy. The examples in the
documentation followed by a bunch of hair pulling while reading
PL/pgSQL were enough to get started, without the benefit of a dummy
language. That's not to say that a dummy language wouldn't be useful,
only that for a coder of my caliber (i.e. Not Terribly Skilled but
Able to Code Myself Out of a Wet Paper Bag) it wasn't necessary.
Because pl/proxy is not in core, I didn't immediately look to it for
examples, but was pointed there by a helpful soul on IRC.

My own opinion is that though there have been several in recent years,
new PLs are written rarely enough that best practices don't change a
whole lot. PL/Perl and PL/pgSQL particularly are very well maintained,
and thus demonstrate in most cases a perfectly acceptable way of
writing a PL.

As to whether or not pl/proxy should be in core, I have no particular
opinion. PL/LOLCODE probably should not be. :)

- Josh / eggyknap



[HACKERS] [RFC] Unsigned integer support.

2008-07-24 Thread Ryan Bradetich
Hello hackers,

I know the development community is in the middle of the July 2008
commit-fest, so I apologize if this design proposal is inappropriate
at this time.

I am looking to take advantage of PostgreSQL's extensible type system
and implement unsigned integer support.  The data I am dealing with
contains many unsigned values, and I am planning on using these types
to reduce storage consumption.  I am initially looking to add support
for 16-bit and 32-bit unsigned integers, with the potential to add
8-bit and 64-bit unsigned integers if needed or desired by the community.

Searching through the list archives, I found two objections raised in the past:


1. Implicit casts between different data types.

I am hoping the removal of many of the implicit casts in
PostgreSQL 8.3 will simplify this task to where this objection can be
removed.

My plan (without much experimentation) is to have maybe a handful
of casts (something like):
 * uint4 -> integer
 * integer -> uint4
 * bigint -> uint4
 * integer -> uint2
 * uint2 -> smallint

and then provide operators covering a reasonable set of
functionality.  My initial thought is to provide default operators on
any type that is not implicitly cast on the psql command line.

As an example, I am planning for the following SQL statements to
work correctly:

30::uint4 + 10   and
30::uint4 > 10

   My understanding is the SQL standard does not provide support for
unsigned integers, so I am planning on making all casts from unsigned
integers
   to other data types explicit.   Is this acceptable to the community?

   Another question for the community is should we allow the following cast?
   -1::uint4

Even though this is acceptable C code, I am leaning towards
throwing an out-of-range error when this occurs.



Are there some areas I am missing or should investigate further
before working on this project?


2. There is not much demand for unsigned integer types.

 Not much I can do about that :)  I am willing to post my work as
a PgFoundry project.

 PgFoundry already has an uint project:
http://pgfoundry.org/projects/uint/

Unfortunately this project seems to have not gone anywhere.  Last
activity was late 2006 and there are not any files checked into the
SCM repository.
Is it acceptable to hijack this PgFoundry project?  Or should I
start a new project (assuming there is any interest in publishing this
work).


Although I am not targeting inclusion for this type in the core
PostgreSQL code, I would like to post code for review and receive
feedback from the
community on this work.  As I understand this RFC is the first step in
the process :)  Once I have some code ready for review, is it
acceptable to use the
commit-fest wiki for this project?

Thanks much for your time!

- Ryan



Re: [HACKERS] [RFC] Unsigned integer support.

2008-07-24 Thread Tom Lane
Ryan Bradetich [EMAIL PROTECTED] writes:
 I am looking to take advantage of PostgreSQL extensible type system
 and implement unsigned integer support.

This has been proposed before, and foundered before on the question
of implicit coercions.  If you're willing to make all coercions *to*
unsigned types be explicit (or at most assignment), then I think
it can be made to work without breaking anything.  But usually the
folk who ask for this feature are hoping that bare integer literals
like 42 will get interpreted as unsigned when they want them to be.
The problem with that wish is illustrated by

select 1500000000 + 1500000000;

These literals might be either int4 or uint4, therefore this command
might yield either an integer-overflow error or 3000000000::uint4.
That's not a distinction you can fuzz over --- it's got to be one
or the other, and backwards compatibility says it'd better be the
first.

 I am hoping the removal of many of the implicit casts in
 PostgreSQL 8.3 will simplify this task to where this objection can be
 removed.

The implicit casts we removed were cross-type-category cases.
If you hope for unsigned types to be considered part of the numeric
category, there's no guidance for you there.  In fact, the real nub
of the problem is what type shall be initially assigned to an
integer-looking literal, and how will you get things to behave sanely
if that initial choice wasn't what was desired.  We still have some
issues around the fact that 42 isn't considered a smallint.  Throwing
in another possible meaning isn't going to help.

My understanding is the SQL standard does not provide support for
 unsigned integers, so I am planning on making all casts from unsigned
 integers to other data types explicit.

It's really the other direction that would be contentious ...

regards, tom lane



Column level privileges was:(Re: [HACKERS] Extending grant insert on tables to sequences)

2008-07-24 Thread Jaime Casanova
On Thu, Jul 24, 2008 at 12:09 PM, Tom Lane [EMAIL PROTECTED] wrote:
 Jaime Casanova [EMAIL PROTECTED] writes:
 Another issue is the interaction with the planned column-level GRANT
 feature.

 Although that is a feature we want, is a WIP one... do we stop patches
 because it can conflict with a project we don't know will be applied
 soon?

 Well, considering that that one is implementing a feature required by
 SQL spec, your feature will lose any tug-of-war ;-).

i knew the answer already but...

OK, it seems this is the latest version of the column-level privileges patch:
http://archives.postgresql.org/pgsql-patches/2008-04/msg00417.php

Is anyone working on it?

-- 
Atentamente,
Jaime Casanova
PostgreSQL support and training
Guayaquil - Ecuador
Cel. (593) 87171157



Re: [HACKERS] [RFC] Unsigned integer support.

2008-07-24 Thread Ryan Bradetich
Hello Tom,

On Thu, Jul 24, 2008 at 10:10 PM, Tom Lane [EMAIL PROTECTED] wrote:
 Ryan Bradetich [EMAIL PROTECTED] writes:
 I am looking to take advantage of PostgreSQL extensible type system
 and implement unsigned integer support.

 This has been proposed before, and foundered before on the question
 of implicit coercions.  If you're willing to make all coercions *to*
 unsigned types be explicit (or at most assignment), then I think
 it can be made to work without breaking anything.  But usually the
 folk who ask for this feature are hoping that bare integer literals
 like 42 will get interpreted as unsigned when they want them to be.
 The problem with that wish is illustrated by

 select 1500000000 + 1500000000;

 These literals might be either int4 or uint4, therefore this command
 might yield either an integer-overflow error or 3000000000::uint4.
 That's not a distinction you can fuzz over --- it's got to be one
 or the other, and backwards compatibility says it'd better be the
 first.

I am in agreement with you on this.  Since SQL does not specify
unsigned types, I was assuming only explicit and assignment casts.
I should have probably mentioned that in the RFC.  Thanks for
pointing this out.

My main goal for this type is the reduced storage space.  I am fine
with people needing to cast to the unsigned types to benefit from the
reduced storage space.

My plans for the example above would be:

  1. SELECT 1500000000 + 1500000000         -- Throws overflow error.
  2. SELECT 1500000000::uint4 + 1500000000  -- Returns 3000000000::uint4.



 I am hoping the removal of many of the implicit casts in
 PostgreSQL 8.3 will simplify this task to where this objection can be
 removed.

 The implicit casts we removed were cross-type-category cases.
 If you hope for unsigned types to be considered part of the numeric
 category, there's no guidance for you there.  In fact, the real nub
 of the problem is what type shall be initially assigned to an
 integer-looking literal, and how will you get things to behave sanely
 if that initial choice wasn't what was desired.  We still have some
 issues around the fact that 42 isn't considered a smallint.  Throwing
 in another possible meaning isn't going to help.

My understanding is the SQL standard does not provide support for
 unsigned integers, so I am planning on making all casts from unsigned
 integers to other data types explicit.

 It's really the other direction that would be contentious ...

regards, tom lane

Thanks for your comments!  I have already started to play around a bit with
the types and will hopefully have some code ready for review / feedback soon.

- Ryan
