[HACKERS] System catalog bloat removing safety

2013-09-18 Thread Sergey Konoplev
Hi all,

How safe is it to use the technique described by the link below with
system catalog tables to remove bloat?

(in a couple of words it is about moving tuples to the beginning of
the table with a special way of updating)

http://www.depesz.com/2010/10/17/reduce-bloat-of-table-without-longexclusive-locks/

Are there any caveats?

Thank you.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: simple date constructor from numeric values

2013-09-18 Thread Jeevan Chalke
Hi Pavel,

I have reviewed your patch.

Patch looks excellent and code changes match with similar constructs
elsewhere. That's great.

However, it did not apply with the git apply command, but I was able to apply
it with patch -p1 with some offsets. make and make install were smooth too.
The regression suite didn't complain, as expected.

I did my own testing and didn't find any issues. The code walk-through
was good too.

I was a little bit worried that we allow 60 for seconds, in which case we
wrap it to the next minute and set sec to 0. But this logic does not hold
for minutes: there we throw an error when min = 60.

But I don't blame this patch, as other constructs do the same. For example,
select time '15:60:20' throws an error whereas select time '15:30:60'
does not.
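The asymmetry can be sketched in a psql session (behaviour as reported in
this thread; exact messages may vary by version):

```sql
-- Accepted: a seconds value of 60 is allowed and wrapped forward.
SELECT time '15:30:60';

-- Rejected: a minutes value of 60 raises a "field value out of range" error.
SELECT time '15:60:20';
```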

However, in attached patch I have fixed the typo identified by Alvaro.

Please have a look before I submit it to the committer.

Thanks



On Sat, Jul 13, 2013 at 5:32 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 2013/7/12 Peter Eisentraut pete...@gmx.net:
  There is a small inconsistency:
 
  select time '12:30:57.123456789';
 
  gives
 
  12:30:57.123457
 
  but
 
  select make_time(12, 30, 57.123456789);
 
  gives
 
  12:30:57.123456

 fixed - see attached patch

 Regards

 Pavel

 






-- 
Jeevan B Chalke
Principal Software Engineer, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: +91 20 30589500

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb



make_date_v4.patch
Description: Binary data



Re: [HACKERS] Minmax indexes

2013-09-18 Thread Jaime Casanova
On Tue, Sep 17, 2013 at 4:03 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Thom Brown wrote:

 Thanks for testing.

 Thanks for the patch, but I seem to have immediately hit a snag:

 pgbench=# CREATE INDEX minmaxtest ON pgbench_accounts USING minmax (aid);
 PANIC:  invalid xlog record length 0

 Silly mistake I had already made in another patch.  Here's an
 incremental patch which fixes this bug.  Apply this on top of previous
 minmax-1.patch.

 I also renumbered the duplicate OID pointed out by Peter, and fixed the
 two compiler warnings reported by Jaime.

 Note you'll need to re-initdb in order to get the right catalog entries.


Hi,

Found another problem with these steps:

create table t1 (i int);
create index idx_t1_i on t1 using minmax(i);
insert into t1 select generate_series(1, 200);
ERROR:  could not read block 1 in file base/12645/16397_vm: read
only 0 of 8192 bytes
STATEMENT:  insert into t1 select generate_series(1, 200);
ERROR:  could not read block 1 in file base/12645/16397_vm: read
only 0 of 8192 bytes

After that, I keep receiving these messages (when autovacuum tries to
vacuum this table):

ERROR:  could not truncate file base/12645/16397_vm to 2 blocks:
it's only 1 blocks now
CONTEXT:  automatic vacuum of table postgres.public.t1
ERROR:  could not truncate file base/12645/16397_vm to 2 blocks:
it's only 1 blocks now
CONTEXT:  automatic vacuum of table postgres.public.t1

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] PostgreSQL 9.3 beta breaks some extensions make install

2013-09-18 Thread Marti Raudsepp
On Tue, Sep 17, 2013 at 10:37 AM, Cédric Villemain
ced...@2ndquadrant.com wrote:
 Erm, isn't apt.postgresql.org supposed to ship the *official*
 PostgreSQL versions? Given that this issue affects all distros, I
 don't see why Ubuntu/Debian need to be patched separately.

 Well, the patches are applied to the Debian packages (not only in
 apt.pgdg.org).
 The packages provided by apt.postgresql.org are based on the 'official
 packages' from Debian. (if you allow me this circle)

Oh I see, that makes sense.

 PS: Where are the sources used to build packages on
 9.3:
 http://anonscm.debian.org/loggerhead/pkg-postgresql/postgresql-9.3/trunk/changes

Thanks!

Regards,
Marti




[HACKERS] when construct new tuple for update?

2013-09-18 Thread mohsen soodkhah mohammadi
Hi,
I want to find where a new tuple is constructed in the PostgreSQL code
when the query is an UPDATE.
I found that ExecModifyTable is an upper-level function for this, but I want
to find the exact place that creates the data for one row of a table.
Can you help me?


Re: [HACKERS] [RFC] Extend namespace of valid guc names

2013-09-18 Thread Andres Freund
On 2013-09-18 11:55:24 +0530, Amit Kapila wrote:

  I think that ship has long since sailed. postgresql.conf has allowed
  foo.bar style GUCs via custom_variable_classes for a long time, and
  these days we don't even require that but allow them generally. Also,
  SET, postgres -c, and SELECT set_config() already don't have the
  restriction to one dot in the variable name.
 
 It's even explained in document that a two-part name is allowed for
 Customized Options at link:
 http://www.postgresql.org/docs/devel/static/runtime-config-custom.html

Oh I somehow missed that. I'll need to patch that as well.

 Apart from this, quite a few negative tests (setting invalid names)
 also work fine (select pg_reload_conf() shows the error on the server).

 On a side note, although it doesn't have any relation with your patch,
 I found some minor problems in setting of configuration during test of
 this patch, so I am mentioning it here. I will look into these in
 detail later:

Most of those have been discussed in another subthread. While I'm still
not sure that I agree, the consensus was that we shouldn't do anything
about that for now.

 
 Test-1
 postgres=# select set_config('a.b.1.c','c',false);
  set_config
 
  c
 (1 row)
 
 postgres=# show a.b.1.c;
 ERROR:  syntax error at or near .1
 LINE 1: show a.b.1.c;

You can show it with a.b.1.c, but those can't be set via guc-file.l.
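A quick psql sketch of the asymmetry (assuming the quoted-identifier form is
what was meant above; the error message is as quoted in the test):

```sql
SELECT set_config('a.b.1.c', 'c', false);  -- accepted

SHOW a.b.1.c;     -- ERROR:  syntax error at or near ".1"
SHOW "a.b.1.c";   -- the quoted form names the GUC as a whole
```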

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] System catalog bloat removing safety

2013-09-18 Thread Andres Freund
Hi,

On 2013-09-17 23:12:24 -0700, Sergey Konoplev wrote:
 How safe is it to use the technique described by the link below with
 system catalog tables to remove bloat?
 
 (in a couple of words it is about moving tuples to the beginning of
 the table with a special way of updating)
 
 http://www.depesz.com/2010/10/17/reduce-bloat-of-table-without-longexclusive-locks/
 
 Are there any caveats?

At least for pg_index it's not safe: we use the xmin of rows when
indcheckxmin = true.
I am not aware of other cases, but I'd be hesitant to do it.
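One way to check whether the pg_index caveat applies before attempting such a
rewrite (a sketch against the 9.x-era catalogs):

```sql
-- Indexes whose pg_index row's xmin is still significant; rewriting
-- (and thereby changing the xmin of) those rows would not be safe.
SELECT indexrelid::regclass AS unsafe_index
FROM pg_index
WHERE indcheckxmin;
```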

You have bloat because of lots of temporary tables IIRC?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

2013-09-18 Thread Andres Freund
On 2013-09-18 00:54:38 -0500, Peter Geoghegan wrote:
  At some point we might to extend that logic to more cases, but that
  should be separate discussion imo.
 
 This is essentially why I went and added a row locking component over
 your objections.

I didn't object to implementing row level locking. I said that if your
basic algorithm without row level locks is viewed as broken, it
won't be fixed by implementing row level locking.

What I meant here is just that we shouldn't implement a mode with less
waiting for now even if there might be usecases because that will open
another can of worms.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] psql sets up cancel handler very early

2013-09-18 Thread Dean Rasheed
On 14 May 2013 16:35, Peter Eisentraut pete...@gmx.net wrote:
 Sometimes, the psql startup hangs when it cannot resolve or connect to a
 host.  Intuitively, I would like to press Ctrl+C and correct the
 connection string or investigate.  But that doesn't work because Ctrl+C
 is already bound to the query cancel handler by that time.

 It seems to me that there is no point in setting up the cancel handler
 before the database connection is established.  Example patch attached.
  Comments?


That makes sense to me, and the patch appears to work as advertised.

The objections to the previous patch were that it did nothing in the
\c case, or if the server becomes unreachable mid-session. Those feel
like much less common cases, but maybe they're still worth thinking
about. However, IMO the solution to those issues is likely to be a
significantly different (and larger) patch.

Also, even if those issues do get addressed one day, the change in
this patch still seems like the right thing to do on initial startup,
which IME is the most common case, so +1 for this patch.

I'm marking it ready for committer.

Regards,
Dean




Re: [HACKERS] Performance problem in PLPgSQL

2013-09-18 Thread dlight
So if I run function 1 with a variable inside the query in one session, it
replans each time.

But if I want to tell Postgres not to do this, what should I do?

We have more than 1 run in one session with a variable inside the SQL, and
have a big performance problem in 9.2 and 9.3.

Here are my tests:
http://postgresql.1045698.n5.nabble.com/function-execute-on-v-9-2-slow-down-tp5767170p5771403.html

We want to change the pg version but can't do that because of the performance
regression in 9.2 and 9.3. :(









[HACKERS] psql should show disabled internal triggers

2013-09-18 Thread Andres Freund
Hi,

If you do ALTER TABLE ... DISABLE TRIGGER ALL; and then individually
re-enable the disabled triggers it's easy to miss internal triggers.
A \d+ tablename will not show anything out of the ordinary for that
situation since we don't show internal triggers. But foreign key checks
won't work.
So, how about displaying disabled internal triggers in psql?
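A minimal sketch of how the problem arises (hypothetical table names):

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (pid int REFERENCES parent (id));

-- Disables user triggers AND the internal RI triggers:
ALTER TABLE child DISABLE TRIGGER ALL;

-- If you now re-enable only the triggers you remember by name, the
-- internal RI trigger stays disabled, \d+ child shows nothing unusual,
-- and inserts violating the foreign key are silently accepted:
INSERT INTO child VALUES (42);
```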

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] WHERE CURRENT OF behaviour is not what's documented

2013-09-18 Thread Boszormenyi Zoltan

Hi,

I have experimented with cursors a little and found that the part about FOR SHARE/FOR 
UPDATE in


http://www.postgresql.org/docs/9.2/interactive/sql-declare.html

i.e. the sensitive cursor behaviour, is not what actually happens. BTW, 9.3 has the same 
contents for the same page.



If the cursor's query includes FOR UPDATE or FOR SHARE, then returned rows are locked at 
the time they are first fetched, in the same way as for a regular SELECT 
http://www.postgresql.org/docs/9.3/interactive/sql-select.html command with these 
options. In addition, the returned rows will be the most up-to-date versions; therefore 
these options provide the equivalent of what the SQL standard calls a sensitive cursor. 
(Specifying INSENSITIVE together with FOR UPDATE or FOR SHARE is an error.)



The statement that the most up-to-date versions of the rows are returned
doesn't reflect the reality anymore:

$ psql
psql (9.2.4)
Type help for help.

zozo= create table xxx (id serial primary key, t text);
NOTICE:  CREATE TABLE will create implicit sequence "xxx_id_seq" for serial column 
"xxx.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "xxx_pkey" for table 
"xxx"
CREATE TABLE
zozo= insert into xxx (t) values ('a'), ('b'), ('c');
INSERT 0 3
zozo= begin;
BEGIN
zozo= declare mycur cursor for select * from xxx for update;
DECLARE CURSOR
zozo= fetch all from mycur;
 id | t
+---
  1 | a
  2 | b
  3 | c
(3 rows)

zozo= move absolute 0 in mycur;
MOVE 0
zozo= fetch from mycur;
 id | t
+---
  1 | a
(1 row)

zozo= update xxx set t = t || '_x' where current of mycur;
UPDATE 1
zozo= move absolute 0 in mycur;
MOVE 0
zozo= fetch all from mycur;
 id | t
+---
  2 | b
  3 | c
(2 rows)

What happened to the most up-to-date row of id == 1?

zozo= select * from xxx where id = 1;
 id |  t
+-
  1 | a_x
(1 row)

The same behaviour is experienced under 9.2.4 and 9.3.0.

As a side note, I couldn't test 8.4.17, 9.0.13 and 9.1.9 under Fedora 19,
because initdb fails for all 3 versions. I am bitten by the same as what's
described here: 
http://www.postgresql.org/message-id/14242.1365200...@sss.pgh.pa.us

Is the above cursor behaviour the accepted/expected one?

Since SCROLL (with or without INSENSITIVE) cannot be specified together
with FOR UPDATE/FOR SHARE, I know the MOVE ABSOLUTE 0 is on the
verge of being invalid in this case.

But in any case, either the documentation should say that the UPDATEd
rows will be missing from a reset executor run, or MOVE ABSOLUTE
with a value smaller than portal->portalPos should also be refused,
just like MOVE BACKWARD.

As another side note, the comment at portal->portalPos mentions it can overflow,
so I suggest using int64 explicitly, so it's ensured that 32-bit systems
get the same overflow behaviour as 64-bit ones. Or (the horror, the horror!) 
int128_t.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [HACKERS] WHERE CURRENT OF behaviour is not what's documented

2013-09-18 Thread Andres Freund
On 2013-09-18 14:23:19 +0200, Boszormenyi Zoltan wrote:
 Hi,
 
 I have experimented with cursors a little and found that the part about FOR
 SHARE/FOR UPDATE in
 
 http://www.postgresql.org/docs/9.2/interactive/sql-declare.html
 
 i.e. the sensitive cursor is not what actually happens. BTW, 9.3 has the
 same contents for the same page.
 
 
 If the cursor's query includes FOR UPDATE or FOR SHARE, then returned rows
 are locked at the time they are first fetched, in the same way as for a
 regular SELECT
 http://www.postgresql.org/docs/9.3/interactive/sql-select.html command
 with these options. In addition, the returned rows will be the most
 up-to-date versions; therefore these options provide the equivalent of what
 the SQL standard calls a sensitive cursor. (Specifying INSENSITIVE
 together with FOR UPDATE or FOR SHARE is an error.)
 
 
 The statement that the most up-to-date versions of the rows are returned
 doesn't reflect the reality anymore:

I think it's not referring to the behaviour inside a single session but
across multiple sessions. I.e. when we follow the ctid chain of a tuple
updated in a concurrent session.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Where to load modules from?

2013-09-18 Thread Robert Haas
On Sun, Sep 15, 2013 at 10:51 AM, Peter Eisentraut pete...@gmx.net wrote:
 On Sun, 2013-09-15 at 16:09 +0200, Dimitri Fontaine wrote:
 Peter Eisentraut pete...@gmx.net writes:
  It shouldn't be in the commit fest if it has no patch.

 What should I do if my goal is to get community consensus on the best
 way to solve a problem, and want to start the discussion with some
 proposals?

 Post it to the pgsql-hackers list.

The idea of using the CommitFest process to request design review was
floated at one of the last couple of developer meetings in Ottawa.
Personally, I'm for it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] psql should show disabled internal triggers

2013-09-18 Thread Dimitri Fontaine
Andres Freund and...@2ndquadrant.com writes:
 So, how about displaying disabled internal triggers in psql?

+1

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Where to load modules from?

2013-09-18 Thread Robert Haas
On Sat, Sep 14, 2013 at 4:15 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 We can attack the problem in several ways:

   - have an initdb switch to tweak the library path per cluster,

I see no advantage to making this impossible to change after initdb time.

   - have a superuser-only GUC to tweak the library path,

I could live with a GUC.  Like Andres, I think it should be PGC_POSTMASTER.

   - consider on-disk extension as templates and move their module files
 somewhere private in $PGDATA and load the code from there

I think this will be horrid mess of security vulnerabilities and upgrade woes.

Here's another idea.  At initdb time, create an empty directory called
pg_you_can_load_stuff_from_here (pick a better name) inside
$PGDATA.  Allow it to be replaced with a symlink.  This would be
similar to what we do today with pg_xlog.  In fact, you can imagine an
equivalent of initdb -X that does something precisely analogous.  This
feels a bit more natural to me than a GUC.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Typo fix in spgtextproc.c

2013-09-18 Thread Robert Haas
On Tue, Sep 17, 2013 at 5:16 AM, Etsuro Fujita
fujita.ets...@lab.ntt.co.jp wrote:
 I ran into a typo.  Attached is a patch.

Committed, thanks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Docs fix in advanced.sgml

2013-09-18 Thread Robert Haas
On Tue, Sep 17, 2013 at 5:45 AM, Etsuro Fujita
fujita.ets...@lab.ntt.co.jp wrote:
 I think the document in advanced.sgml should be corrected, though I might
 misunderstand the rules of usage.  Attached is a patch.

I think you're right, because the existing text makes it sound like
the operator is =, but the query says <>.

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] psql should show disabled internal triggers

2013-09-18 Thread Bernd Helmle



--On 18. September 2013 13:52:29 +0200 Andres Freund 
and...@2ndquadrant.com wrote:



If you do ALTER TABLE ... DISABLE TRIGGER ALL; and then individually
re-enable the disabled triggers it's easy to miss internal triggers.
A \d+ tablename will not show anything out of the ordinary for that
situation since we don't show internal triggers. But foreign key checks
won't work.
So, how about displaying disabled internal triggers in psql?


I had exactly the same concerns this morning while starting to look at the 
ENABLE/DISABLE constraint patch. However, I wouldn't display them as 
triggers, but maybe more generally as disabled constraints or such.


--
Thanks

Bernd




Re: [HACKERS] UTF8 national character data type support WIP patch and list of open issues.

2013-09-18 Thread Robert Haas
On Mon, Sep 16, 2013 at 8:49 AM, MauMau maumau...@gmail.com wrote:
 2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
 contain Unicode data.
...
 3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
 Fixed-width encoding may allow faster string manipulation as described in
 Oracle's manual.  But I'm not sure about this, because UTF-16 is not a real
 fixed-width encoding due to supplementary characters.

It seems to me that these two points here are the real core of your
proposal.  The rest is just syntactic sugar.

Let me start with the second one: I don't think there's likely to be
any benefit in using UTF-16 as the internal encoding.  In fact, I
think it's likely to make things quite a bit more complicated, because
we have a lot of code that assumes that server encodings have certain
properties that UTF-16 doesn't - specifically, that any byte with the
high-bit clear represents the corresponding ASCII character.

As to the first one, if we're going to go to the (substantial) trouble
of building infrastructure to allow a database to store data in
multiple encodings, why limit it to storing UTF-8 in non-UTF-8
databases?  What about storing SHIFT-JIS in UTF-8 databases, or
Windows-yourfavoriteM$codepagehere in UTF-8 databases, or any other
combination you might care to name?

Whether we go that way or not, I think storing data in one encoding in
a database with a different encoding is going to be pretty tricky and
require far-reaching changes.  You haven't mentioned any of those
issues or discussed how you would solve them.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] psql should show disabled internal triggers

2013-09-18 Thread Andres Freund
On 2013-09-18 15:15:55 +0200, Bernd Helmle wrote:
 
 
 --On 18. September 2013 13:52:29 +0200 Andres Freund
 and...@2ndquadrant.com wrote:
 
 If you do ALTER TABLE ... DISABLE TRIGGER ALL; and then individually
 re-enable the disabled triggers it's easy to miss internal triggers.
 A \d+ tablename will not show anything out of the ordinary for that
 situation since we don't show internal triggers. But foreign key checks
 won't work.
 So, how about displaying disabled internal triggers in psql?
 
 I had exactly the same concerns this morning while starting to look at the
 ENABLE/DISABLE constraint patch. However, I wouldn't display them as
 triggers, but maybe more generally as disabled constraints or such.

Well, that will lead the user in the wrong direction, won't it? They
haven't disabled the constraint but the trigger. Especially as we
already have NOT VALID and might grow DISABLED for constraint
themselves...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Freezing without write I/O

2013-09-18 Thread Andres Freund
On 2013-09-16 16:59:28 +0300, Heikki Linnakangas wrote:
 Here's a rebased version of the patch, including the above-mentioned fixes.
 Nothing else new.

* We need some higherlevel description of the algorithm somewhere in the
  source. I don't think I've understood the concept from the patch alone
  without having read the thread previously.
* why do we need to do the PageUpdateNeedsFreezing() dance in
  heap_page_prune? No xids should change during it.
* Why can we do a GetOldestXmin(allDbs = false) in
  BeginXidLSNRangeSwitch()?
* Is there any concrete reasoning behind the current values for
  XID_LSN_RANGE_INTERVAL and NUM_XID_LSN_RANGES or just gut feeling?
* the lsn ranges file can possibly become bigger than 512bytes (the size
  we assume to be written atomically) and you write it inplace. If we
  fail halfway through writing, we seem to be able to recover by using
  the pageMatureLSN from the last checkpoint, but it seems better to
  do the fsync(),rename(),fsync() dance either way.
* Should we preemptively freeze tuples on a page in lazy_scan_heap if we
  already have dirtied the page? That would make future modifications
  cheaper.
* lazy_scan_heap now blocks acquiring a cleanup lock on every buffer
  that contains dead tuples. Shouldn't we use some kind of cutoff xid
  there? That might block progress too heavily. Also the comment above
  it still refers to the old logic.
* There's no way to force a full table vacuum anymore, that seems
  problematic to me.
* I wonder if CheckPointVarsup() doesn't need to update
  minRecoveryPoint. StartupVarsup() should be ok, because we should only
  read one from the future during a basebackup?
* xidlsnranges_recently[_dirtied] are not obvious on a first glance. Why
  can't we just reset dirty before the WriteXidLSNRangesFile() call?
  There's only one process doing the writeout. Just because the
  checkpointing process could be killed?
* I think we should either not require consuming an multixactid or use a
  function that doesn't need MultiXactIdSetOldestMember(). If the
  transaction doing so lives for long it will unnecessarily prevent
  truncation of mxacts.
* switchFinishXmin and nextSwitchXid should probably be either volatile
  or have a compiler barrier between accessing shared memory and
  checking them. The compiler very well could optimize them away and
  access shmem all the time which could lead to weird results.
* I wonder whether the fact that we're doing the range switches after
  acquiring an xid could be problematic if we're preventing xid
  allocation due to the checks earlier in that function?
* I think heap_lock_tuple() needs to unset all-visible, otherwise we
  won't vacuum that page again which can lead to problems since we
  don't do full-table vacuums again?

So, I think that's enough for a first look. Will think about general
issues a bit more.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Where to load modules from?

2013-09-18 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
   - consider on-disk extension as templates and move their module files
 somewhere private in $PGDATA and load the code from there

 I think this will be horrid mess of security vulnerabilities and upgrade woes.

I think it's a solution to that horrid mess. Care to expand on your thoughts?

 Here's another idea.  At initdb time, create an empty directory called
 called pg_you_can_load_stuff_from_here (pick a better name) inside
 $PGDATA.  Allow it to be replaced with a symlink.  This would be
 similar to what we do today with pg_xlog.  In fact, you can imagine an
 equivalent of initdb -X that does something precisely analogous.  This
 feels a bit more natural to me than a GUC.

I like that too.

Now, the thing I've heard the most about our extension infrastructure is
how to make it support loading different versions of the same module in
different databases in the same cluster. The second most common question
on that topic might be why extensions are managed per-database except for
their binary module parts.

For that reason I proposed have the same mechanism as what you say here,
but with a per-database directory layout.

Thinking about it more, I think we would actually need both: some
extensions need shared_preload_libraries and cluster-wide settings is
all we can offer here. For local_preload_libraries compatible ones, then
per-database setting would be best.

Maybe adding a system view listing all the currently loaded modules,
which extension made the system load them (if any) and where it was
loaded from, superuser only, would then be appropriate.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] WHERE CURRENT OF behaviour is not what's documented

2013-09-18 Thread Boszormenyi Zoltan

2013-09-18 14:27 keltezéssel, Andres Freund írta:

On 2013-09-18 14:23:19 +0200, Boszormenyi Zoltan wrote:

Hi,

I have experimented with cursors a little and found that the part about FOR
SHARE/FOR UPDATE in

http://www.postgresql.org/docs/9.2/interactive/sql-declare.html

i.e. the sensitive cursor is not what actually happens. BTW, 9.3 has the
same contents for the same page.


If the cursor's query includes FOR UPDATE or FOR SHARE, then returned rows
are locked at the time they are first fetched, in the same way as for a
regular SELECT
http://www.postgresql.org/docs/9.3/interactive/sql-select.html command
with these options. In addition, the returned rows will be the most
up-to-date versions; therefore these options provide the equivalent of what
the SQL standard calls a sensitive cursor. (Specifying INSENSITIVE
together with FOR UPDATE or FOR SHARE is an error.)


The statement that the most up-to-date versions of the rows are returned
doesn't reflect the reality anymore:

I think it's not referring to the behaviour inside a single session but
across multiple sessions. I.e. when we follow the ctid chain of a tuple
updated in a concurrent session.


But the documentation doesn't spell it out. Perhaps a little too terse.

Quoting the SQL2011 draft, 4.33.2 Operations on and using cursors, page 112:

If a cursor is open, and the SQL-transaction in which the cursor was opened makes a 
significant change to
SQL-data, then whether that change is visible through that cursor before it is closed is 
determined as follows:

— If the cursor is insensitive, then significant changes are not visible.
— If the cursor is sensitive, then significant changes are visible.
— If the cursor is asensitive, then the visibility of significant changes is 
implementation-dependent.


SQL2003 has the same wording in 4.32.2 Operations on and using cursors
on page 96.

So, a SENSITIVE cursor shows significant changes (I guess a modified
row counts as one) and they should be shown in the _same_ transaction
where the cursor was opened. If anything, the PostgreSQL cursor
implementation for FOR SHARE/FOR UPDATE is asensitive.

Also, 14.10 update statement: positioned, paragraph 14) in General Rules
in SQL2003 (page 848) or 15.6 Effect of a positioned update, paragraph 16)
in SQL2011 draft (page 996) says the new row replaces the old row
*in the cursor*, not just in the table. Quote:


Let R1 be the candidate new row and let R be the current row of CR.
...
The current row R of CR is replaced by R1.
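The sensitivity classes quoted from the standard can be sketched with a toy snapshot-vs-live-view analogy (plain Python for illustration only; nothing here is PostgreSQL's cursor implementation):

```python
# Toy illustration of the SQL-standard sensitivity classes quoted above:
# an INSENSITIVE cursor snapshots the data when it is opened, while a
# SENSITIVE cursor sees "significant changes" made after OPEN.
rows = [{"id": 1, "val": "a"}, {"id": 2, "val": "b"}]

insensitive = [dict(r) for r in rows]   # snapshot at OPEN time
sensitive = rows                        # live view of the data

rows[1]["val"] = "b2"                   # a significant change after OPEN

assert [r["val"] for r in insensitive] == ["a", "b"]   # change invisible
assert [r["val"] for r in sensitive] == ["a", "b2"]    # change visible
```

The point of contention above is which of these two behaviours FOR UPDATE cursors actually deliver within the opening transaction.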


Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/





Re: [HACKERS] Fix picksplit with nan values

2013-09-18 Thread Alexander Korotkov
On Tue, Sep 17, 2013 at 5:04 PM, Alexander Korotkov aekorot...@gmail.comwrote:

 On Mon, Sep 16, 2013 at 4:13 PM, Andrew Gierth 
 and...@tao11.riddles.org.uk wrote:

  Alexander == Alexander Korotkov aekorot...@gmail.com writes:

  Alexander 2) NaN coordinates should be processed in GiST index scan
  Alexander like in sequential scan.

 postgres=# select * from pts order by a <-> '(0,0)' limit 10;
 a
 --
  (1,1)
  (7,nan)
  (9,nan)
  (11,nan)
  (4,nan)
  (nan,6)
  (2,1)
  (1,2)
  (2,2)
  (3,1)
 (10 rows)

 postgres=# set enable_indexscan=false;
 SET

 postgres=# select * from pts order by a <-> '(0,0)' limit 10;
a
 ---
  (1,1)
  (2,1)
  (1,2)
  (2,2)
  (3,1)
  (1,3)
  (3,2)
  (2,3)
  (4,1)
  (1,4)
 (10 rows)

 this data set was created by:
 insert into pts
   select point(i,j)
 from (select generate_series(1,100)::float8 union all select 'nan')
 s1(i),
  (select generate_series(1,100)::float8 union all select 'nan')
 s2(j)
order by random();


 Thanks, Andrew! Good spot.
 I didn't examine the order by operators for their behaviour with NaNs.
 I think this time the problem is in GiST itself rather than in the
 opclass. I'm going to fix it in a separate patch.


The attached patch fixes knn GiST behaviour with NaNs. It makes the
RB-tree comparison function in GiST work like the float8 btree opclass
comparison function.
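As a sketch of the ordering rule being matched (a Python rendering of the float8 btree behaviour, not the actual C patch): NaN compares equal to NaN and greater than every ordinary value, so NaNs group at the end of an ascending sort instead of scrambling it.

```python
import functools
import math

def float8_btree_cmp(a, b):
    # Mirror PostgreSQL's float8 btree ordering: NaN == NaN, and NaN is
    # treated as larger than any non-NaN value.
    a_nan, b_nan = math.isnan(a), math.isnan(b)
    if a_nan and b_nan:
        return 0
    if a_nan:
        return 1
    if b_nan:
        return -1
    return (a > b) - (a < b)

vals = [7.0, float("nan"), 1.0, float("nan"), 3.0]
ordered = sorted(vals, key=functools.cmp_to_key(float8_btree_cmp))
# ordinary values first in ascending order, NaNs grouped at the end
```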

--
With best regards,
Alexander Korotkov.


fix-knn-gist-nan.patch
Description: Binary data



[HACKERS] REVIEW: Allow formatting in log_line_prefix

2013-09-18 Thread Albe Laurenz
This is a review for patch
caaphdvpagtypzb2kwa0mmtksayg9+vashyjmjfatngxr1ad...@mail.gmail.com

The patch is readable, applies fine and builds without warnings.

It contains sufficient documentation.

It works as it should, no crashes or errors.

It is well written, in fact it improves the readability of
the log_line_prefix function in backend/utils/error/elog.c.

I think that this is a useful feature as it can help to make
stderr logging more readable for humans.

I have a few gripes with the English:

- In the documentation patch:

  + numeric literal after the % and before the option. A negative
  + padding will cause the status information to be padded on the
  + right with spaces to give it a minimum width. Whereas a positive
  + padding will pad on the left. Padding can be useful keep log
  + files more human readable.

  I think that there should be a comma, not a period, before whereas.

- The description for the function process_log_prefix_padding
  should probably mention that it returns NULL if it encounters
  bad syntax.

- This comment:

  +   /* Format is invalid if it ends with the padding number */

  should begin with lower case.

- In this comment:

  +   /*
  +* Process any formatting which may exist after the '%'.
  +* Note that this moves p passed the padding number
  +* if it exists.
  +*/

It should be past, not passed.
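The padding rule quoted in the first gripe (negative pads on the right, positive on the left) behaves like printf-style width formatting; a quick sketch, with a hypothetical pad() helper standing in for the patched elog.c logic:

```python
def pad(value, padding):
    # Hypothetical stand-in for the patch's padding behaviour: a
    # negative width left-justifies (pads on the right), a positive
    # width right-justifies (pads on the left), like printf "%-10s".
    if padding < 0:
        return value.ljust(-padding)
    return value.rjust(padding)

assert pad("alice", -10) == "alice     "   # padded on the right
assert pad("alice", 10) == "     alice"    # padded on the left
assert "%-10s|" % "alice" == pad("alice", -10) + "|"
```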

If these details get fixed, I'll mark the patch ready for committer.

Yours,
Laurenz Albe



[HACKERS] where we are with dbuckets calculation?

2013-09-18 Thread Pavel Stehule
Hello

we found a strangely slow hash join operation - and it looks like this
behaviour is related to underestimation. I found Simon's proposal
http://www.postgresql.org/message-id/ca+u5nmj21sxchk6sg2oq7t0ztuaoebfhuprczfbbmmfezam...@mail.gmail.com

Is there any progress?

Regards

Pavel

  ->  Hash Join  (cost=3827235.70..3874397.28 rows=2834 width=16) (actual
time=115596.500..724273.041 rows=1892595 loops=1)
        Hash Cond: (f_zendesktags_aaacrw6b1kat37e.ticket_id_id =
f_zendesktickets_aaarrsq5gxavz9o.id)
        ->  HashAggregate  (cost=2950811.92..2967206.09 rows=1639417
width=4) (actual time=100035.413..114817.301 rows=12301177 loops=1)
              ->  Seq Scan on f_zendesktags_aaacrw6b1kat37e
(cost=0.00..2319375.28 rows=126287328 width=4) (actual
time=0.012..34951.543 rows=126287769 loops=1)

the same join with a better estimation is about 20x faster

  ->  Hash Join  (cost=3734043.50..3787700.73 rows=242545 width=12) (actual
time=102901.756..137782.114 rows=1892595 loops=1)
        Hash Cond: (f_zendesktags_aaacrw6b1kat37e.ticket_id_id =
f_zendesktickets_aaarrsq5gxavz9o.id)
        ->  HashAggregate  (cost=2950811.92..2967206.09 rows=1639417
width=4) (actual time=99323.801..114452.277 rows=12301177 loops=1)
              ->  Seq Scan on f_zendesktags_aaacrw6b1kat37e
(cost=0.00..2319375.28 rows=126287328 width=4) (actual
time=0.033..34878.214 rows=126287769 loops=1)
        ->  Hash  (cost=760482.69..760482.69 rows=1819911 width=12)
(actual time=3576.360..3576.360 rows=1892595 loops=1)
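Why the underestimate hurts: the hash table's bucket count is sized from the planner's row estimate, so a too-low estimate leaves long per-bucket chains that every probe has to walk. A rough sketch (illustrative Python, not the executor's code):

```python
def build(rows, nbuckets):
    # Build side of a hash join: nbuckets is chosen from the planner's
    # row estimate, not from the actual row count.
    table = [[] for _ in range(nbuckets)]
    for r in rows:
        table[hash(r) % nbuckets].append(r)
    return table

def avg_chain(table):
    # Average tuples a probe must scan in a non-empty bucket.
    nonempty = [b for b in table if b]
    return sum(len(b) for b in nonempty) / len(nonempty)

rows = list(range(100_000))
well_sized = build(rows, 131_072)   # estimate close to reality
undersized = build(rows, 1_024)     # ~100x underestimate

assert avg_chain(well_sized) == 1.0   # one tuple per used bucket
assert avg_chain(undersized) > 90     # every probe walks a long chain
```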


Re: [HACKERS] information schema parameter_default implementation

2013-09-18 Thread Amit Khandekar
I have assigned myself as reviewer for this one.

The logic of pg_get_function_arg_default() looks good. I will reply with
any code-level comments later, but just a quick question before that:

What's the reason behind calling pg_has_role(proowner, 'USAGE') before
calling pg_get_function_arg_default() ? :

 CASE WHEN pg_has_role(proowner, 'USAGE')
  THEN pg_get_function_arg_default(p_oid, (ss.x).n)
  ELSE NULL END

There is already a pg_has_role() filter added while fetching the pg_proc
entries   :
  FROM pg_namespace n, pg_proc p
  WHERE n.oid = p.pronamespace
AND (pg_has_role(p.proowner, 'USAGE') OR
 has_function_privilege(p.oid, 'EXECUTE'))) AS ss

So the proc oid  in pg_get_function_arg_default(p_oid, (ss.x).n) belongs to
a procedure for which the current user has USAGE privilege.


On 15 September 2013 01:35, Peter Eisentraut pete...@gmx.net wrote:

 Here is an updated patch which fixes the bug you have pointed out.

 On Thu, 2013-01-31 at 18:59 +0500, Ali Dar wrote:

  I checked our your patch. There seems to be an issue when we have OUT
  parameters after the DEFAULT values.

 Fixed.

  Some other minor observations:
  1) Some variables are not lined in pg_get_function_arg_default().

 Are you referring to indentation issues?  I think the indentation is
 good, but pgindent will fix it anyway.

  2) I found the following check a bit confusing, maybe you can make it
  better
  if (!argmodes || argmodes[i] == PROARGMODE_IN || argmodes[i] ==
  PROARGMODE_INOUT || argmodes[i] == PROARGMODE_VARIADIC)

 Factored that out into a separate helper function.
 
  2) inputargn can be assigned in declaration.

 I'd prefer to initialize it close to where it is used.

  3) Function level comment for pg_get_function_arg_default() is
  missing.

 I think the purpose of the function is clear.

  4) You can also add comments inside the function, for example the
  comment for the line:
  nth = inputargn - 1 - (proc->pronargs - proc->pronargdefaults);

 Suggestion?

  5) I think the line added in the
  documentation(informational_schema.sgml) is very long. Consider
  revising. Maybe change from:
 
  The default expression of the parameter, or null if none or if the
  function is not owned by a currently enabled role. TO
 
  The default expression of the parameter, or null if none was
  specified. It will also be null if the function is not owned by a
  currently enabled role.
 
  I don't know what do you exactly mean by: function is not owned by a
  currently enabled role?

 I think this style is used throughout the documentation of the
 information schema.  We need to keep the descriptions reasonably
 compact, but I'm willing to entertain other opinions.








Re: [HACKERS] [PATCH] Add an ldapoption to disable chasing LDAP referrals

2013-09-18 Thread Peter Eisentraut
On 7/8/13 9:33 PM, James Sewell wrote:
 New patch attached. I've moved from using a boolean to an enum trivalue.

When ldapreferrals is set to something other than 0 or 1 exactly, it
just ignores the option.  That's not good, I think.  It should probably
be an error.
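The stricter behaviour being requested is just fail-fast option parsing; a sketch with a hypothetical parse_ldapreferrals() helper (the real code is C in the auth option handling):

```python
def parse_ldapreferrals(value):
    # Accept exactly "0" or "1"; anything else is an error rather than
    # being silently ignored, as the review asks for.
    valid = {"0": False, "1": True}
    if value not in valid:
        raise ValueError(f'invalid value for ldapreferrals: "{value}"')
    return valid[value]

assert parse_ldapreferrals("1") is True
assert parse_ldapreferrals("0") is False
try:
    parse_ldapreferrals("yes")
    raise AssertionError("should have been rejected")
except ValueError:
    pass
```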





Re: [HACKERS] psql should show disabled internal triggers

2013-09-18 Thread Bernd Helmle



--On 18. September 2013 15:19:27 +0200 Andres Freund 
and...@2ndquadrant.com wrote:



Well, that will lead the user in the wrong direction, won't it? They
haven't disabled the constraint but the trigger. Especially as we
already have NOT VALID and might grow DISABLED for constraint
themselves...



Valid point. But it is also nice to know in detail which constraints 
stopped working. OK, it is documented which constraints are affected, and 
maybe I'm lost in too much detail atm, but I find people getting 
confused about this internal trigger thing sometimes. Won't they get 
confused about a suddenly appearing RI_ConstraintTrigger_a_54015, too?


--
Thanks

Bernd




Re: [HACKERS] Performance problem in PLPgSQL

2013-09-18 Thread Pavel Stehule
Hello

can you try this patch (commit)
https://github.com/postgres/postgres/commit/a5f11e24a4d1afb213c780812a3df14c04d7f845#diff-fc73a24ee7d0692c2a0c639870223d70?

Regards

Pavel


2013/9/18 dlight avinf...@gmail.com

  So if I run function 1 with a variable inside the query in one session,
  it replans each time.

  But if I want to tell postgres not to do this, what should I do?

  We have more than 1 run in one session with a variable inside the SQL,
  and we have a big performance problem in 9.2 and 9.3.

 Here is my tests.

 http://postgresql.1045698.n5.nabble.com/function-execute-on-v-9-2-slow-down-tp5767170p5771403.html

  We want to change the pg version but can't do that because of the
  performance regression in 9.2 and 9.3. :(





 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/Performance-problem-in-PLPgSQL-tp5764796p5771405.html
 Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.





Re: [HACKERS] Assertions in PL/PgSQL

2013-09-18 Thread Pavel Stehule
Hello


2013/9/18 Marko Tiikkaja ma...@joh.to

 On 2013-09-16 21:24, Pavel Stehule wrote:

 2. a failed assert should to raise a exception, that should not be handled
 by any exception handler - similar to ERRCODE_QUERY_CANCELED - see
 exception_matches_conditions.


 I'm not sure what I think about that idea.  I see decent arguments for it
 working either way.  Care to unravel yours a bit more?


yes

so

CREATE OR REPLACE FUNCTION foo(a int) RETURNS int AS $$
BEGIN
  ASSERT a BETWEEN 1 AND 100;
  RETURN a;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION proc()
RETURNS int AS $$
BEGIN
  do some complex logic that exec a foo function

EXCEPTION WHEN OTHERS THEN
  -- log some errors
  INSERT INTO log VALUES(...)
END;
$$ LANGUAGE plpgsql;

In this code an assert failure can be lost in the app log, or can be
knowingly handled and ignored - which is wrong and should not be allowed.

When I wrote slightly more complex procedures, I had to use an EXCEPTION
WHEN OTHERS clause - because I did not want to lose the transaction. It
worked, but finding syntax errors was significantly harder - so based on
this experience I think some errors can be handled (those related to
database usage) and others not - like syntax errors in PL/pgSQL or
possible assertions (although we can handle syntax errors, I don't think
that is practical). It significantly increases the work necessary for
error identification.
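The containment behaviour Pavel proposes can be modelled in Python by putting the assertion failure outside the class that generic handlers catch (much as ERRCODE_QUERY_CANCELED is special-cased today); a sketch with made-up names:

```python
class AssertFailure(BaseException):
    # Deliberately derived from BaseException, not Exception, so a
    # broad handler -- the analogue of WHEN OTHERS -- cannot swallow it.
    pass

def foo(a):
    if not (1 <= a <= 100):
        raise AssertFailure(f"assertion failed: a = {a}")
    return a

def proc():
    try:
        return foo(1000)
    except Exception:           # "WHEN OTHERS THEN log and continue"
        return "logged and ignored"

escaped = False
try:
    proc()
except AssertFailure:
    escaped = True              # the failure passed through the handler

assert escaped is True
assert foo(50) == 50
```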

Regards

Pavel


 Regards,
 Marko Tiikkaja



Re: [HACKERS] record identical operator

2013-09-18 Thread Robert Haas
On Tue, Sep 17, 2013 at 8:23 AM, Kevin Grittner kgri...@ymail.com wrote:
 To have clean semantics, I think the operator should mean that the
 stored format of the row is the same.  Regarding the array null
 bitmap example, I think it would be truly weird if the operator
 said that the stored format was the same, but this happened:

 test=# select pg_column_size(ARRAY[1,2,3]);
  pg_column_size
 
  36
 (1 row)

 test=# select pg_column_size((ARRAY[1,2,3,NULL])::int4[3]);
  pg_column_size
 
  44
 (1 row)

 They have the same stored format, but a different number of
 bytes?!?

Hmm.  For most of this thread, I was leaning toward the view that
comparing the binary representations was the wrong concept, and that
we actually needed to have type-specific operators that understand the
semantics of the data type.

But I think this example convinces me otherwise.  What we really want
to do here is test whether two values are the same, and if you can
feed two values that are supposedly the same to some function and get
two different answers, well then they're not really the same, are
they?

Now, I grant that the array case is pretty weird.  An array with an
all-zeroes null bitmap is basically semantically identical to one with
no null bitmap at all.  But there are other such cases as well.  You
can have two floats that print the same way except when
extra_float_digits=3, for example, and I think that's probably a
difference that we *wouldn't* want to paper over.  You can have a
long-form numeric that represents a value that could have been
represented as a short-form numeric, which is similar to the array
case.  There are probably other examples as well.  But in each of
those cases, the point is that there *is* some operation which will
distinguish between the two supposedly-identical values, and therefore
they are not identical for all purposes.  Therefore, I see no harm in
having an operator that tests for
are-these-values-identical-for-all-purposes.  If that's useful for
RMVC, then there may be other legitimate uses for it as well.

And once we decide that's OK, I think we ought to document it.  Sure,
it's a little confusing, but we can explain it, I think.  It's a good
opportunity to point out to people that, most of the time, they really
want something else, like the equality operator for the default btree
opclass.
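A Python analogy for "equal but not identical for all purposes", using Decimal in place of PostgreSQL's numeric (the short- vs long-form situation Robert mentions):

```python
from decimal import Decimal

a, b = Decimal("1.5"), Decimal("1.50")

# Equal under the type's own equality operator...
assert a == b
# ...but an operation exists that tells them apart, so they are not
# identical for all purposes -- like numeric's two storage forms or an
# array with a redundant null bitmap.
assert str(a) != str(b)
assert a.as_tuple() != b.as_tuple()
```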

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Where to load modules from?

2013-09-18 Thread Robert Haas
On Wed, Sep 18, 2013 at 9:26 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Robert Haas robertmh...@gmail.com writes:
   - consider on-disk extension as templates and move their module files
 somewhere private in $PGDATA and load the code from there

 I think this will be horrid mess of security vulnerabilities and upgrade 
 woes.

 I think it's a solution to that horrid mess. Care to expand on your thoughts?

I think that would largely be rehashing previous discussions, in which
it's already been established that we don't see eye to eye on this
issue.  But briefly, I think that replacing shared libraries ought to
be the job of the OS packaging system.  System administrators are
already familiar with the notion that they need to run 'yum update' or
whatever the comparable command is from time to time, or they're going
to be vulnerable.  But such packaging systems aren't going to update
copies of shared libraries we've placed inside $PGDATA.

And, I can't see what advantage it offers.  If you want to be able to
have more than one version of the extension installed, you can solve
that at the OS level by choosing distinct names for their shared
libraries.  If you want to be able to download and install C code into
your PostgreSQL database via a libpq connection, someone can write an
extension to do that today, without any core changes at all.  If it's
nifty enough, it might even be worth shipping in contrib as an example
of how you can make deployment really, really easy at some price in
security.  If your concern is that someone might modify the
OS-installed versions of those extensions in a way that renders them
incompatible with your running PostgreSQL binaries, well, they
shouldn't be doing that *anyway*.  If your OS distributor is in the
habit of shipping new versions of shared libraries that break backward
compatibility in minor OS releases, you need a new OS distributor.

If you upgrade your PostgreSQL distribution, and it breaks ABI
compatibility, then using the shared libraries baked into the old
cluster is exactly the wrong thing to do.  You need to use the NEW
shared libraries compiled against the new ABI, or your code may do
anything from failing to load to crashing the server.  You had better hope
that both the old and new libraries have the same user-facing API, or
there's going to be trouble, but we've taken care to do that with the
things we ship in contrib and there's no reason to assume other module
authors can't do likewise.

 Here's another idea.  At initdb time, create an empty directory called
 called pg_you_can_load_stuff_from_here (pick a better name) inside
 $PGDATA.  Allow it to be replaced with a symlink.  This would be
 similar to what we do today with pg_xlog.  In fact, you can imagine an
 equivalent of initdb -X that does something precisely analogous.  This
 feels a bit more natural to me than a GUC.

 I like that too.

 Now the thing I've heard the most about our extension infrastructure is
 how to make it support loading different versions of the same module in
 different databases in the same cluster. The second one on that topic
 might be why are extensions managed per-database except for their binary
 module parts?

The basic problem here is that extensions are allowed to do anything,
including cluster-wide stuff like allocate and use chunks of shared
memory.  If you try to load and use two different and incompatible
versions of pg_stat_statements in two different databases, there will
not be a happy ending.  Also, if you load a version of some library in
shared_preload_libraries, you won't subsequently be able to decide to
load some other version in a backend bound to some particular
database.  Maybe these problems are solvable, but as things stand
today I think that trying to use identically-named .so files in
different databases at the same time should be regarded as dangerously
unsupported.  In any event, any kind of infrastructure intended to
support such use cases  is clearly a different project from simply
allowing modules to be loaded from a different location.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] relscan_details.h

2013-09-18 Thread Robert Haas
On Tue, Sep 17, 2013 at 4:54 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Now, htup_details.h was a bit different than the case at hand because
 there's evidently lots of code that want to deal with the guts of
 tuples, but for scans you mainly want to start one, iterate and finish,
 but don't care much about the innards.  So the cleanup work required is
 going to be orders of magnitude smaller.

I think the point is that we failed to predict the knock-on
consequences of that refactoring accurately.  If we make enough such
changes, we will probably face such issues again.  Sure, we can hope
that our ability to predict which changes will be disruptive will
improve with practice, but I doubt it's ever going to be perfect.

I certainly don't have the only vote here.  I'm just telling you that
from my point of view the last round of changes was a noticeable
headache and I don't really feel that I'm better off because of it, so
I'm in not in favor of continuing to make such changes on a regular
basis.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Performance problem in PLPgSQL

2013-09-18 Thread Andrew Gierth
 dlight == dlight  avinf...@gmail.com writes:

 dlight So if I run function 1 with a variable inside the query in
 dlight one session, it replans each time.

 dlight But if I want to tell postgres not to do this, what should
 dlight I do?

 dlight We have more than 1 run in one session with a variable
 dlight inside the SQL, and a big performance problem in 9.2 and 9.3.

 dlight Here is my tests.
 dlight 
http://postgresql.1045698.n5.nabble.com/function-execute-on-v-9-2-slow-down-tp5767170p5771403.html

One interesting question is: why on earth is that query taking ~500ms
just to plan?

(Second question is, what are the plans and costs for the generic and
custom plans being generated, but that's harder to determine)
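For background on the replanning behaviour under discussion: since 9.2, a prepared or PL/pgSQL query is custom-planned for its first few executions, after which a generic plan is kept only if it doesn't look costlier than the custom ones. A simplified sketch of that heuristic (hedged; the real logic lives in plancache.c and includes a planning-cost fudge factor):

```python
def use_custom_plan(num_custom_plans, avg_custom_cost, generic_cost):
    # Simplified model of PostgreSQL's choice between custom and
    # generic plans for a cached query (details vary by version).
    if num_custom_plans < 5:
        return True                     # always custom-plan at first
    return generic_cost > avg_custom_cost

# A generic plan much worse than the custom ones keeps being rejected,
# so every execution replans -- the overhead dlight is measuring.
assert use_custom_plan(6, avg_custom_cost=10.0, generic_cost=1000.0)
# A generic plan that looks as cheap is adopted and replanning stops.
assert not use_custom_plan(6, avg_custom_cost=10.0, generic_cost=10.0)
```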

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] record identical operator

2013-09-18 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 Therefore, I see no harm in
 having an operator that tests for
 are-these-values-identical-for-all-purposes.  If that's useful for
 RMVC, then there may be other legitimate uses for it as well.
 
 And once we decide that's OK, I think we ought to document it.  Sure,
 it's a little confusing, but we can explain it, I think.  It's a good
 opportunity to point out to people that, most of the time, they really
 want something else, like the equality operator for the default btree
 opclass.

For my 2c on this, while this can be useful for *us*, and maybe folks
hacking pretty close to PG, I can't get behind introducing this as an
'===' or some such operator.  I've missed why this can't be a simple
function and why in the world we would want to encourage users to use
this by making it look like a normal language construct of SQL, which
damn well better consider numbers which are equal in value to be equal,
regardless of their representation.

What the heck is the use case for this being a user-oriented, SQL
operator..?

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] record identical operator

2013-09-18 Thread Andres Freund
On 2013-09-18 11:06:13 -0500, Merlin Moncure wrote:
  Ugh.  This feels like a pretty ugly hack to deal with that.  I haven't
  got any magical wand to address it, but making an SQL operator for 'are
  these really the same bytes' to deal with what is essentially
  implementation detail is _very_ grotty.

I know the feeling, but I don't have a better suggestion either, so...

 Having matviews using SQL expressible features is a *good* thing.
 Having a user accessible operator is nice to have (if for no other
 reason than to allow testing for which matview rows would be
 refreshed).  I just don't understand what all the fuss is about except
 to make sure not to utilize an operator name that is better suited for
 other purposes.

It's an externally exposed API with not easily understandable semantics
that's not actually all that useful outside specific usecases. If we
decide to change it we're creating an API breakage. And we get to deal
with people saying it's broken because they don't understand the
semantics.

That said, I am ok with this if we use strange operator names and
document that the semantics are complicated...

==!!
==!!=
...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] record identical operator

2013-09-18 Thread Andres Freund
On 2013-09-18 11:50:23 -0400, Stephen Frost wrote:
 For my 2c on this, while this can be useful for *us*, and maybe folks
 hacking pretty close to PG, I can't get behind introducing this as an
 '===' or some such operator.  I've missed why this can't be a simple
 function and why in the world we would want to encourage users to use
 this by making it look like a normal language construct of SQL, which
 damn well better consider numbers which are equal in value to be equal,
 regardless of their representation.

I certainly understand the feeling...

I think this really needs to have an obscure name. Like ==!!== or
somesuch (is equal very much, but doesn't actually test for equality ;))

 What the heck is the use case for this being a user-oriented, SQL
 operator..?

The materalized view code uses generated SQL, so it has to be SQL
accessible. And it needs to be an operator because the join planning
code requires that :(

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] record identical operator

2013-09-18 Thread Kevin Grittner
Stephen Frost sfr...@snowman.net wrote:

 making an SQL operator for 'are these really the same bytes' to
 deal with what is essentially implementation detail is _very_
 grotty.

We already have some such operators, although Andres argues that
comparing to that isn't fair because we at least know it is a
string of characters; we're just ignoring character boundaries and
collations.  Some of the operators used for the existing byte
comparison opclasses are:

~<~ ~<=~ ~>=~ ~>~

Go ahead and try them out with existing text values.  Andres has
said that he has seen these used in production systems.

= and <> aren't listed above even though they do a byte-for-byte
comparison because, well, I guess because we have chosen to treat
two UTF8 strings which produce the same set of glyphs using
different bytes as unequal.  :-/
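The UTF8 point is easy to reproduce: two encodings of the same glyphs differ byte-wise, which is all a byte-for-byte comparison sees. A quick Python check:

```python
import unicodedata

s1 = "\u00e9"    # "é" as one precomposed code point
s2 = "e\u0301"   # "e" followed by COMBINING ACUTE ACCENT: same glyph

# Byte-for-byte (and code-point-for-code-point) they are unequal...
assert s1 != s2
assert s1.encode("utf-8") != s2.encode("utf-8")
# ...even though normalization shows they denote the same rendered text.
assert unicodedata.normalize("NFC", s1) == unicodedata.normalize("NFC", s2)
```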

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Where to load modules from?

2013-09-18 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 I think that would largely be rehashing previous discussions, in which
 it's already been established that we don't see eye to eye on this
 issue.  But briefly, I think that replacing shared libraries ought to

Partly yes, but as I'm feeling that we are getting closer than ever to a
consensus of a first move to be made, I want to be sure to understand
the limits of that very move so that I'm able to implement the list
consensus and nothing else.

While I don't agree with all you said in your answer, I clearly
understand that part: per-database management of modules is its own can
of worms and another discussion and patch altogether.

   Maybe these problems are solvable, but as things stand
 today I think that trying to use identically-named .so files in
 different databases at the same time should be regarded as dangerously
 unsupported.  In any event, any kind of infrastructure intended to
 support such use cases  is clearly a different project from simply
 allowing modules to be loaded from a different location.

Baring objections, I'll then implement a patch for allowing the core
code to load modules from a per-cluster location within $PGDATA.

The patch should also have a new initdb option much alike -X for pg_xlog
so that it's possible to relocate that new per-cluster modules directory
anywhere on the file system at initdb time.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] record identical operator

2013-09-18 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 Kevin Grittner kgri...@ymail.com wrote:
 To have clean semantics, I think the operator should mean that the
 stored format of the row is the same.  Regarding the array null
 bitmap example, I think it would be truly weird if the operator
 said that the stored format was the same, but this happened:

 test=# select pg_column_size(ARRAY[1,2,3]);
   pg_column_size
 
   36
 (1 row)

 test=# select pg_column_size((ARRAY[1,2,3,NULL])::int4[3]);
   pg_column_size
 
   44
 (1 row)

 They have the same stored format, but a different number of
 bytes?!?

 Hmm.  For most of this thread, I was leaning toward the view that
 comparing the binary representations was the wrong concept, and that
 we actually needed to have type-specific operators that understand the
 semantics of the data type.

 But I think this example convinces me otherwise.  What we really want
 to do here is test whether two values are the same, and if you can
 feed two values that are supposedly the same to some function and get
 two different answers, well then they're not really the same, are 
 they?

Right.  Not only would the per-type solution make materialized views
maintenance broken by default, requiring per-type work to make it
work reasonably, with silent failures for any type you didn't know
about, but no user-visible differences is a pretty slippery
concept.  Did you really think of all the functions someone might
use to look at a value?  Might there be performance differences we
care about that should be handled, even if the user has no way to
dig out the difference?  Will that change in a future release?

 Now, I grant that the array case is pretty weird.  An array with an
 all-zeroes null bitmap is basically semantically identical to one with
 no null bitmap at all.  But there are other such cases as well.  You
 can have two floats that print the same way except when
 extra_float_digits=3, for example, and I think that's probably a
 difference that we *wouldn't* want to paper over.  You can have a
 long-form numeric that represents a value that could have been
 represented as a short-form numeric, which is similar to the array
 case.  There are probably other examples as well.  But in each of
 those cases, the point is that there *is* some operation which will
 distinguish between the two supposedly-identical values, and therefore
 they are not identical for all purposes.  Therefore, I see no harm in
 having an operator that tests for
 are-these-values-identical-for-all-purposes.  If that's useful for
 RMVC, then there may be other legitimate uses for it as well.

 And once we decide that's OK, I think we ought to document it.

That seems to be the consensus.  I don't think we can really
document this form of record comparison without also documenting
how equality works.  I'll work something up for the next version of
the patch.

 Sure, it's a little confusing, but we can explain it, I think.  It's a good
 opportunity to point out to people that, most of the time, they really
 want something else, like the equality operator for the default btree
 opclass.

I think the hardest part will be documenting the difference between
the row value constructor semantics (which are all that is
currently documented) and the record equality semantics (used for
sorting and building indexes).  In a green field I think I would
have argued for having just the standard semantics we have
documented, and modifying our sort execution nodes and index builds
to deal with that.  This is one of those cases where the breakage
from changing to that is hard to justify merely for the sake of
cleaner conceptual semantics.

There also seems to be universal agreement that the operator names
should be something other than what I put in the v1 patch, but we
don't have agreement on what should be used instead.  We need six
operators, to support the btree am requirements.  Currently the
patch has:

=== !== ==  == 

Suggested same as operators so far are:


=
=
==

Anyone want to champion one of those, or something else?  How about
the other five operators to go with your favorite?

Keep in mind that this thread has also turned up strong support for
an operator to express IS NOT DISTINCT FROM -- so that it can be
used with ANY/ALL, among other things.  Long term, having an
opfamily for that might help us clean up the semantics of record
comparison when there are NULLs involved.  Currently we use the =
operator but act as though IS NOT DISTINCT FROM was specified
(except for some cases involving a row value constructor).  Any
serious discussion of that should probably move to a new thread,
but I mention it here because some people wanted to reserve
operator space for that, which could conflict with same as
operators.
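
For illustration only (this mirrors the SQL semantics discussed above, not
any proposed implementation), the NULL handling that makes IS NOT DISTINCT
FROM differ from = can be sketched in Python, with None standing in for NULL:

```python
def is_not_distinct_from(a, b):
    """Sketch of SQL's IS NOT DISTINCT FROM, with None standing in for NULL."""
    if a is None and b is None:
        return True    # NULL IS NOT DISTINCT FROM NULL is true
    if a is None or b is None:
        return False   # a NULL and a non-NULL are distinct
    return a == b      # otherwise, ordinary equality

print(is_not_distinct_from(None, None))  # True
print(is_not_distinct_from(None, 1))     # False
print(is_not_distinct_from(1, 1))        # True
```

By contrast, plain = would yield NULL (unknown) whenever either operand is
NULL, which is why it cannot be used directly with ANY/ALL for this purpose.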

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] record identical operator

2013-09-18 Thread Stephen Frost
* Kevin Grittner (kgri...@ymail.com) wrote:
 Right.  Not only would the per-type solution make materialized view
 maintenance broken by default, requiring per-type work to make it
 work reasonably, with silent failures for any type you didn't know
 about, but "no user-visible differences" is a pretty slippery
 concept.  

I don't like those possibilities, of course, but I'm starting to wonder
about this whole concept of looking at it byte-wise.  If I'm following
correctly, what we're looking at here is having a way for matviews to
tell if these bytes are the same as those bytes, for the purpose of
deciding to update the matview, right?  Yet we can then have cases where
the row isn't *actually* different from a value perspective, yet we're
going to update it anyway because it's represented slightly differently?

What happens if we later want to add support for users to have a matview
trigger that's called when a matview row *actually* changes?  We'd end
up calling it on what are essentially false positives, or having to do
some double-check later on ("well, did it *really* change?"), neither of
which is good at all.  If we had the IS NOT DISTINCT FROM operators
discussed, would that work for this even if it isn't as performant?  Or
is there an issue with that?

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] record identical operator

2013-09-18 Thread Merlin Moncure
On Wed, Sep 18, 2013 at 10:59 AM, Stephen Frost sfr...@snowman.net wrote:
 * Andres Freund (and...@2ndquadrant.com) wrote:
 I think this really needs to have an obscure name. Like ==!!== or
 somesuch (is equal very much, but doesn't actually test for equality ;))

 hah.

  What the heck is the use case for this being a user-oriented, SQL
  operator..?

 The materialized view code uses generated SQL, so it has to be SQL
 accessible. And it needs to be an operator because the join planning
 code requires that :(

 Ugh.  This feels like a pretty ugly hack to deal with that.  I haven't
 got any magical wand to address it, but making an SQL operator for 'are
 these really the same bytes' to deal with what is essentially an
 implementation detail is _very_ grotty.

Having matviews use SQL-expressible features is a *good* thing.
Having a user accessible operator is nice to have (if for no other
reason than to allow testing for which matview rows would be
refreshed).  I just don't understand what all the fuss is about except
to make sure not to utilize an operator name that is better suited for
other purposes.

merlin




Re: [HACKERS] record identical operator

2013-09-18 Thread Stephen Frost
* Andres Freund (and...@2ndquadrant.com) wrote:
 I think this really needs to have an obscure name. Like ==!!== or
 somesuch (is equal very much, but doesn't actually test for equality ;))

hah.

  What the heck is the use case for this being a user-oriented, SQL
  operator..?
 
 The materialized view code uses generated SQL, so it has to be SQL
 accessible. And it needs to be an operator because the join planning
 code requires that :(

Ugh.  This feels like a pretty ugly hack to deal with that.  I haven't
got any magical wand to address it, but making an SQL operator for 'are
these really the same bytes' to deal with what is essentially an
implementation detail is _very_ grotty.

Thanks,

Stephen




Re: [HACKERS] proposal: simple date constructor from numeric values

2013-09-18 Thread Pavel Stehule
Hello

thank you,

I have no comments

Regards

Pavel


2013/9/18 Jeevan Chalke jeevan.cha...@enterprisedb.com

 Hi Pavel,

 I have reviewed your patch.

 Patch looks excellent and code changes match with similar constructs
 elsewhere. That's great.

 However, it was not applying with the git apply command, but I was able to
 apply it with patch -p1 with some offsets. make and make install were smooth
 too. The regression suite didn't complain, as expected.

 I did my own testing and didn't get any issues with that. Code walk-through
 was good too.

 I was a little bit worried that we are allowing 60 for seconds, in which case
 we wrap it to the next minute and set sec to 0. But this logic does not hold
 for minutes; there we throw an error when min = 60.

 But I don't blame this patch, as other constructs do the same. For example,
 select time '15:60:20' throws an error whereas select time '15:30:60'
 does not.
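
For what it's worth, the wrap-at-60-seconds behaviour described above can be
sketched outside the server (a standalone Python illustration, not the
patch's actual code; edge cases such as 15:59:60 and leap seconds are
deliberately ignored):

```python
def normalize_time(hour, minute, sec):
    """Simplified sketch: sec == 60 wraps to the next minute, while an
    input minute of 60 is rejected outright."""
    if minute >= 60:          # '15:60:20' -> error
        raise ValueError("minute out of range")
    if sec == 60:             # '15:30:60' -> wraps to '15:31:00'
        sec = 0
        minute += 1
    return hour, minute, sec

print(normalize_time(15, 30, 60))  # (15, 31, 0)
```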

 However, in attached patch I have fixed the typo identified by Alvaro.

 Please have a look before I submit it to the committer.

 Thanks



 On Sat, Jul 13, 2013 at 5:32 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 Hello

 2013/7/12 Peter Eisentraut pete...@gmx.net:
  There is a small inconsistency:
 
  select time '12:30:57.123456789';
 
  gives
 
  12:30:57.123457
 
  but
 
  select make_time(12, 30, 57.123456789);
 
  gives
 
  12:30:57.123456

 fixed - see attached patch

 Regards

 Pavel

 






 --
 Jeevan B Chalke
 Principal Software Engineer, Product Development
 EnterpriseDB Corporation
 The Enterprise PostgreSQL Company

 Phone: +91 20 30589500

 Website: www.enterprisedb.com
 EnterpriseDB Blog: http://blogs.enterprisedb.com/
 Follow us on Twitter: http://www.twitter.com/enterprisedb




Re: [HACKERS] record identical operator

2013-09-18 Thread Stephen Frost
* Merlin Moncure (mmonc...@gmail.com) wrote:
 Having matviews use SQL-expressible features is a *good* thing.

Fine, then it should be implemented *using SQL*, which is based on
*values*, not on how the value is represented in bits and bytes.

 Having a user accessible operator is nice to have (if for no other
 reason than to allow testing for which matview rows would be
 refreshed).  

If it's not actually *changing* (wrt its value), then I'm not at all
impressed with the notion that it's going to get updated anyway.

Thanks,

Stephen




Re: [HACKERS] record identical operator

2013-09-18 Thread Stephen Frost
* Kevin Grittner (kgri...@ymail.com) wrote:
 = and <> aren't listed above even though they do a byte-for-byte
 comparison because, well, I guess because we have chosen to treat
 two UTF8 strings which produce the same set of glyphs using
 different bytes as unequal.  :-/

I tend to side with Andres on this case actually- we're being asked to
store specific UTF8 bytes by the end user.  That is not the same as
treating two different numerics which are the same *number* as
different because they have different binary representations, which is
entirely an internal-to-postgres consideration.

Thanks,

Stephen




Re: [HACKERS] Where to load modules from?

2013-09-18 Thread Andres Freund
On 2013-09-18 08:46:08 -0400, Robert Haas wrote:
 Here's another idea.  At initdb time, create an empty directory called
 called pg_you_can_load_stuff_from_here (pick a better name) inside
 $PGDATA.  Allow it to be replaced with a symlink.  This would be
 similar to what we do today with pg_xlog.  In fact, you can imagine an
 equivalent of initdb -X that does something precisely analogous.  This
 feels a bit more natural to me than a GUC.

I think I'd prefer a GUC that allows specifying multiple directories
that are searched in order to a single symlinked directory.

Also, aren't symlinks an absolute PITA to manipulate by hand on
windows?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] record identical operator

2013-09-18 Thread Steve Singer

On 09/18/2013 11:39 AM, Stephen Frost wrote:

* Kevin Grittner (kgri...@ymail.com) wrote:

= and <> aren't listed above even though they do a byte-for-byte
comparison because, well, I guess because we have chosen to treat
two UTF8 strings which produce the same set of glyphs using
different bytes as unequal.  :-/

I tend to side with Andres on this case actually- we're being asked to
store specific UTF8 bytes by the end user.  That is not the same as
treating two different numerics which are the same *number* as
different because they have different binary representations, which is
entirely an internal-to-postgres consideration.



The problem is that there are datatypes (citext, postgis, ...) that have 
defined = to return true when comparing two values that are different, 
not just stored differently.  Are you saying that matviews should 
update only when the = operator of the datatype returns false, and if 
people don't like this behaviour they should fix the datatypes?




Thanks,

Stephen






Re: [HACKERS] record identical operator

2013-09-18 Thread Stephen Frost
* Steve Singer (st...@ssinger.info) wrote:
 The problem is that there are datatypes (citext, postgis,...) that
 have defined = to return true when comparing two values that are
 different, not just stored differently.

If the definition of the type is that they're equal, then they're equal.
Certainly there are cases where this is really rather broken
(particularly in the postgis case that you mention), but I don't think
that means we should change our definition of equality to generally be
"are the bytes the same" - clearly that'd lead to incorrect behavior in
the NUMERIC case.

 Are you saying that
 matviews should update only when the = operator of the datatype
 returns false and if people don't like this behaviour they should
 fix the datatypes?

IMV, we are depending on the = operator to tell us when the
values are equal, regardless of type.  I have a hard time seeing how we
can do anything else.  The PostGIS situation is already broken when you
consider UNIQUE constraints and, while it's unfortunate that they'd need
to change their data type to fix that, I do feel it's on them to deal
with it.

Anyone can create an extension with their own data type which returns
wrong data and results; it's not on us to figure out how to make those
work even in the face of blatant violations like making = not actually
mean "these values are the same".

Thanks,

Stephen




Re: [HACKERS] UTF8 national character data type support WIP patch and list of open issues.

2013-09-18 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Mon, Sep 16, 2013 at 8:49 AM, MauMau maumau...@gmail.com wrote:
 2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
 contain Unicode data.
 ...
 3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
 Fixed-width encoding may allow faster string manipulation as described in
 Oracle's manual.  But I'm not sure about this, because UTF-16 is not a real
 fixed-width encoding due to supplementary characters.

 It seems to me that these two points here are the real core of your
 proposal.  The rest is just syntactic sugar.

 Let me start with the second one: I don't think there's likely to be
 any benefit in using UTF-16 as the internal encoding.  In fact, I
 think it's likely to make things quite a bit more complicated, because
 we have a lot of code that assumes that server encodings have certain
 properties that UTF-16 doesn't - specifically, that any byte with the
 high-bit clear represents the corresponding ASCII character.

Another point to keep in mind is that UTF16 is not really any easier
to deal with than UTF8, unless you write code that fails to support
characters outside the basic multilingual plane.  Which is a restriction
I don't believe we'd accept.  But without that restriction, you're still
forced to deal with variable-width characters; and there's nothing very
nice about the way that's done in UTF16.  So on the whole I think it
makes more sense to use UTF8 for this.
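
Two quick Python checks of the points above (illustration only): UTF-16 is
still variable-width once you leave the basic multilingual plane, and it
lacks the every-low-byte-is-ASCII property of UTF-8 that the server code
relies on:

```python
# 1. UTF-16 is not fixed-width: code points beyond U+FFFF need a
#    surrogate pair (4 bytes), while BMP characters take 2 bytes.
bmp = "A"                  # U+0041, inside the basic multilingual plane
astral = "\U0001D11E"      # U+1D11E MUSICAL SYMBOL G CLEF, outside it

print(len(bmp.encode("utf-16-le")))     # 2
print(len(astral.encode("utf-16-le")))  # 4

# 2. UTF-8 keeps the property mentioned earlier: every byte with the
#    high bit clear is literally that ASCII character.
u8 = "abc\u00e9".encode("utf-8")             # b'abc\xc3\xa9'
print([bytes([b]) for b in u8 if b < 0x80])  # [b'a', b'b', b'c']

# UTF-16 does not: ASCII-range bytes (including zero bytes) appear as
# mere halves of 16-bit code units.
u16 = "abc\u00e9".encode("utf-16-le")        # b'a\x00b\x00c\x00\xe9\x00'
print(u16.count(b"\x00"))                    # 4
```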

I share Robert's misgivings about difficulties in dealing with characters
that are not representable in the database's principal encoding.  Still,
you probably won't find out about many of those until you try it.

regards, tom lane




Re: [HACKERS] System catalog bloat removing safety

2013-09-18 Thread Sergey Konoplev
On Wed, Sep 18, 2013 at 2:06 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-09-17 23:12:24 -0700, Sergey Konoplev wrote:
 How safe is it to use the technique described by the link below with
 system catalog tables to remove bloat?
 (in a couple of words it is about moving tuples to the beginning of
 the table with a special way of updating)
 http://www.depesz.com/2010/10/17/reduce-bloat-of-table-without-longexclusive-locks/

 At least for pg_index it's not safe, we use the xmin of rows when
 indcheckxmin = true.
 I am not aware of other cases, but I'd be hesitant to do so.

Thank you, Andres, I did not know this.  Just wondering: why was it
made of type bool and not xid?

 You have bloat because of lots of temporary tables IIRC?

Actually I am developing a bloat-removing tool based on the updating technique:

https://code.google.com/p/pgtoolkit/source/browse/trunk/README

Currently the --system-catalog feature is marked as experimental.  As
for pg_index, I think the solution is to strictly exclude it from
processing.

Does anyone have ideas about what other system tables might be
affected besides pg_index?

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] Patch for typo in src/bin/psql/command.c

2013-09-18 Thread Fujii Masao
On Wed, Sep 18, 2013 at 1:42 PM, Ian Lawrence Barwick barw...@gmail.com wrote:
 Attached.

Committed. Thanks!

Regards,

-- 
Fujii Masao




Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-09-18 Thread Fujii Masao
On Wed, Sep 18, 2013 at 2:41 PM, Sameer Thakur samthaku...@gmail.com wrote:
 You seem to have forgotten to include the pg_stat_statements--1.2.sql
 and pg_stat_statements--1.1--1.2.sql in the patch.
 Sorry again. Please find updated patch attached.

 I did not add pg_stat_statements--1.2.sql. I have added that now and updated
 the patch again.

Thanks!

I got a segmentation fault when I tested the case where the least-executed
query statistics are discarded, i.e., when I executed different queries more than
pg_stat_statements.max times. I guess that the patch might have a bug.

Regards,

-- 
Fujii Masao




Re: [HACKERS] dynamic shared memory

2013-09-18 Thread Andres Freund
Hi Robert, Hi Amit,

OK, here's a first read-through of the patch.

On 2013-09-13 15:32:36 -0400, Robert Haas wrote:
 -AC_CHECK_FUNCS([cbrt dlopen fdatasync getifaddrs getpeerucred getrlimit 
 mbstowcs_l memmove poll pstat readlink setproctitle setsid sigprocmask 
 symlink sync_file_range towlower utime utimes wcstombs wcstombs_l])
 +AC_CHECK_FUNCS([cbrt dlopen fdatasync getifaddrs getpeerucred getrlimit 
 mbstowcs_l memmove poll pstat readlink setproctitle setsid shm_open 
 sigprocmask symlink sync_file_range towlower utime utimes wcstombs 
 wcstombs_l])

Maybe also check for shm_unlink or is that too absurd?

 --- /dev/null
 +++ b/src/backend/storage/ipc/dsm.c
  +#define PG_DYNSHMEM_STATE_FILE   PG_DYNSHMEM_DIR "/state"
  +#define PG_DYNSHMEM_NEW_STATE_FILE   PG_DYNSHMEM_DIR "/state.new"

Hm, I guess you don't want to add it to global/ or so because of the
mmap implementation where you presumably scan the directory?

 +struct dsm_segment
 +{
 + dlist_node  node;   /* List link in 
 dsm_segment_list. */
 + ResourceOwner resowner; /* Resource owner. */
 + dsm_handle  handle; /* Segment name. */
 + uint32  control_slot;   /* Slot in control segment. */
 + void   *impl_private;   /* Implementation-specific 
 private data. */
 + void   *mapped_address; /* Mapping address, or NULL if 
 unmapped. */
 + uint64  mapped_size;/* Size of our mapping. */
 +};

Document that's backend local?

 +typedef struct dsm_control_item
 +{
 + dsm_handle  handle;
 + uint32  refcnt; /* 2+ = active, 1 = 
 moribund, 0 = gone */
 +} dsm_control_item;
 +
 +typedef struct dsm_control_header
 +{
 + uint32  magic;
 + uint32  nitems;
 + uint32  maxitems;
 + dsm_control_itemitem[FLEXIBLE_ARRAY_MEMBER];
 +} dsm_control_header;

And those are shared memory?

 +static void dsm_cleanup_using_control_segment(void);
 +static void dsm_cleanup_for_mmap(void);
 +static bool dsm_read_state_file(dsm_handle *h);
 +static void dsm_write_state_file(dsm_handle h);
 +static void dsm_postmaster_shutdown(int code, Datum arg);
 +static void dsm_backend_shutdown(int code, Datum arg);
 +static dsm_segment *dsm_create_descriptor(void);
 +static bool dsm_control_segment_sane(dsm_control_header *control,
 +  uint64 mapped_size);
 +static uint64 dsm_control_bytes_needed(uint32 nitems);
 +
 +/* Has this backend initialized the dynamic shared memory system yet? */
 +static bool dsm_init_done = false;
 +
 +/*
 + * List of dynamic shared memory segments used by this backend.
 + *
 + * At process exit time, we must decrement the reference count of each
 + * segment we have attached; this list makes it possible to find all such
 + * segments.
 + *
 + * This list should always be empty in the postmaster.  We could probably
 + * allow the postmaster to map dynamic shared memory segments before it
 + * begins to start child processes, provided that each process adjusted
 + * the reference counts for those segments in the control segment at
 + * startup time, but there's no obvious need for such a facility, which
 + * would also be complex to handle in the EXEC_BACKEND case.  Once the
 + * postmaster has begun spawning children, there's an additional problem:
 + * each new mapping would require an update to the control segment,
 + * which requires locking, in which the postmaster must not be involved.
 + */
 +static dlist_head dsm_segment_list = DLIST_STATIC_INIT(dsm_segment_list);
 +
 +/*
 + * Control segment information.
 + *
 + * Unlike ordinary shared memory segments, the control segment is not
 + * reference counted; instead, it lasts for the postmaster's entire
 + * life cycle.  For simplicity, it doesn't have a dsm_segment object either.
 + */
 +static dsm_handle dsm_control_handle;
 +static dsm_control_header *dsm_control;
 +static uint64 dsm_control_mapped_size = 0;
 +static void  *dsm_control_impl_private = NULL;
 +
 +/*
 + * Start up the dynamic shared memory system.
 + *
 + * This is called just once during each cluster lifetime, at postmaster
 + * startup time.
 + */
 +void
 +dsm_postmaster_startup(void)
 +{
 + void   *dsm_control_address = NULL;
 + uint32  maxitems;
 + uint64  segsize;
 +
 + Assert(!IsUnderPostmaster);
 +
 + /* If dynamic shared memory is disabled, there's nothing to do. */
 + if (dynamic_shared_memory_type == DSM_IMPL_NONE)
 + return;
 +
 + /*
 +  * Check for, and remove, shared memory segments left behind by a dead
 +  * postmaster.  This isn't necessary on Windows, which always removes 
 them
 +  * when the last reference is gone.
 +  */
 + switch (dynamic_shared_memory_type)
 + {
 + case DSM_IMPL_POSIX:
 + case 

Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-09-18 Thread Fujii Masao
On Thu, Sep 19, 2013 at 2:41 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Wed, Sep 18, 2013 at 2:41 PM, Sameer Thakur samthaku...@gmail.com wrote:
 You seem to have forgotten to include the pg_stat_statements--1.2.sql
 and pg_stat_statements--1.1--1.2.sql in the patch.
 Sorry again. Please find updated patch attached.

 I did not add pg_stat_statements--1.2.sql. I have added that now and updated
 the patch again.

pg_stat_statements--1.1.sql should be removed.

+  <entry><structfield>queryid</structfield></entry>
+  <entry><type>bigint</type></entry>
+  <entry></entry>
+  <entry>Unique value of each representative statement for the
current statistics session.
+   This value will change for each new statistics session.</entry>

What does "statistics session" mean?

Regards,

-- 
Fujii Masao




Re: [HACKERS] record identical operator

2013-09-18 Thread Kevin Grittner
Stephen Frost sfr...@snowman.net wrote:

 If it's not actually *changing* (wrt its value), then I'm not at
 all impressed with the notion that it's going to get updated
 anyway.

But PostgreSQL very specifically (and as far as I can tell
*intentionally*) allows you to *change* a value and have it still
be considered *equal*.  The concept of "equal" values really means
more like "equivalent" or "close enough for common purposes".  It
very specifically does *not* mean "the same value".

As just one example, think how much easier the citext type would be
to implement if it folded all values to lower case as they were
input, rather than preserving the data as entered and considering
different capitalizations as equal.
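
A toy analogy in Python (purely illustrative, not how citext is actually
implemented): a case-insensitive string type can treat differently-capitalized
values as equal while preserving each value exactly as entered:

```python
class CIText(str):
    """Toy case-insensitive text type, loosely analogous to citext:
    values differing only in case compare equal, but each keeps the
    form it was entered with."""
    def __eq__(self, other):
        return self.lower() == str(other).lower()
    def __ne__(self, other):
        return not self.__eq__(other)
    def __hash__(self):
        return hash(self.lower())

a, b = CIText("Abc"), CIText("ABC")
print(a == b)           # True: "equal" for common purposes
print(str(a), str(b))   # Abc ABC: yet not the same stored value
```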

The notion that in PostgreSQL a value has not changed if the new
value is equal to the old is just flat out wrong.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] record identical operator

2013-09-18 Thread Stephen Frost
Kevin,

On Wednesday, September 18, 2013, Kevin Grittner wrote:

 Stephen Frost sfr...@snowman.net javascript:; wrote:

  If it's not actually *changing* (wrt its value), then I'm not at
  all impressed with the notion that it's going to get updated
  anyway.

 But PostgreSQL very specifically (and as far as I can tell
 *intentionally*) allows you to *change* a value and have it still
 be considered *equal*.


I'm curious where you're going with that- of course you can update a value
and have the same value (and possibly the same byte representation) stored
over the old.


  The concept of "equal" values really means
  more like "equivalent" or "close enough for common purposes".  It
  very specifically does *not* mean "the same value".


I'm really curious about your thoughts on unique indexes then. Should two
numerics which are the same value but different byte representations be
allowed in a unique index?


 As just one example, think how much easier the citext type would be
 to implement if it folded all values to lower case as they were
 input, rather than preserving the data as entered and considering
 different capitalizations as equal.


If the type operator says they're equal, then I think we need to consider
them as equal. If an update happens with a conditional of:

where col1 = 'Abc'

When col1 is 'ABC' using citext, should we still issue the update?


The notion that in PostgreSQL a value has not changed if the new
 value is equal to the old is just flat out wrong.


The value *can* be changed to be equal to the existing value but that
doesn't make the two values *not equal*.

Thanks,

Stephen


Re: [HACKERS] record identical operator

2013-09-18 Thread Kevin Grittner
Stephen Frost sfr...@snowman.net wrote:

 I don't think that means we should change our definition of
 equality to generally be are the bytes the same- clearly that'd
 lead to incorrect behavior in the NUMERIC case.

Nobody is talking in any way, shape, or form about changing our
concept of what is "equal".  We're talking about recognizing that
in PostgreSQL "equal" does *not* mean "the same".  If we used the
equal concept for determining what has changed, if someone was
tracking numeric data without precision and scale so that they
could track accuracy (by storing the correct number of decimal
positions) the accuracy could not be replicated to a materialized
view.  Of course, streaming replication would replicate the
change, but if '1.4' was stored in a column copied into a matview
and they later updated the source to '1.40' the increase in
accuracy would not flow to the matview.  That would be a bug, not a
feature.
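
Python's Decimal type happens to exhibit the same distinction, which makes
the point concrete (an analogy only, not PostgreSQL's numeric
implementation):

```python
from decimal import Decimal

# '1.4' and '1.40' are equal as numbers, but the trailing zero carries
# extra information (the recorded accuracy), visible in the representation.
a, b = Decimal("1.4"), Decimal("1.40")
print(a == b)                        # True: equal values
print(str(a), str(b))                # 1.4 1.40: different representations
print(a.as_tuple() == b.as_tuple())  # False: the stored forms differ
```

Replication driven purely by equality would discard exactly the information
the trailing zero is there to record.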

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] record identical operator

2013-09-18 Thread Hannu Krosing
On 09/18/2013 05:53 PM, Andres Freund wrote:
 On 2013-09-18 11:50:23 -0400, Stephen Frost wrote:
 For my 2c on this, while this can be useful for *us*, and maybe folks
 hacking pretty close to PG, I can't get behind introducing this as an
 '===' or some such operator.  I've missed why this can't be a simple
 function and why in the world we would want to encourage users to use
 this by making it look like a normal language construct of SQL, which
 damn well better consider numbers which are equal in value to be equal,
 regardless of their representation.
 I certainly understand the feeling...

 I think this really needs to have an obscure name. Like ==!!== or
 somesuch (is equal very much, but doesn't actually test for equality ;))
In PostgreSQL equality can be anything :)

In other words, we have pluggable equality, so it is entirely
feasible to have an opclass where binary equality is *the* equality.

The problem started with some opclass equality (case-insensitive
comparison) missing user-visible changes.

Cheers


-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] UTF8 national character data type support WIP patch and list of open issues.

2013-09-18 Thread Heikki Linnakangas

On 18.09.2013 16:16, Robert Haas wrote:

On Mon, Sep 16, 2013 at 8:49 AM, MauMau maumau...@gmail.com wrote:

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.

...

3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation as described in
Oracle's manual.  But I'm not sure about this, because UTF-16 is not a real
fixed-width encoding due to supplementary characters.


It seems to me that these two points here are the real core of your
proposal.  The rest is just syntactic sugar.

Let me start with the second one: I don't think there's likely to be
any benefit in using UTF-16 as the internal encoding.  In fact, I
think it's likely to make things quite a bit more complicated, because
we have a lot of code that assumes that server encodings have certain
properties that UTF-16 doesn't - specifically, that any byte with the
high-bit clear represents the corresponding ASCII character.

As to the first one, if we're going to go to the (substantial) trouble
of building infrastructure to allow a database to store data in
multiple encodings, why limit it to storing UTF-8 in non-UTF-8
databases?  What about storing SHIFT-JIS in UTF-8 databases, or
Windows-yourfavoriteM$codepagehere in UTF-8 databases, or any other
combination you might care to name?

Whether we go that way or not, I think storing data in one encoding in
a database with a different encoding is going to be pretty tricky and
require far-reaching changes.  You haven't mentioned any of those
issues or discussed how you would solve them.


I'm not too thrilled about complicating the system for that, either. If 
you really need to deal with many different languages, you can do that 
today by using UTF-8 everywhere. Sure, it might not be the most 
efficient encoding for some characters, but it works.


There is one reason, however, that makes it a lot more compelling: we 
already support having databases with different encodings in the same 
cluster, but the encoding used in the shared catalogs, for usernames and 
database names for example, is not well-defined. If we dealt with 
different encodings in the same database, that inconsistency would go away.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Please mark new patches on the next CF

2013-09-18 Thread David Fetter
If you've got something that's not already on the current commitfest,
please put it on the next one.

Thanks!

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Kevin Grittner
Stephen Frost sfr...@snowman.net
 Kevin Grittner  wrote:
 Stephen Frost sfr...@snowman.net wrote:

 If it's not actually *changing* (wrt its value), then I'm not
 at all impressed with the notion that it's going to get updated
 anyway.

 But PostgreSQL very specifically (and as far as I can tell
 *intentionally*) allows you to *change* a value and have it
 still be considered *equal*.  

 I'm curious where you're going with that- of course you can
 update a value and have the same value (and possibly the same
 byte representation) stored over the old.

The way I see it, you can update a column to a different value
which will compare as equal.  That's fine.  Nobody wants to change
that.  But it is still not the same value.

 The concept of equal values really means
 more like equivalent or close enough for common purposes.  It
 very specifically does *not* mean the same value.

 I'm really curious about your thoughts on unique indexes then.
 Should two numerics which are the same value but different byte
 representations be allowed in a unique index?

Not if it is defined with the default opclass, which will use an
equal operator.  Of course, this patch would allow an index on a
record to be defined with record_image_ops, in which case it would
sort by the raw bytes in the values of the record.  That's not
going to be useful in very many places, which is why it would not
be the default.  You don't get that behavior unless you ask for it.


See this docs page for a similar example related to complex numbers:

http://www.postgresql.org/docs/current/interactive/xindex.html#XINDEX-EXAMPLE


 If the type operator says they're equal, then I think we need to
 consider them as equal.

Absolutely.  Two different values may be equal within an opclass.

 If an update happens with a conditional of:

 where col1 = 'Abc'

 When col1 is 'ABC' using citext, should we still issue the
 update?

Absolutely not, because the update was requested in the case that
the equality test was true.  Yet if a row is updated to replace
'Abc' with 'ABC', then streaming replication should copy the
different but equal value (it does), a normal view should now
show 'ABC' (it does), and a refresh of a matview should cause the
matview to show 'ABC' (it doesn't in git, but this patch would make
that work).

 The value *can* be changed to be equal to the existing value but
 that doesn't make the two values *not equal*.

Nobody has ever argued that they should be considered *not equal*. 
It's just about providing a way to recognize when two equal values
*are not the same*.
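
The numeric case discussed elsewhere in this thread ('1.4' vs. '1.40') can be sketched outside the database; Python's Decimal behaves analogously to PostgreSQL's numeric in this respect (illustration only):

```python
from decimal import Decimal

a, b = Decimal("1.4"), Decimal("1.40")

# Equal under the type's equality operator...
assert a == b

# ...but not the same value: the trailing zero (the scale) survives,
# just as numeric '1.40' does in PostgreSQL.
assert (str(a), str(b)) == ("1.4", "1.40")
assert str(a) != str(b)
```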

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Kevin Grittner
Kevin Grittner kgri...@ymail.com wrote:

 If an update happens with a conditional of:

 where col1 = 'Abc'

 When col1 is 'ABC' using citext, should we still issue the
 update?

 Absolutely not, because the update was requested in the case that
 the equality test was true.

Sorry, as if this thread were not long enough, I misread that and
gave the wrong answer.  Yes, the equal operator was used and the
equal operator for two citext values says those are equal, so the
row *should* be updated.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Dimitri Fontaine
Kevin Grittner kgri...@ymail.com writes:
 change, but if '1.4' was stored in a column copied into a matview
 and they later updated the source to '1.40' the increase in
 accuracy would not flow to the matview.  That would be a bug, not a
 feature.

Maybe the answer to that use case is to use the seg extension?

  http://www.postgresql.org/docs/9.3/interactive/seg.html

IOW, colour me unconvinced about that binary-equality opclass use case
in MatViews. We are trusting the btree equality operator about
everywhere in PostgreSQL and it's quite disturbing to be told that in
fact we should not trust it.

Would it make sense for you to produce a patch without the extra
operators, behavior, opclass and opfamily here so that we can focus on
the MatView parts of it?

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Kevin Grittner
Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Kevin Grittner kgri...@ymail.com writes:
 change, but if '1.4' was stored in a column copied into a matview
 and they later updated the source to '1.40' the increase in
 accuracy would not flow to the matview.  That would be a bug, not a
 feature.

 Maybe the answer to that use case is to use the seg extension?

   http://www.postgresql.org/docs/9.3/interactive/seg.html

You are arguing that we should provide lesser support for numeric
columns (and who knows how many other types) in materialized views
than we do in streaming replication, pg_dump,
suppress_redundant_updates_trigger(), and other places?  Why?

 IOW, colour me unconvinced about that binary-equality opclass use case
 in MatViews. We are trusting the btree equality operator about
 everywhere in PostgreSQL and it's quite disturbing to be told that in
 fact we should not trust it.

Who said we should not trust it?  I have said that equality in
PostgreSQL does not mean the two values appear the same or behave
the same in all cases -- the definer of the class gets to determine
how many definitions of equality there are, and which (if any) is
tied to the = operator name.  That should not be news to anybody;
it's in the documentation.  I'm proposing a second definition of
equality with a different operator name for comparing two records,
without in any way disturbing the existing definition.  I am taken
completely by surprise that in this case creating a second opclass
for something is somehow controversial.  The documentation I cited
previously provides a clear example of another case where two
completely different concepts of equality for a type are useful.

We have, as a community, gone to a fair amount of trouble  to make
the concept of equality pluggable and allow multiple types of
equality per type.  To me it seems the perfect tool to solve this
problem.  Why the fuss?

 Would it make sense for you to produce a patch without the extra
 operators, behavior, opclass and opfamily here so that we can focus on
 the MatView parts of it?

No, matviews cannot be fixed without the new operators.  Here are
the stats on the patch:

kgrittn@Kevin-Desktop:~/pg/master$ git diff --stat master..matview
 contrib/citext/expected/citext.out   |   41 +++
 contrib/citext/expected/citext_1.out |   41 +++
 contrib/citext/sql/citext.sql    |   23 ++
 src/backend/commands/matview.c   |    7 +-
 src/backend/utils/adt/rowtypes.c |  482 ++
 src/include/catalog/pg_amop.h    |   10 +
 src/include/catalog/pg_amproc.h  |    1 +
 src/include/catalog/pg_opclass.h |    1 +
 src/include/catalog/pg_operator.h    |   14 +
 src/include/catalog/pg_opfamily.h    |    1 +
 src/include/catalog/pg_proc.h    |   12 +-
 src/include/utils/builtins.h |    7 +
 src/test/regress/expected/opr_sanity.out |    7 +-
 13 files changed, 642 insertions(+), 5 deletions(-)

The changes to matview.c are the only ones that are
matview-specific.  Basically, that consists of using the new
operator instead of = in a couple places.
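
A rough sketch of what those couple of places do differently (illustrative Python only, not the actual matview.c logic; `identical` stands in for the proposed record-image comparison):

```python
from decimal import Decimal

def identical(a, b):
    # Stand-in for the proposed record-image ("binary") comparison:
    # same type and same representation, not merely equal.
    return type(a) is type(b) and str(a) == str(b)

stored = {"k": Decimal("1.4")}   # what the matview currently holds
fresh  = {"k": Decimal("1.40")}  # what the query now produces

# Decide which rows need rewriting during REFRESH.
to_update = [k for k in fresh if not identical(stored[k], fresh[k])]

# Plain equality would have skipped this row; identical-ness catches it.
assert stored["k"] == fresh["k"]
assert to_update == ["k"]
```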

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Freezing without write I/O

2013-09-18 Thread Jeff Janes
On Mon, Sep 16, 2013 at 6:59 AM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:


 Here's a rebased version of the patch, including the above-mentioned
 fixes. Nothing else new.


I've applied this to 0892ecbc015930d, the last commit to which it applies
cleanly.

When I test this by repeatedly incrementing a counter in a randomly chosen
row, then querying the whole table and comparing the results to what my
driver knows they should be, I get discrepancies.

No crash/recovery needs to be done to get the behavior.

The number of rows is correct, so one version of every row is visible, but
it is the wrong version.

The discrepancy arises shortly after the first time this type of message
appears:

6930 UPDATE 2013-09-18 12:36:34.519 PDT:LOG:  started new XID range, XIDs
133-, MultiXIDs 1-, tentative LSN 0/FA517F8
6930 UPDATE 2013-09-18 12:36:34.519 PDT:STATEMENT:  update foo set
count=count+1 where index=$1
6928 UPDATE 2013-09-18 12:36:34.521 PDT:LOG:  closed old XID range at
1000193 (LSN 0/FA58A08)
6928 UPDATE 2013-09-18 12:36:34.521 PDT:STATEMENT:  update foo set
count=count+1 where index=$1

I'll work on getting the driver to shutdown the database the first time it
finds a problem so that autovac doesn't destroy evidence.

Cheers,

Jeff


Re: [HACKERS] Not In Foreign Key Constraint

2013-09-18 Thread Jim Nasby

On 9/16/13 6:16 AM, Misa Simic wrote:

Hi hackers,

I just wonder how hard would be to implement something like Not In FK 
Constraint or opposite to FK...

i.e.:

  FK ensures that the value of the FK column of an inserted row exists in the referenced Table

  NotInFK should ensure that the value of the NotInFK column of an inserted row does not 
exist in the referenced Table...


The only difference/problem I see is that adding that constraint on a Table 
forces the same constraint on another table (but in the opposite direction)


i.e.

TableA(tableA_pk, other_columns)
TableB(tableb_fk_tableA_pk, other_columns)
TableC(tablec_notInfk_tableA_pk, other_column)


each _pk column is the Primary Key of its Table;
TableB's PK is at the same time an FK to TableA...

INSERT INTO TableA VALUES ('tableAPK1', 'somedata')

INSERT INTO TableB VALUES ('tableAPK1', 'somedata')

everything ok,


now, we would like to Add NotInFK on TableC To TableA

INSERT INTO TableC VALUES ('tableAPK1', 'somedata')

Should fail - because 'tableAPK1' exists in TableA

INSERT INTO TableC VALUES ('tableAPK2', 'somedata')

Should pass - because 'tableAPK2' does not exist in TableA...

However, now

INSERT INTO TableA VALUES ('tableAPK2', 'somedata')

should fail as well - because that value exists in TableC


I guess that rule can be achieved with triggers on TableA and TableC - but the 
same is true for FK (and an FK constraint is more efficient than a trigger - that is 
why I wonder whether it would be useful/achievable to create that kind of constraint)
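
The trigger-based version mentioned above can be sketched as mutual membership checks (illustrative Python; the table and key names come from the example above, and the checks mimic what before-insert triggers on both tables would do):

```python
# Sketch of the proposed "Not In FK": a key inserted into TableC must not
# exist in TableA, and vice versa.
class AntiFK:
    def __init__(self):
        self.table_a = set()
        self.table_c = set()

    def insert_a(self, key):
        if key in self.table_c:
            raise ValueError("key already present in TableC")
        self.table_a.add(key)

    def insert_c(self, key):
        if key in self.table_a:
            raise ValueError("key already present in TableA")
        self.table_c.add(key)

rel = AntiFK()
rel.insert_a("tableAPK1")
try:
    rel.insert_c("tableAPK1")   # must fail: exists in TableA
    failed = False
except ValueError:
    failed = True
assert failed
rel.insert_c("tableAPK2")       # passes: not in TableA
```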

Thoughts, ideas?


You're unlikely to find much support for this without use cases.

Why would you want an Anti-FK?
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Questions about checksum feature in 9.3

2013-09-18 Thread Jim Nasby

On 9/16/13 10:14 AM, David Johnston wrote:

The single core aspect is interesting.  Does the implementation have a
dedicated core to perform these calculations or must the same thread that
handles the relevant query perform this work as well?  How much additional
impact/overhead does having to multitask have on the maximum throughput of a
single core in processing checksums?


Postgres doesn't currently have any real kind of parallelism, so whatever 
process needs to do the checksum will be the process actually running the 
checksum.

That said, there are background processes that could potentially be involved 
here, depending on exactly where checksums are being calculated (I don't 
remember exactly when the checks are done). For example, if a buffer is being 
written out by the bgwriter, then it's the bgwriter process that will actually 
do the checksum, not a backend process.
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Assertions in PL/PgSQL

2013-09-18 Thread Jim Nasby

On 9/14/13 11:55 PM, Pavel Stehule wrote:




2013/9/15 Marko Tiikkaja ma...@joh.to

On 2013-09-15 00:09, Pavel Stehule wrote:

this is a possibility for introduction a new hook and possibility 
implement
asserions and similar task in generic form (as extension). it can be
assertions, tracing, profiling.


You can already do tracing and profiling in an extension.  I don't see what 
you would put inside the function body for these two, either.


you cannot mark tracing points explicitly with the current (now unsupported) 
extensions.

These functions share the same pattern:

CREATE OR REPLACE FUNCTION assert(boolean)
RETURNS void AS $$
BEGIN
  IF current_setting('plpgsql.assertions') = 'on' THEN
    IF NOT $1 THEN
      RAISE EXCEPTION 'Assert fails';
    END IF;
  END IF;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION trace(text)
RETURNS void AS $$
BEGIN
  IF current_setting('plpgsql.trace') = 'on' THEN
    RAISE WARNING 'trace: %', $1;
  END IF;
END;
$$ LANGUAGE plpgsql;

Depending on usage, these functions will not be extremely slow compared to a builtin 
solution - it can be faster if we implement it in C, and a little bit faster if we 
implement it as an internal PLpgSQL statement. But if you use any non-trivial 
queries, then the overhead is probably not significant.

You have to watch some global state variable and then execute (or not) some 
functionality.


FWIW, we've written a framework (currently available in the EnovaTools project 
on pgFoundry) that allows for very, very fine-grain control over asserts.

- Every assert has a name (and an optional sub-name) as well as a level
- You can globally set the minimum level that will trigger an assert. This is 
useful for some debugging stuff; have an assert with a negative level and 
normally it won't fire unless you set the minimum level to be less than zero.
- You can disable an assert globally (across all backends)
- You can disable an assert only within your session

We should eventually allow for disabling an assert only for your transaction; 
we just haven't gotten around to it yet.

The reason for all this flexibility is the concept of it should be very difficult 
but not impossible for the code to do X. We use it for sanity-checking things.
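
The framework described above can be sketched in a few lines (illustrative Python; the class and method names are invented, not the EnovaTools API): every assert has a name and a level, a global minimum level decides whether it fires, and individual asserts can be disabled by name.

```python
class Asserts:
    def __init__(self, min_level=0):
        self.min_level = min_level
        self.disabled = set()   # globally / per-session disabled names

    def check(self, name, condition, level=0):
        if level < self.min_level or name in self.disabled:
            return               # assert is switched off
        if not condition:
            raise AssertionError(name)

a = Asserts(min_level=0)
a.check("debug_probe", False, level=-1)   # negative level: ignored by default
a.disabled.add("known_issue")
a.check("known_issue", False)             # disabled by name: ignored
try:
    a.check("sanity", False)              # enabled and false: fires
    fired = False
except AssertionError:
    fired = True
assert fired
```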
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PERFORM] encouraging index-only scans

2013-09-18 Thread Jim Nasby

On 9/17/13 6:10 PM, Andres Freund wrote:

What if we maintained XID stats for ranges of pages in a separate
fork? Call it the XidStats fork. Presumably the interesting pieces
would be min(xmin) and max(xmax) for pages that aren't all visible. If
we did that at a granularity of, say, 1MB worth of pages[1] we're
talking 8 bytes per MB, or 1 XidStats page per GB of heap. (Worst case
alignment bumps that up to 2 XidStats pages per GB of heap.)



Yes, I have thought about similar ideas as well, but I came to the
conclusion that it's not worth it. If you want to make the boundaries
precise and the xidstats fork small, you're introducing new contention
points because every DML will need to make sure it's correct.


Actually, that's not true... the XidStats only need to be relatively precise. 
IE: within a few hundred or thousand XIDs.

So for example, you'd only need to attempt an update if the XID already stored 
was more than a few hundred/thousand/whatever XIDs away from your XID. If it's 
any closer don't even bother to update.

That still leaves potential for thundering herd on the fork buffer lock if 
you've got a ton of DML on one table across a bunch of backends, but there 
might be other ways around that. For example, if you know you can update the 
XID with a CPU-atomic instruction, you don't need to lock the page.
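
The "relatively precise" update rule above can be sketched as follows (illustrative Python; names and the slack value are invented): the per-range minimum XID is only rewritten when the incoming XID has drifted past some slack, so most DML skips the write entirely.

```python
SLACK = 1000  # hypothetical tolerance, "a few hundred/thousand XIDs"

def maybe_update(stats, page_range, xid):
    """Track an approximate min(xmin) per page range with a slack band."""
    cur = stats.get(page_range)
    if cur is None or xid < cur - SLACK:
        stats[page_range] = xid
        return True    # rare: took the write (contention) path
    return False       # common: close enough, no write, no lock traffic

stats = {}
assert maybe_update(stats, 0, 50_000) is True    # first write for the range
assert maybe_update(stats, 0, 49_500) is False   # within slack: skipped
assert maybe_update(stats, 0, 48_000) is True    # drifted past slack
```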


Also, the amount of code that would require seems to be bigger than
justified by the increase of precision when to vacuum.


That's very possibly true. I haven't had a chance to see how much VM bits help 
reduce vacuum overhead yet, so I don't have anything to add on this front. 
Perhaps others might.
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Hannu Krosing
On 09/18/2013 09:19 PM, Dimitri Fontaine wrote:
 Kevin Grittner kgri...@ymail.com writes:
 change, but if '1.4' was stored in a column copied into a matview
 and they later updated the source to '1.40' the increase in
 accuracy would not flow to the matview.  That would be a bug, not a
 feature.
 Maybe the answer to that use case is to use the seg extension?

   http://www.postgresql.org/docs/9.3/interactive/seg.html

 IOW, colour me unconvinced about that binary-equality opclass use case
 in MatViews. We are trusting the btree equality operator about
 everywhere in PostgreSQL and it's quite disturbing to be told that in
 fact we should not trust it.
The problem is, that in this case the simple VIEW and MATVIEW
would yield different results.


-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Hannu Krosing
On 09/18/2013 09:41 PM, Kevin Grittner wrote:
 Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Kevin Grittner kgri...@ymail.com writes:
 change, but if '1.4' was stored in a column copied into a matview
 and they later updated the source to '1.40' the increase in
 accuracy would not flow to the matview.  That would be a bug, not a
 feature.
 Maybe the answer to that use case is to use the seg extension?

http://www.postgresql.org/docs/9.3/interactive/seg.html
 You are arguing that we should provide lesser support for numeric
 columns (and who knows how many other types) in materialized views
 than we do in streaming replication, pg_dump,
 suppress_redundant_updates_trigger(), and other places?  Why?

 IOW, colour me unconvinced about that binary-equality opclass use case
 in MatViews. We are trusting the btree equality operator about
 everywhere in PostgreSQL and it's quite disturbing to be told that in
 fact we should not trust it.
 Who said we should not trust it?  I have said that equality in
 PostgreSQL does not mean the two values appear the same or behave
 the same in all cases -- the definer of the class gets to determine
 how many definitions of equality there are, and which (if any) is
 tied to the = operator name.  That should not be news to anybody;
 it's in the documentation.  I'm proposing a second definition of
 equality with a different operator name for comparing two records,
 without in any way disturbing the existing definition. 
Basically, what the proposed === tests is "is guaranteed to be equal".
If it is not *guaranteed* to be equal, it is safe to re-evaluate, either
using equal or something else.



-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Hannu Krosing
On 09/18/2013 06:05 PM, Stephen Frost wrote:
 * Kevin Grittner (kgri...@ymail.com) wrote:
 Right.  Not only would the per-type solution make materialized views
 maintenance broken by default, requiring per-type work to make it
 work reasonably, with silent failures for any type you didn't know
 about, but no user-visible differences is a pretty slippery
 concept.  
 I don't like those possibilities, of course, but I'm starting to wonder
 about this whole concept of looking at it byte-wise.  If I'm following
 correctly, what we're looking at here is having a way for matviews to
 tell if these bytes are the same as those bytes, for the purpose of
 deciding to update the matview, right? 
Basically, what is needed is to check whether the row *might* have
changed, so that the new value, which may or may not be "equal",
gets refreshed into the matview.
  Yet we can then have cases where
 the row isn't *actually* different from a value perspective, yet we're
 going to update it anyway because it's represented slightly differently?

 What happens if we later want to add support for users to have a matview
 trigger that's called when a matview row *actually* changes?  We'd end
 up calling it on what are essentially false positives, or having to do
 some double-check later on well, did it *really* change?, neither of
 which is good at all.  If we had the IS NOT DISTINCT FROM operators
 discussed, would that work for this even if it isn't as performant?  Or
 is there an issue with that?
IS NOT DISTINCT FROM solves the problem with the weird equality of NULLs
and nothing else, so it would not help here.
What the proposed operator family detects is "possibly changed".

Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Hannu Krosing
On 09/18/2013 05:54 PM, Kevin Grittner wrote:
 ...
 I think the hardest part will be documenting the difference between
 the row value constructor semantics (which are all that is
 currently documented) and the record equality semantics (used for
 sorting and building indexes).  In a green field I think I would
 have argued for having just the standard semantics we have
 documented, and modifying our sort execution nodes and index builds
 to deal with that.  This is one of those cases where the breakage
 from changing to that is hard to justify on a cleaner conceptual
 semantics basis.

 There also seems to be universal agreement that the operator names
 should be something other than what I put in the v1 patch, but we
 don't have agreement on what should be used instead.  We need six
 operators, to support the btree am requirements.  Currently the
 patch has:

=== !== <== <<< >== >>>

 Suggested same as operators so far are:

 
 =
 =
 ==

 Anyone want to champion one of those, or something else?  How about
 the other five operators to go with your favorite?
Another take would be using "possibly unequal" for this, with the operator
defined as

*==* (*definitely* equal, or guaranteed to be equal)

the inequality operator would thus become !== (may be not equal)

and the ordering ops would be ?<== (maybe smaller or equal), ?< (maybe smaller)
and the same for larger: ?>== and ?>

as a table

*==*   binary equal, surely equal by any other definition as well
!==    maybe not equal -- binary unequal, may still be equal by
other comparison methods
?<==   binary smaller or equal, may be anything by other comparison
methods
?<
?>==
?>


Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] record identical operator

2013-09-18 Thread Hannu Krosing
On 09/18/2013 07:53 PM, Stephen Frost wrote:

 I'm really curious about your thoughts on unique indexes then. Should
 two numerics which are the same value but different byte
 representations be allowed in a unique index?
You could have multiple btree opclasses defined which would enforce
different kinds of uniqueness.

For example, you could have an opclass which considers two strings
equal if their first four bytes are equal.

If you created a unique index using that opclass you could not
have both industrial and
induction as primary keys at the same time, as the unique index would
consider them equal.

But you would still want to see the change in your matview after you do

UPDATE mytable set id = 'industrial' where id = 'induction';
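
That hypothetical prefix-equality opclass can be sketched like this (illustrative Python; a dict keyed on the first four bytes plays the role of the unique index):

```python
def prefix_key(s):
    # Stand-in for the opclass's equality: only the first four
    # characters participate in comparisons.
    return s[:4]

unique_index = {}

def insert(value):
    k = prefix_key(value)
    if k in unique_index:
        raise ValueError("duplicate key under prefix equality")
    unique_index[k] = value

insert("industrial")
try:
    insert("induction")   # same first four bytes: "indu"
    rejected = False
except ValueError:
    rejected = True
assert rejected           # "equal" to the index, though visibly different
```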

Cheers


-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Dead code or buggy code?

2013-09-18 Thread Greg Stark
The following code is in ProcSleep at proc.c:1138.
GetBlockingAutoVacuumPgproc() should presumably always return an
autovacuum pgproc entry, since the deadlock state says it's blocked by
autovacuum. But I'm not really familiar enough with this codepath to know
whether there's a race condition where it can sometimes return NULL.
The code below checks autovac != NULL, but the PGXACT initializer would
already have segfaulted if that were possible.

if (deadlock_state == DS_BLOCKED_BY_AUTOVACUUM &&
    allow_autovacuum_cancel)
{
    PGPROC   *autovac = GetBlockingAutoVacuumPgproc();
    PGXACT   *autovac_pgxact = &ProcGlobal->allPgXact[autovac->pgprocno];

    LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);

    /*
     * Only do it if the worker is not working to protect against Xid
     * wraparound.
     */
    if ((autovac != NULL) &&
        (autovac_pgxact->vacuumFlags & PROC_IS_AUTOVACUUM) &&
        !(autovac_pgxact->vacuumFlags & PROC_VACUUM_FOR_WRAPAROUND))
    {


-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] relscan_details.h

2013-09-18 Thread Noah Misch
On Tue, Sep 17, 2013 at 05:54:04PM -0300, Alvaro Herrera wrote:
 Robert Haas wrote:
 
  Personally, I'm not particularly in favor of these kinds of changes.
  The changes we made last time broke a lot of extensions - including
  some proprietary EDB ones that I had to go fix.  I think a lot of
  people spent a lot of time fixing broken builds, at EDB and elsewhere,
  as well as rebasing patches.  And the only benefit we have to balance
  that out is that incremental recompiles are faster, and I'm not really
  sure how important that actually is.  On my system, configure takes 25
  seconds and make -j3 takes 65 seconds; so, even a full recompile is
  pretty darn fast, and an incremental recompile is usually really fast.
   Now granted this is a relatively new system, but still.
 
 Fortunately the machines I work on now are also reasonably fast.  There
 was a time when my desktop was so slow that it paid off to tweak certain
 file timestamps to avoid spurious recompiles.  Now I don't have to
 worry.  But it still annoys me that I have enough time to context-switch
 to, say, the email client or web browser, from where I don't switch back
 so quickly; which means I waste five or ten minutes for a task that
 should have taken 20 seconds.

Right.  If we can speed up a representative sample of incremental recompiles
by 20%, then I'm on board.  At 3%, probably not.  (Alas, even 20% doesn't move
it out of the causes-context-switch category.  For that, I think you need
fundamentally smarter tools.)

 Now, htup_details.h was a bit different than the case at hand because
 there's evidently lots of code that want to deal with the guts of
 tuples, but for scans you mainly want to start one, iterate and finish,
 but don't care much about the innards.  So the cleanup work required is
 going to be orders of magnitude smaller.

There will also be the folks who must add heapam.h and/or genam.h includes
despite formerly getting it/them through execnodes.h.  That's not ugly like
#if PG_VERSION_NUM ..., but it's still work for authors.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UTF8 national character data type support WIP patch and list of open issues.

2013-09-18 Thread MauMau

From: Robert Haas robertmh...@gmail.com

On Mon, Sep 16, 2013 at 8:49 AM, MauMau maumau...@gmail.com wrote:

2. NCHAR/NVARCHAR columns can be used in non-UTF-8 databases and always
contain Unicode data.

...

3. Store strings in UTF-16 encoding in NCHAR/NVARCHAR columns.
Fixed-width encoding may allow faster string manipulation as described in
Oracle's manual.  But I'm not sure about this, because UTF-16 is not a real
fixed-width encoding due to supplementary characters.


It seems to me that these two points here are the real core of your
proposal.  The rest is just syntactic sugar.


No, those are desirable if possible features.  What's important is to 
declare in the manual that PostgreSQL officially supports national character 
types, as I stated below.



1. Accept NCHAR/NVARCHAR as data type name and N'...' syntactically.
This is already implemented.  PostgreSQL treats NCHAR/NVARCHAR as synonyms
for CHAR/VARCHAR, and ignores N prefix.  But this is not documented.

2. Declare support for national character support in the manual.
1 is not sufficient because users don't want to depend on undocumented
behavior.  This is exactly what the TODO item national character support
in PostgreSQL TODO wiki is about.

3. Implement NCHAR/NVARCHAR as distinct data types, not as synonyms so 
that:

- psql \d can display the user-specified data types.
- pg_dump/pg_dumpall can output NCHAR/NVARCHAR columns as-is, not as
CHAR/VARCHAR.
- To implement additional features for NCHAR/NVARCHAR in the future, as
described below.



And when declaring that, we had better implement NCHAR types as distinct 
types with their own OIDs so that we can extend NCHAR behavior in the 
future.
As the first stage, I think it's okay to treat NCHAR types exactly the same 
as CHAR/VARCHAR types.  For example, in ECPG:


switch (type)
{
    case OID_FOR_CHAR:
    case OID_FOR_VARCHAR:
    case OID_FOR_TEXT:
    case OID_FOR_NCHAR:     /* new code */
    case OID_FOR_NVARCHAR:  /* new code */
        /* some processing */
        break;
}
And in JDBC, just call methods for non-national character types. 
Currently, those national character methods throw SQLException.


public void setNString(int parameterIndex, String value) throws SQLException
{
setString(parameterIndex, value);
}



Let me start with the second one: I don't think there's likely to be
any benefit in using UTF-16 as the internal encoding.  In fact, I
think it's likely to make things quite a bit more complicated, because
we have a lot of code that assumes that server encodings have certain
properties that UTF-16 doesn't - specifically, that any byte with the
high-bit clear represents the corresponding ASCII character.
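That encoding property is easy to see concretely. The sketch below is my own illustration (not from any patch): in UTF-8 every byte below 0x80 really is the ASCII character it looks like, while UTF-16 scatters 0x00 bytes even through pure text, so backend code scanning for ASCII delimiters would misfire.

```python
# Illustration: the server-encoding invariant Robert describes is that any
# byte with the high bit clear represents the corresponding ASCII character.
# UTF-8 preserves it; UTF-16 does not (even 'r' becomes 0x72 0x00).

def low_bytes_are_ascii(data: bytes, text: str) -> bool:
    """True if every byte < 0x80 in data is an ASCII character actually
    present in the text -- the property backend code relies on."""
    ascii_bytes = {ord(c) for c in text if ord(c) < 0x80}
    return all(b in ascii_bytes for b in data if b < 0x80)

s = "résumé"  # 'é' is U+00E9, a non-ASCII character

print(low_bytes_are_ascii(s.encode("utf-8"), s))      # True: invariant holds
print(low_bytes_are_ascii(s.encode("utf-16-le"), s))  # False: 0x00 bytes appear
```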

As to the first one, if we're going to go to the (substantial) trouble
of building infrastructure to allow a database to store data in
multiple encodings, why limit it to storing UTF-8 in non-UTF-8
databases?  What about storing SHIFT-JIS in UTF-8 databases, or
Windows-yourfavoriteM$codepagehere in UTF-8 databases, or any other
combination you might care to name?

Whether we go that way or not, I think storing data in one encoding in
a database with a different encoding is going to be pretty tricky and
require far-reaching changes.  You haven't mentioned any of those
issues or discussed how you would solve them.



Yes, you are probably right -- I'm not sure UTF-16 really has benefits that 
UTF-8 doesn't have.  But why did Windows and Java choose UTF-16 for internal 
strings rather than UTF-8?  Why did Oracle recommend UTF-16 for NCHAR?  I 
have no clear idea.  Anyway, I won't push strongly for UTF-16 and complicate 
the encoding handling.


Regards
MauMau





Re: [HACKERS] UTF8 national character data type support WIP patch and list of open issues.

2013-09-18 Thread MauMau

From: Tom Lane t...@sss.pgh.pa.us

Another point to keep in mind is that UTF16 is not really any easier
to deal with than UTF8, unless you write code that fails to support
characters outside the basic multilingual plane.  Which is a restriction
I don't believe we'd accept.  But without that restriction, you're still
forced to deal with variable-width characters; and there's nothing very
nice about the way that's done in UTF16.  So on the whole I think it
makes more sense to use UTF8 for this.


I feel so.  I guess the reason Windows, Java, and Oracle chose UTF-16 is that 
it was still UCS-2, covering only the BMP, when they chose it.  So character 
handling was easier and faster thanks to the fixed-width encoding.
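Tom's point about supplementary characters can be checked in a few lines. This sketch (mine, not from the thread) counts how many 16-bit code units each character needs; anything outside the BMP takes a surrogate pair, so UTF-16 is variable-width after all.

```python
# UTF-16 is only fixed-width inside the Basic Multilingual Plane.
# A character such as U+1F600 needs a surrogate pair (two code units).

def utf16_code_units(ch: str) -> int:
    # Number of 16-bit code units needed to encode this character.
    return len(ch.encode("utf-16-le")) // 2

print(utf16_code_units("A"))           # 1 unit: ASCII, inside the BMP
print(utf16_code_units("\u30c6"))      # 1 unit: Japanese katakana, still BMP
print(utf16_code_units("\U0001F600"))  # 2 units: surrogate pair, outside BMP
```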


Regards
MauMau





Re: [HACKERS] record identical operator

2013-09-18 Thread Dimitri Fontaine
Kevin Grittner kgri...@ymail.com writes:
 You are arguing that we should provide lesser support for numeric
 columns (and who knows how many other types) in materialized views
 than we do in streaming replication, pg_dump,
 suppress_redundant_updates_trigger(), and other places?  Why?

Because you're saying that you need SQL semantics, and probably because
I'm not understanding well enough the problem you're trying to solve.

 We have, as a community, gone to a fair amount of trouble  to make
 the concept of equality pluggable and allow multiple types of
 equality per type.  To me it seems the perfect tool to solve this
 problem.  Why the fuss?

Because I don't understand why you need another equality than the
default btree one, certainly. The other opclasses, to my knowledge, are
only used in relation with index searches, that is when comparing heap
or input values with indexed values, right?

 No, matviews cannot be fixed without the new operators.  Here are
 the stats on the patch:

Ok, then someone (preferably a committer) needs to understand the problem
at hand in a better way than I do now, I guess. If possible I will read
through your patch, I'm curious now.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] record identical operator

2013-09-18 Thread Vik Fearing
On 09/19/2013 12:55 AM, Dimitri Fontaine wrote:
 We have, as a community, gone to a fair amount of trouble  to make
  the concept of equality pluggable and allow multiple types of
  equality per type.  To me it seems the perfect tool to solve this
  problem.  Why the fuss?
 Because I don't understand why you need another equality than the
 default btree one, certainly.

Basically because 'word'::citext and 'Word'::citext are the same to your
brain but not to your eyeballs.

Unique indexes, for example, only need to please your brain.  Matviews
need to please your eyeballs.
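As a rough out-of-database analogy (a sketch with made-up helper names, not actual citext code), the two notions of sameness look like this:

```python
# citext-style equality compares case-folded values ("equal to your brain"),
# while record-identical comparison -- what REFRESH MATERIALIZED VIEW
# CONCURRENTLY needs so the matview looks right -- compares representations
# exactly ("equal to your eyeballs").

def citext_eq(a: str, b: str) -> bool:
    # Case-insensitive equality, as citext's btree opclass would see it.
    return a.lower() == b.lower()

def identical(a: str, b: str) -> bool:
    # Exact representation match.
    return a == b

print(citext_eq("word", "Word"))   # True:  the unique index is satisfied
print(identical("word", "Word"))   # False: the displayed values differ
```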

-- 
Vik





Re: [HACKERS] Not In Foreign Key Constraint

2013-09-18 Thread David Johnston
Misa Simic wrote
 I guess that rule can be achieved with triggers on TableA and TableC - but
 the same is true for FK (and an FK constraint is more effective than a
 trigger -
 that is why I wonder whether it would be useful/achievable to create that
 kind of constraint)
 
 Thoughts, ideas?

You create a common "keys in use" table and only insert a record into the
main tables if you can successfully add the desired key to the shared keys
table (as a unique value).  Set up a normal FK to that table to help
enforce that valid records must exist on the keys table.  Not fool-proof, but
you only need to worry about insertions - delete from the pk table to remove
the record from the main table and free up the key.
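A minimal sketch of that pattern, using sqlite3 for brevity (all table and column names here are made up for illustration; in PostgreSQL the same shape works with real foreign keys and a serializable or locking insert):

```python
# The shared "keys in use" table has a unique key column. A key can be
# claimed by table_a or table_c, but never both, because the claim is an
# INSERT into the shared table and a duplicate key fails.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE keys_in_use (k TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE table_a (k TEXT REFERENCES keys_in_use)")
conn.execute("CREATE TABLE table_c (k TEXT REFERENCES keys_in_use)")

def claim(table: str, key: str) -> bool:
    # Insert into the shared keys table first; a duplicate means the other
    # table already owns the key, so the whole claim rolls back.
    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO keys_in_use VALUES (?)", (key,))
            conn.execute(f"INSERT INTO {table} VALUES (?)", (key,))
        return True
    except sqlite3.IntegrityError:
        return False

print(claim("table_a", "k1"))  # True:  first claim wins
print(claim("table_c", "k1"))  # False: key already used by table_a
```

Deleting from the keys table (after removing the main-table row) frees the key again, matching David's note that only insertions need care.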

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Not-In-Foreign-Key-Constraint-tp5771056p5771546.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




[HACKERS] Freezing without write I/O

2013-09-18 Thread Jeff Janes
On Wed, Sep 18, 2013 at 12:55 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Mon, Sep 16, 2013 at 6:59 AM, Heikki Linnakangas hlinnakan...@vmware.com wrote:


 Here's a rebased version of the patch, including the above-mentioned
 fixes. Nothing else new.


 I've applied this to 0892ecbc015930d, the last commit to which it applies
 cleanly.

 When I test this by repeatedly incrementing a counter in a randomly chosen
 row, then querying the whole table and comparing the results to what my
 driver knows they should be, I get discrepancies.

 No crash/recovery needs to be done to get the behavior.

 The number of rows is correct, so one version of every row is visible, but
 it is sometimes the wrong version.

 The discrepancy arises shortly after the first time this type of message
 appears:

 6930 UPDATE 2013-09-18 12:36:34.519 PDT:LOG:  started new XID range, XIDs
 133-, MultiXIDs 1-, tentative LSN 0/FA517F8
 6930 UPDATE 2013-09-18 12:36:34.519 PDT:STATEMENT:  update foo set
 count=count+1 where index=$1
 6928 UPDATE 2013-09-18 12:36:34.521 PDT:LOG:  closed old XID range at
 1000193 (LSN 0/FA58A08)
 6928 UPDATE 2013-09-18 12:36:34.521 PDT:STATEMENT:  update foo set
 count=count+1 where index=$1

 I'll work on getting the driver to shutdown the database the first time it
 finds a problem so that autovac doesn't destroy evidence.


I have uploaded the script to reproduce, and a tarball of the data
directory (when started, it will go through recovery.  table foo is in
the jjanes database and role.)

https://drive.google.com/folderview?id=0Bzqrh1SO9FcEek51NGEzRmFDVEEusp=sharing

The row with index=8499 should have count of 8, but really has count of 4,
and is only findable by seq scan, there is no such row by index scan.

select ctid,* from foo where index=8499;
select ctid,* from foo where index+0=8499;


select * from heap_page_items(get_raw_page('foo',37)) where lp=248 \x\g\x
Expanded display is on.
-[ RECORD 1 ]-
lp  | 248
lp_off  | 8160
lp_flags| 1
lp_len  | 32
t_xmin  | 2
t_xmax  | 0
t_field3| 0
t_ctid  | (37,248)
t_infomask2 | 32770
t_infomask  | 10496
t_hoff  | 24
t_bits  |
t_oid   |

So the xmax is 0 when it really should not be.

What I really want to do is find the not-visible ctids which would have
8499 for index, but I don't know how to do that.

Cheers,

Jeff


[HACKERS] Some interesting news about Linux 3.12 OOM

2013-09-18 Thread Daniel Farina
I'm not sure how many of you have been tracking this but courtesy of
lwn.net I have learned that it seems that the OOM killer behavior in
Linux 3.12 will be significantly different.  And by description, it
sounds like an improvement.  I thought some people reading -hackers
might be interested.

Based on the description at lwn, excerpted below, it sounds like the
news might be that systems with overcommit on will return ENOMEM when a
non-outlandish request for memory is made from the kernel.


Johannes Weiner has posted a set of patches aimed at improving this
situation. Following a bunch of cleanup work, these patches make two
fundamental changes to how OOM conditions are handled in the kernel.
The first of those is perhaps the most visible: it causes the kernel
to avoid calling the OOM killer altogether for most memory allocation
failures. In particular, if the allocation is being made in response
to a system call, the kernel will just cause the system call to fail
with an ENOMEM error rather than trying to find a process to kill. That
may cause system call failures to happen more often and in different
contexts than they used to. But, naturally, that will not be a problem
since all user-space code diligently checks the return status of every
system call and responds with well-tested error-handling code when
things go wrong.


Subject to experiment, this may be some good news, as many programs,
libraries, and runtime environments that may run parallel to Postgres
on a machine are pretty lackadaisical about limiting the amount of
virtual memory charged to them, and overcommit off is somewhat
punishing in those situations if one really needed a large hash table
from Postgres or whatever.  I've seen some cases here where a good
amount of VM has been reserved and caused apparent memory pressure
that cut throughput short of what ought to be possible.




Re: [HACKERS] Patch for fail-back without fresh backup

2013-09-18 Thread Sawada Masahiko
On Wed, Sep 18, 2013 at 11:45 AM, Fujii Masao masao.fu...@gmail.com wrote:
 On Wed, Sep 18, 2013 at 10:35 AM, Sawada Masahiko sawada.m...@gmail.com 
 wrote:
 On Tue, Sep 17, 2013 at 9:52 PM, Fujii Masao masao.fu...@gmail.com wrote:
 I set up synchronous replication with synchronous_transfer = all, and then 
 I ran
 pgbench -i and executed CHECKPOINT in the master. After that, when I 
 executed
 CHECKPOINT in the standby, it got stuck infinitely. I guess this was caused 
 by
 synchronous_transfer feature.

 Did you set synchronous_standby_names in the standby server?

 Yes.

 If so, the master server waits for the standby server which is set to
 synchronous_standby_names.
 Please let me know detail of this case.

 Both master and standby have the same postgresql.conf settings as follows:

 max_wal_senders = 4
 wal_level = hot_standby
 wal_keep_segments = 32
 synchronous_standby_names = '*'
 synchronous_transfer = all

 How does synchronous_transfer work with cascade replication? If it's set to 
 all
 in the sender-side standby, it can resolve the data page inconsistency 
 between
 two standbys?


 Currently the patch supports the case in which two servers are set up with 
 SYNC replication.
 IOW, a failback-safe standby is the same as a SYNC replication standby.
 The user can set synchronous_transfer only on the master side.

 So, it's very strange that CHECKPOINT on the standby gets stuck infinitely.


I attached the patch which I have modified.

I have modified it so that if both synchronous replication and synchronous
transfer are requested, but the server is still in recovery (i.e., the
server is in standby mode), the server doesn't wait for the corresponding
WAL to be replicated.
Specifically, I added a RecoveryInProgress() condition.

If both functions (synchronous replication and transfer) are set and the
user sets up synchronous replication between two servers, the user can
execute CHECKPOINT on the standby side. It will not wait for the
corresponding WAL to be replicated.
But if both parameters are set and the user doesn't set up synchronous
replication (i.e., the master server works alone), the master server waits
infinitely when the user executes CHECKPOINT. This behaviour is similar to
synchronous replication.


Regards,

---
Sawada Masahiko


synchronous_transfer_v8.patch
Description: Binary data



Re: [HACKERS] Patch for fail-back without fresh backup

2013-09-18 Thread Fujii Masao
On Thu, Sep 19, 2013 at 11:48 AM, Sawada Masahiko sawada.m...@gmail.com wrote:
 I attached the patch which I have modified.

Thanks for updating the patch!

Here are the review comments:

I got the compiler warning:

syncrep.c:112: warning: unused variable 'i'

How does synchronous_transfer work with synchronous_commit?

+ * accept all the likely variants of off.

This comment should be removed because synchronous_transfer
doesn't accept the value off.

+{commit, SYNCHRONOUS_TRANSFER_COMMIT, true},

ISTM the third value true should be false.

+{0, SYNCHRONOUS_TRANSFER_COMMIT, true},

Why is this needed?

+elog(WARNING, XLogSend sendTimeLineValidUpto(%X/%X) =
sentPtr(%X/%X) AND sendTImeLine,
+ (uint32) (sendTimeLineValidUpto  32), (uint32)
sendTimeLineValidUpto,
+ (uint32) (sentPtr  32), (uint32) sentPtr);

Why is this needed?

+#define SYNC_REP_WAIT_FLUSH         1
+#define SYNC_REP_WAIT_DATA_FLUSH    2

Why do we need to separate the wait-queue for wait-data-flush
from that for wait-flush? ISTM that wait-data-flush also can
wait for the replication on the wait-queue for wait-flush, and
which would simplify the patch.

Regards,

-- 
Fujii Masao




Re: [HACKERS] record identical operator

2013-09-18 Thread Kevin Grittner
Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Kevin Grittner kgri...@ymail.com writes:
 You are arguing that we should provide lesser support for numeric
 columns (and who knows how many other types) in materialized views
 than we do in streaming replication, pg_dump,
 suppress_redundant_updates_trigger(), and other places?  Why?

 Because you're saying that you need SQL semantics, and probably because
 I'm not understanding well enough the problem you're trying to solve.

There are examples in the patch and this thread, but rather than
reference back to those I'll add a new one.  Without the patch:

test=# CREATE TABLE nt (id serial PRIMARY KEY, grp citext, num numeric);
CREATE TABLE
test=# INSERT INTO nt (grp, num) VALUES
test-#   ('one', '1.0'),
test-#   ('one', '2.0'),
test-#   ('two', '123.000');
INSERT 0 3
test=# CREATE VIEW nv AS SELECT grp, sum(num) AS num FROM nt GROUP BY grp;
CREATE VIEW
test=# SELECT * FROM nv ORDER BY grp;
 grp |   num   
-----+---------
 one |     3.0
 two | 123.000
(2 rows)

test=# CREATE MATERIALIZED VIEW nm AS SELECT grp, sum(num) AS num FROM nt GROUP 
BY grp;
SELECT 2
test=# CREATE UNIQUE INDEX nm_id ON nm (grp);
CREATE INDEX
test=# SELECT * FROM nm ORDER BY grp;
 grp |   num   
-----+---------
 one |     3.0
 two | 123.000
(2 rows)

test=# UPDATE nt SET grp = 'Two', num = '123.' WHERE id = 3;
UPDATE 1
test=# REFRESH MATERIALIZED VIEW CONCURRENTLY nm;
REFRESH MATERIALIZED VIEW
test=# SELECT * FROM nv ORDER BY grp;
 grp |   num   
-----+---------
 one |     3.0
 Two | 123.
(2 rows)

test=# SELECT * FROM nm ORDER BY grp;
 grp |   num   
-----+---------
 one |     3.0
 two | 123.000
(2 rows)

The problem, as I see it, is that the view and the concurrently
refreshed materialized view don't yield the same results for the
same query.  The rows are equal, but they are not the same.  With
the patch the matview, after RMVC, looks just the same as the view.
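Python's Decimal happens to model the same distinction as PostgreSQL's numeric, which makes the point easy to see outside SQL (a side illustration of mine, not from the patch):

```python
# Values that compare equal under default (btree-style) equality can still
# carry different display scales, so "equal" rows look different to the user.
from decimal import Decimal

a = Decimal("123.000")
b = Decimal("123.0000")

print(a == b)            # True:  equal, like numeric's default operator
print(str(a) == str(b))  # False: the visible representations differ
```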

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] when construct new tuple for update?

2013-09-18 Thread Amit Kapila
On Wed, Sep 18, 2013 at 2:21 PM, mohsen soodkhah mohammadi
mohsensoodk...@gmail.com wrote:
 hi
 I want to find where a new tuple is constructed in the PostgreSQL code
 when the query is an UPDATE.
 I found that ExecModifyTable is an upper-level function for this, but I want
 to find the exact place that creates the data of one row of the table.

heap_form_tuple() constructs a new tuple from the given values[] and
isnull[] arrays and, in case of an UPDATE operation, it gets called in the
below call stack:
ExecModifyTable() -> ExecUpdate() -> ExecMaterializeSlot() -> ExecCopySlotTuple() -> heap_form_tuple()

Values to construct the new tuple are formed during projection in
ExecProject(), which is called as in the below call stack in an UPDATE operation:
ExecModifyTable() -> ExecProcNode() -> ExecSeqScan() -> ExecScan() -> ExecProject()
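For readers unfamiliar with the values[]/isnull[] convention, here is a toy analogue in Python (not PostgreSQL code): heap_form_tuple builds one row from two parallel arrays, one holding the column values and one flagging which columns are NULL.

```python
# Toy analogue of heap_form_tuple(): combine a values array and an isnull
# array into one row, treating flagged entries as NULL (None).
def form_tuple(values, isnull):
    return tuple(None if null else v for v, null in zip(values, isnull))

print(form_tuple([1, "abc", 3.5], [False, False, True]))  # (1, 'abc', None)
```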


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] logical changeset generation v6

2013-09-18 Thread Fujii Masao
On Tue, Sep 17, 2013 at 11:31 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-09-17 09:45:28 -0400, Peter Eisentraut wrote:
 On 9/15/13 11:30 AM, Andres Freund wrote:
  On 2013-09-15 11:20:20 -0400, Peter Eisentraut wrote:
  On Sat, 2013-09-14 at 22:49 +0200, Andres Freund wrote:
  Attached you can find the newest version of the logical changeset
  generation patchset.
 
  You probably have bigger things to worry about, but please check the
  results of cpluspluscheck, because some of the header files don't
  include header files they depend on.
 
  Hm. I tried to get that right, but it's been a while since I last
  checked. I don't regularly use cpluspluscheck because it doesn't work in
  VPATH builds... We really need to fix that.
 
  I'll push a fix for that to the git tree, don't think that's worth a
  resend in itself.

 This patch set now fails to apply because of the commit Rename various
 freeze multixact variables.

 And I am even partially guilty for that patch...

 Rebased patches attached.

When I applied all the patches and do the compile, I got the following error:

gcc -O0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -g -I. -I../../../../src/include -D_GNU_SOURCE   -c -o
snapbuild.o snapbuild.c
snapbuild.c:187: error: redefinition of typedef 'SnapBuild'
../../../../src/include/replication/snapbuild.h:45: note: previous
declaration of 'SnapBuild' was here
make[4]: *** [snapbuild.o] Error 1


When I applied only
0001-wal_decoding-Allow-walsender-s-to-connect-to-a-speci.patch,
compiled the source, and set up the asynchronous replication, I got
the segmentation
fault.

LOG:  server process (PID 12777) was terminated by signal 11:
Segmentation fault

Regards,

-- 
Fujii Masao




Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-09-18 Thread samthakur74
I got the segmentation fault when I tested the case where the
least-executed query statistics is discarded, i.e., when I executed
different queries more than pg_stat_statements.max times. I guess that
the patch might have a bug.
Thanks, will try to fix it.

pg_stat_statements--1.1.sql should be removed.
 Yes, will do that.



  +  <entry><structfield>queryid</structfield></entry>
  +  <entry><type>bigint</type></entry>
  +  <entry></entry>
  +  <entry>Unique value of each representative statement for the
  +   current statistics session.
  +   This value will change for each new statistics session.</entry>

 What does statistics session mean?
 The time period when statistics are gathered by the statistics collector
 without being reset. So the statistics session continues across normal
 shutdowns, but in case of abnormal situations like crashes, format upgrades,
 or statistics being reset for any other reason, a new time period of
 statistics collection starts, i.e. a new statistics session. The queryid
 value generation is linked to the statistics session to emphasize the fact
 that in case of crashes, format upgrades, or any situation of statistics
 reset, the queryid for the same queries will also change. Will update the
 documentation to clearly explain the term statistics session in this context.

 regards
Sameer







--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/pg-stat-statements-calls-under-estimation-propagation-tp5738128p5771562.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.