Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Amit Kapila
On Mon, May 12, 2014 at 7:26 PM, Heikki Linnakangas
 wrote:
> In theory, we could use a snapshot LSN as the cutoff-point for
> HeapTupleSatisfiesVisibility(). Maybe it's just because this is new, but
> that makes me feel uneasy.

To accomplish this, won't an XID-CSN map table be required? And how will
it be maintained (i.e., when are entries added to and cleared from that map table)?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 9.5: UPDATE/DELETE .. ORDER BY .. LIMIT ..

2014-05-12 Thread Amit Kapila
On Sun, May 11, 2014 at 10:17 PM, Tom Lane  wrote:
> Simon Riggs  writes:
>> On 11 May 2014 11:18, Andres Freund  wrote:
>>> I don't know. I'd find UPDATE/DELETE ORDER BY something rather
>>> useful.
>
>> Perhaps if an index exists to provide an ordering that makes it clear
>> what this means, then yes.
>
> The $64 question is whether we'd accept an implementation that fails
> if the target table has children (ie, is partitioned).  That seems
> to me to not be up to the project's usual quality expectations, but
> maybe if there's enough demand for a partial solution we should do so.
>
> It strikes me that a big part of the problem here is that the current
> support for this case assumes that the children don't all have the
> same rowtype.  Which is important if you think of "child table" as
> meaning "subclass in the OO sense".  But for ordinary partitioning
> cases it's just useless complexity, and ModifyTable isn't the only
> thing that suffers from that useless complexity.
>
> If we had a notion of "partitioned table" that involved a restriction
> that all the child tables have the exact same rowtype, we could implement
> UPDATE/DELETE in a much saner fashion --- just one plan tree, not one
> per child table --- and it would be possible to support UPDATE/DELETE
> ORDER BY LIMIT with no more work than for the single-table case.
> So that might shift the calculation as to whether we're willing to
> accept a partial implementation.

I think there are many use cases where the current inheritance mechanism
is used to partition a table without adding new columns in the child
tables, so supporting UPDATE/DELETE .. ORDER BY for those cases would be
quite useful. I'm not sure, though, whether a simpler implementation for
this case is viable alongside the current logic.

> Another idea is that the main reason we do things like this is the
> assumption that for UPDATE, ModifyTable receives complete new rows
> that only need to be pushed back into the table (and hence have
> to already match the rowtype of the specific child table).  What if
> we got rid of that and had the incoming tuples just have the target
> row identifier (tableoid+TID) and the values for the updated columns?
> ModifyTable then would have to visit the old row (something it must
> do anyway, NB), pull out the values for the not-to-be-updated columns,
> form the final tuple and store it.  It could implement this separately
> for each child table, with a different mapping of which columns receive
> the updates.

What about the sorting step? Are you thinking of having a MergeAppend
node for it beneath ModifyTable?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Rajeev rastogi
On 12 May 2014 19:27, Heikki Linnakangas Wrote:
 
> On 01/24/2014 02:10 PM, Rajeev rastogi wrote:
> > We are also planning to implement CSN based snapshot.
> > So I am curious to know whether any further development is happening
> on this.
> 
> I started looking into this, and plan to work on this for 9.5. It's a
> big project, so any help is welcome. The design I have in mind is to
> use the LSN of the commit record as the CSN (as Greg Stark suggested).

Great!

> Some problems and solutions I have been thinking of:
> 
> The core of the design is to store the LSN of the commit record in
> pg_clog. Currently, we only store 2 bits per transaction there,
> indicating if the transaction committed or not, but the patch will
> expand it to 64 bits, to store the LSN. To check the visibility of an
> XID in a snapshot, the XID's commit LSN is looked up in pg_clog, and
> compared with the snapshot's LSN.

Won't it be a bit inefficient to look into pg_clog to read an XID's commit
LSN for every visibility check?
 
> With this mechanism, taking a snapshot is just a matter of reading the
> current WAL insertion point. There is no need to scan the proc array,
> which is good. However, it probably still makes sense to record an xmin
> and an xmax in SnapshotData, for performance reasons. An xmax, in
> particular, will allow us to skip checking the clog for transactions
> that will surely not be visible. We will no longer track the latest
> completed XID or the xmin like we do today, but we can use
> SharedVariableCache->nextXid as a conservative value for xmax, and keep
> a cached global xmin value in shared memory, updated when convenient,
> that can be just copied to the snapshot.

I think we can update xmin whenever the transaction whose XID equals
xmin commits (i.e. in ProcArrayEndTransaction).

Thanks and Regards,
Kumar Rajeev Rastogi





Re: [HACKERS] A couple logical decoding fixes/patches

2014-05-12 Thread Noah Misch
On Sat, May 10, 2014 at 04:56:51PM +0200, Andres Freund wrote:
> On 2014-05-10 00:59:59 -0400, Noah Misch wrote:
> > Static functions having only one call site are especially vulnerable to
> > inlining, so avoid naming them in the suppressions file.  I do see
> > ReorderBufferSerializeChange() inlined away at -O2 and higher.  Is it fair to
> > tie the suppression to ReorderBufferSerializeTXN() instead?
> 
> Hm. That's a good point. If you're talking about tying it to
> ReorderBufferSerializeTXN() you mean to list it below the write, as part
> of the callstack?
> 
> {
>   padding_reorderbuffer_serialize
>   Memcheck:Param
>   write(buf)
> 
>   ...
>   fun:ReorderBufferSerializeTXN
> }
> 
> If so, yes, that should be fine. Since there's no other writes it
> shouldn't make a difference.

Yep.  Committed that way.

> > Do you happen to have a self-contained procedure for causing the server to
> > reach the code in question?
> 
> (cd contrib/test_decoding && make -s installcheck-force)
> against a server running with
> valgrind \
>   --quiet --trace-children=yes --leak-check=no --track-origins=yes \
>   --read-var-info=yes run-pg-dev-master -c logging_collector=on \
>   --suppressions=/home/andres/src/postgresql/src/tools/valgrind.supp
>  \
> -c wal_level=logical -c max_replication_slots=3
> 
> Does the trick here. Valgrind warns in the first (ddl) test run.

Thanks.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] 9.4 release notes

2014-05-12 Thread Bruce Momjian
On Fri, May  9, 2014 at 02:50:05AM +0900, MauMau wrote:
> From: "Bruce Momjian" 
> >I have completed the initial version of the 9.4 release notes.  You can
> >view them here:
> >
> >http://www.postgresql.org/docs/devel/static/release-9-4.html
> >
> >Feedback expected and welcomed.  I expect to be modifying this until we
> >release 9.4 final.  I have marked items where I need help with question
> >marks.
> 
> Could you add the following item, client-only installation on
> Windows, if it's appropriate for the release notes?  It will be useful
> for those like EnterpriseDB who develop products derived from
> PostgreSQL.
> 
> https://commitfest.postgresql.org/action/patch_view?id=1326

Agreed, added:

Allow client-only installs for MSVC
(Windows) builds (MauMau)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] 9.4 release notes

2014-05-12 Thread Bruce Momjian
On Thu, May  8, 2014 at 10:17:27AM +0900, Tatsuo Ishii wrote:
> > I have completed the initial version of the 9.4 release notes.  You can
> > view them here:
> > 
> > http://www.postgresql.org/docs/devel/static/release-9-4.html
> > 
> > I will be adding additional markup in the next few days.
> > 
> > Feedback expected and welcomed.  I expect to be modifying this until we
> > release 9.4 final.  I have marked items where I need help with question
> > marks.
> 
> 
> E.1.3.7.1. System Information Functions
> 
> Add functions for error-free pg_class, pg_proc, pg_type, and pg_operator 
> lookups (Yugo Nagata, Nozomi Anzai, Robert Haas)
> 
> For example, to_regclass() does error-free lookups of pg_class, and 
> returns NULL for lookup failures.
> 
> 
> Probably "error-free" is too strong wording because these functions
> are not actualy error free.
> 
> test=# select to_regclass('a.b.c.d');
> ERROR:  improper relation name (too many dotted names): a.b.c.d
> STATEMENT:  select to_regclass('a.b.c.d');

Agreed.  New text:

Add functions for pg_class,
pg_proc, pg_type, and
pg_operator lookups that do not generate errors for
non-existent objects (Yugo Nagata, Nozomi Anzai,
Robert Haas)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] 9.4 release notes

2014-05-12 Thread Bruce Momjian
On Tue, May  6, 2014 at 03:53:54PM +0200, Nicolas Barbier wrote:
> 2014-05-05 Bruce Momjian :
> 
> > On Mon, May  5, 2014 at 10:40:29AM -0700, Josh Berkus wrote:
> >
> >> * ALTER SYSTEM SET
> >>
> >> Lemme know if you need description text for any of the above.
> >
> > OK, great!  Once I have the markup done, I will beef up the descriptions
> > if needed and copy the text up to the major items section so we have
> > that all ready for beta.
> 
> “Add SQL-level command ALTER SYSTEM command [..]”
> 
> Using “command” twice sounds weird to my ears. Wouldn’t leaving out
> the second instance be better?

Agreed.  Second "command" removed.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] New timezones used in regression tests

2014-05-12 Thread Robert Haas
On Mon, May 12, 2014 at 7:16 PM, Tom Lane  wrote:
> I'm quite unimpressed by the dependency on Mars/Mons_Olympus, too ... that
> might not fail *today*, but considering it's a real location, assuming it
> is not in the IANA database seems like a recipe for future failure.
> Maybe something like Nehwon/Lankhmar?  Or maybe we should not try to be
> cute but just test Foo/Bar.

Personally, I think it would be *awesome* if our regression tests
started failing due to the establishment of Mars/Mons_Olympus as a
real time zone.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Compression of full-page-writes

2014-05-12 Thread Haribabu Kommi
On Tue, May 13, 2014 at 3:33 AM, Fujii Masao  wrote:
> On Sun, May 11, 2014 at 7:30 PM, Simon Riggs  wrote:
>> On 30 August 2013 04:55, Fujii Masao  wrote:
>>
>>> My idea is very simple, just compress FPW because FPW is
>>> a big part of WAL. I used pglz_compress() as a compression method,
>>> but you might think that other method is better. We can add
>>> something like FPW-compression-hook for that later. The patch
>>> adds new GUC parameter, but I'm thinking to merge it to full_page_writes
>>> parameter to avoid increasing the number of GUC. That is,
>>> I'm thinking to change full_page_writes so that it can accept new value
>>> 'compress'.
>>
>>> * Result
>>>   [tps]
>>>   1386.8 (compress_backup_block = off)
>>>   1627.7 (compress_backup_block = on)
>>>
>>>   [the amount of WAL generated during running pgbench]
>>>   4302 MB (compress_backup_block = off)
>>>   1521 MB (compress_backup_block = on)
>>
>> Compressing FPWs definitely makes sense for bulk actions.
>>
>> I'm worried that the loss of performance occurs by greatly elongating
>> transaction response times immediately after a checkpoint, which were
>> already a problem. I'd be interested to look at the response time
>> curves there.
>
> Yep, I agree that we should check how the compression of FPW affects
> the response time, especially just after checkpoint starts.
>
>> I was thinking about this and about our previous thoughts about double
>> buffering. FPWs are made in foreground, so will always slow down
>> transaction rates. If we could move to double buffering we could avoid
>> FPWs altogether. Thoughts?
>
> If I understand the double buffering correctly, it would eliminate the need for
> FPW. But I'm not sure how easy we can implement the double buffering.

There is already a patch on double buffer writes to eliminate FPWs,
but it has a performance problem because of the CRC calculation for the
entire page.

http://www.postgresql.org/message-id/1962493974.656458.1327703514780.javamail.r...@zimbra-prod-mbox-4.vmware.com

I think this patch could be further modified to use the latest multi-core
CRC calculation, and could then be used for testing.

Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] New timezones used in regression tests

2014-05-12 Thread Gavin Flower

On 13/05/14 11:16, Tom Lane wrote:

Christoph Berg  writes:

84df54b22e8035addc7108abd9ff6995e8c49264 introduced timestamp
constructors. In the regression tests, various time zones are tested,
including America/Metlakatla. Now, if you configure using
--with-system-tzdata, you'll get an error if that zone isn't there.
Unfortunately, this is what I'm getting now when trying to build beta1
on Ubuntu 10.04 (lucid) with tzdata 2010i-1:

I agree, that seems an entirely gratuitous choice of zone.  It does
seem like a good idea to test a zone that has a nonintegral offset
from GMT, but we can get that from almost anywhere as long as we're
testing a pre-1900 date.  There's no need to use any zones that aren't
long-established and unlikely to change.

I'm quite unimpressed by the dependency on Mars/Mons_Olympus, too ... that
might not fail *today*, but considering it's a real location, assuming it
is not in the IANA database seems like a recipe for future failure.
Maybe something like Nehwon/Lankhmar?  Or maybe we should not try to be
cute but just test Foo/Bar.

regards, tom lane


You might like to consider the Chatham Islands; they are offset by 45
minutes:

(GMT +12:45 / GMT +13:45)!

Cheers,
Gavin







Re: [HACKERS] New timezones used in regression tests

2014-05-12 Thread Tom Lane
Christoph Berg  writes:
> 84df54b22e8035addc7108abd9ff6995e8c49264 introduced timestamp
> constructors. In the regression tests, various time zones are tested,
> including America/Metlakatla. Now, if you configure using
> --with-system-tzdata, you'll get an error if that zone isn't there.
> Unfortunately, this is what I'm getting now when trying to build beta1
> on Ubuntu 10.04 (lucid) with tzdata 2010i-1:

I agree, that seems an entirely gratuitous choice of zone.  It does
seem like a good idea to test a zone that has a nonintegral offset
from GMT, but we can get that from almost anywhere as long as we're
testing a pre-1900 date.  There's no need to use any zones that aren't
long-established and unlikely to change.

I'm quite unimpressed by the dependency on Mars/Mons_Olympus, too ... that
might not fail *today*, but considering it's a real location, assuming it
is not in the IANA database seems like a recipe for future failure.
Maybe something like Nehwon/Lankhmar?  Or maybe we should not try to be
cute but just test Foo/Bar.

regards, tom lane




Re: [HACKERS] Ignore src/tools/msvc/config.pl in code tree for MSVC compilation

2014-05-12 Thread Michael Paquier
On Tue, May 13, 2014 at 3:16 AM, Tom Lane  wrote:
> Michael Paquier  writes:
>> Actually I am sending an updated patch as buildenv.pl enters in the
>> same category as config.pl.
>
> This seems sane to me; it's in the same category as src/Makefile.custom,
> which we have a .gitignore entry for.  I wondered whether there were any
> more such files, but the documentation at least doesn't mention any.
Maybe there are, but nobody really noticed. I actually bumped into
those by looking at the documentation and the scripts.
-- 
Michael




Re: [HACKERS] New timezones used in regression tests

2014-05-12 Thread Christoph Berg
Re: To PostgreSQL Hackers 2014-05-12 <20140512214025.ga31...@msgid.df7cb.de>
> 84df54b22e8035addc7108abd9ff6995e8c49264 introduced timestamp
> constructors. In the regression tests, various time zones are tested,
> including America/Metlakatla. Now, if you configure using
> --with-system-tzdata, you'll get an error if that zone isn't there.
> Unfortunately, this is what I'm getting now when trying to build beta1
> on Ubuntu 10.04 (lucid) with tzdata 2010i-1:
> 
>   SELECT make_timestamptz(1866, 12, 10, 0, 0, 0, 'America/Metlakatla') AT TIME ZONE 'UTC';
> ! ERROR:  time zone "America/Metlakatla" not recognized
> 
> I can work around it by patching the regression tests, but it would be
> nice if some other zone would be used that wasn't "invented" in 2011.

Fwiw, there is an updated tzdata version in lucid-updates
(2014a-0ubuntu0.10.04), which wasn't used in the pgapt build environment
until now, hence the error. Still, the problem will remain on older
systems, and switching this test to a different time zone seems easy.

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/




[HACKERS] New timezones used in regression tests

2014-05-12 Thread Christoph Berg
84df54b22e8035addc7108abd9ff6995e8c49264 introduced timestamp
constructors. In the regression tests, various time zones are tested,
including America/Metlakatla. Now, if you configure using
--with-system-tzdata, you'll get an error if that zone isn't there.
Unfortunately, this is what I'm getting now when trying to build beta1
on Ubuntu 10.04 (lucid) with tzdata 2010i-1:

  SELECT make_timestamptz(1866, 12, 10, 0, 0, 0, 'America/Metlakatla') AT TIME ZONE 'UTC';
! ERROR:  time zone "America/Metlakatla" not recognized

I can work around it by patching the regression tests, but it would be
nice if some other zone were used that wasn't "invented" in 2011.

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/




Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-05-12 Thread Sergey Muraviov
Hi.

I'll try to fix it tomorrow.


2014-05-12 18:42 GMT+04:00 Tom Lane :

> Greg Stark  writes:
> > On Mon, May 12, 2014 at 2:12 PM, Greg Stark  wrote:
> >> Hm, there was an off by one error earlier in some cases, maybe we
> >> fixed it by breaking other case. Will investigate.
>
> > Those spaces are coming from the ascii wrapping indicators. i.e. the
> periods in:
>
> Ah.  I wonder whether anyone will complain that the format changed?
>
> > Apparently we used to print those with border=1 in normal mode but in
> > expanded mode we left out the space for those on the outermost edges
> > since there was no need for them. If we put them in for wrapped mode
> > then we'll be inconsistent if we don't for nonwrapped mode though. And
> > if we don't put them in for wrapped mode then there's no way to
> > indicate wrapping versus newlines.
>
> Barring anyone complaining that the format changed, I'd say the issue
> is not that you added them but that the accounting for line length
> fails to include them.
>
> regards, tom lane
>



-- 
Best regards,
Sergey Muraviov


Re: [HACKERS] Running DBT2 on postgresql

2014-05-12 Thread Josh Berkus
On 05/12/2014 10:16 AM, Rohit Goyal wrote:
> Hi All,
> 
> Please help me in running DBT2 on postgresql. I am doing it for the first
> time. I am facing error while running dbt2 test.
> 
> I installed dbt2 by following "install" file. Below some final lines of
> terminal.

You'll get more help on the DBT mailing list, which I see you've already
found.

Are you doing this for a GSOC project?  If so, what problem are you
trying to solve with it?

BTW, it's not very friendly to cross-post to 3 mailing lists at the same
time.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Ignore src/tools/msvc/config.pl in code tree for MSVC compilation

2014-05-12 Thread Tom Lane
Michael Paquier  writes:
> Actually I am sending an updated patch as buildenv.pl enters in the
> same category as config.pl.

This seems sane to me; it's in the same category as src/Makefile.custom,
which we have a .gitignore entry for.  I wondered whether there were any
more such files, but the documentation at least doesn't mention any.

regards, tom lane




Re: [HACKERS] Compression of full-page-writes

2014-05-12 Thread Fujii Masao
On Sun, May 11, 2014 at 7:30 PM, Simon Riggs  wrote:
> On 30 August 2013 04:55, Fujii Masao  wrote:
>
>> My idea is very simple, just compress FPW because FPW is
>> a big part of WAL. I used pglz_compress() as a compression method,
>> but you might think that other method is better. We can add
>> something like FPW-compression-hook for that later. The patch
>> adds new GUC parameter, but I'm thinking to merge it to full_page_writes
>> parameter to avoid increasing the number of GUC. That is,
>> I'm thinking to change full_page_writes so that it can accept new value
>> 'compress'.
>
>> * Result
>>   [tps]
>>   1386.8 (compress_backup_block = off)
>>   1627.7 (compress_backup_block = on)
>>
>>   [the amount of WAL generated during running pgbench]
>>   4302 MB (compress_backup_block = off)
>>   1521 MB (compress_backup_block = on)
>
> Compressing FPWs definitely makes sense for bulk actions.
>
> I'm worried that the loss of performance occurs by greatly elongating
> transaction response times immediately after a checkpoint, which were
> already a problem. I'd be interested to look at the response time
> curves there.

Yep, I agree that we should check how the compression of FPW affects
the response time, especially just after checkpoint starts.

> I was thinking about this and about our previous thoughts about double
> buffering. FPWs are made in foreground, so will always slow down
> transaction rates. If we could move to double buffering we could avoid
> FPWs altogether. Thoughts?

If I understand the double buffering correctly, it would eliminate the need for
FPW. But I'm not sure how easy we can implement the double buffering.

Regards,

-- 
Fujii Masao




Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Tom Lane
Peter Eisentraut  writes:
> On 5/12/14, 12:42 PM, Tom Lane wrote:
>> Peter Eisentraut  writes:
>>> You need plv8 master branch (unreleased), which fixes all these issues.

>> How does it deal with the function declaration incompatibility problem?

> commit df92ced297282ffbb13e95748543b6c52ad4d238
> Author: Hitoshi Harada 
> Date:   Wed May 7 01:28:18 2014 -0700

> Remove exception specifier from PG callbacks.

> 9.4 includes function declaration in PG_FUNCTION_INFO_V1 macro, which is
> not compatible with ours using exception specifiers.  Actually I don't
> see the reason we have them so simply I remove them.

> That said, I'm not yet sure what the overall right answer is here.

Hm.  If you're writing SQL functions in C++, you definitely don't want
them throwing any C++ exceptions out to the core backend; so the throw()
declaration is sensible and might help catch coding errors.  That means
that Hitoshi-san's solution is just a quick hack rather than a desirable
answer.

We could perhaps use an "#ifdef __cplusplus" in the declaration of
PG_FUNCTION_INFO_V1 to forcibly put a "throw()" into the extern when
compiling C++.  That would break less-carefully-written C++ code, but
the fix would be easy (unless they are throwing exceptions, but then
they've got a bug to fix anyway).

I'm concerned though that this may not be the only use-case for
decorations on those externs.  A slightly more flexible answer
is to make it look like

#ifdef __cplusplus
#define PG_FUNCTION_DECORATION throw()
#else
#define PG_FUNCTION_DECORATION
#endif

#define PG_FUNCTION_INFO_V1(funcname) \
Datum funcname(PG_FUNCTION_ARGS) PG_FUNCTION_DECORATION; \
extern ...

which would leave the door open for modules to redefine
PG_FUNCTION_DECORATION if they had to.  On the other hand it could
reasonably be argued that that would largely break the point of
having a uniform extern declaration in the first place.

Still wondering if we shouldn't just revert this change as being more
pain than gain.

regards, tom lane




Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Alvaro Herrera
Peter Eisentraut wrote:
> On 5/12/14, 12:42 PM, Tom Lane wrote:
> > Peter Eisentraut  writes:
> >> You need plv8 master branch (unreleased), which fixes all these issues.
> > 
> > How does it deal with the function declaration incompatibility problem?
> 
> commit df92ced297282ffbb13e95748543b6c52ad4d238
> Author: Hitoshi Harada 
> Date:   Wed May 7 01:28:18 2014 -0700
> 
> Remove exception specifier from PG callbacks.
> 
> 9.4 includes function declaration in PG_FUNCTION_INFO_V1 macro, which is
> not compatible with ours using exception specifiers.  Actually I don't
> see the reason we have them so simply I remove them.

Do C++ exception specifiers in fmgr V1 functions work at all?

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Running DBT2 on postgresql

2014-05-12 Thread Rohit Goyal
Hi All,

Please help me with running DBT2 on PostgreSQL. I am doing it for the
first time, and I am facing an error while running the dbt2 test.

I installed dbt2 by following the "install" file. Below are some of the
final lines from the terminal:

Install the project...
-- Install configuration: ""
-- Installing: /home/abhi/dbt2_install/bin/dbt2-client
-- Installing: /home/abhi/dbt2_install/bin/dbt2-datagen
-- Installing: /home/abhi/dbt2_install/bin/dbt2-driver
-- Installing: /home/abhi/dbt2_install/bin/dbt2-transaction-test
-- Installing: /home/abhi/dbt2_install/bin/dbt2-generate-report
-- Installing: /home/abhi/dbt2_install/bin/dbt2-get-os-info
-- Installing: /home/abhi/dbt2_install/bin/dbt2-post-process
-- Installing: /home/abhi/dbt2_install/bin/dbt2-run-workload
-- Installing: /home/abhi/dbt2_install/bin/dbt2-sysstats
-- Installing: /home/abhi/dbt2_install/bin/dbt2-plot-transaction-rate
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-build-db
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-check-db
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-create-db
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-create-indexes
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-create-tables
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-db-stat
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-drop-db
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-drop-tables
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-load-db
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-load-stored-procs
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-plans
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-start-db
-- Installing: /home/abhi/dbt2_install/bin/dbt2-pgsql-stop-db


After that, I am following the readme_postgresql file, but I can't
understand this part:

"A really quick howto.

Edit bin/pgsql/pgsql_profile.in and follow the notes for the DBT2PGDATA
and DBDATA directory.  DBT2PGDATA is where the database directory will
be created and DBDATA is where the database table data will be
generated.

Set environment variables, see examples/dbt2_profile."

Please explain these parameters to me. I have set:

export DBT2INSTALLDIR=/home/abhi/dbt2_install  # change to where you want to install dbt2
export DBT2PGDATA=/home/abhi/project/pgsql/DemoDir  # your postgres database directory
export DBT2DBNAME=dbt2  # keep as it is

I set the above parameters before installation.

Regards,
Rohit Goyal



-- 
Regards,
Rohit Goyal


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Ants Aasma
On Mon, May 12, 2014 at 7:10 PM, Greg Stark  wrote:
> Would it be useful to store the current WAL insertion point along with
> the "about to commit" flag so it's effectively a promise that this
> transaction will commit no earlier than XXX. That should allow most
> transactions to decide if those records are visible or not unless
> they're very recent transactions which started in that short window
> while the committing transaction was in the process of committing.

I don't believe this is worth the complexity. The contention window is
extremely short here.

Regards,
Ants Aasma
-- 
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de




Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Peter Eisentraut
On 5/12/14, 12:42 PM, Tom Lane wrote:
> Peter Eisentraut  writes:
>> You need plv8 master branch (unreleased), which fixes all these issues.
> 
> How does it deal with the function declaration incompatibility problem?

commit df92ced297282ffbb13e95748543b6c52ad4d238
Author: Hitoshi Harada 
Date:   Wed May 7 01:28:18 2014 -0700

Remove exception specifier from PG callbacks.

9.4 includes function declaration in PG_FUNCTION_INFO_V1 macro, which is
not compatible with ours using exception specifiers.  Actually I don't
see the reason we have them so simply I remove them.


That said, I'm not yet sure what the overall right answer is here.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-05-12 Thread Fujii Masao
On Tue, May 13, 2014 at 1:36 AM, Tom Lane  wrote:
> Fujii Masao  writes:
>> On Mon, May 12, 2014 at 8:40 PM, Heikki Linnakangas
>>  wrote:
>>> I agree the new behavior is better, and we should just remove the Open Items
>>> entry.
>
>> Yes. I just removed that entry.
>
> Our practice in past years has been to move items to a separate "Resolved
> Issues" section rather than just delete them.  I fixed the page to look
> that way.

Yes. Thanks!

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Tom Lane
Peter Eisentraut  writes:
> You need plv8 master branch (unreleased), which fixes all these issues.

How does it deal with the function declaration incompatibility problem?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Pavel Stehule
2014-05-12 18:36 GMT+02:00 Peter Eisentraut :

> On 5/12/14, 11:05 AM, Pavel Stehule wrote:
> > After reverting to before this commit, I still cannot compile PL/V8, but
> > with a more tractable error:
> >
> >  g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
> > -I/usr/local/pgsql/include/internal -D_GNU_SOURCE
> > -I/usr/include/libxml2  -fPIC -c -o plv8.o plv8.cc
> > g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
> > -I/usr/local/pgsql/include/internal -D_GNU_SOURCE
> > -I/usr/include/libxml2  -fPIC -c -o plv8_type.o plv8_type.cc
> > g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
> > -I/usr/local/pgsql/include/internal -D_GNU_SOURCE
> > -I/usr/include/libxml2  -fPIC -c -o plv8_func.o plv8_func.cc
> > plv8_func.cc: In function ‘v8::Handle plv8_Prepare(const
> > v8::Arguments&)’:
> > plv8_func.cc:521:47: error: too few arguments to function ‘void
> > parseTypeString(const char*, Oid*, int32*, bool)’
> >parseTypeString(typestr, &types[i], &typemod);
> >^
> > In file included from plv8_func.cc:22:0:
> > /usr/local/pgsql/include/server/parser/parse_type.h:50:13: note:
> > declared here
> >  extern void parseTypeString(const char *str, Oid *typeid_p, int32
> > *typmod_p, bool missing_ok);
> >  ^
> > make: *** [plv8_func.o] Error 1
> >
> > so the main issue is really this commit
>
> You need plv8 master branch (unreleased), which fixes all these issues.
>  No released version of plv8 works with 9.4 at the moment.
>

ok, I'll check it

Thank you

Pavel


Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-05-12 Thread Tom Lane
Fujii Masao  writes:
> On Mon, May 12, 2014 at 8:40 PM, Heikki Linnakangas
>  wrote:
>> I agree the new behavior is better, and we should just remove the Open Items
>> entry.

> Yes. I just removed that entry.

Our practice in past years has been to move items to a separate "Resolved
Issues" section rather than just delete them.  I fixed the page to look
that way.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Peter Eisentraut
On 5/12/14, 11:05 AM, Pavel Stehule wrote:
> After reverting to before this commit, I still cannot compile PL/V8, but
> with a more tractable error:
> 
>  g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
> -I/usr/local/pgsql/include/internal -D_GNU_SOURCE
> -I/usr/include/libxml2  -fPIC -c -o plv8.o plv8.cc
> g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
> -I/usr/local/pgsql/include/internal -D_GNU_SOURCE
> -I/usr/include/libxml2  -fPIC -c -o plv8_type.o plv8_type.cc
> g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
> -I/usr/local/pgsql/include/internal -D_GNU_SOURCE
> -I/usr/include/libxml2  -fPIC -c -o plv8_func.o plv8_func.cc
> plv8_func.cc: In function ‘v8::Handle plv8_Prepare(const
> v8::Arguments&)’:
> plv8_func.cc:521:47: error: too few arguments to function ‘void
> parseTypeString(const char*, Oid*, int32*, bool)’
>parseTypeString(typestr, &types[i], &typemod);
>^
> In file included from plv8_func.cc:22:0:
> /usr/local/pgsql/include/server/parser/parse_type.h:50:13: note:
> declared here
>  extern void parseTypeString(const char *str, Oid *typeid_p, int32
> *typmod_p, bool missing_ok);
>  ^
> make: *** [plv8_func.o] Error 1
> 
> so the main issue is really this commit

You need plv8 master branch (unreleased), which fixes all these issues.
 No released version of plv8 works with 9.4 at the moment.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Andres Freund
On 2014-05-12 19:14:55 +0300, Heikki Linnakangas wrote:
> On 05/12/2014 06:26 PM, Andres Freund wrote:
> >>>With the new "commit-in-progress" status in clog, we won't need the
> >>>sub-committed clog status anymore. The "commit-in-progress" status will
> >>>achieve the same thing.
> >Wouldn't that cause many spurious waits? Because commit-in-progress
> >needs to be waited on, but a sub-committed xact surely not?
> 
> Ah, no. Even today, a subxid isn't marked as sub-committed, until you commit
> the top-level transaction. The sub-commit state is a very transient state
> during the commit process, used to make the commit of the sub-transactions
> and the top-level transaction appear atomic. The commit-in-progress state
> would be a similarly short-lived state. You mark the subxids and the top xid
> as commit-in-progress just before the XLogInsert() of the commit record, and
> you replace them with the real LSNs right after XLogInsert().

Ah, right. Forgot that detail...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-05-12 Thread Fujii Masao
On Mon, May 12, 2014 at 8:40 PM, Heikki Linnakangas
 wrote:
> On 05/12/2014 02:29 PM, Fujii Masao wrote:
>>
>> Hmm.. probably I have the same opinion with you. If I understand this
>> correctly,
>> an immediate shutdown doesn't call CancelBackup() in 9.4 or before. But
>> the
>> commit 82233ce unintentionally changed an immediate shutdown so that it
>> calls
>> CancelBackup().
>
>
> Oh, sorry. I thought it was the other way 'round: that we used to remove
> backup_label on an immediate shutdown on 9.3 and before, but that 9.4
> doesn't do that anymore. Now that I re-read this thread and tested it
> myself, I see that I got it backwards.
>
> I agree the new behavior is better, and we should just remove the Open Items
> entry.

Yes. I just removed that entry.

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Heikki Linnakangas

On 05/12/2014 06:26 PM, Andres Freund wrote:

>> With the new "commit-in-progress" status in clog, we won't need the
>> sub-committed clog status anymore. The "commit-in-progress" status will
>> achieve the same thing.
>
> Wouldn't that cause many spurious waits? Because commit-in-progress
> needs to be waited on, but a sub-committed xact surely not?


Ah, no. Even today, a subxid isn't marked as sub-committed, until you 
commit the top-level transaction. The sub-commit state is a very 
transient state during the commit process, used to make the commit of 
the sub-transactions and the top-level transaction appear atomic. The 
commit-in-progress state would be a similarly short-lived state. You 
mark the subxids and the top xid as commit-in-progress just before the 
XLogInsert() of the commit record, and you replace them with the real 
LSNs right after XLogInsert().


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Greg Stark
On Mon, May 12, 2014 at 2:56 PM, Heikki Linnakangas
 wrote:
> Currently, before consulting the clog for an XID's status, it is necessary
> to first check if the transaction is still in progress by scanning the proc
> array. To get rid of that requirement, just before writing the commit record
> in the WAL, the backend will mark the clog slot with a magic value that says
> "I'm just about to commit". After writing the commit record, it is replaced
> with the record's actual LSN. If a backend sees the magic value in the clog,
> it will wait for the transaction to finish the insertion, and then check
> again to get the real LSN. I'm thinking of just using XactLockTableWait()
> for that. This mechanism makes the insertion of a commit WAL record and
> updating the clog appear atomic to the rest of the system.


Would it be useful to store the current WAL insertion point along with
the "about to commit" flag so it's effectively a promise that this
transaction will commit no earlier than XXX. That should allow most
transactions to decide if those records are visible or not unless
they're very recent transactions which started in that short window
while the committing transaction was in the process of committing.

-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Ants Aasma
On Mon, May 12, 2014 at 6:09 PM, Robert Haas  wrote:
> However, I wonder what
> happens if you write the commit record and then the attempt to update
> pg_clog fails.  I think you'll have to PANIC, which kind of sucks.

CLOG IO error while committing is already a PANIC: SimpleLruReadPage()
does SlruReportIOError(), which in turn does ereport(ERROR), while
inside a critical section initiated in RecordTransactionCommit().

Regards,
Ants Aasma
-- 
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Ants Aasma
On Mon, May 12, 2014 at 4:56 PM, Heikki Linnakangas
 wrote:
> On 01/24/2014 02:10 PM, Rajeev rastogi wrote:
>>
>> We are also planning to implement CSN based snapshot.
>> So I am curious to know whether any further development is happening on
>> this.
>
>
> I started looking into this, and plan to work on this for 9.5. It's a big
> project, so any help is welcome. The design I have in mind is to use the LSN
> of the commit record as the CSN (as Greg Stark suggested).

I did do some coding work on this, but the free time I used to work on
this basically disappeared with a child in the family. I guess what I
have has bitrotted beyond recognition. However I may still have some
insight that may be of use.

From your comments I presume that you are going with the original,
simpler approach proposed by Robert to simply keep the XID-CSN map
around for ever and probe it for all visibility lookups that lie
outside of the xmin-xmax range? As opposed to the more complex hybrid
approach I proposed that keeps a short term XID-CSN map and lazily
builds conventional list-of-concurrent-XIDs snapshots for long-lived
snapshots. I think that would be prudent, as the simpler approach
needs mostly the same groundwork, and if it turns out to work well
enough, simpler is always better.

Regards,
Ants Aasma
-- 
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Robert Haas
On Mon, May 12, 2014 at 10:41 AM, Andres Freund  wrote:
> On 2014-05-12 16:56:51 +0300, Heikki Linnakangas wrote:
>> On 01/24/2014 02:10 PM, Rajeev rastogi wrote:
>> >We are also planning to implement CSN based snapshot.
>> >So I am curious to know whether any further development is happening on 
>> >this.
>>
>> I started looking into this, and plan to work on this for 9.5. It's a big
>> project, so any help is welcome. The design I have in mind is to use the LSN
>> of the commit record as the CSN (as Greg Stark suggested).
>
> Cool.

Yes, very cool.  I remember having some concerns about using the LSN
of the commit record as the CSN.  I think the biggest one was the need
to update clog with the CSN before the commit record had been written,
which your proposal to store a temporary sentinel value there until
the commit has completed might address.  However, I wonder what
happens if you write the commit record and then the attempt to update
pg_clog fails.  I think you'll have to PANIC, which kind of sucks.  It
would be nice to pin the pg_clog page into the SLRU before writing the
commit record so that we don't have to fear needing to re-read it
afterwards, but the SLRU machinery doesn't currently have that notion.

Another thing to think about is that LSN = CSN will make things much
more difficult if we ever want to support multiple WAL streams with a
separate LSN sequence for each.  Perhaps you'll say that's a pipe
dream anyway, and I agree it's probably 5 years out, but I think it
may be something we'll want eventually.  With synthetic CSNs those
systems are more decoupled.  OTOH, one advantage of LSN = CSN is that
the commit order as seen on the standby would always match the commit
order as seen on the master, which is currently not true, and would be
a very nice property to have.

I think we're likely to find that system performance is quite
sensitive to any latency in updating the global-xmin.  One thing about
the present system is that if you take a snapshot while a very "old"
transaction is still running, you're going to use that as your
global-xmin for the entire lifetime of your transaction.  It might be
possible, with some of the rejiggering you're thinking about, to
arrange things so that there are opportunities for processes to roll
forward their notion of the global-xmin, making HOT pruning more
efficient.  Whether anything good happens there or not is sort of a
side issue, but we need to make sure the efficiency of HOT pruning
doesn't regress.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Pavel Stehule
2014-05-12 16:31 GMT+02:00 Tom Lane :

> Andrew Dunstan  writes:
> > On 05/12/2014 07:10 AM, Pavel Stehule wrote:
> >> I am trying to compile PL/v8 without success. I have Postgres
> >> installed via compilation from source code.
>
> >> plv8.cc:50:56: error: declaration of ‘Datum
> >> plv8_call_handler(FunctionCallInfo) throw ()’ has a different
> >> exception specifier
> >> Datum plv8_call_handler(PG_FUNCTION_ARGS) throw();
> >> ^
> >> plv8.cc:43:7: error: from previous declaration ‘Datum
> >> plv8_call_handler(FunctionCallInfo)’
> >> PG_FUNCTION_INFO_V1(plv8_call_handler);
>
> > This looks like a result of commit
> > <
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e7128e8dbb305059c30ec085461297e619bcbff4
> >
>
> Ouch.  I was a bit suspicious of that change from the start, but it hadn't
> occurred to me that functions written in C++ would have an issue with it.
>
> > Maybe we need a way of telling the preprocessor to suppress the
> > generation of a prototype?
>
> Maybe we need to revert that patch altogether.  Dealing with this is
> likely to introduce much more pain and confusion than the change is worth.
>

After reverting to before this commit, I still cannot compile PL/V8, but
with a more tractable error:

 g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
-I/usr/local/pgsql/include/internal -D_GNU_SOURCE -I/usr/include/libxml2
-fPIC -c -o plv8.o plv8.cc
g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
-I/usr/local/pgsql/include/internal -D_GNU_SOURCE -I/usr/include/libxml2
-fPIC -c -o plv8_type.o plv8_type.cc
g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
-I/usr/local/pgsql/include/internal -D_GNU_SOURCE -I/usr/include/libxml2
-fPIC -c -o plv8_func.o plv8_func.cc
plv8_func.cc: In function ‘v8::Handle plv8_Prepare(const
v8::Arguments&)’:
plv8_func.cc:521:47: error: too few arguments to function ‘void
parseTypeString(const char*, Oid*, int32*, bool)’
   parseTypeString(typestr, &types[i], &typemod);
   ^
In file included from plv8_func.cc:22:0:
/usr/local/pgsql/include/server/parser/parse_type.h:50:13: note: declared
here
 extern void parseTypeString(const char *str, Oid *typeid_p, int32
*typmod_p, bool missing_ok);
 ^
make: *** [plv8_func.o] Error 1

so the main issue is really this commit

Regards

Pavel

P.S. My tests on 9.2 were probably messy.





>
> regards, tom lane
>


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Heikki Linnakangas

On 05/12/2014 05:41 PM, Andres Freund wrote:

> I haven't fully thought it through but I think it should make some of
> the decoding code simpler. And it should greatly simplify the hot
> standby code.


Cool. I was worried it might conflict with the logical decoding stuff in 
some fundamental way, as I'm not really familiar with it.



> Some of the stuff in here will influence whether your freezing
> replacement patch gets in. Do you plan to further pursue that one?


Not sure. I got to the point where it seemed to work, but I got a bit of 
cold feet proceeding with it. I used the page header's LSN field to
define the "epoch" of the page, but I started to feel uneasy about it. I 
would be much more comfortable with an extra field in the page header, 
even though that uses more disk space. And requires dealing with pg_upgrade.



>> The core of the design is to store the LSN of the commit record in pg_clog.
>> Currently, we only store 2 bits per transaction there, indicating if the
>> transaction committed or not, but the patch will expand it to 64 bits, to
>> store the LSN. To check the visibility of an XID in a snapshot, the XID's
>> commit LSN is looked up in pg_clog, and compared with the snapshot's LSN.
>
> We'll continue to need some of the old states? You plan to use values
> that can never be valid lsns for them? I.e. 0/0 IN_PROGRESS, 0/1 ABORTED
> etc?


Exactly.

Using 64 bits per XID instead of just 2 will obviously require a lot 
more disk space, so we might actually want to still support the old clog 
format too, as an "archive" format. The clog for old transactions could 
be converted to the more compact 2-bits per XID format (or even just 1 bit).



> How do you plan to deal with subtransactions?


pg_subtrans will stay unchanged. We could possibly merge it with 
pg_clog, reserving some 32-bit chunk of values that are not valid LSNs 
to mean an uncommitted subtransaction, with the parent XID. That assumes 
that you never need to look up the parent of an already-committed 
subtransaction. I thought that was true at first, but I think the SSI 
code looks up the parent of a committed subtransaction, to find its 
predicate locks. Perhaps it could be changed, but seems best to leave it 
alone for now; there will be a lot of code churn anyway.


I think we can get rid of the sub-XID array in PGPROC. It's currently 
used to speed up TransactionIdIsInProgress(), but with the patch it will 
no longer be necessary to call TransactionIdIsInProgress() every time 
you check the visibility of an XID, so it doesn't need to be so fast 
anymore.


With the new "commit-in-progress" status in clog, we won't need the 
sub-committed clog status anymore. The "commit-in-progress" status will 
achieve the same thing.



>> Currently, before consulting the clog for an XID's status, it is necessary
>> to first check if the transaction is still in progress by scanning the proc
>> array. To get rid of that requirement, just before writing the commit record
>> in the WAL, the backend will mark the clog slot with a magic value that says
>> "I'm just about to commit". After writing the commit record, it is replaced
>> with the record's actual LSN. If a backend sees the magic value in the clog,
>> it will wait for the transaction to finish the insertion, and then check
>> again to get the real LSN. I'm thinking of just using XactLockTableWait()
>> for that. This mechanism makes the insertion of a commit WAL record and
>> updating the clog appear atomic to the rest of the system.
>
> So it's quite possible that clog will become more of a contention point
> due to the doubled amount of writes.


Yeah. OTOH, each transaction will take more space in the clog, which 
will spread the contention across more pages. And I think there are ways 
to mitigate contention in clog, if it becomes a problem. We could make 
the locking more fine-grained than one lock per page, use atomic 64-bit 
reads/writes on platforms that support it, etc.



>> In theory, we could use a snapshot LSN as the cutoff-point for
>> HeapTupleSatisfiesVisibility(). Maybe it's just because this is new, but
>> that makes me feel uneasy.
>
> It'd possibly also end up being less efficient because you'd visit the
> clog for potentially quite some transactions to get the LSN.


True.

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-05-12 Thread Tom Lane
Greg Stark  writes:
> On Mon, May 12, 2014 at 2:12 PM, Greg Stark  wrote:
>> Hm, there was an off-by-one error earlier in some cases; maybe we
>> fixed it by breaking another case. Will investigate.

> Those spaces are coming from the ASCII wrapping indicators, i.e. the periods
> in:

Ah.  I wonder whether anyone will complain that the format changed?

> Apparently we used to print those with border=1 in normal mode but in
> expanded mode we left out the space for those on the outermost edges
> since there was no need for them. If we put them in for wrapped mode
> then we'll be inconsistent if we don't for nonwrapped mode though. And
> if we don't put them in for wrapped mode then there's no way to
> indicate wrapping versus newlines.

Barring anyone complaining that the format changed, I'd say the issue
is not that you added them but that the accounting for line length
fails to include them.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Andres Freund
On 2014-05-12 16:56:51 +0300, Heikki Linnakangas wrote:
> On 01/24/2014 02:10 PM, Rajeev rastogi wrote:
> >We are also planning to implement CSN based snapshot.
> >So I am curious to know whether any further development is happening on this.
> 
> I started looking into this, and plan to work on this for 9.5. It's a big
> project, so any help is welcome. The design I have in mind is to use the LSN
> of the commit record as the CSN (as Greg Stark suggested).

Cool.

I haven't fully thought it through but I think it should make some of
the decoding code simpler. And it should greatly simplify the hot
standby code.

Some of the stuff in here will influence whether your freezing
replacement patch gets in. Do you plan to further pursue that one?

> The core of the design is to store the LSN of the commit record in pg_clog.
> Currently, we only store 2 bits per transaction there, indicating if the
> transaction committed or not, but the patch will expand it to 64 bits, to
> store the LSN. To check the visibility of an XID in a snapshot, the XID's
> commit LSN is looked up in pg_clog, and compared with the snapshot's LSN.

We'll continue to need some of the old states? You plan to use values
that can never be valid lsns for them? I.e. 0/0 IN_PROGRESS, 0/1 ABORTED
etc?
How do you plan to deal with subtransactions?

> Currently, before consulting the clog for an XID's status, it is necessary
> to first check if the transaction is still in progress by scanning the proc
> array. To get rid of that requirement, just before writing the commit record
> in the WAL, the backend will mark the clog slot with a magic value that says
> "I'm just about to commit". After writing the commit record, it is replaced
> with the record's actual LSN. If a backend sees the magic value in the clog,
> it will wait for the transaction to finish the insertion, and then check
> again to get the real LSN. I'm thinking of just using XactLockTableWait()
> for that. This mechanism makes the insertion of a commit WAL record and
> updating the clog appear atomic to the rest of the system.

So it's quite possible that clog will become more of a contention point
due to the doubled amount of writes.

> In theory, we could use a snapshot LSN as the cutoff-point for
> HeapTupleSatisfiesVisibility(). Maybe it's just because this is new, but
> that makes me feel uneasy.

It'd possibly also end up being less efficient because you'd visit the
clog for potentially quite some transactions to get the LSN.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Select queries which violates table constrains

2014-05-12 Thread Heikki Linnakangas

On 05/10/2014 09:24 PM, Joni Martikainen wrote:

> Hi,
>
> I investigated some select query performance issues and noticed that
> postgresql misses some obvious cases while processing a SELECT query. I
> mean the case where the WHERE clause contains a condition that would
> contradict the table structure. (excuse my language, look at the code)
>
> Example:
> Let the table be:
>
> CREATE TABLE test
> (
> id numeric(3,0) NOT NULL,
> somecolumn numeric(5,0) NOT NULL,
> CONSTRAINT id_pk PRIMARY KEY (id)
> );
>
> A simple table with a "somecolumn" column which has a NOT NULL constraint.
>
> Let's run the following query against the table:
>
> SELECT somecolumn FROM test WHERE somecolumn IS NULL;
>
> The result is an empty result set, which is obvious because any null value
> would violate the table constraint.
> The thing here is that postgresql does a SeqScan over this table in order
> to find out if there are any null values.
>
> Explain:
> "Seq Scan on test  (cost=0.00..1.06 rows=1 width=5)"
> "  Filter: (somecolumn IS NULL)"
> "Planning time: 0.778 ms"
>
> The SeqScan can be avoided by creating an index on "somecolumn" and
> indexing all the null values. That index would be empty and very fast, but
> also quite pointless since the table constraint here is simple.
> No one would write such a query in real life, but some programmatically
> generated queries do this kind of thing. The only way I found to work
> around this problem was to create those empty indexes, but I think the
> query optimizer could be smarter here.
>
> I took a look at the optimizer code and didn't find any code that
> avoids this kind of situation. (I expect that it would be the optimizer's
> task to detect this kind of thing.)
>
> I was thinking of a feature where the optimizer could add a hint for the
> executor if some query plan path leads to the empty-result-set case. If
> the executor sees this hint it could avoid doing a seqscan, and actually
> even index scans. This kind of query-constraint vs. table-constraint
> comparison should in any case be cheaper to execute than a seqscan.
>
> The question is: is there any reason why such an optimization phase
> could not be implemented? Another question is how the query engine
> handles the partitioned-table case. Am I right that table partitions are
> resolved via table constraints, and indexes are used to determine which
> child table to look at? And so forth, could this kind of new
> optimization phase benefit partitioned tables?


Actually, the planner can perform that optimization. The trick is called 
"constraint exclusion". It is typically used for partitioning, where the 
WHERE-clause restricts the query to a single partition, and you would 
otherwise have to scan all the partitions. It is not usually a very 
useful optimization, and it is somewhat expensive to check for that 
case, so it is disabled by default except for partitioned tables. But if 
you do "set constraint_exclusion=on", you will get the plan you're 
looking for:


postgres=# set  constraint_exclusion=on;
SET
postgres=# explain SELECT somecolumn FROM test WHERE somecolumn IS NULL;
QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
   One-Time Filter: false
 Planning time: 0.071 ms
(3 rows)

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Select queries which violates table constrains

2014-05-12 Thread Tom Lane
Joni Martikainen  writes:
> I investigated some select query performance issues and noticed that 
> postgresql misses some obvious cases while processing SELECT query. I 
> mean the case where WHERE clause contains statement which condition 
> would be against table structure. (excuse my language, look the code)

Your example does what you want if you set constraint_exclusion to ON:

regression=# explain SELECT somecolumn FROM test WHERE somecolumn IS NULL;
  QUERY PLAN  
--
 Seq Scan on test  (cost=0.00..25.10 rows=8 width=12)
   Filter: (somecolumn IS NULL)
 Planning time: 0.055 ms
(3 rows)

regression=# set constraint_exclusion = on;
SET
regression=# explain SELECT somecolumn FROM test WHERE somecolumn IS NULL;
QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
   One-Time Filter: false
 Planning time: 0.065 ms
(3 rows)

There may be other cases where the planner could be smarter, but in this
particular case it intentionally doesn't check for this sort of situation
by default, because (as you say) the case only happens with badly-written
queries, and (as the above output demonstrates) we take rather a big hit
in planning time to make those checks.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Tom Lane
Andrew Dunstan  writes:
> On 05/12/2014 07:10 AM, Pavel Stehule wrote:
>> I am trying to compile PL/v8 without success. I have Postgres 
>> installed via compilation from source code.

>> plv8.cc:50:56: error: declaration of ‘Datum 
>> plv8_call_handler(FunctionCallInfo) throw ()’ has a different 
>> exception specifier
>> Datum plv8_call_handler(PG_FUNCTION_ARGS) throw();
>> ^
>> plv8.cc:43:7: error: from previous declaration ‘Datum 
>> plv8_call_handler(FunctionCallInfo)’
>> PG_FUNCTION_INFO_V1(plv8_call_handler);

> This looks like a result of commit 
> 
>  

Ouch.  I was a bit suspicious of that change from the start, but it hadn't
occurred to me that functions written in C++ would have an issue with it.
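For readers unfamiliar with the C++ rule involved: the commit makes the macro emit a prototype for the handler, and C++ (unlike C) rejects a later declaration of the same function with a different exception specifier. A minimal self-contained sketch — the macro and types here are hypothetical simplifications, not the real fmgr.h/postgres.h definitions:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-ins for PostgreSQL's types; the real ones come from
// postgres.h and fmgr.h.
typedef uintptr_t Datum;
typedef struct FunctionCallInfoData *FunctionCallInfo;

// Simplified version of what PG_FUNCTION_INFO_V1 does after the commit in
// question: it also emits a prototype for the handler itself.
#define PG_FUNCTION_INFO_V1(funcname) \
    extern "C" Datum funcname(FunctionCallInfo fcinfo);

PG_FUNCTION_INFO_V1(plv8_call_handler);

// PL/v8 declares its handlers with "throw()"; in C++ that is a different
// exception specifier from the macro-generated prototype above, so g++
// rejects it. Dropping the mismatched "throw()" (or matching the generated
// prototype exactly) makes it compile:
extern "C" Datum
plv8_call_handler(FunctionCallInfo fcinfo)
{
    (void) fcinfo;
    return (Datum) 42;          /* dummy body for the sketch */
}
```

Adding "throw()" back to either declaration reproduces the "has a different exception specifier" error quoted above.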

> Maybe we need a way of telling the preprocessor to suppress the 
> generation of a prototype?

Maybe we need to revert that patch altogether.  Dealing with this is
likely to introduce much more pain and confusion than the change is worth.

regards, tom lane




Re: [HACKERS] Ignore src/tools/msvc/config.pl in code tree for MSVC compilation

2014-05-12 Thread Michael Paquier
On Mon, May 12, 2014 at 3:08 PM, Michael Paquier
 wrote:
> Hi all,
>
> MSVC build uses two configuration perl files when running:
> config_default.pl and config.pl. The former is mandatory and is
> present in the code tree, while the latter can be used to override
> settings with some custom parameters. As far as I understand from the
> docs, config.pl should be used only in a custom environment and should
> never be committed. Hence, why not add a .gitignore in
> src/tools/msvc to ignore this file? This would prevent unfortunate
> commits that include this file and impact the devs working on
> Windows. The attached patch does that. I think it should be
> back-patched for consistency across branches.
Actually I am sending an updated patch, as buildenv.pl falls into the
same category as config.pl.
-- 
Michael
From d1e22a13cb732952facc5fa563bfe44d25437eff Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Mon, 12 May 2014 16:04:11 +0900
Subject: [PATCH] Ignore config.pl and buildenv.pl in src/tools/msvc

config.pl and buildenv.pl can be used to override build settings when
using MSVC and should never be included in the code tree.
---
 src/tools/msvc/.gitignore | 3 +++
 1 file changed, 3 insertions(+)
 create mode 100644 src/tools/msvc/.gitignore

diff --git a/src/tools/msvc/.gitignore b/src/tools/msvc/.gitignore
new file mode 100644
index 000..3a7a928
--- /dev/null
+++ b/src/tools/msvc/.gitignore
@@ -0,0 +1,3 @@
+# Custom configuration file for MSVC build
+/config.pl
+/buildenv.pl
-- 
1.9.2




[HACKERS] Ignore src/tools/msvc/config.pl in code tree for MSVC compilation

2014-05-12 Thread Michael Paquier
Hi all,

MSVC build uses two configuration perl files when running:
config_default.pl and config.pl. The former is mandatory and is
present in the code tree, while the latter can be used to override
settings with some custom parameters. As far as I understand from the
docs, config.pl should be used only in a custom environment and should
never be committed. Hence, why not add a .gitignore in
src/tools/msvc to ignore this file? This would prevent unfortunate
commits that include this file and impact the devs working on
Windows. The attached patch does that. I think it should be
back-patched for consistency across branches.
Regards,
-- 
Michael
From 31dc80972a2ea626b63ab22f3c2c8735e15f9582 Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Mon, 12 May 2014 15:06:01 +0900
Subject: [PATCH] Ignore config.pl in src/tools/msvc

config.pl can be used to override build settings when using MSVC and
should never be included in the code tree.
---
 src/tools/msvc/.gitignore | 2 ++
 1 file changed, 2 insertions(+)
 create mode 100644 src/tools/msvc/.gitignore

diff --git a/src/tools/msvc/.gitignore b/src/tools/msvc/.gitignore
new file mode 100644
index 000..dc9d8c0
--- /dev/null
+++ b/src/tools/msvc/.gitignore
@@ -0,0 +1,2 @@
+# Custom configuration file for MSVC build
+/config.pl
-- 
1.9.2




[HACKERS] Select queries which violates table constrains

2014-05-12 Thread Joni Martikainen

Hi,

I investigated some SELECT query performance issues and noticed that 
PostgreSQL misses some obvious cases while processing a SELECT query. I 
mean the case where the WHERE clause contains a condition that would 
contradict the table structure. (excuse my language; see the code below)


Example:
Let the table be :

CREATE TABLE test
(
  id numeric(3,0) NOT NULL,
  somecolumn numeric(5,0) NOT NULL,
  CONSTRAINT id_pk PRIMARY KEY (id)
);

Simple table with "somecolumn" column which has constraint NOT NULL.

Let's do a following query to the table.

SELECT somecolumn FROM test WHERE somecolumn IS NULL;

The result is an empty result set, which is obvious because any null value 
would violate the table constraint.
The thing here is that PostgreSQL does a SeqScan on this table in order to 
find out whether there are any null values.


Explain:
"Seq Scan on test  (cost=0.00..1.06 rows=1 width=5)"
"  Filter: (somecolumn IS NULL)"
"Planning time: 0.778 ms"

The SeqScan can be avoided by creating an index on "somecolumn" that indexes 
all the null values. That index would be empty and very fast, but also 
rather pointless since the table constraint here is simple.
No one would write such a query by hand, but some programmatically 
generated queries do this kind of thing. The only way I found to work 
around this problem was to create those empty indexes, but I think the 
query optimizer could be smarter here.


I took a look at the optimizer code and didn't find anything that 
avoids this kind of situation. (I expect it would be the optimizer's 
task to detect such cases.)


I was thinking of a feature where the optimizer could add a hint for 
the executor if some query plan path provably leads to an empty result 
set. If the executor sees this hint it could skip the seqscan, and 
even index scans. Comparing query constraints against table 
constraints should in any case be cheaper to execute than a 
seqscan.


The question is: is there any reason why such an optimization phase 
could not be implemented? Another question is how the query 
engine handles the partitioned table case. Am I right that table 
partitioning relies on table constraints, with indexes used to decide 
which child table to look at? And consequently, could this kind of 
new optimization phase benefit partitioned tables?



Kind regards
Joni Martikainen





Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Pavel Stehule
2014-05-12 15:42 GMT+02:00 Andrew Dunstan :

>
> On 05/12/2014 07:10 AM, Pavel Stehule wrote:
>
>> Hello
>>
>> I am trying to compile PL/v8 without success. I have Postgres installed
>> via compilation from source code.
>>
>> After make I got errors
>>
>> [pavel@localhost plv8-1.4.2]$ make
>> g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
>> -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -I/usr/include/libxml2
>>  -fPIC -c -o plv8.o plv8.cc
>> plv8.cc:50:56: error: declaration of ‘Datum 
>> plv8_call_handler(FunctionCallInfo)
>> throw ()’ has a different exception specifier
>>  Datum plv8_call_handler(PG_FUNCTION_ARGS) throw();
>> ^
>> plv8.cc:43:7: error: from previous declaration ‘Datum plv8_call_handler(
>> FunctionCallInfo)’
>>  PG_FUNCTION_INFO_V1(plv8_call_handler);
>>
>
> This looks like a result of commit
> <http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e7128e8dbb305059c30ec085461297e619bcbff4>
> Maybe we need a way of telling the preprocessor to suppress
> the generation of a prototype?
>

I got the same result with the 9.2.4 tarball released 2014-04-04

Pavel


>
> cheers
>
> andrew
>
>


Re: [HACKERS] Proposal for CSN based snapshots

2014-05-12 Thread Heikki Linnakangas

On 01/24/2014 02:10 PM, Rajeev rastogi wrote:

We are also planning to implement CSN based snapshot.
So I am curious to know whether any further development is happening on this.


I started looking into this, and plan to work on this for 9.5. It's a 
big project, so any help is welcome. The design I have in mind is to use 
the LSN of the commit record as the CSN (as Greg Stark suggested).


Some problems and solutions I have been thinking of:

The core of the design is to store the LSN of the commit record in 
pg_clog. Currently, we only store 2 bits per transaction there, 
indicating if the transaction committed or not, but the patch will 
expand it to 64 bits, to store the LSN. To check the visibility of an 
XID in a snapshot, the XID's commit LSN is looked up in pg_clog, and 
compared with the snapshot's LSN.


Currently, before consulting the clog for an XID's status, it is 
necessary to first check if the transaction is still in progress by 
scanning the proc array. To get rid of that requirement, just before 
writing the commit record in the WAL, the backend will mark the clog 
slot with a magic value that says "I'm just about to commit". After 
writing the commit record, it is replaced with the record's actual LSN. 
If a backend sees the magic value in the clog, it will wait for the 
transaction to finish the insertion, and then check again to get the 
real LSN. I'm thinking of just using XactLockTableWait() for that. This 
mechanism makes the insertion of a commit WAL record and updating the 
clog appear atomic to the rest of the system.
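A single-threaded sketch of that commit sequence (hypothetical names and in-memory stand-ins for pg_clog and the WAL; the real protocol additionally needs the XactLockTableWait() handshake for concurrent readers):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

typedef uint32_t TransactionId;
typedef uint64_t XLogRecPtr;

// Magic clog value meaning "commit record is being written right now".
// A reader that sees it would wait (XactLockTableWait in the proposal)
// and then re-read the slot to get the real LSN.
static const XLogRecPtr COMMITTING_MAGIC = UINT64_MAX;

static std::map<TransactionId, XLogRecPtr> clog;   // stand-in for pg_clog
static XLogRecPtr wal_insert_ptr = 0;              // stand-in for the WAL insert position

static XLogRecPtr
InsertCommitRecord(void)
{
    return ++wal_insert_ptr;    /* pretend a commit record was inserted here */
}

// The three-step dance that makes "WAL insert + clog update" look atomic.
void
RecordTransactionCommit(TransactionId xid)
{
    clog[xid] = COMMITTING_MAGIC;              /* 1. mark the slot */
    XLogRecPtr lsn = InsertCommitRecord();     /* 2. write the WAL record */
    clog[xid] = lsn;                           /* 3. publish the real LSN */
}
```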


With this mechanism, taking a snapshot is just a matter of reading the 
current WAL insertion point. There is no need to scan the proc array, 
which is good. However, it probably still makes sense to record an xmin 
and an xmax in SnapshotData, for performance reasons. An xmax, in 
particular, will allow us to skip checking the clog for transactions 
that will surely not be visible. We will no longer track the latest 
completed XID or the xmin like we do today, but we can use 
SharedVariableCache->nextXid as a conservative value for xmax, and keep 
a cached global xmin value in shared memory, updated when convenient, 
that can be just copied to the snapshot.
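Put together, the visibility rule described above reduces to a clog lookup plus an LSN comparison. Again a hypothetical sketch — the xmin/xmax fast-path checks and subtransactions are omitted:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

typedef uint32_t TransactionId;
typedef uint64_t XLogRecPtr;

static std::map<TransactionId, XLogRecPtr> clog;   // xid -> commit-record LSN
static XLogRecPtr wal_insert_ptr = 0;              // stand-in for the WAL insert position

// Taking a snapshot is just reading the current WAL insert position.
XLogRecPtr
GetSnapshotLSN(void)
{
    return wal_insert_ptr;
}

// An XID is visible to a snapshot iff it committed, and its commit
// record's LSN is at or before the snapshot's LSN.
bool
XidVisibleInSnapshot(TransactionId xid, XLogRecPtr snapshot_lsn)
{
    std::map<TransactionId, XLogRecPtr>::const_iterator it = clog.find(xid);
    if (it == clog.end())
        return false;           /* still in progress, or aborted */
    return it->second <= snapshot_lsn;
}
```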


In theory, we could use a snapshot LSN as the cutoff-point for 
HeapTupleSatisfiesVisibility(). Maybe it's just because this is new, but 
that makes me feel uneasy. In any case, I think we'll need a cut-off 
point defined as an XID rather than an LSN for freezing purposes. In 
particular, we need a cut-off XID to determine how far the pg_clog can 
be truncated, and to store in relfrozenxid. So, we will still need the 
concept of a global oldest xmin.


When a snapshot is just an LSN, taking a snapshot can no longer 
calculate an xmin, like we currently do (there will be a snapshot LSN in 
place of an xmin in the proc array). So we will need a new mechanism to 
calculate the global oldest xmin. First scan the proc array to find the 
oldest still in-progress XID. That - 1 will become the new oldest global 
xmin, after all currently active snapshots have finished. We don't want 
to sleep in GetOldestXmin(), waiting for the snapshots to finish, so we 
should periodically advance a system-wide oldest xmin value, for example 
whenever the walwrite process wakes up, so that when we need an 
oldest-xmin value, we will always have a fairly recently calculated 
value ready in shared memory.
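A sketch of that oldest-xmin bookkeeping (hypothetical names; a single-threaded stand-in for the proc array and the periodically updated shared value, skipping the wait for active snapshots to finish):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

typedef uint32_t TransactionId;

// Stand-in for the proc array: XIDs of transactions currently in progress.
static std::vector<TransactionId> running_xids;

// Periodically recomputed shared value (e.g. whenever walwriter wakes up);
// GetOldestXmin() would just read this instead of sleeping.
static TransactionId shared_oldest_xmin = 0;

// next_xid is the conservative upper bound when nothing is running.
void
AdvanceOldestXmin(TransactionId next_xid)
{
    TransactionId oldest = next_xid;
    for (TransactionId xid : running_xids)
        if (xid < oldest)
            oldest = xid;       /* oldest still in-progress XID */

    // oldest - 1 becomes the new global xmin; the real mechanism only
    // publishes it after all currently active snapshots have finished.
    shared_oldest_xmin = oldest - 1;
}
```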


- Heikki




Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Andrew Dunstan


On 05/12/2014 07:10 AM, Pavel Stehule wrote:

Hello

I am trying to compile PL/v8 without success. I have Postgres 
installed via compilation from source code.


After make I got errors

[pavel@localhost plv8-1.4.2]$ make
g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server 
-I/usr/local/pgsql/include/internal -D_GNU_SOURCE 
-I/usr/include/libxml2  -fPIC -c -o plv8.o plv8.cc
plv8.cc:50:56: error: declaration of ‘Datum 
plv8_call_handler(FunctionCallInfo) throw ()’ has a different 
exception specifier

 Datum plv8_call_handler(PG_FUNCTION_ARGS) throw();
^
plv8.cc:43:7: error: from previous declaration ‘Datum 
plv8_call_handler(FunctionCallInfo)’

 PG_FUNCTION_INFO_V1(plv8_call_handler);


This looks like a result of commit 
<http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e7128e8dbb305059c30ec085461297e619bcbff4>
Maybe we need a way of telling the preprocessor to suppress the 
generation of a prototype?


cheers

andrew





Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-05-12 Thread Greg Stark
On Mon, May 12, 2014 at 2:12 PM, Greg Stark  wrote:
> Hm, there was an off by one error earlier in some cases, maybe we
> fixed it by breaking other case. Will investigate.

Those spaces are coming from the ASCII wrapping indicators, i.e. the periods in:

+-++
|a   +| a +|
|+| b  |
|b||
+-++
| xx  | yy |
|    +|   +|
| xx +| yy+|
|    +|   +|
| xx +| yy+|
|    +|   +|
| xx +| yy+|
| xxx.|   +|
|.x  +| yy+|
| xxx.||
|.xxx+||
| xxx.||
|.x   ||
+-++

Apparently we used to print those with border=1 in normal mode but in
expanded mode we left out the space for those on the outermost edges
since there was no need for them. If we put them in for wrapped mode
then we'll be inconsistent if we don't for nonwrapped mode though. And
if we don't put them in for wrapped mode then there's no way to
indicate wrapping versus newlines.

The biggest difference it makes is that in the border=1 mode the lines
ended at the end of the data previously. Now it's expanded to fill the
rectangle because of the plus symbols. ie. It used to look like:

-[ RECORD 1 ]---
a | xx
  |
b |
a | yy
b |
-[ RECORD 2 ]---
a | 
  | xx
b | 
  | xx
  | 
  | xx
  | 
  | xx
  | 
a | 
b | yy
  | 
  | yy
  | 
  | yy
  | 
  | yy
  |

and now looks like:

-[ RECORD 1 ]---
 a+| xx
  +|
 b |
 a+| yy
 b |
-[ RECORD 2 ]---
 a+| +
  +| xx  +
 b | +
   | xx  +
   | +
   | xx  +
   | +
   | xx  +
   | 
 a+| +
 b | yy  +
   | +
   | yy  +
   | +
   | yy  +
   | +
   | yy  +
   |



-- 
greg




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-05-12 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> On Sun, May  4, 2014 at 11:12:57AM -0400, Tom Lane wrote:
> > Stephen Frost  writes:
> > > * Abhijit Menon-Sen (a...@2ndquadrant.com) wrote:
> > >> 1. I wish it were possible to prevent even the superuser from disabling
> > >> audit logging once it's enabled, so that if someone gained superuser
> > >> access without authorisation, their actions would still be logged.
> > >> But I don't think there's any way to do this.
> > 
> > > Their actions should be logged up until they disable auditing and
> > > hopefully those logs would be sent somewhere that they're unable to
> > > destroy (eg: syslog).  Of course, we make that difficult by not
> > > supporting log targets based on criteria (logging EVERYTHING to syslog
> > > would suck).
> > 
> > > I don't see a way to fix this, except to minimize the amount of things
> > > requiring superuser to reduce the chances of it being compromised, which
> > > is something I've been hoping to see happen for a long time.
> > 
> > Prohibiting actions to the superuser is a fundamentally flawed concept.
> > If you do that, you just end up having to invent a new "more super"
> > kind of superuser who *can* do whatever it is that needs to be done.
> 
> We did create a "replication" role that could only read data, right?  Is
> that similar?

Not sure which of the above discussions you're suggesting it's 'similar'
to, but a 'read-only' role (which is specifically *not* a superuser)
would definitely help reduce the number of things which need to run as
an actual 'superuser' (eg: pg_dump).

The above discussion was around having auditing which the superuser
couldn't change, which isn't really possible as a superuser can change
the code that's executing (modulo things like SELinux changing the
game, but that's outside PG to some extent).

Thanks,

Stephen




Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-05-12 Thread Greg Stark
On Mon, May 12, 2014 at 2:00 PM, Tom Lane  wrote:
> but where did those leading spaces come from?  The header line is
> definitely not on board with that, and I think those spaces are
> contributing to the lines being too long for the window.  I think
> possibly the code is also adding a space that shouldn't be there
> at the end of the lines, because it prints lines that wrap around
> if I \pset columns to either 79 or 80 in an 80-column window, so
> the accounting is off by 2 someplace.

Hm, there was an off by one error earlier in some cases, maybe we
fixed it by breaking other case. Will investigate.


-- 
greg




Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-05-12 Thread Tom Lane
Emre Hasegeli  writes:
> Pavel Stehule :
>> I am checking feature
>> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6513633b94173fc1d9e2b213c43f9422ddbf5faa
>> 
>> It works perfectly with the pager "less", but badly with the default "more"

> I do not think so. It looks broken with or without any pager when
> border != 2. Your less configuration might be hiding the problem from you.

This seems broken in several ways.  I tried this test case:

regression=# \x \pset format wrapped
Expanded display (expanded) is on.
Output format (format) is wrapped.
regression=# select * from pg_proc where prolang!=12;

In 9.3, the output looks like this:

-[ RECORD 1 ]---+---


proname | to_timestamp
pronamespace| 11
proowner| 10
prolang | 14
procost | 1
prorows | 0
provariadic | 0
protransform| -
...

In HEAD, I see:

-[ RECORD 1 ]---+---
 proname | to_timestamp 
  
 pronamespace| 11   
  
 proowner| 10   
  
 prolang | 14   
  
 procost | 1
  
 prorows | 0
  
 provariadic | 0
  
 protransform| -
  
After "\pset columns 77" it looks a little better:

-[ RECORD 1 ]---+
 proname | to_timestamp
 pronamespace| 11  
 proowner| 10  
 prolang | 14  
 procost | 1   
 prorows | 0   
 provariadic | 0   
 protransform| -   
 proisagg| f   
 proiswindow | f   

but where did those leading spaces come from?  The header line is
definitely not on board with that, and I think those spaces are
contributing to the lines being too long for the window.  I think
possibly the code is also adding a space that shouldn't be there
at the end of the lines, because it prints lines that wrap around
if I \pset columns to either 79 or 80 in an 80-column window, so
the accounting is off by 2 someplace.

Also, this code looks quite broken:

 width = dwidth + swidth + hwidth;
 if ((output_columns > 0) && (width > output_columns))
 {
 dwidth = output_columns - hwidth - swidth;
 width = output_columns;
 }

What happens if output_columns is less than hwidth + swidth?  The code
goes crazy is what happens, because all of these are unsigned ints and so
wraparound leads to setting dwidth to something approaching 4 billion.
Try the same example after "\pset columns 10".  I don't necessarily expect
it to produce beautiful output, but I do expect it to not lock up.
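One way to see the failure mode, and a guarded version of the computation (a sketch using psql's variable names; the function itself is hypothetical, not the actual fix):

```cpp
#include <cassert>

// With unsigned arithmetic, output_columns - hwidth - swidth wraps around
// to a value near 4 billion whenever output_columns < hwidth + swidth.
// Guarding the subtraction keeps dwidth sane on very narrow windows.
unsigned int
compute_dwidth(unsigned int dwidth, unsigned int hwidth,
               unsigned int swidth, unsigned int output_columns)
{
    unsigned int width = dwidth + swidth + hwidth;

    if (output_columns > 0 && width > output_columns)
    {
        if (output_columns > hwidth + swidth)
            dwidth = output_columns - hwidth - swidth;
        else
            dwidth = 1;         /* degenerate window: keep at least one column */
    }
    return dwidth;
}
```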

regards, tom lane




Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Pavel Stehule
2014-05-12 13:45 GMT+02:00 Michael Paquier :

> On Mon, May 12, 2014 at 8:10 PM, Pavel Stehule 
> wrote:
> > g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
> > -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -I/usr/include/libxml2
> > -fPIC -c -o plv8.o plv8.cc
> > plv8.cc:50:56: error: declaration of 'Datum
> > plv8_call_handler(FunctionCallInfo) throw ()' has a different exception
> > specifier
> >  Datum plv8_call_handler(PG_FUNCTION_ARGS) throw();
> > Some ideas how to fix it?
> It seems that you are compiling on the outdated branch staticlink. On
> either master or r1.4 it will work properly on Fedora 20, at least it
> works for me.
>

How can I check that?

I had the same bug with Scientific Linux, and I expected this problem would be
solved on new Fedora. On a second computer with a newer system I had the same problem.

The problem may also be in my g++ environment - it is a default Fedora setup.

Pavel


> --
> Michael
>


Re: [HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Michael Paquier
On Mon, May 12, 2014 at 8:10 PM, Pavel Stehule  wrote:
> g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
> -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -I/usr/include/libxml2
> -fPIC -c -o plv8.o plv8.cc
> plv8.cc:50:56: error: declaration of 'Datum
> plv8_call_handler(FunctionCallInfo) throw ()' has a different exception
> specifier
>  Datum plv8_call_handler(PG_FUNCTION_ARGS) throw();
> Some ideas how to fix it?
It seems that you are compiling on the outdated branch staticlink. On
either master or r1.4 it will work properly on Fedora 20, at least it
works for me.
-- 
Michael




Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-05-12 Thread Heikki Linnakangas

On 05/12/2014 02:29 PM, Fujii Masao wrote:

Hmm.. probably I have the same opinion as you. If I understand this correctly,
an immediate shutdown didn't call CancelBackup() in 9.3 or before, but
commit 82233ce unintentionally changed immediate shutdown so that it calls
CancelBackup().


Oh, sorry. I thought it was the other way 'round: that we used to remove 
backup_label on an immediate shutdown on 9.3 and before, but that 9.4 
doesn't do that anymore. Now that I re-read this thread and tested it 
myself, I see that I got it backwards.


I agree the new behavior is better, and we should just remove the Open 
Items entry.


- Heikki




Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-05-12 Thread Emre Hasegeli
Pavel Stehule :
> Hello
>
> I am checking feature
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6513633b94173fc1d9e2b213c43f9422ddbf5faa
>
> It works perfectly with the pager "less", but badly with the default "more"
>
> see attached screenshots, pls
>
> Is this the expected behavior?

I do not think so. It looks broken with or without any pager when
border != 2. Your less configuration might be hiding the problem from you.

I think it is because of miscalculation of the width used by
the separators. Increasing this variable for border = 0 and 1 fixed
the problem, but it might not be the right fix. The patch without
regression test changes attached.

While looking at it, I found another problem. It seems to me, a minus sign
is missing after -[RECORD  ] when border = 1.


psql-wrapped-expanded-fix.patch
Description: Binary data



Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-05-12 Thread Fujii Masao
On Mon, May 12, 2014 at 4:52 PM, Heikki Linnakangas
 wrote:
> On 05/09/2014 05:19 PM, Fujii Masao wrote:
>>
>> On Thu, Mar 20, 2014 at 11:38 PM, Alvaro Herrera
>>  wrote:
>>>
>>> Kyotaro HORIGUCHI escribió:

 Hi, I confirmed that 82233ce7ea4 surely did it.

 At Wed, 19 Mar 2014 09:35:16 -0300, Alvaro Herrera wrote
>
> Fujii Masao escribió:
>>
>> On Wed, Mar 19, 2014 at 7:57 PM, Heikki Linnakangas
>>  wrote:
>
>
 9.4 cancels backup mode even on immediate shutdown so the
 operation causes no problem, but 9.3 and before don't.
>>>
>>>
>>> Hmm, I don't think we've changed that behavior in 9.4.
>>
>>
>> ISTM 82233ce7ea42d6ba519aaec63008aff49da6c7af changed immediate
>> shutdown that way.
>
>
> Uh, interesting.  I didn't see that secondary effect.  I hope it's not
> for ill?


 The crucial factor for the behavior change is that pmdie no longer
 exits immediately on SIGQUIT. 'case SIGQUIT:' in
 pmdie() ended with "ExitPostmaster(0)" before the patch but now
 it ends with 'PostmasterStateMachine(); break;' so continues to
 run with pmState = PM_WAIT_BACKENDS, similar to SIGINT (fast
 shutdown).

 After all, pmState changes to PM_NO_CHILDREN via PM_WAIT_DEAD_END
 by SIGCHLDs from non-significant processes, then CancelBackup().
>>>
>>>
>>> Judging from what was being said on the thread, it seems that running
>>> CancelBackup() after an immediate shutdown is better than not doing it,
>>> correct?
>>
>>
>> This is listed as a 9.4 Open Item, but no one seems to want to revert
>> this change.
>> So I'll drop this from the Open Item list barring objections.
>
>
> I object. We used to call CancelBackup() on immediate shutdown, which was
> good. That was inadvertently changed by commit 82233ce. That's a regression
> we should fix. I agree with Alvaro upthread that we don't want to revert
> 82233ce, but we should come up with a fix.

Hmm.. probably I have the same opinion as you. If I understand this correctly,
an immediate shutdown didn't call CancelBackup() in 9.3 or before, but
commit 82233ce unintentionally changed immediate shutdown so that it calls
CancelBackup(). For now, no one wants to revert the current behavior, so I think
there is nothing we have to do now. No?

Regards,

-- 
Fujii Masao




Re: [HACKERS] pg_class.relpages/allvisible probably shouldn't be a int4

2014-05-12 Thread Tom Lane
Andres Freund  writes:
> On 2014-05-12 10:07:29 +0300, Heikki Linnakangas wrote:
>> But I concur that in practice, if you're dealing with 16TB tables, it's time
>> to partition.

> Well, we need to improve our partitioning for that to be viable for all
> relations. Not having usable foreign and unique keys makes it a pita in
> some cases.

Well, yeah, but that's on the to-do list in any case.

regards, tom lane




[HACKERS] cannot to compile PL/V8 on Fedora 20

2014-05-12 Thread Pavel Stehule
Hello

I am trying to compile PL/v8 without success. I have Postgres installed via
compilation from source code.

After make I got errors

[pavel@localhost plv8-1.4.2]$ make
g++ -Wall -O2  -I. -I./ -I/usr/local/pgsql/include/server
-I/usr/local/pgsql/include/internal -D_GNU_SOURCE -I/usr/include/libxml2
-fPIC -c -o plv8.o plv8.cc
plv8.cc:50:56: error: declaration of ‘Datum
plv8_call_handler(FunctionCallInfo) throw ()’ has a different exception
specifier
 Datum plv8_call_handler(PG_FUNCTION_ARGS) throw();
^
plv8.cc:43:7: error: from previous declaration ‘Datum
plv8_call_handler(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plv8_call_handler);
   ^
plv8.cc:51:58: error: declaration of ‘Datum
plv8_call_validator(FunctionCallInfo) throw ()’ has a different exception
specifier
 Datum plv8_call_validator(PG_FUNCTION_ARGS) throw();
  ^
plv8.cc:44:7: error: from previous declaration ‘Datum
plv8_call_validator(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plv8_call_validator);
   ^
plv8.cc:52:60: error: declaration of ‘Datum
plcoffee_call_handler(FunctionCallInfo) throw ()’ has a different exception
specifier
 Datum plcoffee_call_handler(PG_FUNCTION_ARGS) throw();
^
plv8.cc:45:7: error: from previous declaration ‘Datum
plcoffee_call_handler(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plcoffee_call_handler);
   ^
plv8.cc:53:62: error: declaration of ‘Datum
plcoffee_call_validator(FunctionCallInfo) throw ()’ has a different
exception specifier
 Datum plcoffee_call_validator(PG_FUNCTION_ARGS) throw();
  ^
plv8.cc:46:7: error: from previous declaration ‘Datum
plcoffee_call_validator(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plcoffee_call_validator);
   ^
plv8.cc:54:56: error: declaration of ‘Datum
plls_call_handler(FunctionCallInfo) throw ()’ has a different exception
specifier
 Datum plls_call_handler(PG_FUNCTION_ARGS) throw();
^
plv8.cc:47:7: error: from previous declaration ‘Datum
plls_call_handler(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plls_call_handler);
   ^
plv8.cc:55:58: error: declaration of ‘Datum
plls_call_validator(FunctionCallInfo) throw ()’ has a different exception
specifier
 Datum plls_call_validator(PG_FUNCTION_ARGS) throw();
  ^
plv8.cc:48:7: error: from previous declaration ‘Datum
plls_call_validator(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plls_call_validator);
   ^
plv8.cc:63:58: error: declaration of ‘Datum
plv8_inline_handler(FunctionCallInfo) throw ()’ has a different exception
specifier
 Datum plv8_inline_handler(PG_FUNCTION_ARGS) throw();
  ^
plv8.cc:60:7: error: from previous declaration ‘Datum
plv8_inline_handler(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plv8_inline_handler);
   ^
plv8.cc:64:62: error: declaration of ‘Datum
plcoffee_inline_handler(FunctionCallInfo) throw ()’ has a different
exception specifier
 Datum plcoffee_inline_handler(PG_FUNCTION_ARGS) throw();
  ^
plv8.cc:61:7: error: from previous declaration ‘Datum
plcoffee_inline_handler(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plcoffee_inline_handler);
   ^
plv8.cc:65:58: error: declaration of ‘Datum
plls_inline_handler(FunctionCallInfo) throw ()’ has a different exception
specifier
 Datum plls_inline_handler(PG_FUNCTION_ARGS) throw();
  ^
plv8.cc:62:7: error: from previous declaration ‘Datum
plls_inline_handler(FunctionCallInfo)’
 PG_FUNCTION_INFO_V1(plls_inline_handler);
   ^
plv8.cc: In function ‘Datum plv8_call_handler(FunctionCallInfo)’:
plv8.cc:310:50: error: declaration of ‘Datum
plv8_call_handler(FunctionCallInfo) throw ()’ has a different exception
specifier
 plv8_call_handler(PG_FUNCTION_ARGS) throw()
  ^
plv8.cc:50:7: error: from previous declaration ‘Datum
plv8_call_handler(FunctionCallInfo)’
 Datum plv8_call_handler(PG_FUNCTION_ARGS) throw();
   ^
plv8.cc: In function ‘Datum plcoffee_call_handler(FunctionCallInfo)’:
plv8.cc:316:54: error: declaration of ‘Datum
plcoffee_call_handler(FunctionCallInfo) throw ()’ has a different exception
specifier
 plcoffee_call_handler(PG_FUNCTION_ARGS) throw()
  ^
plv8.cc:52:7: error: from previous declaration ‘Datum
plcoffee_call_handler(FunctionCallInfo)’
 Datum plcoffee_call_handler(PG_FUNCTION_ARGS) throw();
   ^
plv8.cc: In function ‘Datum plls_call_handler(FunctionCallInfo)’:
plv8.cc:322:50: error: declaration of ‘Datum
plls_call_handler(FunctionCallInfo) throw ()’ has a different exception
specifier
 plls_call_handler(PG_FUNCTION_ARGS) throw()

Re: [HACKERS] Runing DBT2 on Postgresql

2014-05-12 Thread Rohit Goyal
On Thu, Apr 24, 2014 at 6:57 AM, Peter Geoghegan  wrote:

> On Wed, Apr 23, 2014 at 2:33 AM, Rohit Goyal  wrote:
> > I am trying to install dbt2 on postgresql database.
> >
> > The cmake (configure) command works fine, but the make (build) command
> > gives the error below. I have no idea how to solve it.
>
> ld has become less tolerant of certain flag orderings over time in
> certain distros. The following tweak may be used as a quick-and-dirty
> work around:
>
> diff --git a/CMakeLists.txt b/CMakeLists.txt
> index 6a128e3..f6a796b 100644
> --- a/CMakeLists.txt
> +++ b/CMakeLists.txt
> @@ -11,6 +11,7 @@ SET(DBT2_CLIENT bin/dbt2-client)
>  SET(DBT2_DATAGEN bin/dbt2-datagen)
>  SET(DBT2_DRIVER bin/dbt2-driver)
>  SET(DBT2_TXN_TEST bin/dbt2-transaction-test)
> +set(CMAKE_EXE_LINKER_FLAGS "-Wl,--no-as-needed")
>
>  #
>  # Check for large file support by using 'getconf'.
>
>
> --
> Peter Geoghegan
>


Hi Peter/All,

I installed the DBT-2 benchmark after applying the change you mentioned.
Now I am trying to follow readme_postgresql to run the test on
PostgreSQL, but I am stuck on what changes to make in dbt2_profile.
Could you explain the next step: what exactly should I write to set the
environment variables? The instructions just say "see
examples/dbt2_profile" and proceed further. A link describing how to run
the test would also help.

I also cannot find the bin/pgsql/pgsql_profile.in file, and when I tried
to "Create a 1 warehouse database by running bin/pgsql/dbt2-pgsql-build-db
and put the data files in '/tmp/data': dbt2-pgsql-build-db -w 1",
I got an error that dbt2-pgsql-build-db was not found.

Please guide!
--

Regards,
Rohit Goyal


Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-05-12 Thread Heikki Linnakangas

On 05/09/2014 05:19 PM, Fujii Masao wrote:

On Thu, Mar 20, 2014 at 11:38 PM, Alvaro Herrera
 wrote:

Kyotaro HORIGUCHI escribió:

Hi, I confirmed that 82233ce7ea4 surely did it.

At Wed, 19 Mar 2014 09:35:16 -0300, Alvaro Herrera wrote

Fujii Masao escribió:

On Wed, Mar 19, 2014 at 7:57 PM, Heikki Linnakangas
 wrote:



9.4 cancels backup mode even on immediate shutdown, so the operation
causes no problem, but 9.3 and earlier do not.


Hmm, I don't think we've changed that behavior in 9.4.


ISTM 82233ce7ea42d6ba519aaec63008aff49da6c7af changed immediate
shutdown that way.


Uh, interesting.  I didn't see that secondary effect.  I hope it's not
for ill?


The crucial factor in the behavior change is that pmdie() no longer
exits immediately on SIGQUIT. The 'case SIGQUIT:' branch in pmdie()
ended with ExitPostmaster(0) before the patch, but now it ends with
'PostmasterStateMachine(); break;', so the postmaster continues to run
with pmState = PM_WAIT_BACKENDS, similar to SIGINT (fast
shutdown).

After all, pmState changes to PM_NO_CHILDREN via PM_WAIT_DEAD_END
by SIGCHLDs from non-significant processes, then CancelBackup().


Judging from what was being said on the thread, it seems that running
CancelBackup() after an immediate shutdown is better than not doing it,
correct?


This is listed as a 9.4 Open Item, but no one seems to want to revert
this change.
So I'll drop this from the Open Item list barring objections.


I object. We used to call CancelBackup() on immediate shutdown, which 
was good. That was inadvertently changed by commit 82233ce. That's a 
regression we should fix. I agree with Alvaro upthread that we don't 
want to revert 82233ce, but we should come up with a fix.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Clean up jsonb code.

2014-05-12 Thread Heikki Linnakangas

On 05/10/2014 01:32 AM, Tom Lane wrote:

Peter Geoghegan  writes:

On Fri, May 9, 2014 at 2:54 PM, Tom Lane  wrote:

However, what it looks to me like we've got here is a very bad
reimplementation of StringInfo buffers.  There is for example no
integer-overflow checking here.  Rather than try to bring this code
up to speed, I think we should rip it out and use StringInfo.



Heikki did specifically consider StringInfo buffers and said they were
not best suited to the task at hand. At the time I thought he meant
that he'd do something domain-specific to avoid unnecessary geometric
growth in the size of the buffer (I like to grow buffers to either
twice their previous size, or just big enough to fit the next thing,
whichever is larger), but that doesn't appear to be the case. Still,
it would be good to know what he meant before proceeding. It probably
had something to do with alignment.


It looks to me like he wanted an API that would let him reserve space
separately from filling it, which is not in stringinfo.c but is surely
easily built on top of it.


Right, the API to reserve space separately was what I had in mind.


For the moment, I've just gotten rid of
the buggy code fragment in favor of calling enlargeStringInfo, which
I trust to be right.


Thanks. I admit it didn't even occur to me to keep the localized API in 
jsonb_utils as wrappers around appendString* functions. I only 
considered two options: using appendString* directly, or doing 
repalloc's in jsonb_utils.c. I like what you did there.


- Heikki




Re: [HACKERS] pg_class.relpages/allvisible probably shouldn't be a int4

2014-05-12 Thread Andres Freund
On 2014-05-12 10:07:29 +0300, Heikki Linnakangas wrote:
> On 05/12/2014 12:30 AM, Andres Freund wrote:
> >>>So if I were to take Andres'
> >>>complaint seriously at all, I'd be thinking in terms of "do we need to
> >>>widen BlockNumber to int64?", not "how do we make this print as
> >>>unsigned?".  But I doubt such a proposal would fly, because of the
> >>>negative impact on index sizes.
> >Yea, I am not wild for that either. I guess migrating to a postgres with
> >a larger blocksize is the next step.
> 
> A larger block size won't buy you very much time either.

Well, if you mean 'a year or five' with that... :)

> We could steal some bits from the OffsetNumber portion of an ItemPointer. If
> we assume the max. block size of 32kb, and that each Item takes at least 16
> bytes, you only need 11 bits for the offset number. That leaves 5 bits
> unused, and if we use them to expand the block number to 37 bits in total,
> that's enough for 1 PB with the default 8k block size.

Hm. That's not a generally bad idea. I think we'll have to do that in a
couple of years. Regardless of better partitioning.

> But I concur that in practice, if you're dealing with 16TB tables, it's time
> to partition.

Well, we need to improve our partitioning for that to be viable for all
relations. Not having usable foreign and unique keys makes it a pita in
some cases.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_class.relpages/allvisible probably shouldn't be a int4

2014-05-12 Thread Heikki Linnakangas

On 05/12/2014 12:30 AM, Andres Freund wrote:

>So if I were to take Andres'
>complaint seriously at all, I'd be thinking in terms of "do we need to
>widen BlockNumber to int64?", not "how do we make this print as
>unsigned?".  But I doubt such a proposal would fly, because of the
>negative impact on index sizes.

Yea, I am not wild for that either. I guess migrating to a postgres with
a larger blocksize is the next step.


A larger block size won't buy you very much time either.

We could steal some bits from the OffsetNumber portion of an 
ItemPointer. If we assume the max. block size of 32kb, and that each 
Item takes at least 16 bytes, you only need 11 bits for the offset 
number. That leaves 5 bits unused, and if we use them to expand the 
block number to 37 bits in total, that's enough for 1 PB with the 
default 8k block size.


But I concur that in practice, if you're dealing with 16TB tables, it's 
time to partition.


- Heikki

