Can we consider "24 Hours" for "next day" in INTERVAL datatype ?

2022-03-15 Thread Prabhat Sahu
Hi All,
Kindly check the below scenario with INTERVAL datatype.

postgres=# select interval '01 20:59:59' + interval '00 05:00:01' as interval;
    interval
-----------------
 1 day 26:00:00
(1 row)

In any operation with INTERVAL data, we normalize the interval values as:
"60 sec" to "next minute"
"60 min" to "next hour"
*Similarly, can't we consider "24 hours" as "next day"?*
Is there any specific purpose for holding the hours as an increasing
number beyond 24 hours?

But when we are dealing with TIMESTAMP plus INTERVAL values, "24 hours" is
considered as "next day".

postgres=# select timestamp '01-MAR-22 20:59:59' + interval '00 05:00:01' as interval;
      interval
---------------------
 2022-03-02 02:00:00
(1 row)
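
For what it's worth, justify_hours() performs exactly this normalization on
demand; presumably the values are kept un-normalized because "1 day" is not
always "24 hours" (e.g. across a DST change). A quick sketch:

postgres=# select justify_hours(interval '1 day 26:00:00');
  justify_hours
------------------
 2 days 02:00:00
(1 row)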

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Corruption with IMMUTABLE functions in index expression.

2021-10-11 Thread Prabhat Sahu
Hi All,

While using IMMUTABLE functions in an index expression, we are getting the
below corruption on HEAD.

postgres=# CREATE TABLE  tab1 (c1 numeric, c2 numeric);
CREATE TABLE

postgres=# INSERT INTO  tab1 values (10, 100);
INSERT 0 1

postgres=# CREATE OR REPLACE FUNCTION func1(var1 numeric)
RETURNS NUMERIC AS $$
DECLARE
result numeric;
BEGIN
 SELECT c2 into result FROM  tab1 WHERE c1=var1;
 RETURN result;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
CREATE FUNCTION

-- When using the IMMUTABLE function to create an index for the first
time, it works fine.
postgres=# CREATE INDEX idx1 ON  tab1(func1(c1));
CREATE INDEX

-- Executing a similar query a second time, we get the error:
postgres=# CREATE INDEX idx2 ON  tab1(func1(c1));
ERROR:  could not read block 0 in file "base/13675/16391": read only 0 of
8192 bytes
CONTEXT:  SQL statement "SELECT c2 FROM  tab1 WHERE c1=var1"
PL/pgSQL function func1(numeric) line 5 at SQL statement
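
As an aside, func1 reads from tab1, so it does not really satisfy the
IMMUTABLE contract: an immutable function must depend only on its arguments.
A truly immutable function would look like the sketch below (func2 is only an
illustration, not part of the failing case):

postgres=# CREATE OR REPLACE FUNCTION func2(var1 numeric)
RETURNS NUMERIC AS $$
BEGIN
 -- depends only on its argument, no table access
 RETURN var1 * 10;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
CREATE FUNCTION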

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: replay of CREATE TABLESPACE eats data at wal_level=minimal

2021-08-23 Thread Prabhat Sahu
On Mon, Aug 23, 2021 at 4:29 AM Noah Misch  wrote:

> On Wed, Aug 18, 2021 at 10:32:10PM -0700, Noah Misch wrote:
> > On Wed, Aug 18, 2021 at 10:47:24AM -0400, Robert Haas wrote:
> > > On Tue, Aug 10, 2021 at 9:35 AM Robert Haas 
> wrote:
> > > > Oh, yeah, I think that works, actually. I was imagining a few
> problems
> > > > here, but I don't think they really exist. The redo routines for
> files
> > > > within the directory can't possibly care about having the old files
> > > > erased for them, since that wouldn't be something that would normally
> > > > happen, if there were no recent CREATE TABLESPACE involved. And
> > > > there's code further down to remove and recreate the symlink, just in
> > > > case. So I think your proposed patch might be all we need.
> > >
> > > Noah, do you plan to commit this?
> >
> > Yes.  I feel it needs a test case, which is the main reason I've queued
> the
> > task rather than just pushed what I posted last.
>
> Here's what I plan to push.  Besides adding a test,


I have reproduced the data inconsistency issue with CREATE TABLESPACE at
wal_level=minimal. I have also tested the fix with the v0 and v1 patches, and
came up with a similar TAP test case (as in v1). The test case looks good.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Patch] ALTER SYSTEM READ ONLY

2021-07-30 Thread Prabhat Sahu
Hi,

On Thu, Jul 29, 2021 at 9:46 PM Robert Haas  wrote:

> On Wed, Jul 28, 2021 at 7:33 AM Amul Sul  wrote:
> > I was too worried about how I could miss that & after thinking more
> > about that, I realized that the operation for ArchiveRecoveryRequested
> > is never going to be skipped in the startup process and that never
> > left for the checkpoint process to do that later. That is the reason
> > that assert was added there.
> >
> > When ArchiveRecoveryRequested, the server will no longer be in
> > the wal prohibited mode, we implicitly change the state to
> > wal-permitted. Here is the snip from the 0003 patch:
>
> Ugh, OK. That makes sense, but I'm still not sure that I like it. I've
> kind of been wondering: why not have XLogAcceptWrites() be the
> responsibility of the checkpointer all the time, in every case? That
> would require fixing some more things, and this is one of them, but
> then it would be consistent, which means that any bugs would be likely
> to get found and fixed. If calling XLogAcceptWrites() from the
> checkpointer is some funny case that only happens when the system
> crashes while WAL is prohibited, then we might fail to notice that we
> have a bug.
>
> This is especially true given that we have very little test coverage
> in this area. Andres was ranting to me about this earlier this week,
> and I wasn't sure he was right, but then I noticed that we have
> exactly zero tests in the entire source tree that make use of
> recovery_end_command. We really need a TAP test for that, I think.
> It's too scary to do much reorganization of the code without having
> any tests at all for the stuff we're moving around. Likewise, we're
> going to need TAP tests for the stuff that is specific to this patch.
> For example, we should have a test that crashes the server while it's
> read only, brings it back up, checks that we still can't write WAL,
> then re-enables WAL, and checks that we now can write WAL. There are
> probably a bunch of other things that we should test, too.
>

Hi,

I have been testing “ALTER SYSTEM READ ONLY” and have written a few TAP test
cases for this feature.
Please find the test cases (draft version) attached herewith, to be applied
on top of Amul's v30 patch.
Kindly review them and let me know of any required changes.
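
For reference, a rough outline of the crash scenario Robert described, in
terms of the patch's pg_prohibit_wal() interface (just a sketch of the steps
the TAP test automates):

SELECT pg_prohibit_wal(true);    -- make the system read only
-- crash the server (e.g. kill -9 the postmaster) and restart it
CREATE TABLE t1 (a int);         -- should still fail: WAL is prohibited
SELECT pg_prohibit_wal(false);   -- re-enable WAL writes
CREATE TABLE t1 (a int);         -- should now succeed
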
-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


prohibitwal-tap-test.patch
Description: Binary data


Re: Doubt with [ RANGE partition with TEXT datatype ]

2021-04-19 Thread Prabhat Sahu
On Mon, Apr 19, 2021 at 2:16 PM Amit Langote 
wrote:

> Hi Prabhat,
>
> On Mon, Apr 19, 2021 at 5:13 PM Prabhat Sahu
>  wrote:
> >
> > Hi All,
> >
> > Please help me out with my doubt in RANGE partition with TEXT datatype:
> >
> > postgres=# create table tab1 (col1 text) PARTITION BY RANGE (col1);
> > CREATE TABLE
> >
> > postgres=# create table p1 (col1 text);
> > CREATE TABLE
> >
> > -- Partition with range from '5' to '10' shows error:
> > postgres=# alter table tab1 attach partition p1 for values from ('5') to
> ('10');
> > ERROR:  empty range bound specified for partition "p1"
> > LINE 1: ...r table tab1 attach partition p1 for values from ('5') to
> ('...
> >  ^
> > DETAIL:  Specified lower bound ('5') is greater than or equal to upper
> bound ('10').
> >
> > -- Whereas, partition with range from '5' to '9' is working fine as
> below:
> > postgres=# alter table tab1 attach partition p1 for values from ('5') to
> ('9');
> > ALTER TABLE
>
> Well, that is how comparing text values works.  If you are expecting
> the comparisons to follow numerical rules, use a numeric data type.
>
> > If this behavior is expected, Kindly let me know, how to represent the
> range from '5' to '10' with text datatype column?
>
> Don't know why you want to use the text type for the column and these
> particular values for the partitions bounds, but one workaround would
> be to use '05' instead of '5'.
>

While testing some PG behavior, I came across this scenario/doubt.
Thank you Amit for the clarification.
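
For the archives, a sketch of the suggested workaround: with zero padding,
'05' sorts before '10' in text ordering, so the bound is accepted:

postgres=# alter table tab1 attach partition p1 for values from ('05') to ('10');
ALTER TABLE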

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Doubt with [ RANGE partition with TEXT datatype ]

2021-04-19 Thread Prabhat Sahu
Hi All,

Please help me out with my doubt about RANGE partitioning with the TEXT datatype:

postgres=# create table tab1 (col1 text) PARTITION BY RANGE (col1);
CREATE TABLE

postgres=# create table p1 (col1 text);
CREATE TABLE

-- Partition with range from '5' to '10' shows error:
postgres=# alter table tab1 attach partition p1 for values from ('5') to
('10');
ERROR:  empty range bound specified for partition "p1"
LINE 1: ...r table tab1 attach partition p1 for values from ('5') to ('...
 ^
DETAIL:  Specified lower bound ('5') is greater than or equal to upper
bound ('10').

-- Whereas, partition with range from '5' to '9' is working fine as below:
postgres=# alter table tab1 attach partition p1 for values from ('5') to
('9');
ALTER TABLE

If this behavior is expected, kindly let me know how to represent the
range from '5' to '10' with a text datatype column.
Is there any specific restriction for a RANGE-partitioned table with a TEXT
datatype column?

A similar test scenario works fine with the INTEGER datatype.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Patch] ALTER SYSTEM READ ONLY

2021-03-19 Thread Prabhat Sahu
Hi all,
While testing this feature with the v20 patch, the server crashes with the
below steps.

Steps to reproduce:
1. Configure master-slave replication setup.
2. Connect to Slave.
3. Execute the below statements; they will crash the server:
SELECT pg_prohibit_wal(true);
SELECT pg_prohibit_wal(false);

-- Slave:
postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

postgres=# SELECT pg_prohibit_wal(true);
 pg_prohibit_wal
-----------------

(1 row)

postgres=# SELECT pg_prohibit_wal(false);
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?>

-- Below is the stack trace:
[prabhat@localhost bin]$ gdb -q -c /tmp/data_slave/core.35273 postgres
Reading symbols from
/home/prabhat/PG/PGsrcNew/postgresql/inst/bin/postgres...done.
[New LWP 35273]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: checkpointer
'.
Program terminated with signal 6, Aborted.
#0  0x7fa876233387 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-317.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-50.el7.x86_64 libcom_err-1.42.9-19.el7.x86_64
libgcc-4.8.5-44.el7.x86_64 libselinux-2.5-15.el7.x86_64
openssl-libs-1.0.2k-21.el7_9.x86_64 pcre-8.32-17.el7.x86_64
zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7fa876233387 in raise () from /lib64/libc.so.6
#1  0x7fa876234a78 in abort () from /lib64/libc.so.6
#2  0x00aea31c in ExceptionalCondition (conditionName=0xb8c998
"ThisTimeLineID != 0 || IsBootstrapProcessingMode()",
errorType=0xb8956d "FailedAssertion", fileName=0xb897c0 "xlog.c",
lineNumber=8611) at assert.c:69
#3  0x00588eb5 in InitXLOGAccess () at xlog.c:8611
#4  0x00588ae6 in LocalSetXLogInsertAllowed () at xlog.c:8483
#5  0x005881bb in XLogAcceptWrites (needChkpt=true, xlogreader=0x0,
EndOfLog=0, EndOfLogTLI=0) at xlog.c:8008
#6  0x005751ed in ProcessWALProhibitStateChangeRequest () at
walprohibit.c:361
#7  0x0088c69f in CheckpointerMain () at checkpointer.c:355
#8  0x0059d7db in AuxiliaryProcessMain (argc=2,
argv=0x7ffd1290d060) at bootstrap.c:455
#9  0x0089fc5f in StartChildProcess (type=CheckpointerProcess) at
postmaster.c:5416
#10 0x0089f782 in sigusr1_handler (postgres_signal_arg=10) at
postmaster.c:5128
#11 
#12 0x7fa8762f2983 in __select_nocancel () from /lib64/libc.so.6
#13 0x0089b511 in ServerLoop () at postmaster.c:1700
#14 0x0089af00 in PostmasterMain (argc=5, argv=0x15b8460) at
postmaster.c:1408
#15 0x0079c23a in main (argc=5, argv=0x15b8460) at main.c:209
(gdb)

Kindly let me know if you need more inputs on this.

On Mon, Mar 15, 2021 at 12:56 PM Amul Sul  wrote:

> On Sun, Mar 14, 2021 at 11:51 PM Ibrar Ahmed 
> wrote:
> >
> > On Tue, Mar 9, 2021 at 3:31 PM Amul Sul  wrote:
> >>
> >> On Thu, Mar 4, 2021 at 11:02 PM Amul Sul  wrote:
> >> >
> >> > On Wed, Mar 3, 2021 at 8:56 PM Robert Haas 
> wrote:
> >> > >
> >> > > On Tue, Mar 2, 2021 at 7:22 AM Dilip Kumar 
> wrote:
> >[]
> >
> > One of the patch
> (v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch) from the
> latest patchset does not apply successfully.
> >
> > http://cfbot.cputube.org/patch_32_2602.log
> >
> > === applying patch
> ./v18-0002-Error-or-Assert-before-START_CRIT_SECTION-for-WA.patch
> >
> > Hunk #15 succeeded at 2604 (offset -13 lines).
> > 1 out of 15 hunks FAILED -- saving rejects to file
> src/backend/access/nbtree/nbtpage.c.rej
> > patching file src/backend/access/spgist/spgdoinsert.c
> >
> > It is a very minor change, so I rebased the patch. Please take a look,
> if that works for you.
> >
>
> Thanks, I am getting one more failure for the vacuumlazy.c. on the
> latest master head(d75288fb27b), I fixed that in attached version.
>
> Regards,
> Amul
>


-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Patch] ALTER SYSTEM READ ONLY

2020-07-15 Thread Prabhat Sahu
Hi All,
I was testing the feature on top of the v3 patch and found a "pg_upgrade"
failure after running "alter system read only;", as below:

-- Steps:
./initdb -D data
./pg_ctl -D data -l logs start -c
./psql postgres
alter system read only;
\q
./pg_ctl -D data -l logs stop -c

./initdb -D data2
./pg_upgrade -b . -B . -d data -D data2 -p  -P 5520


[edb@localhost bin]$ ./pg_upgrade -b . -B . -d data -D data2 -p  -P 5520
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok

The source cluster was not shut down cleanly.
Failure, exiting

-- Below are the logs:
2020-07-16 11:04:20.305 IST [105788] LOG:  starting PostgreSQL 14devel on
x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat
4.8.5-39), 64-bit
2020-07-16 11:04:20.309 IST [105788] LOG:  listening on IPv6 address "::1",
port 5432
2020-07-16 11:04:20.309 IST [105788] LOG:  listening on IPv4 address
"127.0.0.1", port 5432
2020-07-16 11:04:20.321 IST [105788] LOG:  listening on Unix socket
"/tmp/.s.PGSQL.5432"
2020-07-16 11:04:20.347 IST [105789] LOG:  database system was shut down at
2020-07-16 11:04:20 IST
2020-07-16 11:04:20.352 IST [105788] LOG:  database system is ready to
accept connections
2020-07-16 11:04:20.534 IST [105790] LOG:  system is now read only
2020-07-16 11:04:20.542 IST [105788] LOG:  received fast shutdown request
2020-07-16 11:04:20.543 IST [105788] LOG:  aborting any active transactions
2020-07-16 11:04:20.544 IST [105788] LOG:  background worker "logical
replication launcher" (PID 105795) exited with exit code 1
2020-07-16 11:04:20.544 IST [105790] LOG:  shutting down
2020-07-16 11:04:20.544 IST [105790] LOG:  skipping shutdown checkpoint
because the system is read only
2020-07-16 11:04:20.551 IST [105788] LOG:  database system is shut down
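
The "skipping shutdown checkpoint because the system is read only" line above
is presumably why pg_upgrade considers the source cluster not cleanly shut
down: the control file never reaches the "shut down" state. A quick way to
check (sketch; the exact state string depends on the patch):

[edb@localhost bin]$ ./pg_controldata data | grep 'cluster state'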

On Tue, Jul 14, 2020 at 12:08 PM Amul Sul  wrote:

> Attached is a rebased version for the latest master head[1].
>
> Regards,
> Amul
>
> 1] Commit # 101f903e51f52bf595cd8177d2e0bc6fe9000762
>


-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-06-09 Thread Prabhat Sahu
On Wed, Apr 29, 2020 at 8:52 AM 曾文旌  wrote:

> On 2020-04-27 at 21:48, Prabhat Sahu  wrote:
>
> Thanks Wenjing, for the fix patch for previous issues.
> I have verified the issues, now those fix look good to me.
> But the below error message is confusing(for gtt2).
>
> postgres=# drop table gtt1;
> ERROR:  cannot drop global temp table gtt1 when other backend attached it.
>
> postgres=# drop table gtt2;
> ERROR:  cannot drop index idx2 on global temp table gtt2 when other
> backend attached it.
>
> I feel the above error message shown for "DROP TABLE gtt2;" is a bit
> confusing(looks similar to DROP INDEX gtt2;).
> If possible, can we keep the error message simple as "ERROR:  cannot drop
> global temp table gtt2 when other backend attached it."?
> I mean, without giving extra information for the index attached to that
> GTT.
>
> Fixed the error message to make the expression more accurate. In v33.
>

Thanks Wenjing. We verified your latest patch (gtt_v33), focusing on all the
reported issues, and they work fine.
Thanks.
-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: PG function with pseudotype "anyelement" for IN, OUT parameter shows wrong behaviour.

2020-05-29 Thread Prabhat Sahu
On Fri, May 29, 2020 at 8:30 PM Pavel Stehule 
wrote:

>
>
> pá 29. 5. 2020 v 16:45 odesílatel Prabhat Sahu <
> prabhat.s...@enterprisedb.com> napsal:
>
>> Hi All,
>>
>> Please check the below scenario, with pseudotype "anyelement" for IN, OUT
>> parameter and the RETURN record in a function.
>>
>> postgres=# create table tab1(c1 int, c2 int, c3 timestamp) ;
>> CREATE TABLE
>> postgres=# CREATE OR REPLACE FUNCTION func_any(IN anyelement, IN
>> anyelement, OUT v1 anyelement, OUT v2 anyelement)
>> RETURNS record
>> AS
>> $$
>> BEGIN
>>   SELECT $1 + 1, $2 + 1 into v1, v2;
>>   insert into tab1 values(v1, v2, now());
>> END;
>> $$
>> language 'plpgsql';
>> CREATE FUNCTION
>> postgres=# SELECT (func_any(1, 2)).*;
>>  v1 | v2
>> ----+----
>>   2 |  3
>> (1 row)
>>
>> postgres=# select * from tab1;
>>  c1 | c2 |             c3
>> ----+----+----------------------------
>>   2 |  3 | 2020-05-30 19:26:32.036924
>>   2 |  3 | 2020-05-30 19:26:32.036924
>> (2 rows)
>>
>> I hope, the table "tab1" should have only a single record, but we are
>> able to see 2 records in tab1.
>>
>
> it is correct, because you use composite unpacking syntax
>
> SELECT (func_any(1, 2)).*;
>
> means
>
> SELECT (func_any(1, 2)).c1, (func_any(1, 2)).c2;
>
> If you don't want double execution, you should run your function in the
> FROM clause
>
> postgres=# SELECT * FROM func_any(1, 2);
> ┌────┬────┐
> │ v1 │ v2 │
> ╞════╪════╡
> │  2 │  3 │
> └────┴────┘
> (1 row)
>

Thanks Pavel for the help. I have verified the same; now I am getting a
single record in tab1.
postgres=# SELECT func_any(1, 2);
 func_any
----------
 (2,3)
(1 row)

postgres=# select * from tab1;
 c1 | c2 |             c3
----+----+----------------------------
  2 |  3 | 2020-05-30 20:17:59.989087
(1 row)
Thanks,
Prabhat Sahu


PG function with pseudotype "anyelement" for IN, OUT parameter shows wrong behaviour.

2020-05-29 Thread Prabhat Sahu
Hi All,

Please check the below scenario, with pseudotype "anyelement" for IN and OUT
parameters and RETURNS record in a function.

postgres=# create table tab1(c1 int, c2 int, c3 timestamp) ;
CREATE TABLE
postgres=# CREATE OR REPLACE FUNCTION func_any(IN anyelement, IN
anyelement, OUT v1 anyelement, OUT v2 anyelement)
RETURNS record
AS
$$
BEGIN
  SELECT $1 + 1, $2 + 1 into v1, v2;
  insert into tab1 values(v1, v2, now());
END;
$$
language 'plpgsql';
CREATE FUNCTION
postgres=# SELECT (func_any(1, 2)).*;
 v1 | v2
----+----
  2 |  3
(1 row)

postgres=# select * from tab1;
 c1 | c2 |             c3
----+----+----------------------------
  2 |  3 | 2020-05-30 19:26:32.036924
  2 |  3 | 2020-05-30 19:26:32.036924
(2 rows)

I hope, the table "tab1" should have only a single record, but we are able
to see 2 records in tab1.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-27 Thread Prabhat Sahu
Thanks Wenjing for the fix patch for the previous issues.
I have verified the issues, and the fixes now look good to me.
But the below error message is confusing (for gtt2).

postgres=# drop table gtt1;
ERROR:  cannot drop global temp table gtt1 when other backend attached it.

postgres=# drop table gtt2;
ERROR:  cannot drop index idx2 on global temp table gtt2 when other backend
attached it.

I feel the above error message shown for "DROP TABLE gtt2;" is a bit
confusing (it looks similar to DROP INDEX gtt2;).
If possible, can we keep the error message simple, as "ERROR:  cannot drop
global temp table gtt2 when other backend attached it."?
I mean, without the extra information about the index attached to that GTT.

On Mon, Apr 27, 2020 at 5:34 PM 曾文旌  wrote:

>
>
> On 2020-04-27 at 17:26, Prabhat Sahu  wrote:
>
> Hi Wenjing,
>
> Please check the below scenario shows different error message with "DROP
> TABLE gtt;" for gtt with and without index.
>
> -- Session1:
> postgres=# create global temporary table gtt1 (c1 int);
> CREATE TABLE
> postgres=# create global temporary table gtt2 (c1 int);
> CREATE TABLE
> postgres=# create index idx2 on gtt2(c1);
> CREATE INDEX
>
>
> -- Session2:
> postgres=# drop table gtt1;
> ERROR:  can not drop relation gtt1 when other backend attached this global
> temp table
> postgres=# drop table gtt2;
> ERROR:  can not drop index gtt2 when other backend attached this global
> temp table.
>
> For DROP GTT, we need to drop the index on the table first.
> So the indexes on the GTT are checked first.
> But the error message needs to be fixed.
> Fixed in v32
>
>
> wenjing
>
>
>
>
>
> --
>
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com
>
>
>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-27 Thread Prabhat Sahu
Hi Wenjing,

Please check: the below scenario shows different error messages with "DROP
TABLE gtt;" for a gtt with and without an index.

-- Session1:
postgres=# create global temporary table gtt1 (c1 int);
CREATE TABLE
postgres=# create global temporary table gtt2 (c1 int);
CREATE TABLE
postgres=# create index idx2 on gtt2(c1);
CREATE INDEX


-- Session2:
postgres=# drop table gtt1;
ERROR:  can not drop relation gtt1 when other backend attached this global
temp table
postgres=# drop table gtt2;
ERROR:  can not drop index gtt2 when other backend attached this global
temp table.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-24 Thread Prabhat Sahu
Hi Wenjing,

With the new patch (v30), as you mentioned, there is new syntax support for
"TRUNCATE TABLE gtt DROP", but we also observe that the syntax "DROP TABLE
gtt DROP" works, as below:

postgres=# create global temporary table gtt(c1 int) on commit preserve
rows;
CREATE TABLE
postgres=# DROP TABLE gtt DROP;
DROP TABLE

Is this syntax intentional? If not, we should get a syntax error.

On Fri, Apr 24, 2020 at 10:25 AM Prabhat Sahu 
wrote:

> Hi Wenjing,
>
> Please check, the server getting crash with the below scenario(CLUSTER gtt
> using INDEX).
>
> *-- Session1:*
> postgres=# create global temporary table gtt (c1 integer) on commit
> preserve rows;
> CREATE TABLE
> postgres=# create index idx1 on gtt (c1);
> CREATE INDEX
>
> *-- Session2:*
> postgres=# create index idx2 on gtt (c1);
> CREATE INDEX
>
> *-- Session1:*
> postgres=# cluster gtt using idx1;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
> !?>
>
> *-- Below is the stacktrace:*
> [edb@localhost bin]$ gdb -q -c data/core.95690 postgres
> Reading symbols from
> /home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.
> [New LWP 95690]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `postgres: edb postgres [local] CLUSTER
>'.
> Program terminated with signal 6, Aborted.
> #0  0x7f9c574ee337 in raise () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
> krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
> zlib-1.2.7-18.el7.x86_64
> (gdb) bt
> #0  0x7f9c574ee337 in raise () from /lib64/libc.so.6
> #1  0x7f9c574efa28 in abort () from /lib64/libc.so.6
> #2  0x00ab3a3c in ExceptionalCondition (conditionName=0xb5e2e8
> "!ReindexIsProcessingIndex(indexOid)", errorType=0xb5d365
> "FailedAssertion",
> fileName=0xb5d4e9 "index.c", lineNumber=3825) at assert.c:67
> #3  0x005b0412 in reindex_relation (relid=16384, flags=2,
> options=0) at index.c:3825
> #4  0x0065e36d in finish_heap_swap (OIDOldHeap=16384,
> OIDNewHeap=16389, is_system_catalog=false, swap_toast_by_content=false,
> check_constraints=false, is_internal=true, frozenXid=491,
> cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1448
> #5  0x0065ccef in rebuild_relation (OldHeap=0x7f9c589adef0,
> indexOid=16387, verbose=false) at cluster.c:602
> #6  0x0065c757 in cluster_rel (tableOid=16384, indexOid=16387,
> options=0) at cluster.c:418
> #7  0x0065c2cf in cluster (stmt=0x2cd1600, isTopLevel=true) at
> cluster.c:180
> #8  0x0093b213 in standard_ProcessUtility (pstmt=0x2cd16c8,
> queryString=0x2cd0b30 "cluster gtt using idx1;",
> context=PROCESS_UTILITY_TOPLEVEL,
> params=0x0, queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at
> utility.c:819
> #9  0x0093aa50 in ProcessUtility (pstmt=0x2cd16c8,
> queryString=0x2cd0b30 "cluster gtt using idx1;",
> context=PROCESS_UTILITY_TOPLEVEL, params=0x0,
> queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:522
> #10 0x009398c2 in PortalRunUtility (portal=0x2d36ba0,
> pstmt=0x2cd16c8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8,
> qc=0x7ffcd32604b0)
> at pquery.c:1157
> #11 0x00939ad8 in PortalRunMulti (portal=0x2d36ba0,
> isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, altdest=0x2cd19a8,
> qc=0x7ffcd32604b0)
> at pquery.c:1303
> #12 0x00938ff6 in PortalRun (portal=0x2d36ba0,
> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cd19a8,
> altdest=0x2cd19a8,
> qc=0x7ffcd32604b0) at pquery.c:779
> #13 0x009331b0 in exec_simple_query (query_string=0x2cd0b30
> "cluster gtt using idx1;") at postgres.c:1239
> #14 0x009371bc in PostgresMain (argc=1, argv=0x2cfab80,
> dbname=0x2cfaa78 "postgres", username=0x2cfaa58 "edb") at postgres.c:4315
> #15 0x008872a9 in BackendRun (port=0x2cf2b50) at postmaster.c:4510
> #16 0x00886a9e in BackendStartup (port=0x2cf2b50) at
> postmaster.c:4202
> #17 0x0088301c in ServerLoop () at postmaster.c:1727
> #18 0x008828f3 in PostmasterMain (argc=3, argv=0x2ccb460) at
> postmaster.c:1400
> #19 0x00789c54 in main (argc=3, argv=0x2ccb460) at main.c:210
> (gdb)
>
> --
>
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com
>


-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-23 Thread Prabhat Sahu
Hi Wenjing,

Please check: the server crashes with the below scenario (CLUSTER gtt
using INDEX).

*-- Session1:*
postgres=# create global temporary table gtt (c1 integer) on commit
preserve rows;
CREATE TABLE
postgres=# create index idx1 on gtt (c1);
CREATE INDEX

*-- Session2:*
postgres=# create index idx2 on gtt (c1);
CREATE INDEX

*-- Session1:*
postgres=# cluster gtt using idx1;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?>

*-- Below is the stacktrace:*
[edb@localhost bin]$ gdb -q -c data/core.95690 postgres
Reading symbols from
/home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.
[New LWP 95690]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: edb postgres [local] CLUSTER
 '.
Program terminated with signal 6, Aborted.
#0  0x7f9c574ee337 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7f9c574ee337 in raise () from /lib64/libc.so.6
#1  0x7f9c574efa28 in abort () from /lib64/libc.so.6
#2  0x00ab3a3c in ExceptionalCondition (conditionName=0xb5e2e8
"!ReindexIsProcessingIndex(indexOid)", errorType=0xb5d365
"FailedAssertion",
fileName=0xb5d4e9 "index.c", lineNumber=3825) at assert.c:67
#3  0x005b0412 in reindex_relation (relid=16384, flags=2,
options=0) at index.c:3825
#4  0x0065e36d in finish_heap_swap (OIDOldHeap=16384,
OIDNewHeap=16389, is_system_catalog=false, swap_toast_by_content=false,
check_constraints=false, is_internal=true, frozenXid=491,
cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1448
#5  0x0065ccef in rebuild_relation (OldHeap=0x7f9c589adef0,
indexOid=16387, verbose=false) at cluster.c:602
#6  0x0065c757 in cluster_rel (tableOid=16384, indexOid=16387,
options=0) at cluster.c:418
#7  0x0065c2cf in cluster (stmt=0x2cd1600, isTopLevel=true) at
cluster.c:180
#8  0x0093b213 in standard_ProcessUtility (pstmt=0x2cd16c8,
queryString=0x2cd0b30 "cluster gtt using idx1;",
context=PROCESS_UTILITY_TOPLEVEL,
params=0x0, queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at
utility.c:819
#9  0x0093aa50 in ProcessUtility (pstmt=0x2cd16c8,
queryString=0x2cd0b30 "cluster gtt using idx1;",
context=PROCESS_UTILITY_TOPLEVEL, params=0x0,
queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:522
#10 0x009398c2 in PortalRunUtility (portal=0x2d36ba0,
pstmt=0x2cd16c8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8,
qc=0x7ffcd32604b0)
at pquery.c:1157
#11 0x00939ad8 in PortalRunMulti (portal=0x2d36ba0,
isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, altdest=0x2cd19a8,
qc=0x7ffcd32604b0)
at pquery.c:1303
#12 0x00938ff6 in PortalRun (portal=0x2d36ba0,
count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cd19a8,
altdest=0x2cd19a8,
qc=0x7ffcd32604b0) at pquery.c:779
#13 0x009331b0 in exec_simple_query (query_string=0x2cd0b30
"cluster gtt using idx1;") at postgres.c:1239
#14 0x009371bc in PostgresMain (argc=1, argv=0x2cfab80,
dbname=0x2cfaa78 "postgres", username=0x2cfaa58 "edb") at postgres.c:4315
#15 0x008872a9 in BackendRun (port=0x2cf2b50) at postmaster.c:4510
#16 0x00886a9e in BackendStartup (port=0x2cf2b50) at
postmaster.c:4202
#17 0x0088301c in ServerLoop () at postmaster.c:1727
#18 0x008828f3 in PostmasterMain (argc=3, argv=0x2ccb460) at
postmaster.c:1400
#19 0x00789c54 in main (argc=3, argv=0x2ccb460) at main.c:210
(gdb)

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-22 Thread Prabhat Sahu
On Wed, Apr 22, 2020 at 2:49 PM 曾文旌  wrote:

>
> Although the implementation of GTT is different, I think TRUNCATE on
> Postgres (when it is really finalized) can remove session metadata of GTT
> too (and reduce the usage counter). It is not a critical feature, but I think
> it should not be hard to implement. For practical reasons it would be nice to
> have a tool to refresh a GTT without the necessity to close the session.
> TRUNCATE can be this tool.
>
> Yes, I think we need a way to delete the GTT local storage without closing
> the session.
>
> I provide the TRUNCATE tablename DROP to clear the data in the GTT and
> delete the storage files.
> This feature requires the current transaction to commit immediately after
> it finishes truncate.
>

Hi Wenjing,
Thanks for the patch (v30) with the new syntax support (TRUNCATE
table_name DROP) for deleting storage files after TRUNCATE on a GTT.

Please check below scenarios:


*Case1:*
-- Session1:
postgres=# create global temporary table gtt2 (c1 integer) on commit
preserve rows;
CREATE TABLE
postgres=# create index  idx1 on gtt2 (c1);
CREATE INDEX
postgres=# create index  idx2 on gtt2 (c1) where c1%2 =0;
CREATE INDEX
postgres=#
postgres=# CLUSTER gtt2 USING idx1;
CLUSTER
postgres=# CLUSTER gtt2 USING idx2;
ERROR:  cannot cluster on partial index "idx2"


*Case2:*
-- Session2:
postgres=# CLUSTER gtt2 USING idx1;
CLUSTER
postgres=# CLUSTER gtt2 USING idx2;
CLUSTER

postgres=# insert into gtt2 values(1);
INSERT 0 1
postgres=# CLUSTER gtt2 USING idx1;
CLUSTER
postgres=# CLUSTER gtt2 USING idx2;
ERROR:  cannot cluster on partial index "idx2"


*Case3:*
-- Session2:
postgres=# TRUNCATE gtt2 DROP;
TRUNCATE TABLE
postgres=# CLUSTER gtt2 USING idx1;
CLUSTER
postgres=# CLUSTER gtt2 USING idx2;
CLUSTER

In Case2 and Case3 we can observe that, with no data in the GTT, we are
able to run "CLUSTER gtt2 USING idx2;" (a partial index).
But why does the same query fail in Case1, which also has no data?
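
For comparison, on stock PostgreSQL, CLUSTER on a partial index is rejected
regardless of whether the table has any data; a quick sketch (t1/t1_idx are
just illustrative names):

postgres=# create table t1 (c1 integer);
CREATE TABLE
postgres=# create index t1_idx on t1 (c1) where c1 % 2 = 0;
CREATE INDEX
postgres=# cluster t1 using t1_idx;
ERROR:  cannot cluster on partial index "t1_idx"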

Thanks,
Prabhat Sahu



>
>
> Wenjing
>
>
>
> Regards
>
> Pavel
>
>
>> All in all, I think the current implementation is sufficient for dba to
>> manage GTT.
>>
>> On 2020-04-02 at 16:45, Prabhat Sahu  wrote:
>>
>> Hi All,
>>
>> I have noted down few behavioral difference in our GTT implementation in
>> PG as compared to Oracle DB:
>> As per my understanding, the behavior of DROP TABLE in case of "Normal
>> table and GTT" in Oracle DB are as below:
>>
>>1. Any tables(Normal table / GTT) without having data in a session,
>>we will be able to DROP from another session.
>>2. For a completed transaction on a normal table having data, we will
>>be able to DROP from another session. If the transaction is not yet
>>complete, and we are trying to drop the table from another session, then 
>> we
>>will get an error. (working as expected)
>>3. For a completed transaction on GTT with(on commit delete rows)
>>(i.e. no data in GTT) in a session, we will be able to DROP from another
>>session.
>>4. For a completed transaction on GTT with(on commit preserve rows)
>>with data in a session, we will not be able to DROP from any session(not
>>even from the session in which GTT is created), we need to truncate the
>>table data first from all the session(session1, session2) which is having
>>data.
>>
>> *1. Any tables(Normal table / GTT) without having data in a session, we
>> will be able to DROP from another session.*
>> *Session1:*
>> create table t1 (c1 integer);
>> create global temporary table gtt1 (c1 integer) on commit delete rows;
>> create global temporary table gtt2 (c1 integer) on commit preserve rows;
>>
>> *Session2:*
>> drop table t1;
>> drop table gtt1;
>> drop table gtt2;
>>
>> -- *Issue 1:* But we are able to drop a simple table and failed to drop
>> GTT as below.
>>
>> postgres=# drop table t1;
>> DROP TABLE
>> postgres=# drop table gtt1;
>> ERROR:  can not drop relation gtt1 when other backend attached this
>> global temp table
>> postgres=# drop table gtt2;
>> ERROR:  can not drop relation gtt2 when other backend attached this
>> global temp table
>>
>>
>> *3. For a completed transaction on GTT with(on commit delete rows) (i.e.
>> no data in GTT) in a session, we will be able to DROP from another session.*
>>
>> *Session1:*create global temporary table gtt1 (c1 integer) on commit
>> delete rows;
>>
>> *Session2:*
>> drop table gtt1;
>>
>> -- *Issue 2:* But we are getting error for GTT
>> with(on_commit_delete_rows) without data.
>>
>

Re: [Proposal] Global temporary tables

2020-04-20 Thread Prabhat Sahu
> I think this is expected, and user test_gtt does not have permission to
> vacuum the system table.
> This has nothing to do with GTT.
>
Hi Wenjing, thanks for the explanation.
Thanks for the new patch. I have verified the crash; it is now resolved.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-17 Thread Prabhat Sahu
Hi Wenjing,

Please check the below scenario: we are getting a server crash with "ALTER
TABLE" adding a column whose default value comes from a sequence:

-- Create the gtt, exit and re-connect the psql prompt, create a sequence,
then alter the table to add a column with a sequence default.
postgres=# create global temporary table gtt1 (c1 int);
CREATE TABLE
postgres=# \q
[edb@localhost bin]$ ./psql postgres
psql (13devel)
Type "help" for help.

postgres=# create sequence seq;
CREATE SEQUENCE
postgres=# alter table gtt1 add c2 int default nextval('seq');
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?> \q


-- Stack trace:
[edb@localhost bin]$ gdb -q -c data/core.70358 postgres
Reading symbols from
/home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.
[New LWP 70358]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: edb postgres [local] ALTER TABLE
 '.
Program terminated with signal 6, Aborted.
#0  0x7f150223b337 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7f150223b337 in raise () from /lib64/libc.so.6
#1  0x7f150223ca28 in abort () from /lib64/libc.so.6
#2  0x00ab2cdd in ExceptionalCondition (conditionName=0xc03ab8
"OidIsValid(relfilenode1) && OidIsValid(relfilenode2)",
errorType=0xc0371f "FailedAssertion", fileName=0xc03492 "cluster.c",
lineNumber=1637) at assert.c:67
#3  0x0065e200 in gtt_swap_relation_files (r1=16384, r2=16390,
target_is_pg_class=false, swap_toast_by_content=false, is_internal=true,
frozenXid=490, cutoffMulti=1, mapped_tables=0x7ffd841f7ee0) at
cluster.c:1637
#4  0x0065dcd9 in finish_heap_swap (OIDOldHeap=16384,
OIDNewHeap=16390, is_system_catalog=false, swap_toast_by_content=false,
check_constraints=true, is_internal=true, frozenXid=490, cutoffMulti=1,
newrelpersistence=103 'g') at cluster.c:1395
#5  0x006bca18 in ATRewriteTables (parsetree=0x1deab80,
wqueue=0x7ffd841f80c8, lockmode=8, context=0x7ffd841f8260) at
tablecmds.c:4991
#6  0x006ba890 in ATController (parsetree=0x1deab80,
rel=0x7f150378f330, cmds=0x1deab28, recurse=true, lockmode=8,
context=0x7ffd841f8260)
at tablecmds.c:3991
#7  0x006ba4f8 in AlterTable (stmt=0x1deab80, lockmode=8,
context=0x7ffd841f8260) at tablecmds.c:3644
#8  0x0093b62a in ProcessUtilitySlow (pstate=0x1e0d6d0,
pstmt=0x1deac48,
queryString=0x1de9b30 "alter table gtt1 add c2 int default
nextval('seq');", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,
queryEnv=0x0, dest=0x1deaf28,
qc=0x7ffd841f8830) at utility.c:1267
#9  0x0093b141 in standard_ProcessUtility (pstmt=0x1deac48,
queryString=0x1de9b30 "alter table gtt1 add c2 int default
nextval('seq');",
context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,
dest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:1067
#10 0x0093a22b in ProcessUtility (pstmt=0x1deac48,
queryString=0x1de9b30 "alter table gtt1 add c2 int default
nextval('seq');",
context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,
dest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:522
#11 0x0093909d in PortalRunUtility (portal=0x1e4fba0,
pstmt=0x1deac48, isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28,
qc=0x7ffd841f8830)
at pquery.c:1157
#12 0x009392b3 in PortalRunMulti (portal=0x1e4fba0,
isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28, altdest=0x1deaf28,
qc=0x7ffd841f8830)
at pquery.c:1303
#13 0x009387d1 in PortalRun (portal=0x1e4fba0,
count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1deaf28,
altdest=0x1deaf28,
qc=0x7ffd841f8830) at pquery.c:779
#14 0x0093298b in exec_simple_query (query_string=0x1de9b30 "alter
table gtt1 add c2 int default nextval('seq');") at postgres.c:1239
#15 0x00936997 in PostgresMain (argc=1, argv=0x1e13b80,
dbname=0x1e13a78 "postgres", username=0x1e13a58 "edb") at postgres.c:4315
#16 0x008868b3 in BackendRun (port=0x1e0bb50) at postmaster.c:4510
#17 0x008860a8 in BackendStartup (port=0x1e0bb50) at
postmaster.c:4202
#18 0x00882626 in ServerLoop () at postmaster.c:1727
#19 0x00881efd in PostmasterMain (argc=3, argv=0x1de4460) at
postmaster.c:1400
#20 0x00789288 in main (argc=3, argv=0x1de4460) at main.c:210
(gdb)

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-17 Thread Prabhat Sahu
On Fri, Apr 17, 2020 at 2:44 PM 曾文旌  wrote:

>
> I improved the logic of the warning message so that when the gap between
> relfrozenxid of GTT is small,
> it will no longer be alarmed message.
>

Hi Wenjing,
Thanks for the patch (v26). I have verified the previous related issues, and
they are working fine now.
Please check the below scenario: VACUUM from a non-superuser.

-- Create user "test_gtt", connect it , create gtt, VACUUM gtt and VACUUM /
VACUUM FULL
postgres=# CREATE USER test_gtt;
CREATE ROLE
postgres=# \c postgres test_gtt
You are now connected to database "postgres" as user "test_gtt".
postgres=> CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int);
CREATE TABLE

-- VACUUM gtt works fine, whereas we get a huge number of WARNINGs for
VACUUM / VACUUM FULL, as below:
postgres=> VACUUM gtt1 ;
VACUUM
postgres=> VACUUM;
WARNING:  skipping "pg_statistic" --- only superuser or database owner can
vacuum it
WARNING:  skipping "pg_type" --- only superuser or database owner can
vacuum it
WARNING:  skipping "pg_toast_2600" --- only table or database owner can
vacuum it
WARNING:  skipping "pg_toast_2600_index" --- only table or database owner
can vacuum it

... ...
... ...

WARNING:  skipping "_pg_foreign_tables" --- only table or database owner
can vacuum it
WARNING:  skipping "foreign_table_options" --- only table or database owner
can vacuum it
WARNING:  skipping "user_mapping_options" --- only table or database owner
can vacuum it
WARNING:  skipping "user_mappings" --- only table or database owner can
vacuum it
VACUUM

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-08 Thread Prabhat Sahu
On Wed, Apr 8, 2020 at 1:48 PM 曾文旌  wrote:

>
>
> On 2020-04-07 at 18:22, Prabhat Sahu  wrote:
>
> Thanks for review.
>> This parameter should support all types of writing of the bool type like
>> parameter autovacuum_enabled.
>> So I fixed in global_temporary_table_v24-pg13.patch.
>>
>
> Thank you Wenjing for the new patch with the fix and the "VACUUM FULL GTT"
> support.
> I have verified the above issue now its resolved.
>
> Please check the below findings on VACUUM FULL.
>
> postgres=# create global temporary table  gtt(c1 int) on commit preserve
> rows;
> CREATE TABLE
> postgres=# vacuum FULL ;
> WARNING:  global temp table oldest FrozenXid is far in the past
> HINT:  please truncate them or kill those sessions that use them.
> VACUUM
>
>
> This is expected,
> This represents that the GTT FrozenXid is the oldest in the entire db, and
> dba should vacuum the GTT if he want to push the db datfrozenxid.
> Also he can use function pg_list_gtt_relfrozenxids() to check which
> session has "too old” data and truncate them or kill the sessions.
>

Again, as per the HINT given ("HINT:  please truncate them or kill those
sessions that use them."): there is only a single session here, and if we
try "TRUNCATE" and then "VACUUM FULL", the behavior is still the same, as
below.

postgres=# truncate gtt ;
TRUNCATE TABLE
postgres=# vacuum full;
WARNING: global temp table oldest FrozenXid is far in the past
HINT: please truncate them or kill those sessions that use them.
VACUUM

I have one more finding, related to "CLUSTER table USING index". Please
check the below issue.
postgres=# create global temporary table gtt(c1 int) on commit preserve
rows;
CREATE TABLE
postgres=# create index idx1 ON gtt (c1);
CREATE INDEX

-- exit and re-connect the psql prompt
postgres=# \q
[edb@localhost bin]$ ./psql postgres
psql (13devel)
Type "help" for help.

postgres=# cluster gtt using idx1;
WARNING:  relcache reference leak: relation "gtt" not closed
CLUSTER

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-07 Thread Prabhat Sahu
>
> Thanks for review.
> This parameter should support all types of writing of the bool type like
> parameter autovacuum_enabled.
> So I fixed in global_temporary_table_v24-pg13.patch.
>

Thank you Wenjing for the new patch with the fix and the "VACUUM FULL GTT"
support.
I have verified the above issue; it is now resolved.

Please check the below findings on VACUUM FULL.

postgres=# create global temporary table  gtt(c1 int) on commit preserve
rows;
CREATE TABLE
postgres=# vacuum FULL ;
WARNING:  global temp table oldest FrozenXid is far in the past
HINT:  please truncate them or kill those sessions that use them.
VACUUM

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-03 Thread Prabhat Sahu
Hi Wenjing,

Please check the allowed values for the boolean parameter
"on_commit_delete_rows".

postgres=# create global temp table gtt1(c1 int)
with(on_commit_delete_rows='true');
CREATE TABLE
Similarly, we can successfully create a GTT using the values 'true',
'false', true, false, 'ON', 'OFF', ON, OFF, 1, 0 for the boolean
parameter "on_commit_delete_rows".

But we get an error while using the boolean values '1', '0', 't',
'f', 'yes', 'no', 'y', 'n', as below.
postgres=# create global temp table gtt11(c1 int)
with(on_commit_delete_rows='1');
ERROR:  on_commit_delete_rows requires a Boolean value
postgres=# create global temp table gtt11(c1 int)
with(on_commit_delete_rows='0');
ERROR:  on_commit_delete_rows requires a Boolean value
postgres=# create global temp table gtt11(c1 int)
with(on_commit_delete_rows='t');
ERROR:  on_commit_delete_rows requires a Boolean value
postgres=# create global temp table gtt11(c1 int)
with(on_commit_delete_rows='f');
ERROR:  on_commit_delete_rows requires a Boolean value
postgres=# create global temp table gtt11(c1 int)
with(on_commit_delete_rows='yes');
ERROR:  on_commit_delete_rows requires a Boolean value
postgres=# create global temp table gtt11(c1 int)
with(on_commit_delete_rows='no');
ERROR:  on_commit_delete_rows requires a Boolean value
postgres=# create global temp table gtt11(c1 int)
with(on_commit_delete_rows='y');
ERROR:  on_commit_delete_rows requires a Boolean value
postgres=# create global temp table gtt11(c1 int)
with(on_commit_delete_rows='n');
ERROR:  on_commit_delete_rows requires a Boolean value

-- As per the error message "ERROR:  on_commit_delete_rows requires a
Boolean value", either we should allow all the boolean spellings,

*Example*: CREATE VIEW view1 WITH (security_barrier = 'true') as select 5;
The syntax of VIEW allows all the above possible boolean values for the
boolean parameter "security_barrier".

-- or else we should change the error message to something like
"ERROR:  on_commit_delete_rows requires 'true','false','ON','OFF',1,0 as a
Boolean value".

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-02 Thread Prabhat Sahu
Hi All,

I have noted down a few behavioral differences in our GTT implementation in PG
as compared to Oracle DB.
As per my understanding, the behavior of DROP TABLE in the case of "normal
table and GTT" in Oracle DB is as below:

   1. Any tables(Normal table / GTT) without having data in a session, we
   will be able to DROP from another session.
   2. For a completed transaction on a normal table having data, we will be
   able to DROP from another session. If the transaction is not yet complete,
   and we are trying to drop the table from another session, then we will get
   an error. (working as expected)
   3. For a completed transaction on GTT with(on commit delete rows) (i.e.
   no data in GTT) in a session, we will be able to DROP from another session.
   4. For a completed transaction on GTT with(on commit preserve rows) with
   data in a session, we will not be able to DROP from any session(not even
   from the session in which GTT is created), we need to truncate the table
   data first from all the session(session1, session2) which is having data.

*1. Any tables(Normal table / GTT) without having data in a session, we
will be able to DROP from another session.*
*Session1:*
create table t1 (c1 integer);
create global temporary table gtt1 (c1 integer) on commit delete rows;
create global temporary table gtt2 (c1 integer) on commit preserve rows;

*Session2:*
drop table t1;
drop table gtt1;
drop table gtt2;

-- *Issue 1:* But we are able to drop the simple table, and fail to drop the
GTTs, as below.

postgres=# drop table t1;
DROP TABLE
postgres=# drop table gtt1;
ERROR:  can not drop relation gtt1 when other backend attached this global
temp table
postgres=# drop table gtt2;
ERROR:  can not drop relation gtt2 when other backend attached this global
temp table


*3. For a completed transaction on GTT with(on commit delete rows) (i.e. no
data in GTT) in a session, we will be able to DROP from another session.*

*Session1:*
create global temporary table gtt1 (c1 integer) on commit delete rows;

*Session2:*
drop table gtt1;

-- *Issue 2:* But we are getting an error for the GTT
with(on_commit_delete_rows) without data.

postgres=# drop table gtt1;
ERROR:  can not drop relation gtt1 when other backend attached this global
temp table


*4. For a completed transaction on GTT with(on commit preserve rows) with
data in any session, we will not be able to DROP from any session(not even
from the session in which GTT is created)*

*Case1:*
create global temporary table gtt2 (c1 integer) on commit preserve rows;
insert into gtt2 values(100);
drop table gtt2;

SQL> drop table gtt2;
drop table gtt2
  *
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table
already in use

-- *Issue 3:* But we are able to drop the GTT (having data) which we
created in the same session.

postgres=# drop table gtt2;
DROP TABLE




*Case2: GTT with (on commit preserve rows) having data in both session1 and
session2*

*Session1:*
create global temporary table gtt2 (c1 integer) on commit preserve rows;
insert into gtt2 values(100);


*Session2:*
insert into gtt2 values(200);

-- If we try to drop the table from any session we should get an error;
this is working fine.
drop table gtt2;

SQL> drop table gtt2;
drop table gtt2
  *
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table
already in use

postgres=# drop table gtt2 ;
ERROR:  can not drop relation gtt2 when other backend attached this global
temp table


-- To drop the table gtt2 from session1/session2, we first need to truncate
the table data in every session (session1, session2) that has data.
*Session1:*
truncate table gtt2;
*Session2:*
truncate table gtt2;

*Session 2:*
SQL> drop table gtt2;

Table dropped.

-- *Issue 4:* But we are not able to drop the GTT, even after truncating the
table in all the sessions.
-- Truncate from all sessions where the GTT has data.
postgres=# truncate gtt2 ;
TRUNCATE TABLE

-- *Try to DROP the GTT; we still get an error.*

postgres=# drop table gtt2 ;
ERROR:  can not drop relation gtt2 when other backend attached this global
temp table


To drop the GTT from any session, we need to exit from all other sessions.
postgres=# drop table gtt2 ;
DROP TABLE

Kindly let me know if I am missing something.


On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu 
wrote:

> Hi Wenjing,
> I hope we need to change the below error message.
>
> postgres=# create global temporary table gtt(c1 int) on commit preserve
> rows;
> CREATE TABLE
>
> postgres=# create materialized view mvw as select * from gtt;
> ERROR: materialized views must not use global temporary tables* or views*
>
> Anyways we are not allowed to create a "global temporary view",
> so the above ERROR message should change(i.e. *" or view"* need to be
> removed from the error message) something like:
> *"ERROR: materialized views must not use global

Re: [Proposal] Global temporary tables

2020-04-01 Thread Prabhat Sahu
Hi Wenjing,
I think we need to change the below error message.

postgres=# create global temporary table gtt(c1 int) on commit preserve
rows;
CREATE TABLE

postgres=# create materialized view mvw as select * from gtt;
ERROR: materialized views must not use global temporary tables* or views*

Anyway, we are not allowed to create a "global temporary view",
so the above ERROR message should change (i.e. *" or views"* needs to be
removed from the error message) to something like:
*"ERROR: materialized views must not use global temporary tables"*

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-31 Thread Prabhat Sahu
On Wed, Apr 1, 2020 at 8:52 AM 曾文旌  wrote:

>
>
> On 2020-03-31 at 21:59, Prabhat Sahu  wrote:
>
> Hi Wenjing,
> Thanks for the new patch.
> I saw with the patch(gtt_v23.patch), we are supporting the new concept
> "global temporary sequence"(i.e. session-specific sequence), is this
> intentional?
>
> It was supported in earlier versions,
>
yes.

> This causes the sequence built into the GTT to automatically become a
> "global temp sequence",
> Such as create global temp table (a serial);
> Like GTT, the global temp sequence is used individually for each session.
>
> Recently, I added the global temp sequence syntax so that it can be
> created independently.
> The purpose of this is to enable such sequence built into the GTT to
> support pg_dump and pg_restore.
>

Thanks for the explanation.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-31 Thread Prabhat Sahu
Hi Wenjing,
Thanks for the new patch.
I saw that with the patch (gtt_v23.patch) we are supporting the new concept
of a "global temporary sequence" (i.e. a session-specific sequence); is this
intentional?

postgres=# create global temporary sequence gt_seq;
CREATE SEQUENCE
postgres=# create sequence seq;
CREATE SEQUENCE
postgres=# \d+
                          List of relations
 Schema |  Name  |   Type   | Owner | Persistence |    Size    | Description
--------+--------+----------+-------+-------------+------------+-------------
 public | gt_seq | sequence | edb   | session     | 8192 bytes |
 public | seq    | sequence | edb   | permanent   | 8192 bytes |
(2 rows)

postgres=# select nextval('gt_seq'), nextval('seq');
 nextval | nextval
---------+---------
       1 |       1
(1 row)

postgres=# select nextval('gt_seq'), nextval('seq');
 nextval | nextval
---------+---------
       2 |       2
(1 row)

-- Exit and re-connect to psql prompt:
postgres=# \q
[edb@localhost bin]$ ./psql postgres
psql (13devel)
Type "help" for help.

postgres=# select nextval('gt_seq'), nextval('seq');
 nextval | nextval
---------+---------
       1 |       3
(1 row)

postgres=# select nextval('gt_seq'), nextval('seq');
 nextval | nextval
---------+---------
       2 |       4
(1 row)

On Tue, Mar 31, 2020 at 9:46 AM 曾文旌  wrote:

>
>
> On 2020-03-27 at 17:21, tushar  wrote:
>
> On 3/27/20 10:55 AM, 曾文旌 wrote:
>
> Hi Wenjing,
> This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope you
> have prepared the patch on top of some previous commit.
> Could you please rebase the patch which we can apply on HEAD ?
>
> Yes, It looks like the built-in functions are in conflict with new code.
>
>
> This error message looks wrong  to me-
>
> postgres=# reindex table concurrently t ;
> ERROR:  cannot create indexes on global temporary tables using concurrent
> mode
> postgres=#
>
> Better message would be-
>
> ERROR:  cannot reindex global temporary tables concurrently
>
> I found that the local temp table automatically disables concurrency mode,
> so I made some improvements; the reindex of a GTT behaves the same as for
> a local temp table.
>
>
> Wenjing
>
>
>
>
> --
> regards,tushar
> EnterpriseDB  https://www.enterprisedb.com/
> The Enterprise PostgreSQL Company
>
>
>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-25 Thread Prabhat Sahu
> Sorry, I introduced this bug in my refactoring.
> It's been fixed.
>
> Wenjing
>
Hi Wenjing,
This patch (gtt_v21_pg13.patch) is not applicable on PG HEAD; I suppose you
have prepared the patch on top of some previous commit.
Could you please rebase the patch so that we can apply it on HEAD?

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-25 Thread Prabhat Sahu
Hi All,

Please check the behavior of a GTT having a column with the "SERIAL" datatype
and a column whose default value comes from a "SEQUENCE", as below:


*Session1:*
postgres=# create sequence gtt_c3_seq;
CREATE SEQUENCE
postgres=# create global temporary table gtt(c1 int, c2 serial, c3 int
default nextval('gtt_c3_seq') not null) on commit preserve rows;
CREATE TABLE

-- The structure of columns c2 and c3 is similar:
postgres=# \d+ gtt
                                           Table "public.gtt"
 Column |  Type   | Collation | Nullable |             Default             | Storage | Stats target | Description
--------+---------+-----------+----------+---------------------------------+---------+--------------+-------------
 c1     | integer |           |          |                                 | plain   |              |
 c2     | integer |           | not null | nextval('gtt_c2_seq'::regclass) | plain   |              |
 c3     | integer |           | not null | nextval('gtt_c3_seq'::regclass) | plain   |              |
Access method: heap
Options: on_commit_delete_rows=false

postgres=# insert into gtt select generate_series(1,3);
INSERT 0 3
postgres=# select * from gtt;
 c1 | c2 | c3
----+----
  1 |  1 |  1
  2 |  2 |  2
  3 |  3 |  3
(3 rows)


-- Session2:
postgres=# insert into gtt select generate_series(1,3);
INSERT 0 3
postgres=# select * from gtt;
 c1 | c2 | c3
----+----
  1 |  1 |  4
  2 |  2 |  5
  3 |  3 |  6
(3 rows)

Kindly let me know, is this behavior expected?
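
By the way, one way to confirm what is happening here is to compare the
persistence of the two sequences in pg_class (a minimal sketch; the
session-level persistence of the serial-created sequence is an assumption
based on this GTT patch):

postgres=# select relname, relpersistence from pg_class
           where relname in ('gtt_c2_seq', 'gtt_c3_seq');

If the implicitly created gtt_c2_seq reports session persistence while the
manually created gtt_c3_seq reports permanent, that would explain why c2
restarts per session while c3 keeps increasing.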

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-24 Thread Prabhat Sahu
Hi Wenjing,
Please check my findings (on gtt_v20.patch) below:

TestCase1: (cache lookup failed on GTT)

-- Session1:
postgres=# create global temporary table gtt1(c1 int) on commit delete rows;
CREATE TABLE

-- Session2:
postgres=# drop table gtt1 ;
DROP TABLE

-- Session1:
postgres=# create global temporary table gtt1(c1 int) on commit delete rows;
ERROR:  cache lookup failed for relation 16384


TestCase2:

-- Session1:
postgres=# create global temporary table gtt (c1 integer) on commit
preserve rows;
CREATE TABLE
postgres=# insert into gtt values(10);
INSERT 0 1

-- Session2:
postgres=# drop table gtt;
DROP TABLE


I hope "session2" should not allow to perform the "DROP" operation on GTT
having data.

Behavior of GTT in Oracle Database in such a scenario: for a GTT with
(on_commit_delete_rows='FALSE') holding committed data in a session, we are
not able to DROP it from any session; we need to TRUNCATE the data first in
order to DROP the table.

SQL> drop table gtt;
drop table gtt
   *
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table
already
in use



On Tue, Mar 17, 2020 at 9:16 AM 曾文旌(义从)  wrote:

>
>
> 2020年3月11日 下午3:52,Prabhat Sahu  写道:
>
> On Mon, Mar 9, 2020 at 10:02 PM 曾文旌(义从) 
> wrote:
>
>>
>>
>> Fixed in global_temporary_table_v18-pg13.patch.
>>
> Hi Wenjing,
> Thanks for the patch. I have verified the previous issues with
> "gtt_v18_pg13.patch" and those are resolved.
> Please find below case:
>
> postgres=# create sequence seq;
> CREATE SEQUENCE
>
> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int PRIMARY KEY) ON
> COMMIT DELETE ROWS;
> CREATE TABLE
>
> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int PRIMARY KEY) ON
> COMMIT PRESERVE ROWS;
> CREATE TABLE
>
> postgres=# alter table gtt1 add c2 int default nextval('seq');
> ERROR:  cannot reindex global temporary tables
>
> postgres=# alter table gtt2 add c2 int default nextval('seq');
> ERROR:  cannot reindex global temporary tables
>
> reindex GTT is already supported
>
> Please check global_temporary_table_v20-pg13.patch
>
>
> Wenjing
>
>
>
>
> *Note*: We are getting this error if we have a key column(PK/UNIQUE) in a
> GTT, and trying to add a column with a default sequence into it.
>
> --
>
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com
>
>
>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-19 Thread Prabhat Sahu
On Thu, Mar 19, 2020 at 3:51 PM wenjing.zwj 
wrote:

> postgres=# CREATE LOCAL TEMPORARY TABLE gtt1(c1 serial PRIMARY KEY, c2
> VARCHAR (50) UNIQUE NOT NULL) ON COMMIT DELETE ROWS;
> CREATE TABLE
> postgres=# CREATE LOCAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2
> integer NOT NULL,
> postgres(# PRIMARY KEY (c1, c2),
> postgres(# FOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT PRESERVE ROWS;
> ERROR:  unsupported ON COMMIT and foreign key combination
> DETAIL:  Table "gtt2" references "gtt1", but they do not have the same ON
> COMMIT setting.
>
> postgres=# CREATE LOCAL TEMPORARY TABLE gtt3(c1 serial PRIMARY KEY, c2
> VARCHAR (50) UNIQUE NOT NULL) ON COMMIT PRESERVE ROWS;
> CREATE TABLE
> postgres=#
> postgres=# CREATE LOCAL TEMPORARY TABLE gtt4(c1 integer NOT NULL, c2
> integer NOT NULL,
> postgres(# PRIMARY KEY (c1, c2),
> postgres(# FOREIGN KEY (c1) REFERENCES gtt3 (c1)) ON COMMIT DELETE ROWS;
> CREATE TABLE
>
> The same behavior applies to the local temp table.
>
Yes, the issue is related to "local temp table".

> I think the cause of the problem is that a temp table with on commit
> delete rows is not good as a referenced table.
> So, should the error message be "cannot reference an on commit delete rows
> temporary table."?
>
No, this is not always true.
We can create a GTT/"local temp table" with "ON COMMIT DELETE ROWS" which
references an "ON COMMIT DELETE ROWS" table.

Below are the 4 combinations of GTT/"local temp table" references.
1. "ON COMMIT PRESERVE ROWS" can reference "ON COMMIT PRESERVE ROWS"
2. "ON COMMIT DELETE ROWS"   can reference "ON COMMIT PRESERVE ROWS"
3. "ON COMMIT DELETE ROWS"   can reference "ON COMMIT DELETE ROWS"
But
4. "ON COMMIT PRESERVE ROWS" fails to reference "ON COMMIT DELETE ROWS"

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-16 Thread Prabhat Sahu
On Mon, Mar 16, 2020 at 1:30 PM Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:

>
> It seems to be expected behavior: GTT data is private to the session and
> postgres_fdw establish its own session where content of the table is empty.
> But if you insert some data in f_gtt1, then you will be able to select
> this data from it because of connection cache in postgres_fdw.
>

Thanks for the explanation.
I am able to insert and select the value from f_gtt1.

postgres=# insert into f_gtt1 values (1,'gtt_c21');
INSERT 0 1
postgres=# select * from f_gtt1;
 c1 |   c2
----+---------
  1 | gtt_c21
(1 row)

I have one more doubt.
As you said above, "GTT data is private to the session and postgres_fdw
establishes its own session where the content of the table is empty."
Please check the below scenario:
we can select data from the "root GTT" and the "foreign GTT partition",
but we are unable to select data from the local "GTT partition".

postgres=# create global temporary table gtt2 (c1 integer, c2 integer)
partition by range(c1);
CREATE TABLE
postgres=# create global temporary table gtt2_p1 (c1 integer, c2 integer);
CREATE TABLE
postgres=# create foreign table f_gtt2_p1 (c1 integer, c2 integer) server
fdw options (table_name 'gtt2_p1');
CREATE FOREIGN TABLE
postgres=# alter table gtt2 attach partition f_gtt2_p1 for values from
(minvalue) to (10);
ALTER TABLE
postgres=# insert into gtt2 select i,i from generate_series(1,5,2)i;
INSERT 0 3
postgres=# select * from gtt2;
 c1 | c2
----+----
  1 |  1
  3 |  3
  5 |  5
(3 rows)

postgres=# select * from gtt2_p1;
 c1 | c2
----+----
(0 rows)

postgres=# select * from f_gtt2_p1;
 c1 | c2
----+----
  1 |  1
  3 |  3
  5 |  5
(3 rows)

Is this expected behavior?

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-15 Thread Prabhat Sahu
Hi Wenjing,
Please check the below scenario, where a foreign table on a GTT is not
showing records.

postgres=# create extension postgres_fdw;
CREATE EXTENSION
postgres=# do $d$
begin
execute $$create server fdw foreign data wrapper postgres_fdw
options (host 'localhost',dbname 'postgres',port
'$$||current_setting('port')||$$')$$;
end;
$d$;
DO
postgres=# create user mapping for public server fdw;
CREATE USER MAPPING

postgres=# create table lt1 (c1 integer, c2 varchar(50));
CREATE TABLE
postgres=# insert into lt1 values (1,'c21');
INSERT 0 1
postgres=# create foreign table ft1 (c1 integer, c2 varchar(50)) server fdw
options (table_name 'lt1');
CREATE FOREIGN TABLE
postgres=# select * from ft1;
 c1 | c2
----+-----
  1 | c21
(1 row)

postgres=# create global temporary table gtt1 (c1 integer, c2 varchar(50));
CREATE TABLE
postgres=# insert into gtt1 values (1,'gtt_c21');
INSERT 0 1
postgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) server
fdw options (table_name 'gtt1');
CREATE FOREIGN TABLE

postgres=# select * from gtt1;
 c1 |   c2
----+---------
  1 | gtt_c21
(1 row)

postgres=# select * from f_gtt1;
 c1 | c2
----+----
(0 rows)

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-13 Thread Prabhat Sahu
Hi Wenjing,

Please check the below combinations of GTTs with primary and foreign key
relations, along with the ERROR message.


Case1:
postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 serial PRIMARY KEY, c2
VARCHAR (50) UNIQUE NOT NULL) ON COMMIT DELETE ROWS;
CREATE TABLE

postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2
integer NOT NULL,
PRIMARY KEY (c1, c2),
FOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT PRESERVE ROWS;
ERROR:  unsupported ON COMMIT and foreign key combination
DETAIL:  Table "gtt2" references "gtt1", but they do not have the same ON
COMMIT setting.

Case2:
postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 serial PRIMARY KEY, c2
VARCHAR (50) UNIQUE NOT NULL) ON COMMIT PRESERVE ROWS;
CREATE TABLE

postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2
integer NOT NULL,
PRIMARY KEY (c1, c2),
FOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT DELETE ROWS;
CREATE TABLE

In "case2" although both the primary table and foreign key GTT *do not have
the same ON COMMIT setting*, still we are able to create the PK-FK
relations with GTT.

So I hope the detail message(DETAIL:  Table "gtt2" references "gtt1", but
they do not have the same ON COMMIT setting.) in "Case1" should be more
clear(something like "wrong combination of ON COMMIT setting").

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-12 Thread Prabhat Sahu
Hi Wenjing,

Please check the below findings:
After running "TRUNCATE" command, the "relfilenode" field is not changing
for GTT
whereas, for Simple table/Temp table "relfilenode" field is changing after
TRUNCATE.

Case 1: Getting the same "relfilenode" for a GTT before and after "TRUNCATE"
postgres=# create global temporary table gtt1(c1 int) on commit delete rows;
CREATE TABLE
postgres=# select relfilenode from pg_class  where relname ='gtt1';
 relfilenode
-------------
   16384
(1 row)
postgres=# truncate gtt1;
TRUNCATE TABLE
postgres=# select relfilenode from pg_class  where relname ='gtt1';
 relfilenode
-------------
   16384
(1 row)

postgres=# create global temporary table gtt2(c1 int) on commit preserve
rows;
CREATE TABLE
postgres=# select relfilenode from pg_class  where relname ='gtt2';
 relfilenode
-------------
   16387
(1 row)
postgres=# truncate gtt2;
TRUNCATE TABLE
postgres=# select relfilenode from pg_class  where relname ='gtt2';
 relfilenode
-------------
   16387
(1 row)


*Case 2: "relfilenode" changes after "TRUNCATE" for Simple table/Temp table*
postgres=# create temporary table temp3(c1 int) on commit preserve rows;
CREATE TABLE
postgres=# select relfilenode from pg_class  where relname ='temp3';
 relfilenode
-------------
   16392
(1 row)
postgres=# truncate temp3;
TRUNCATE TABLE
postgres=# select relfilenode from pg_class  where relname ='temp3';
 relfilenode
-------------
   16395
(1 row)


postgres=# create table tabl4(c1 int);
CREATE TABLE
postgres=# select relfilenode from pg_class  where relname ='tabl4';
 relfilenode
-------------
   16396
(1 row)
postgres=# truncate tabl4;
TRUNCATE TABLE
postgres=# select relfilenode from pg_class  where relname ='tabl4';
 relfilenode
-------------
   16399
(1 row)
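
The same comparison can also be cross-checked with pg_relation_filenode()
(a sketch against the tables above):

postgres=# select pg_relation_filenode('gtt1'::regclass);
postgres=# truncate gtt1;
postgres=# select pg_relation_filenode('gtt1'::regclass);
-- the value stays the same for the GTT, but changes for temp3/tabl4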


On Thu, Mar 12, 2020 at 3:36 PM 曾文旌(义从)  wrote:

>
>
> > 2020年3月12日 上午4:12,Robert Haas  写道:
> >
> > On Wed, Mar 11, 2020 at 9:07 AM 曾文旌(义从) 
> wrote:
> >> reindex need change relfilenode, but GTT is not currently supported.
> >
> > In my view that'd have to be fixed somehow.
> Ok , I am working on it.
>
>
>
> >
> > --
> > Robert Haas
> > EnterpriseDB: http://www.enterprisedb.com
> > The Enterprise PostgreSQL Company
>
>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


SERIAL datatype column skipping values.

2020-03-11 Thread Prabhat Sahu
Hi all,
Please check the below behavior for the "SERIAL" datatype.

postgres=# CREATE TABLE t1(c1 int, c2 serial);
CREATE TABLE
postgres=# insert into t1 values (generate_series(1,3));
INSERT 0 3
postgres=# insert into t1 values (generate_series(4,6));
INSERT 0 3
postgres=# select * from t1;
 c1 | c2
----+----
  1 |  1
  2 |  2
  3 |  3
  4 |  5
  5 |  6
  6 |  7
(6 rows)

In the above case, the serial column "c2" skips the value "4" in the
select output.
Is this expected behavior?
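
For comparison, the INSERT ... SELECT form of the same load is worth
checking, since there the default expression is evaluated exactly once per
row returned by the SELECT (a sketch; table t2 is illustrative):

postgres=# CREATE TABLE t2 (c1 int, c2 serial);
postgres=# INSERT INTO t2 (c1) SELECT generate_series(1,3);
postgres=# INSERT INTO t2 (c1) SELECT generate_series(4,6);
postgres=# SELECT * FROM t2;
-- here c2 is expected to come out as 1..6, without the skipped value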

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-11 Thread Prabhat Sahu
On Mon, Mar 9, 2020 at 10:02 PM 曾文旌(义从)  wrote:

>
>
> Fixed in global_temporary_table_v18-pg13.patch.
>
Hi Wenjing,
Thanks for the patch. I have verified the previous issues with
"gtt_v18_pg13.patch" and those are resolved.
Please find below case:

postgres=# create sequence seq;
CREATE SEQUENCE

postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int PRIMARY KEY) ON COMMIT
DELETE ROWS;
CREATE TABLE

postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int PRIMARY KEY) ON COMMIT
PRESERVE ROWS;
CREATE TABLE

postgres=# alter table gtt1 add c2 int default nextval('seq');
ERROR:  cannot reindex global temporary tables

postgres=# alter table gtt2 add c2 int default nextval('seq');
ERROR:  cannot reindex global temporary tables

Note: We get this error if we have a key column (PK/UNIQUE) in a GTT and
try to add a column with a sequence default to it.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-09 Thread Prabhat Sahu
Hi All,

Kindly check the below scenario.

Case 1:
postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int) on commit delete rows;
CREATE TABLE
postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int) on commit preserve
rows;
CREATE TABLE
postgres=# vacuum gtt1;
VACUUM
postgres=# vacuum gtt2;
VACUUM
postgres=# vacuum;
VACUUM
postgres=# \q

Case 2: Exit and reconnect to the psql prompt.
[edb@localhost bin]$ ./psql  postgres
psql (13devel)
Type "help" for help.

postgres=# vacuum gtt1;
WARNING:  skipping vacuum empty global temp table "gtt1"
VACUUM
postgres=# vacuum gtt2;
WARNING:  skipping vacuum empty global temp table "gtt2"
VACUUM
postgres=# vacuum;
WARNING:  skipping vacuum empty global temp table "gtt1"
WARNING:  skipping vacuum empty global temp table "gtt2"
VACUUM

Although in "Case1" the gtt1/gtt2 are empty, we are not getting "WARNING:
 skipping vacuum empty global temp table" for VACUUM in "Case 1".
whereas we are getting the "WARNING" for VACUUM in "Case2".


On Fri, Mar 6, 2020 at 12:41 PM 曾文旌(义从)  wrote:

>
>
> > 2020年3月5日 下午10:38,Robert Haas  写道:
> >
> > On Thu, Mar 5, 2020 at 9:19 AM tushar 
> wrote:
> >> WARNING:  relfilenode 13589/1663/19063 not exist in gtt shared hash
> when forget
> >> ERROR:  out of shared memory
> >> HINT:  You might need to increase max_active_gtt.
> >>
> >> also , would be great  if we can make this error message  user friendly
> like  - "max connection reached"  rather than memory error
> >
> > That would be nice, but the bigger problem is that the WARNING there
> > looks totally unacceptable. It's looks like it's complaining of some
> > internal issue (i.e. a bug or corruption) and the grammar is poor,
> > too.
>
> Yes, WARNING should not exist.
> This is a bug in the rollback process and I have fixed it in
> global_temporary_table_v17-pg13.patch
>
>
> Wenjing
>
>
> >
> > --
> > Robert Haas
> > EnterpriseDB: http://www.enterprisedb.com
> > The Enterprise PostgreSQL Company
>
>
>
>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-03-03 Thread Prabhat Sahu
On Tue, Mar 3, 2020 at 2:11 PM 曾文旌(义从)  wrote:

>
>
>
> I fixed in global_temporary_table_v16-pg13.patch.
>

Thank you Wenjing for the patch.
Now we are getting corruption with GTT in the below scenario.

postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 bigint, c2 bigserial) on
commit delete rows;
CREATE TABLE
postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 bigint, c2 bigserial) on
commit preserve rows;
CREATE TABLE
postgres=# \q

[edb@localhost bin]$ echo "1
> 2
> 3
> "> t.dat

[edb@localhost bin]$ ./psql  postgres
psql (13devel)
Type "help" for help.

postgres=# \copy gtt1(c1) from 't.dat' with  csv;
ERROR:  could not read block 0 in file "base/13585/t3_16384": read only 0
of 8192 bytes
CONTEXT:  COPY gtt1, line 1: "1"

postgres=# \copy gtt2(c1) from 't.dat' with  csv;
ERROR:  could not read block 0 in file "base/13585/t3_16390": read only 0
of 8192 bytes
CONTEXT:  COPY gtt2, line 1: "1"

NOTE: We end up with such corruption for "bigserial/smallserial/serial"
datatype columns.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-02-25 Thread Prabhat Sahu
Hi All,

Please check the below findings on GTT.
-- Scenario 1:
Under "information_schema", we are not allowed to create a "temporary
table", whereas we can CREATE/DROP a "Global Temporary Table"; is this
expected?

postgres=# create temporary table information_schema.temp1(c1 int);
ERROR:  cannot create temporary relation in non-temporary schema
LINE 1: create temporary table information_schema.temp1(c1 int);
   ^

postgres=# create global temporary table information_schema.temp1(c1 int);
CREATE TABLE

postgres=# drop table information_schema.temp1 ;
DROP TABLE

-- Scenario 2:
Here I get the same error message in both of the below cases.
We may want to add the "global" keyword to the GTT-related error message.

postgres=# create global temporary table gtt1 (c1 int unique);
CREATE TABLE
postgres=# create temporary table tmp1 (c1 int unique);
CREATE TABLE

postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );
ERROR:  constraints on temporary tables may reference only temporary tables

postgres=# create global temporary table gtt2 (c1 int references tmp1(c1) );
ERROR:  constraints on temporary tables may reference only temporary tables

Thanks,
Prabhat Sahu

On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从)  wrote:

>
>
> 2020年2月24日 下午5:44,Prabhat Sahu  写道:
>
> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) 
> wrote:
>
>> Hi,
>> I have started testing the "Global temporary table" feature,
>> That's great, I see hope.
>> from "gtt_v11-pg13.patch". Below is my findings:
>>
>> -- session 1:
>> postgres=# create global temporary table gtt1(a int);
>> CREATE TABLE
>>
>> -- session 2:
>> postgres=# truncate gtt1 ;
>> ERROR:  could not open file "base/13585/t3_16384": No such file or
>> directory
>>
>> is it expected?
>>
>> Oh ,this is a bug, I fixed it.
>>
> Thanks for the patch.
> I have verified the same, Now the issue is resolved with v12 patch.
>
> Kindly confirm the below scenario:
>
> postgres=# create global temporary table gtt1 (c1 int unique);
> CREATE TABLE
>
> postgres=# create global temporary table gtt2 (c1 int references gtt1(c1)
> );
> ERROR:  referenced relation "gtt1" is not a global temp table
>
> postgres=# create table tab2 (c1 int references gtt1(c1) );
> ERROR:  referenced relation "gtt1" is not a global temp table
>
> Thanks,
> Prabhat Sahu
>
>
> GTT supports foreign key constraints
> in global_temporary_table_v13-pg13.patch
>
>
> Wenjing
>
>
>
>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-02-24 Thread Prabhat Sahu
Hi All,

I observe different behavior between a "temporary table" and a "global
temporary table".
Not sure if it is expected.

postgres=# create global temporary table parent1(a int)  on commit delete
rows;
CREATE TABLE
postgres=# create global temporary table child1() inherits (parent1);
CREATE TABLE
postgres=# insert into parent1 values(1);
INSERT 0 1
postgres=# insert into child1 values(2);
INSERT 0 1
postgres=# select * from parent1;
 a
---
(0 rows)

postgres=# select * from child1;
 a
---
(0 rows)


postgres=# create temporary table parent2(a int)  on commit delete rows;
CREATE TABLE
postgres=# create temporary table child2() inherits (parent2);
CREATE TABLE
postgres=# insert into parent2 values(1);
INSERT 0 1
postgres=# insert into child2 values(2);
INSERT 0 1
postgres=# select * from parent2;
 a
---
 2
(1 row)

postgres=# select * from child2;
 a
---
 2
(1 row)


Thanks,
Prabhat Sahu


Re: [Proposal] Global temporary tables

2020-02-24 Thread Prabhat Sahu
On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从)  wrote:

> Hi,
> I have started testing the "Global temporary table" feature,
> That's great, I see hope.
> from "gtt_v11-pg13.patch". Below is my findings:
>
> -- session 1:
> postgres=# create global temporary table gtt1(a int);
> CREATE TABLE
>
> -- session 2:
> postgres=# truncate gtt1 ;
> ERROR:  could not open file "base/13585/t3_16384": No such file or
> directory
>
> is it expected?
>
> Oh ,this is a bug, I fixed it.
>
Thanks for the patch.
I have verified the same; now the issue is resolved with the v12 patch.

Kindly confirm the below scenario:

postgres=# create global temporary table gtt1 (c1 int unique);
CREATE TABLE

postgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );
ERROR:  referenced relation "gtt1" is not a global temp table

postgres=# create table tab2 (c1 int references gtt1(c1) );
ERROR:  referenced relation "gtt1" is not a global temp table

Thanks,
Prabhat Sahu


Re: [Proposal] Global temporary tables

2020-02-20 Thread Prabhat Sahu
Hi,
I have started testing the "Global temporary table" feature,
from "gtt_v11-pg13.patch". Below is my findings:

-- session 1:
postgres=# create global temporary table gtt1(a int);
CREATE TABLE

-- session 2:
postgres=# truncate gtt1 ;
ERROR:  could not open file "base/13585/t3_16384": No such file or directory

is it expected?

On Sun, Feb 16, 2020 at 8:53 PM Pavel Stehule 
wrote:

>
>
> ne 16. 2. 2020 v 16:15 odesílatel 曾文旌(义从) 
> napsal:
>
>>
>>
>> 2020年2月15日 下午6:06,Pavel Stehule  写道:
>>
>>
>> postgres=# insert into foo select generate_series(1,1);
>>> INSERT 0 1
>>> postgres=# \dt+ foo
>>>   List of relations
>>> ┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐
>>> │ Schema │ Name │ Type  │ Owner │ Persistence │  Size  │ Description │
>>> ╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡
>>> │ public │ foo  │ table │ pavel │ session     │ 384 kB │             │
>>> └────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘
>>> (1 row)
>>>
>>> postgres=# truncate foo;
>>> TRUNCATE TABLE
>>> postgres=# \dt+ foo
>>>   List of relations
>>> ┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐
>>> │ Schema │ Name │ Type  │ Owner │ Persistence │ Size  │ Description │
>>> ╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡
>>> │ public │ foo  │ table │ pavel │ session     │ 16 kB │             │
>>> └────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘
>>> (1 row)
>>>
>>> I expect zero size after truncate.
>>>
>>> Thanks for review.
>>>
>>> I can explain, I don't think it's a bug.
>>> The current implementation of the truncated GTT retains two blocks of
>>> FSM pages.
>>> The same is true for truncating regular tables in subtransactions.
>>> This is an implementation that truncates the table without changing the
>>> relfilenode of the table.
>>>
>>>
>> This is not extra important feature - now this is little bit a surprise,
>> because I was not under transaction.
>>
>> Changing relfilenode, I think, is necessary, minimally for future VACUUM
>> FULL support.
>>
>> Not allowing relfilenode changes is the current limit.
>> I think can improve on it. But ,This is a bit complicated.
>> so I'd like to know the necessity of this improvement.
>> Could you give me more details?
>>
>
> I don't think so GTT without support of VACUUM FULL can be accepted. Just
> due consistency.
>
> Regards
>
> Pavel
>
>
>>
>> Regards
>>
>> Pavel Stehule
>>
>>
>>>
>>> Wenjing
>>>
>>>
>>> Regards
>>>
>>> Pavel
>>>
>>>
>
> Wenjing
>
>
>
>
> >
> > --
> > Robert Haas
> > EnterpriseDB: http://www.enterprisedb.com
> > The Enterprise PostgreSQL Company
>
>
>>>
>>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Prabhat Sahu
Hi all,

I would like to share my observations on this PG feature, "Block-level
parallel vacuum".
I have tested the earlier patch (i.e. v48) with the below high-level test
scenarios, and those are working as expected.

   - I have played around with these GUC parameters while testing:
     max_worker_processes
     autovacuum = off
     shared_buffers
     max_parallel_workers
     max_parallel_maintenance_workers
     min_parallel_index_scan_size
     vacuum_cost_limit
     vacuum_cost_delay


   - Tested the parallel vacuum with tables and partitioned tables having
   all possible datatypes, and columns having various indexes (like btree,
   gist, etc.) on part of / the full table.
   - Tested the pgbench tables' data with multiple indexes created
   manually, and ran a script (vacuum_test.sql, sketched at the end of this
   post) with DMLs and VACUUM for multiple clients, jobs, and durations as
   below.

./pgbench  -c 8 -j 16 -T 900  postgres -f vacuum_test.sql

We observe the usage of parallel workers during VACUUM.


   - Ran a few isolation schedule test cases (in regression) with huge data
   and indexes, performed DMLs -> VACUUM
   - Tested with PARTITIONED TABLEs -> global/local indexes -> DMLs -> VACUUM
   - Tested with a PARTITIONED TABLE having different TABLESPACEs in
   different locations -> global/local indexes -> DMLs -> VACUUM
   - With changing STORAGE options for columns (as PLAIN / EXTERNAL /
   EXTENDED) -> DMLs -> VACUUM
   - Created an index with the CONCURRENTLY option / changed the
   storage_parameter for the index as below -> DMLs -> VACUUM

with(buffering=auto) / with(buffering=on) / with(buffering=off) /
with(fillfactor=30);


   - Tested with creating simple and partitioned tables -> DMLs ->
   pg_dump/pg_restore/pg_upgrade -> VACUUM

Verified the data after restore / upgrade / VACUUM.


   - Indexes on UUID-OSSP data -> DMLs -> pg_upgrade -> VACUUM
   - Verified various test scenarios showing better performance of
   parallel VACUUM as compared to non-parallel VACUUM:

 Time taken by VACUUM on PG HEAD+PATCH (with PARALLEL) < Time taken
 by VACUUM on PG HEAD (without PARALLEL)


Machine configuration: (16 VCPUs / RAM: 16GB / Disk size: 640GB)

PG HEAD:
VACUUM tab1;

Time: 38915.384 ms (00:38.915)

Time: 48389.006 ms (00:48.389)

Time: 41324.223 ms (00:41.324)

Time: 37640.874 ms (00:37.641) --median

Time: 36897.325 ms (00:36.897)

Time: 36351.022 ms (00:36.351)

Time: 36198.890 ms (00:36.199)


PG HEAD + v48 Patch:
VACUUM tab1;

Time: 37051.589 ms (00:37.052)

Time: 33647.459 ms (00:33.647) --median

Time: 31580.894 ms (00:31.581)

Time: 34442.046 ms (00:34.442)

Time: 31335.960 ms (00:31.336)

Time: 34441.245 ms (00:34.441)

Time: 31159.639 ms (00:31.160)
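
For reference, a minimal sketch of the kind of vacuum_test.sql script used
above (hypothetical -- the actual script is not attached here; table tab1
and its columns c1/c2 are illustrative):

\set id random(1, 1000000)
INSERT INTO tab1 VALUES (:id, repeat('x', 100));
UPDATE tab1 SET c2 = repeat('y', 100) WHERE c1 = :id;
DELETE FROM tab1 WHERE c1 = :id;
VACUUM tab1;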



-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: Server crash with Master-Slave configuration.

2019-12-24 Thread Prabhat Sahu
On Wed, Dec 25, 2019 at 8:01 AM Michael Paquier  wrote:

> On Tue, Dec 24, 2019 at 05:29:25PM +0530, Prabhat Sahu wrote:
> > While performing below operations with Master-Slave configuration, Slave
> is
> > crashed.
> > Below are the steps to reproduce:
> >
> > -- create a Slave using pg_basebackup and start:
> > ./pg_basebackup -v -R -D d2 -p 55510
> > mkdir /home/centos/ts1
> >
> > -- Session 1(Master):
> > ./psql postgres -p 55510
> >
> > CREATE TABLESPACE ts1 location '/home/centos/ts1';
>
> Your mistake is here.  Both primary and standby are on the same host,
> so CREATE TABLESPACE would point to a path that overlap for both
> clusters as the tablespace path is registered the WAL replayed,
> leading to various weird behaviors.  What you need to do instead is to
> create the tablespace before taking the base backup, and then take the
> base backup using pg_basebackup's --tablespace-mapping.

Thanks Michael for pointing it out. I have re-tested the scenario
with the "--tablespace-mapping=OLDDIR=NEWDIR" option of pg_basebackup, and
now it's working fine.
But I think a proper error message, instead of the crash, would be better.


> --
> Michael
>


-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Server crash with Master-Slave configuration.

2019-12-24 Thread Prabhat Sahu
Hi,

While performing the below operations with a Master-Slave configuration,
the Slave crashed.
Below are the steps to reproduce:

-- create a Slave using pg_basebackup and start:
./pg_basebackup -v -R -D d2 -p 55510
mkdir /home/centos/ts1

-- Session 1(Master):
./psql postgres -p 55510

CREATE TABLESPACE ts1 location '/home/centos/ts1';
CREATE TABLE tab1 (c1 INTEGER, c2 TEXT, c3 point) tablespace ts1;
insert into tab1 (select x, x||'_c2',point (x,x) from
generate_series(1,10) x);

-- Cancel the below update query in the middle, and then vacuum:
update tab1 set c1=c1+2 , c3=point(10,10) where c1 <=9;
vacuum(analyze) tab1(c3, c2);

postgres=# update tab1 set c1=c1+2 , c3=point(10,10) where c1 <=9;
^CCancel request sent
ERROR:  canceling statement due to user request

postgres=# vacuum(analyze) tab1(c3, c2);
VACUUM

OR

postgres=# vacuum(analyze) tab1(c3, c2);
ERROR:  index "pg_toast_16385_index" contains unexpected zero page at block
0
HINT:  Please REINDEX it.

-- session 2: (slave)
./psql postgres -p 55520

-- Below select query is crashed:
select count(*) from tab1_2;

postgres=# select count(*) from tab1_2;
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

-- Below is the stack trace:
[centos@parallel-vacuum-testing bin]$ gdb -q -c d2/core.20509 postgres
Reading symbols from /home/centos/PGsrc/postgresql/inst/bin/postgres...done.
[New LWP 20509]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: startup   recovering
00010006   '.
Program terminated with signal 6, Aborted.
#0  0x7f42d2565337 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64
pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7f42d2565337 in raise () from /lib64/libc.so.6
#1  0x7f42d2566a28 in abort () from /lib64/libc.so.6
#2  0x00a94c55 in errfinish (dummy=0) at elog.c:590
#3  0x00a9729a in elog_finish (elevel=22, fmt=0xb30a10 "WAL
contains references to invalid pages") at elog.c:1465
#4  0x0057cb10 in log_invalid_page (node=..., forkno=MAIN_FORKNUM,
blkno=470, present=false) at xlogutils.c:96
#5  0x0057d64e in XLogReadBufferExtended (rnode=...,
forknum=MAIN_FORKNUM, blkno=470, mode=RBM_NORMAL) at xlogutils.c:472
#6  0x0057d386 in XLogReadBufferForRedoExtended (record=0x1b4a9c8,
block_id=0 '\000', mode=RBM_NORMAL, get_cleanup_lock=true,
buf=0x7ffda55b39d4)
at xlogutils.c:390
#7  0x004f12b5 in heap_xlog_clean (record=0x1b4a9c8) at
heapam.c:7744
#8  0x004f4ebe in heap2_redo (record=0x1b4a9c8) at heapam.c:8891
#9  0x0056cceb in StartupXLOG () at xlog.c:7202
#10 0x0086cb0c in StartupProcessMain () at startup.c:170
#11 0x00582150 in AuxiliaryProcessMain (argc=2,
argv=0x7ffda55b4600) at bootstrap.c:451
#12 0x0086ba0f in StartChildProcess (type=StartupProcess) at
postmaster.c:5461
#13 0x0086685d in PostmasterMain (argc=5, argv=0x1b49d50) at
postmaster.c:1392
#14 0x00775bb1 in main (argc=5, argv=0x1b49d50) at main.c:210
(gdb)

-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Re: [HACKERS] Block level parallel vacuum

2019-12-20 Thread Prabhat Sahu
Hi,

While testing this feature with parallel vacuum on a "TEMPORARY TABLE", I
got a server crash on PG Head + V36_patch.
The changed configuration parameters and stack trace are below:

autovacuum = on
max_worker_processes = 4
shared_buffers = 10MB
max_parallel_workers = 8
max_parallel_maintenance_workers = 8
vacuum_cost_limit = 2000
vacuum_cost_delay = 10
min_parallel_table_scan_size = 8MB
min_parallel_index_scan_size = 0

-- Stack trace:
[centos@parallel-vacuum-testing bin]$ gdb -q -c data/core.1399 postgres
Reading symbols from
/home/centos/BLP_Vacuum/postgresql/inst/bin/postgres...done.
[New LWP 1399]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: autovacuum worker   postgres
   '.
Program terminated with signal 6, Aborted.
#0  0x7f4517d80337 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7f4517d80337 in raise () from /lib64/libc.so.6
#1  0x7f4517d81a28 in abort () from /lib64/libc.so.6
#2  0x00a96341 in ExceptionalCondition (conditionName=0xd18efb
"strvalue != NULL", errorType=0xd18eeb "FailedAssertion",
fileName=0xd18ee0 "snprintf.c", lineNumber=442) at assert.c:67
#3  0x00b02522 in dopr (target=0x7ffdb0e38450, format=0xc5fa95
".%s\"", args=0x7ffdb0e38538) at snprintf.c:442
#4  0x00b01ea6 in pg_vsnprintf (str=0x256df50 "autovacuum: dropping
orphan temp table \"postgres.", '\177' ..., count=1024,
fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
args=0x7ffdb0e38538) at snprintf.c:195
#5  0x00afbadf in pvsnprintf (buf=0x256df50 "autovacuum: dropping
orphan temp table \"postgres.", '\177' ..., len=1024,
fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
args=0x7ffdb0e38538) at psprintf.c:110
#6  0x00afd34b in appendStringInfoVA (str=0x7ffdb0e38550,
fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
args=0x7ffdb0e38538)
at stringinfo.c:149
#7  0x00a970fd in errmsg (fmt=0xc5fa68 "autovacuum: dropping orphan
temp table \"%s.%s.%s\"") at elog.c:832
#8  0x008588d2 in do_autovacuum () at autovacuum.c:2249
#9  0x00857b29 in AutoVacWorkerMain (argc=0, argv=0x0) at
autovacuum.c:1689
#10 0x0085772f in StartAutoVacWorker () at autovacuum.c:1483
#11 0x0086e64f in StartAutovacuumWorker () at postmaster.c:5562
#12 0x0086e106 in sigusr1_handler (postgres_signal_arg=10) at
postmaster.c:5279
#13 <signal handler called>
#14 0x7f4517e3f933 in __select_nocancel () from /lib64/libc.so.6
#15 0x00869838 in ServerLoop () at postmaster.c:1691
#16 0x00869212 in PostmasterMain (argc=3, argv=0x256bd70) at
postmaster.c:1400
#17 0x0077855d in main (argc=3, argv=0x256bd70) at main.c:210
(gdb)

I have tried to reproduce this by re-running all previously executed
queries, but now I am not able to reproduce it.


On Thu, Dec 19, 2019 at 11:26 AM Mahendra Singh  wrote:

> On Wed, 18 Dec 2019 at 12:07, Amit Kapila  wrote:
> >
> > [please trim extra text before responding]
> >
> > On Wed, Dec 18, 2019 at 12:01 PM Mahendra Singh 
> wrote:
> > >
> > > On Tue, 10 Dec 2019 at 00:30, Mahendra Singh 
> wrote:
> > > >
> > > >
> > > > 3.
> > > > After v35 patch, vacuum.sql regression test is taking too much time
> due to large number of inserts so by reducing number of tuples, we can
> reduce that time.
> > > > +INSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM
> generate_series(1,10) i;
> > > >
> > > > here, instead of 10, we can make 1000 to reduce time of this
> test case because we only want to test code and functionality.
> > >
> > > As we added check of min_parallel_index_scan_size in v36 patch set to
> > > decide parallel vacuum, 1000 tuples are not enough to do parallel
> > > vacuum. I can see that we are not launching any workers in vacuum.sql
> > > test case and hence, code coverage also decreased. I am not sure that
> > > how to fix this.
> > >
> >
> > Try by setting min_parallel_index_scan_size to 0 in test case.
>
> Thanks Amit for the fix.
>
> Yes, we can add "set min_parallel_index_scan_size = 0;" in vacuum.sql
> test case. I tested by setting min_parallel_index_scan_size=0 and it
> is working fine.
>
> @Masahiko san, please add above line in vacuum.sql test case.
>
> Thanks and Regards
> Mahendra Thalor
> EnterpriseDB: http://www.enterprisedb.com
>
>
>

-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Re: [HACKERS] Block level parallel vacuum

2019-12-18 Thread Prabhat Sahu
On Wed, Dec 18, 2019 at 6:04 PM Amit Kapila  wrote:

> On Wed, Dec 18, 2019 at 6:01 PM Prabhat Sahu <
> prabhat.s...@enterprisedb.com> wrote:
>
>> Hi all,
>>
>> While testing on v36 patch with gist index, I came across below
>> segmentation fault.
>>
>>
> It seems you forgot to apply the Gist index patch as mentioned by
> Masahiko-San.  You need to first apply the patch at
> https://www.postgresql.org/message-id/CAA4eK1J1RxmXFAHC34S4_BznT76cfbrvqORSk23iBgRAOj1azw%40mail.gmail.com
>  and
> then apply other v-36 patches.  If you have already done that, then we need
> to investigate.  Kindly confirm.
>
> Yes Amit, Thanks for the suggestion. I have forgotten to add the v4 patch.
I have retested the same scenario, now the issue is not reproducible and it
is working fine.
-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Re: [HACKERS] Block level parallel vacuum

2019-12-18 Thread Prabhat Sahu
Hi all,

While testing the v36 patch with a gist index, I came across the below
segmentation fault.

-- PG Head+ v36_patch
create table tab1(c1 int, c2 text PRIMARY KEY, c3 bool, c4 timestamp
without time zone, c5 timestamp with time zone, p point);
create index gist_idx1 on tab1 using gist(p);
create index gist_idx2 on tab1 using gist(p);
create index gist_idx3 on tab1 using gist(p);
create index gist_idx4 on tab1 using gist(p);
create index gist_idx5 on tab1 using gist(p);

-- Cancel the insert statement in the middle:
postgres=# insert into tab1 (select x, x||'_c2', 'T', current_date-x/100,
current_date-x/100,point (x,x) from generate_series(1,100) x);
^CCancel request sent
ERROR:  canceling statement due to user request

-- Segmentation fault during VACUUM(PARALLEL):
postgres=# vacuum(parallel 10) tab1 ;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

-- Below is the stack trace:
[centos@parallel-vacuum-testing bin]$ gdb -q -c data/core.14650  postgres
Reading symbols from
/home/centos/BLP_Vacuum/postgresql/inst/bin/postgres...done.
[New LWP 14650]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: centos postgres [local] VACUUM
   '.
Program terminated with signal 11, Segmentation fault.
#0  0x0075e713 in intset_num_entries (intset=0x1f62) at
integerset.c:353
353 return intset->num_entries;
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64
pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x0075e713 in intset_num_entries (intset=0x1f62) at
integerset.c:353
#1  0x004cbe0f in gistvacuum_delete_empty_pages
(info=0x7fff32f8eba0, stats=0x7f2923b3f4d8) at gistvacuum.c:478
#2  0x004cb353 in gistvacuumcleanup (info=0x7fff32f8eba0,
stats=0x7f2923b3f4d8) at gistvacuum.c:124
#3  0x0050dcca in index_vacuum_cleanup (info=0x7fff32f8eba0,
stats=0x7f2923b3f4d8) at indexam.c:711
#4  0x005079ba in lazy_cleanup_index (indrel=0x7f292e149560,
stats=0x2db5e40, reltuples=0, estimated_count=false) at vacuumlazy.c:2380
#5  0x005074f0 in vacuum_one_index (indrel=0x7f292e149560,
stats=0x2db5e40, lvshared=0x7f2923b3f460, shared_indstats=0x7f2923b3f4d0,
dead_tuples=0x7f2922fbe2c0) at vacuumlazy.c:2196
#6  0x00507428 in vacuum_indexes_leader (Irel=0x2db5de0,
nindexes=6, stats=0x2db5e38, vacrelstats=0x2db5cb0, lps=0x2db5e90) at
vacuumlazy.c:2155
#7  0x00507126 in lazy_parallel_vacuum_indexes (Irel=0x2db5de0,
stats=0x2db5e38, vacrelstats=0x2db5cb0, lps=0x2db5e90, nindexes=6)
at vacuumlazy.c:2045
#8  0x00507770 in lazy_cleanup_indexes (Irel=0x2db5de0,
stats=0x2db5e38, vacrelstats=0x2db5cb0, lps=0x2db5e90, nindexes=6) at
vacuumlazy.c:2300
#9  0x00506076 in lazy_scan_heap (onerel=0x7f292e1473b8,
params=0x7fff32f8f3e0, vacrelstats=0x2db5cb0, Irel=0x2db5de0, nindexes=6,
aggressive=false)
at vacuumlazy.c:1675
#10 0x00504228 in heap_vacuum_rel (onerel=0x7f292e1473b8,
params=0x7fff32f8f3e0, bstrategy=0x2deb3a0) at vacuumlazy.c:475
#11 0x006ea059 in table_relation_vacuum (rel=0x7f292e1473b8,
params=0x7fff32f8f3e0, bstrategy=0x2deb3a0)
at ../../../src/include/access/tableam.h:1432
#12 0x006ecb74 in vacuum_rel (relid=16384, relation=0x2cf5cf8,
params=0x7fff32f8f3e0) at vacuum.c:1885
#13 0x006eac8d in vacuum (relations=0x2deb548,
params=0x7fff32f8f3e0, bstrategy=0x2deb3a0, isTopLevel=true) at vacuum.c:440
#14 0x006ea776 in ExecVacuum (pstate=0x2deaf90, vacstmt=0x2cf5de0,
isTopLevel=true) at vacuum.c:241
#15 0x0091da3d in standard_ProcessUtility (pstmt=0x2cf5ea8,
queryString=0x2cf51a0 "vacuum(parallel 10) tab1 ;",
context=PROCESS_UTILITY_TOPLEVEL,
params=0x0, queryEnv=0x0, dest=0x2cf6188, completionTag=0x7fff32f8f840
"") at utility.c:665
#16 0x0091d270 in ProcessUtility (pstmt=0x2cf5ea8,
queryString=0x2cf51a0 "vacuum(parallel 10) tab1 ;",
context=PROCESS_UTILITY_TOPLEVEL, params=0x0,
queryEnv=0x0, dest=0x2cf6188, completionTag=0x7fff32f8f840 "") at
utility.c:359
#17 0x0091c187 in PortalRunUtility (portal=0x2d5c530,
pstmt=0x2cf5ea8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cf6188,
completionTag=0x7fff32f8f840 "") at pquery.c:1175
#18 0x0091c39e in PortalRunMulti (portal=0x2d5c530,
isTopLevel=true, setHoldSnapshot=false, dest=0x2cf6188, altdest=0x2cf6188,
completionTag=0x7fff32f8f840 "") at pquery.c:1321
#19 0x0091b8c8 in PortalRun (portal=0x2d5c530,
count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cf6188,
altdest=0x2cf6188,
completionTag=0x7fff32f8f840 "") at pquery.c:796
#20 0x0

Re: tableam vs. TOAST

2019-11-06 Thread Prabhat Sahu
On Tue, Nov 5, 2019 at 4:48 PM Ashutosh Sharma 
wrote:

> From the stack trace shared by Prabhat, I understand that the checkpointer
> process panicked due to one of the following two reasons:
>
> 1) The fsync() failed in the first attempt itself and the reason for the
> failure was not due to file being dropped or truncated i.e. fsync failed
> with the error other than ENOENT. Refer to ProcessSyncRequests() for
> details esp. the code inside for (failures = 0; !entry->canceled;
> failures++) loop.
>
> 2) The first attempt to fsync() failed with ENOENT error because just
> before the fsync function was called, the file being synced either got
> dropped or truncated. When this happened, the checkpointer process called
> AbsorbSyncRequests() to update the entry for deleted file in the hash table
> but it seems like AbsorbSyncRequests() failed to do so and that's why the
> "entry->canceled" couldn't be set to true. Due to this, fsync() was
> performed on the same file twice and that failed too. As checkpointer
> process doesn't expect the fsync on the same file to fail twice, it
> panicked. Again, please check ProcessSyncRequests() for details esp. the
> code inside for (failures = 0; !entry->canceled; failures++) loop.
>
> Now, the point of discussion here is, which one of the above two reasons
> could the cause for panic? According to me, point #2 doesn't look like the
> possible reason for panic. The reason being just before a file is unlinked,
> backend first sends a SYNC_FORGET_REQUEST to the checkpointer process which
> marks the entry for this file in the hash table as cancelled and then
> removes the file. So, with this understanding it is hard to believe that
> once the first fsync() for a file has failed with error ENOENT, a call to
> AbsorbSyncRequests() made immediately after that wouldn't update the entry
> for this file in the hash table because the backend only removes the file
> once it has successfully sent the SYNC_FORGET_REQUEST for that file to the
> checkpointer process. See mdunlinkfork()->register_forget_request() for
> details on this.
>
> So, I think the first point that I mentioned above could be the probable
> reason for the checkpointer process getting panicked. But, having said all
> that, it would be good to have some evidence for it which can be confirmed
> by inspecting the server logfile.
>
> Prabhat, is it possible for you to re-run the test-case with
> log_min_messages set to DEBUG1 and save the logfile for the test-case that
> crashes. This would be helpful in knowing if the fsync was performed just
> once or twice i.e. whether point #1 is the reason for the panic or point
> #2.
>

I have run the same testcases with and without the patch multiple times
with the debug option (log_min_messages = DEBUG1), but this time I am not
able to reproduce the crash.

>
> Thanks,
>
> --
> With Regards,
> Ashutosh Sharma
> EnterpriseDB:http://www.enterprisedb.com
>
> On Thu, Oct 31, 2019 at 10:26 AM Prabhat Sahu <
> prabhat.s...@enterprisedb.com> wrote:
>
>>
>>
>> On Wed, Oct 30, 2019 at 9:46 PM Robert Haas 
>> wrote:
>>
>>> On Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <
>>> prabhat.s...@enterprisedb.com> wrote:
>>>
>>>> While testing the Toast patch(PG+v7 patch) I found below server crash.
>>>> System configuration:
>>>> VCPUs: 4, RAM: 8GB, Storage: 320GB
>>>>
>>>> This issue is not frequently reproducible, we need to repeat the same
>>>> testcase multiple times.
>>>>
>>>
>>> I wonder if this is an independent bug, because the backtrace doesn't
>>> look like it's related to the stuff this is changing. Your report doesn't
>>> specify whether you can also reproduce the problem without the patch, which
>>> is something that you should always check before reporting a bug in a
>>> particular patch.
>>>
>>
>> Hi Robert,
>>
>> My sincere apologize that I have not mentioned the issue in more detail.
>> I have ran the same case against both PG HEAD and HEAD+Patch multiple
>> times(7, 10, 20nos), and
>> as I found this issue was not failing in HEAD and same case is
>> reproducible in HEAD+Patch (again I was not sure about the backtrace
>> whether its related to patch or not).
>>
>>
>>
>>> --
>>> Robert Haas
>>> EnterpriseDB: http://www.enterprisedb.com
>>> The Enterprise PostgreSQL Company
>>>
>>
>>
>> --
>>
>> With Regards,
>>
>> Prabhat Kumar Sahu
>> Skype ID: prabhat.sahu1984
>> EnterpriseDB Software India Pvt. Ltd.
>>
>> The Postgres Database Company
>>
>

-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Re: tableam vs. TOAST

2019-10-30 Thread Prabhat Sahu
On Wed, Oct 30, 2019 at 9:46 PM Robert Haas  wrote:

> On Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <
> prabhat.s...@enterprisedb.com> wrote:
>
>> While testing the Toast patch(PG+v7 patch) I found below server crash.
>> System configuration:
>> VCPUs: 4, RAM: 8GB, Storage: 320GB
>>
>> This issue is not frequently reproducible, we need to repeat the same
>> testcase multiple times.
>>
>
> I wonder if this is an independent bug, because the backtrace doesn't look
> like it's related to the stuff this is changing. Your report doesn't
> specify whether you can also reproduce the problem without the patch, which
> is something that you should always check before reporting a bug in a
> particular patch.
>

Hi Robert,

My sincere apologies for not describing the issue in more detail.
I have run the same case against both PG HEAD and HEAD+Patch multiple
times (7, 10, 20 runs), and
found that this issue does not occur on HEAD while the same case is
reproducible on HEAD+Patch (though I was not sure from the backtrace
whether it is related to the patch or not).



> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Re: tableam vs. TOAST

2019-10-30 Thread Prabhat Sahu
Hi All,

While testing the TOAST patch (PG + v7 patch) I found the below server crash.
System configuration:
VCPUs: 4, RAM: 8GB, Storage: 320GB

This issue is not frequently reproducible; we need to repeat the same
testcase multiple times.

-- Returns the number of chunks in the TOAST table backing the given table name.
CREATE OR REPLACE FUNCTION toast_chunks_cnt_func(p1 IN text)
  RETURNS int AS $$
DECLARE
 chunks_cnt int;
 v_tbl text;
BEGIN
  -- Look up the TOAST table belonging to the given relation.
  SELECT reltoastrelid::regclass INTO v_tbl FROM pg_class WHERE relname = p1;
  -- Count the chunk rows stored in that TOAST table.
  EXECUTE 'SELECT count(*) FROM ' || v_tbl::regclass INTO chunks_cnt;
  RETURN chunks_cnt;
END; $$ LANGUAGE plpgsql;

-- Server crash after multiple runs of the below testcase
-- 
CHECKPOINT;
CREATE TABLE toast_tab (c1 text);
\d+ toast_tab
-- ALTER the table column c1 to storage "EXTERNAL" to make sure that the
-- column value is pushed to the TOAST table but not COMPRESSED.
ALTER TABLE toast_tab ALTER COLUMN c1 SET STORAGE EXTERNAL;
\d+ toast_tab
\timing
INSERT INTO toast_tab
( select repeat('a', 20)
  from generate_series(1,4) x);
\timing
SELECT reltoastrelid::regclass FROM pg_class WHERE RELNAME = 'toast_tab';
SELECT toast_chunks_cnt_func('toast_tab') "Number of chunks";
SELECT pg_column_size(t1.*) FROM toast_tab t1 limit 1;
SELECT DISTINCT SUBSTR(c1, 9,10) FROM toast_tab;

CHECKPOINT;
\timing
UPDATE toast_tab SET c1 = UPPER(c1);
\timing
SELECT toast_chunks_cnt_func('toast_tab') "Number of chunks";
SELECT pg_column_size(t1.*) FROM toast_tab t1 limit 1;
SELECT DISTINCT SUBSTR(c1, 9,10) FROM toast_tab;

DROP TABLE toast_tab;
-- 

-- Stacktrace as below:
[centos@host-192-168-1-249 bin]$ gdb -q -c data2/core.3151 postgres
Reading symbols from
/home/centos/PG/PGsrc/postgresql/inst/bin/postgres...done.
[New LWP 3151]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: checkpointer
   '.
Program terminated with signal 6, Aborted.
#0  0x7f2267d33207 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-260.el7_6.5.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_6.x86_64 libcom_err-1.42.9-13.el7.x86_64
libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-16.el7_6.1.x86_64
pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7f2267d33207 in raise () from /lib64/libc.so.6
#1  0x7f2267d348f8 in abort () from /lib64/libc.so.6
#2  0x00eb3a80 in errfinish (dummy=0) at elog.c:552
#3  0x00c26530 in ProcessSyncRequests () at sync.c:393
#4  0x00bbbc57 in CheckPointBuffers (flags=256) at bufmgr.c:2589
#5  0x00604634 in CheckPointGuts (checkPointRedo=51448358328,
flags=256) at xlog.c:8992
#6  0x00603b5e in CreateCheckPoint (flags=256) at xlog.c:8781
#7  0x00aed8fa in CheckpointerMain () at checkpointer.c:481
#8  0x006240de in AuxiliaryProcessMain (argc=2,
argv=0x7ffe887c0880) at bootstrap.c:461
#9  0x00b0e834 in StartChildProcess (type=CheckpointerProcess) at
postmaster.c:5414
#10 0x00b09283 in reaper (postgres_signal_arg=17) at
postmaster.c:2995
#11 <signal handler called>
#12 0x7f2267df1f53 in __select_nocancel () from /lib64/libc.so.6
#13 0x00b05000 in ServerLoop () at postmaster.c:1682
#14 0x00b0457b in PostmasterMain (argc=5, argv=0x349bce0) at
postmaster.c:1391
#15 0x00971c9f in main (argc=5, argv=0x349bce0) at main.c:210
(gdb)



On Sat, Oct 5, 2019 at 12:03 AM Robert Haas  wrote:

> On Fri, Sep 6, 2019 at 10:59 AM Robert Haas  wrote:
> > On Thu, Sep 5, 2019 at 4:07 PM Andres Freund  wrote:
> > > Yea, makes sense to me.
> >
> > OK, done.  Here's the remaining patches again, with a slight update to
> > the renaming patch (now 0002).  In the last version, I renamed
> > toast_insert_or_update to heap_toast_insert_or_update but did not
> > rename toast_delete to heap_toast_delete.  Actually, I'm not seeing
> > any particular reason not to go ahead and push the renaming patch at
> > this point also.
>
> And, hearing no objections, done.
>
> Here's the last patch back, rebased over that renaming. Although I
> think that Andres (and Tom) are probably right that there's room for
> improvement here, I currently don't see a way around the issues I
> wrote about in
> http://postgr.es/m/ca+tgmoa0zfcacpojcsspollgpztvfsyvcvb-uss8yokzmo5...@mail.gmail.com
> -- so not quite sure where to go next. Hopefully Andres or someone
> else will give me a quick whack with the cluebat if I'm missing
> something obvious.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Re: tableam vs. TOAST

2019-07-08 Thread Prabhat Sahu
On Mon, Jul 8, 2019 at 9:06 PM Robert Haas  wrote:

> On Tue, Jun 25, 2019 at 2:19 AM Prabhat Sahu
>  wrote:
> > I have tested the TOAST patches(v3) with different storage options
> like(MAIN, EXTERNAL, EXTENDED, etc.), and
> > combinations of compression and out-of-line storage options.
> > I have used a few dummy tables with various tuple count say 10k, 20k,
> 40k, etc. with different column lengths.
> > Used manual CHECKPOINT option with (checkpoint_timeout = 1d,
> max_wal_size = 10GB) before the test to avoid performance fluctuations,
> > and calculated the results as a median value of a few consecutive test
> executions.
>
> Thanks for testing.
>
> > All the observation looks good to me,
> > except for the "Test1" for SCC UPDATE with tuple count(10K/20K), for SCC
> INSERT with tuple count(40K)  there was a slightly increse in time taken
> > incase of "with patch" result. For a better observation, I also have ran
> the same "Test 1" for higher tuple count(i.e. 80K), and it also looks fine.
>
> Did you run each test just once?  How stable are the results?
>
No, I have executed each test multiple times (7 times each) and calculated
the result as the median among those,
and the results look stable (with the v3 patches).

-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Re: Attached partition not considering altered column properties of root partition.

2019-07-03 Thread Prabhat Sahu
Thanks Amit for the fix patch.

I have applied the patch and verified the issue.
Attaching a partition with mismatched column storage properties now shows
an error as below:
postgres=# alter table p attach partition p2 for values in (2);
psql: ERROR:  child table "p2" has different storage option for column "b"
than parent
DETAIL:  EXTENDED versus MAIN

Thanks,
Prabhat Sahu

On Wed, Jul 3, 2019 at 7:23 AM Amit Langote  wrote:

> Hi Prabhat,
>
> On Tue, Jul 2, 2019 at 5:12 PM Prabhat Sahu
>  wrote:
> >
> > Hi,
> >
> > In below testcase when I changed the staorage option for root partition,
> newly attached partition not including the changed staorage option.
> > Is this an expected behavior?
>
> Thanks for the report.  This seems like a bug.  Documentation claims
> that the child tables inherit column storage options from the parent
> table.  That's actually enforced in only some cases.
>
> 1. If you create the child table as a child to begin with (that is,
> not attach it as child after the fact):
>
> create table parent (a text);
> create table child () inherits (parent);
> select attrelid::regclass, attname, attstorage from pg_attribute where
> attrelid in ('parent'::regclass, 'child'::regclass) and attname = 'a';
>  attrelid │ attname │ attstorage
> ──────────┼─────────┼────────────
>  parent   │ a       │ x
>  child    │ a       │ x
> (2 rows)
>
>
> 2. If you change the parent's column's storage option, child's column
> is recursively changed.
>
> alter table parent alter a set storage main;
> select attrelid::regclass, attname, attstorage from pg_attribute where
> attrelid in ('parent'::regclass, 'child'::regclass) and attname = 'a';
>  attrelid │ attname │ attstorage
> ──────────┼─────────┼────────────
>  parent   │ a       │ m
>  child    │ a       │ m
> (2 rows)
>
> However, we fail to enforce the rule when the child is attached after the
> fact:
>
> create table child2 (a text);
> alter table child2 inherit parent;
> select attrelid::regclass, attname, attstorage from pg_attribute where
> attrelid in ('parent'::regclass, 'child'::regclass,
> 'child2'::regclass) and attname = 'a';
>  attrelid │ attname │ attstorage
> ──────────┼─────────┼────────────
>  parent   │ a       │ m
>  child    │ a       │ m
>  child2   │ a       │ x
> (3 rows)
>
> To fix this, MergeAttributesIntoExisting() should check that the
> attribute options of a child don't conflict with the parent, which the
> attached patch implements.  Note that partitioning uses the same code
> as inheritance, so the fix applies to it too.  After the patch:
>
> create table p (a int, b text) partition by list (a);
> create table p1 partition of p for values in (1);
> select attrelid::regclass, attname, attstorage from pg_attribute where
> attrelid in ('p'::regclass, 'p1'::regclass) and attname = 'b';
>  attrelid │ attname │ attstorage
> ──┼─┼
>  p│ b   │ x
>  p1   │ b   │ x
> (2 rows)
>
> alter table p alter b set storage main;
> select attrelid::regclass, attname, attstorage from pg_attribute where
> attrelid in ('p'::regclass, 'p1'::regclass) and attname = 'b';
>  attrelid │ attname │ attstorage
> ──┼─┼
>  p│ b   │ m
>  p1   │ b   │ m
> (2 rows)
>
> create table p2 (like p);
> select attrelid::regclass, attname, attstorage from pg_attribute where
> attrelid in ('p'::regclass, 'p1'::regclass, 'p2'::regclass) and
> attname = 'b';
>  attrelid │ attname │ attstorage
> ──┼─┼
>  p│ b   │ m
>  p1   │ b   │ m
>  p2   │ b   │ x
> (3 rows)
>
> alter table p attach partition p2 for values in (2);
> ERROR:  child table "p2" has different storage option for column "b" than
> parent
> DETAIL:  EXTENDED versus MAIN
>
> -- ok after changing p2 to match
> alter table p2 alter b set storage main;
> alter table p attach partition p2 for values in (2);
>
> Thanks,
> Amit


Attached partition not considering altered column properties of root partition.

2019-07-02 Thread Prabhat Sahu
Hi,

In the testcase below, after I changed the storage option on the root
partition, a newly attached partition does not pick up the changed storage
option. Is this expected behavior?

postgres=# CREATE TABLE tab1 (c1 INT, c2 text) PARTITION BY RANGE(c1);
CREATE TABLE
postgres=# create table tt_p1 as select * from tab1  where 1=2;
SELECT 0
postgres=# alter  table tab1 alter COLUMN c2 set storage main;
ALTER TABLE
postgres=#
postgres=# alter table tab1 attach partition tt_p1 for values from (20) to
(30);
ALTER TABLE
postgres=# \d+ tab1
                           Partitioned table "public.tab1"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 c1     | integer |           |          |         | plain   |              |
 c2     | text    |           |          |         | main    |              |
Partition key: RANGE (c1)
Partitions: tt_p1 FOR VALUES FROM (20) TO (30)

postgres=# \d+ tt_p1
   Table "public.tt_p1"
 Column |  Type   | Collation | Nullable | Default | Storage  | Stats
target | Description
+-+---+--+-+--+--+-
 c1 | integer |   |  | | plain|
 |
 c2 | text|   |  | | extended |
 |
Partition of: tab1 FOR VALUES FROM (20) TO (30)
Partition constraint: ((c1 IS NOT NULL) AND (c1 >= 20) AND (c1 < 30))
Access method: heap
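
A possible workaround in the meantime (a sketch; it assumes the goal is
simply for tt_p1 to match the parent's storage settings) is to set the
storage option on the child column explicitly:

-- align the child column with the parent, before or after attaching
ALTER TABLE tt_p1 ALTER COLUMN c2 SET STORAGE MAIN;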

-- 

With Regards,

Prabhat Kumar Sahu


Re: tableam vs. TOAST

2019-06-24 Thread Prabhat Sahu
On Tue, Jun 11, 2019 at 9:47 PM Robert Haas  wrote:

> On Tue, May 21, 2019 at 2:10 PM Robert Haas  wrote:
> > Updated and rebased patches attached.
>
> And again.
>

Hi Robert,

I have tested the TOAST patches (v3) with different storage options
(MAIN, EXTERNAL, EXTENDED, etc.) and with
combinations of compression and out-of-line storage options.
I used a few dummy tables with various tuple counts (10K, 20K, 40K,
etc.) and different column lengths.
I issued a manual CHECKPOINT (with checkpoint_timeout = 1d and max_wal_size =
10GB) before each test to avoid performance fluctuations,
and calculated the results as the median of a few consecutive test
executions.

Please find the SQL script attached herewith, which I used to perform
these observations.

Below are the test scenarios in which I checked the behavior and
performance of the TOAST patches against PG master; a setup sketch follows
the list.
1. where a single column is compressed(SCC)
2. where multiple columns are compressed(MCC)
-- ALTER the table column/s for storage as "MAIN" to make sure that
the column values are COMPRESSED.

3. where a single column is pushed to the TOAST table but not
compressed(SCTNC)
4. where multiple columns are pushed to the TOAST table but not
compressed(MCTNC)
-- ALTER the table column/s for storage as "EXTERNAL" to make sure
that the column values are pushed to the TOAST table but not COMPRESSED.

5. where a single column is pushed to the TOAST table and also
compressed(SCTC)
6. where multiple columns are pushed to the TOAST table and also
compressed(MCTC)
-- ALTER the table column/s for storage as "EXTENDED" to make sure
that the column values are pushed to the TOAST table and also COMPRESSED.

7. updating the tuples with similar data shouldn't affect the behavior of
storage options.
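
For reference, here is a minimal sketch of how the storage options map to
the scenarios above (the table name, column, and value sizes are
illustrative, not the exact attached script):

-- illustrative setup; each scenario uses one of the three storage options
CREATE TABLE toast_tab (c1 int, c2 text);
ALTER TABLE toast_tab ALTER COLUMN c2 SET STORAGE MAIN;      -- scenarios 1-2: compressed
ALTER TABLE toast_tab ALTER COLUMN c2 SET STORAGE EXTERNAL;  -- scenarios 3-4: out-of-line, not compressed
ALTER TABLE toast_tab ALTER COLUMN c2 SET STORAGE EXTENDED;  -- scenarios 5-6: out-of-line and compressed

-- timed statement pattern, preceded by a manual CHECKPOINT
CHECKPOINT;
INSERT INTO toast_tab SELECT g, repeat('x', 5000)
FROM generate_series(1, 10000) g;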

Please find my observation as below:
System Used: (VCPUs: 8, RAM: 16GB, Size: 640GB)
(Each cell shows the elapsed time in ms, with mm:ss in parentheses; tuple
counts are 10K/20K/40K, plus 80K for Test 1 as noted below.)

1. SCC INSERT
   10K tuples: without patch 125921.737 ms (02:05.922), with patch 125992.563 ms (02:05.993)
   20K tuples: without patch 234263.295 ms (03:54.263), with patch 235952.336 ms (03:55.952)
   40K tuples: without patch 497290.442 ms (08:17.290), with patch 502820.139 ms (08:22.820)
   80K tuples: without patch 948470.603 ms (15:48.471), with patch 941778.952 ms (15:41.779)

1. SCC UPDATE
   10K tuples: without patch 263017.814 ms (04:23.018), with patch 270893.910 ms (04:30.894)
   20K tuples: without patch 488393.748 ms (08:08.394), with patch 507937.377 ms (08:27.937)
   40K tuples: without patch 1078862.613 ms (17:58.863), with patch 1053029.428 ms (17:33.029)
   80K tuples: without patch 2037119.576 ms (33:57.120), with patch 2023633.862 ms (33:43.634)

2. MCC INSERT
   10K tuples: without patch 35415.089 ms (00:35.415), with patch 35910.552 ms (00:35.911)
   20K tuples: without patch 70899.737 ms (01:10.900), with patch 70800.964 ms (01:10.801)
   40K tuples: without patch 142185.996 ms (02:22.186), with patch 142241.913 ms (02:22.242)

2. MCC UPDATE
   10K tuples: without patch 72043.757 ms (01:12.044), with patch 73848.732 ms (01:13.849)
   20K tuples: without patch 137717.696 ms (02:17.718), with patch 137577.606 ms (02:17.578)
   40K tuples: without patch 276358.752 ms (04:36.359), with patch 276520.727 ms (04:36.521)

3. SCTNC INSERT
   10K tuples: without patch 26377.274 ms (00:26.377), with patch 25600.189 ms (00:25.600)
   20K tuples: without patch 45702.630 ms (00:45.703), with patch 45163.510 ms (00:45.164)
   40K tuples: without patch 99903.299 ms (01:39.903), with patch 100013.004 ms (01:40.013)

3. SCTNC UPDATE
   10K tuples: without patch 78385.225 ms (01:18.385), with patch 76680.325 ms (01:16.680)
   20K tuples: without patch 151823.250 ms (02:31.823), with patch 153503.971 ms (02:33.504)
   40K tuples: without patch 308197.734 ms (05:08.198), with patch 308474.937 ms (05:08.475)

4. MCTNC INSERT
   10K tuples: without patch 26214.069 ms (00:26.214), with patch 25383.522 ms (00:25.384)
   20K tuples: without patch 50826.522 ms (00:50.827), with patch 50221.669 ms (00:50.222)
   40K tuples: without patch 106034.338 ms (01:46.034), with patch 106122.827 ms (01:46.123)

4. MCTNC UPDATE
   10K tuples: without patch 78423.817 ms (01:18.424), with patch 75154.593 ms (01:15.155)
   20K tuples: without patch 158885.787 ms (02:38.886), with patch 156530.964 ms (02:36.531)
   40K tuples: without patch 319721.266 ms (05:19.721), with patch 322385.709 ms (05:22.386)

5. SCTC INSERT
   10K tuples: without patch 38451.022 ms (00:38.451), with patch 38652.520 ms (00:38.653)
   20K tuples: without patch 71590.748 ms (01:11.591), with patch 71048.975 ms (01:11.049)
   40K tuples: without patch 143327.913 ms (02:23.328), with patch 142593.207 ms (02:22.593)

5. SCTC UPDATE
   10K tuples: without patch 82069.311 ms (01:22.069), with patch 81678.131 ms (01:21.678)
   20K tuples: without patch 138763.508 ms (02:18.764), with patch 138625.473 ms (02:18.625)
   40K tuples: without patch 277534.080 ms (04:37.534), with patch 277091.611 ms (04:37.092)

6. MCTC INSERT
   10K tuples: without patch 36325.730 ms (00:36.326), with patch 35803.368 ms (00:35.803)
   20K tuples: without patch 73285.204 ms (01:13.285), with patch 72728.371 ms (01:12.728)
   40K tuples: without patch 142324.859 ms (02:22.325), with patch 144368.335 ms (02:24.368)

6. MCTC UPDATE
   10K tuples: without patch 73740.729 ms (01:13.741), with patch 73002.511 ms (01:13.003)
   20K tuples: without patch 141309.859 ms (02:21.310), with patch 139676.173 ms (02:19.676)
   40K tuples: without patch 278906.647 ms (04:38.907), with patch 279522.408 ms (04:39.522)

All the observations look good to me,
except that for SCC UPDATE with tuple counts 10K/20K, and for SCC INSERT
with tuple count 40K, there was a slight increase in the time taken in
the "with patch" results. For a better picture, I also ran the same SCC
test with a higher tuple count (80K), and it also looks fine.

I have also performed the below tests with TOAST table objects:
8. pg_dump/restore and pg_upgrade with these
9. Streaming replication setup
10. Concurrent transactions

While testing a few concurrent transactions I have the below question:
-- Concurrent transactions acquire a lock for the TOAST option (ALTER TABLE ..
SET STORAGE .. MAIN/EXTERNAL/EXTENDED, etc.)
-- Session 1:
CREATE TABLE a (a_id text PRIMARY KEY);
CREATE TABLE b (b_id text);
INSERT INTO a VALUES ('a'), ('b');
INSERT INTO b VALUES ('a'), ('b'), 

Inconsistencies in the behavior of CHR() function in PG.

2019-03-29 Thread Prabhat Sahu
Hi All,

While exploring the CHR() function in PG,
I found that a few of the ASCII values are shown as hex escapes (like
'\x08', '\x0B'),
while a few take effect in the output (e.g. chr(9) => horizontal tab,
chr(10) => line feed), as in the example below.

postgres=# select 1|| chr(8)|| 2 || chr(9)||3 || chr(10)||4 || chr(11)||5
|| chr(12)||6 || chr(13)||7  as col1;
         col1
----------------------
 1*\x08*2  3* +*
 4*\x0B*5*\x0C*6*\r*7
(1 row)

My question here is: why these inconsistencies in the behavior of the
CHR() function?
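
For what it's worth, one way to inspect what chr() actually returns,
independent of how it is displayed (a small sketch; it only checks the
code point and character length of each value):

-- each chr(n) is a single one-character value, however it prints
SELECT n, ascii(chr(n)) AS code, length(chr(n)) AS chars
FROM generate_series(8, 13) AS n;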

-- 


With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company


pg_dump/pg_restore fail for TAR_DUMP and CUSTOM_DUMP from v94/v95/v96 to v11/master.

2019-02-27 Thread Prabhat Sahu
Hi,

I got a failure in pg_dump/pg_restore as below:
pg_dump/pg_restore fails with 'ERROR: schema "public" already exists' for
TAR_DUMP and CUSTOM_DUMP from v94/v95/v96 to v11/master.

-- Take pg_dump in v94/v95/v96:
[prabhat@localhost bin]$ ./pg_dump -f /tmp/*tar_dump_PG94.tar* -Ft postgres
-p 9000
[prabhat@localhost bin]$ ./pg_dump -f /tmp/*custom_dump_PG94.sql* -Fc
postgres -p 9000

-- Try to restore the above dump into v11/master:
[prabhat@localhost bin]$ ./pg_restore -F t -U prabhat -d db3 -p 9001 /tmp/
*tar_dump_PG94.tar*
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 6; 2615 2200 SCHEMA public
prabhat
pg_restore: [archiver (db)] could not execute query: ERROR:  schema
"public" already exists
Command was: CREATE SCHEMA public;



WARNING: errors ignored on restore: 1

[prabhat@localhost bin]$ ./pg_restore -F c -U prabhat -d db4 -p 9001 /tmp/
*custom_dump_PG94.sql*
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 6; 2615 2200 SCHEMA public
prabhat
pg_restore: [archiver (db)] could not execute query: ERROR:  schema
"public" already exists
Command was: CREATE SCHEMA public;



WARNING: errors ignored on restore: 1

Note: I am able to perform "Plain dump/restore" across the branches.
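
If the goal is just to get a warning-free restore, one option to try (a
sketch; --clean together with --if-exists makes pg_restore drop objects
before recreating them, which should avoid the collision with the
pre-existing "public" schema):

./pg_restore -F t --clean --if-exists -U prabhat -d db3 -p 9001 /tmp/tar_dump_PG94.tar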


-- 


With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company


Re: [HACKERS] Bug in to_timestamp().

2018-09-18 Thread Prabhat Sahu
Hi All,

A few more findings from testing to_timestamp() on HEAD.

postgres[3493]=# select to_timestamp('15-07-1984 23:30:32',' dd- mm- yyyy
hh24: mi: ss');
       to_timestamp
----------------------------
 1984-07-15 23:30:32+05:30
(1 row)

postgres[3493]=# select to_timestamp('15-07-*1984* 23:30:32','*9*dd-*9*mm-
*99*yyyy *9*hh24:*9*mi:*9*ss');
          to_timestamp
--------------------------------
 *0084*-07-05 03:00:02+05:53:28
(1 row)

If there are spaces before a format field the output is fine (1st output),
but if instead of spaces we have a *digit* then we get wrong output.


On Mon, Sep 10, 2018 at 12:23 AM Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:

> On Thu, Sep 6, 2018 at 3:58 PM Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> On Thu, Sep 6, 2018 at 2:40 PM Alexander Korotkov
>>  wrote:
>> >
>> > On Wed, Sep 5, 2018 at 7:28 PM amul sul  wrote:
>> > > On Wed, Sep 5, 2018, 6:35 PM Alexander Korotkov <
>> a.korot...@postgrespro.ru> wrote:
>> > >> On Wed, Sep 5, 2018 at 3:10 PM amul sul  wrote:
>> > >> > On Wed, Sep 5, 2018 at 3:05 PM Alexander Korotkov
>> > >> >  wrote:
>> > >> > > On Wed, Sep 5, 2018 at 1:22 AM David G. Johnston
>> > >> > >  wrote:
>> > >> > > > From those results the question is how important is it to
>> force the following breakage on our users (i.e., introduce FX exact symbol
>> matching):
>> > >> > > >
>> > >> > > > SELECT to_timestamp('97/Feb/16', 'FXYY:Mon:DD');
>> > >> > > > - to_timestamp
>> > >> > > > ---
>> > >> > > > - Sun Feb 16 00:00:00 1997 PST
>> > >> > > > -(1 row)
>> > >> > > > -
>> > >> > > > +ERROR:  unexpected character "/", expected character ":"
>> > >> > > > +HINT:  In FX mode, punctuation in the input string must
>> exactly match the format string.
>> > >> > > >
>> > >> > > > There seemed to be some implicit approvals of this breakage
>> some 30 emails and 10 months ago but given that this is the only change
>> from a correct result to a failure I'd like to officially put it out there
>> for opinion/vote gathering.   Mine is a -1; though keeping the distinction
>> between space and non-alphanumeric characters is expected.
>> > >> > >
>> > >> > > Do I understand correctly that you're -1 to changes to FX mode,
>> but no
>> > >> > > objection to changes in non-FX mode?
>> > >> > >
>> > >> > Ditto.
>> > >>
>> > >> So, if no objections for non-FX mode changes, then I'll extract that
>> > >> part and commit it separately.
>> > >
>> > >
>> > > Yeah, that make sense to me, thank you.
>> >
>> > OK!  I've removed FX changes from the patch.  The result is attached.
>> > I'm going to commit this if no objections.
>>
>> Attached revision fixes usage of two subsequent spaces in the
>> documentation.
>>
>
> So, pushed!  Thanks to every thread participant for review and feedback.
>
> --
> Alexander Korotkov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>


-- 


With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company


Difference in TO_TIMESTAMP results.

2018-09-17 Thread Prabhat Sahu
Hi All,

I have found the below difference in TO_TIMESTAMP results.

postgres[114552]=# select to_timestamp('15-07-1984 23:30:32','dd-mm-yyyy
hh24:mi:ss');
       to_timestamp
----------------------------
 1984-07-15 23:30:32+05:30
(1 row)

postgres[114552]=# select to_timestamp('15-07-84 23:30:32','dd-mm-yyyy
hh24:mi:ss');
         to_timestamp
--------------------------------
 0084-07-15 23:30:32+05:53*:28*
(1 row)

My doubt is about the *":28"* in the timezone part of the 2nd result; is
it expected?
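
For context, a small sketch that separates this from to_timestamp() itself
(it assumes the session TimeZone is Asia/Kolkata, as the +05:30 in the
first result suggests; for dates before standard time was adopted, the
timezone database falls back to local mean time, which for Kolkata is
+05:53:28):

SET timezone = 'Asia/Kolkata';
SELECT timestamptz '1984-07-15 23:30:32';  -- shown with offset +05:30
SELECT timestamptz '0084-07-15 23:30:32';  -- shown with offset +05:53:28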

-- 


With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company


Memory leak with CALL to Procedure with COMMIT.

2018-07-22 Thread Prabhat Sahu
Hi Hackers,

While testing PG procedures, I found a memory leak on HEAD, with the below
steps:

postgres=# CREATE OR REPLACE PROCEDURE proc1(v1 INOUT INT)
AS $$
BEGIN
 commit;
END; $$ LANGUAGE plpgsql;
CREATE PROCEDURE

postgres=# call proc1(10);
WARNING:  Snapshot reference leak: Snapshot 0x23678e8 still referenced
 v1
----
 10
(1 row)
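
To help confirm that the in-procedure COMMIT is what triggers the warning,
here is a comparison case one could run (a sketch; I have not shown its
output):

CREATE OR REPLACE PROCEDURE proc2(v1 INOUT INT)
AS $$
BEGIN
 NULL;  -- same shape as proc1, but without the COMMIT
END; $$ LANGUAGE plpgsql;

CALL proc2(10);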

--

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company


"Access privileges" is missing after pg_dumpall

2018-06-25 Thread Prabhat Sahu
Hi,

I have taken a pg_dumpall on pg-master, and after restoring the dump I am
not able to see the "Access privileges", as shown below.
The same is reproducible in the back branches as well; is this expected?


CREATE ROLE user1 PASSWORD 'user1' SUPERUSER LOGIN;
CREATE DATABASE db1 OWNER=user1;
GRANT ALL ON DATABASE db1 TO user1;

postgres=# \l+ db1
                                                  List of databases
 Name | Owner | Encoding |  Collate   |   Ctype    |    Access privileges     |  Size   | Tablespace | Description
------+-------+----------+------------+------------+--------------------------+---------+------------+-------------
 db1  | user1 | UTF8     | en_US.utf8 | en_US.utf8 | =Tc/user1               +| 7621 kB | pg_default |
      |       |          |            |            | user1=CTc/user1          |         |            |
(1 row)

postgres=# SELECT datname as "Relation", datacl as "Access permissions"
FROM pg_database WHERE datname = 'db1';
 Relation |     Access permissions
----------+-----------------------------
 db1      | {=Tc/user1,user1=CTc/user1}
(1 row)


-- pg_dumpall
./pg_dumpall > /tmp/dumpall.sql

-- Restore
./psql -a -f /tmp/dumpall.sql


postgres=# \l+ db1
                                            List of databases
 Name | Owner | Encoding |  Collate   |   Ctype    | Access privileges |  Size   | Tablespace | Description
------+-------+----------+------------+------------+-------------------+---------+------------+-------------
 db1  | user1 | UTF8     | en_US.utf8 | en_US.utf8 |                   | 7699 kB | pg_default |
(1 row)

postgres=# SELECT datname as "Relation", datacl as "Access permissions"
FROM pg_database WHERE datname = 'db1';
 Relation | Access permissions
----------+--------------------
 db1      |
(1 row)
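
To see whether the ACL was lost at dump time or only at restore time, one
quick check (a sketch) is to grep the dump file for the GRANT/REVOKE
statements that mention db1:

grep -nE 'GRANT|REVOKE' /tmp/dumpall.sql | grep db1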

--

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company


segmentation fault in pg head with SQL function.

2018-03-15 Thread Prabhat Sahu
Hi,

I found a segmentation fault on PG master HEAD with the below steps and
stack trace.

postgres=# CREATE OR REPLACE FUNCTION func1() RETURNS VOID
LANGUAGE SQL
AS $$
select 10;
$$;
CREATE FUNCTION

postgres=# select func1();
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q


-- stack trace:
[edb@localhost bin]$ gdb -q -c data/core.31498 postgres
Reading symbols from
/home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.
[New LWP 31498]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib64/libthread_db.so.1".
Core was generated by `postgres: edb postgres [local] SELECT
 '.
Program terminated with signal 6, Aborted.
#0  0x7fb376a001f7 in raise () from /usr/lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-196.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64
libselinux-2.5-11.el7.x86_64 openssl-libs-1.0.2k-8.el7.x86_64
pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0  0x7fb376a001f7 in raise () from /usr/lib64/libc.so.6
#1  0x7fb376a018e8 in abort () from /usr/lib64/libc.so.6
#2  0x00a01368 in ExceptionalCondition (conditionName=0xbea4a0
"!(exprType(newexpr) == result_type)", errorType=0xbe98c6
"FailedAssertion",
fileName=0xbe9916 "clauses.c", lineNumber=4627) at assert.c:54
#3  0x007d1611 in inline_function (funcid=16384, result_type=2278,
result_collid=0, input_collid=0, args=0x0, funcvariadic=0 '\000',
func_tuple=0x7fb377ebc918,
context=0x7fff01e5fca0) at clauses.c:4627
#4  0x007d0781 in simplify_function (funcid=16384,
result_type=2278, result_typmod=-1, result_collid=0, input_collid=0,
args_p=0x7fff01e5eb70, funcvariadic=0 '\000',
process_args=1 '\001', allow_non_const=1 '\001',
context=0x7fff01e5fca0) at clauses.c:4095
#5  0x007ce15f in eval_const_expressions_mutator (node=0x2a88cf8,
context=0x7fff01e5fca0) at clauses.c:2676
#6  0x0073727b in expression_tree_mutator (node=0x2a88d88,
mutator=0x7cdc5c , context=0x7fff01e5fca0)
at nodeFuncs.c:2854
#7  0x007d007c in eval_const_expressions_mutator (node=0x2a88d88,
context=0x7fff01e5fca0) at clauses.c:3671
#8  0x00737464 in expression_tree_mutator (node=0x2a88d50,
mutator=0x7cdc5c , context=0x7fff01e5fca0)
at nodeFuncs.c:2903
#9  0x007d007c in eval_const_expressions_mutator (node=0x2a88d50,
context=0x7fff01e5fca0) at clauses.c:3671
#10 0x007cdc09 in eval_const_expressions (root=0x2a88f00,
node=0x2a88d50) at clauses.c:2474
#11 0x007ae7cb in preprocess_expression (root=0x2a88f00,
expr=0x2a88d50, kind=1) at planner.c:996
#12 0x007adf40 in subquery_planner (glob=0x2a88b68,
parse=0x2a888e0, parent_root=0x0, hasRecursion=0 '\000', tuple_fraction=0)
at planner.c:687
#13 0x007ad541 in standard_planner (parse=0x2a888e0,
cursorOptions=256, boundParams=0x0) at planner.c:385
#14 0x007ad2d2 in planner (parse=0x2a888e0, cursorOptions=256,
boundParams=0x0) at planner.c:243
#15 0x0089980c in pg_plan_query (querytree=0x2a888e0,
cursorOptions=256, boundParams=0x0) at postgres.c:807
#16 0x00899939 in pg_plan_queries (querytrees=0x2a88ec8,
cursorOptions=256, boundParams=0x0) at postgres.c:873
#17 0x00899c08 in exec_simple_query (query_string=0x2a87a28 "select
func1();") at postgres.c:1048
#18 0x0089dfb0 in PostgresMain (argc=1, argv=0x2ab3550,
dbname=0x2ab33b0 "postgres", username=0x2a84428 "edb") at postgres.c:4144
#19 0x007feed4 in BackendRun (port=0x2aab390) at postmaster.c:4409
#20 0x007fe64d in BackendStartup (port=0x2aab390) at
postmaster.c:4081
#21 0x007fab64 in ServerLoop () at postmaster.c:1754
#22 0x007fa19d in PostmasterMain (argc=5, argv=0x2a82330) at
postmaster.c:1362
#23 0x00731fb8 in main (argc=5, argv=0x2a82330) at main.c:228
(gdb)


--

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company


Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2018-03-08 Thread Prabhat Sahu
On Wed, Mar 7, 2018 at 7:51 PM, Robert Haas  wrote:

> On Wed, Mar 7, 2018 at 8:59 AM, Prabhat Sahu <
> prabhat.s...@enterprisedb.com> wrote:
>>
>> 2018-03-07 19:24:44.263 IST [54400] LOG:  background worker "parallel
>> worker" (PID 54482) was terminated by signal 9: Killed
>>
>
> That looks like the background worker got killed by the OOM killer.  How
> much memory do you have in the machine where this occurred?
>
I ran the testcase on my local machine with the below configuration:

Environment: CentOS 7 (64-bit)
HD: 100GB
RAM: 4GB
Processors: 4

I have narrowed down the testcase as below, which also reproduces the same
crash.

-- GUCs under postgres.conf
maintenance_work_mem = 8GB

./pgbench -i -s 500 -d postgres

postgres=# create index pgb_acc_idx3 on pgbench_accounts(aid,
abalance,filler);
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

--

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company



On Thu, Mar 8, 2018 at 7:12 AM, Andres Freund  wrote:

>
>
> On March 7, 2018 5:40:18 PM PST, Peter Geoghegan  wrote:
> >On Wed, Mar 7, 2018 at 5:16 PM, Tomas Vondra
> > wrote:
> >> FWIW that's usually written to the system log. Does dmesg say
> >something
> >> about the kill?
> >
> >While it would be nice to confirm that it was indeed the OOM killer,
> >either way the crash happened because SIGKILL was sent to a parallel
> >worker. There is no reason to suspect a bug.
>
> Not impossible there's a leak somewhere though.
>
> Andres
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>


Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2018-03-07 Thread Prabhat Sahu
On Wed, Mar 7, 2018 at 7:16 PM, Robert Haas  wrote:

> On Wed, Mar 7, 2018 at 8:13 AM, Prabhat Sahu <
> prabhat.s...@enterprisedb.com> wrote:
>
>> Hi all,
>>
>> While testing this feature I found a crash on PG head with parallel
>> create index using pgbanch tables.
>>
>> -- GUCs under postgres.conf
>> max_parallel_maintenance_workers = 16
>> max_parallel_workers = 16
>> max_parallel_workers_per_gather = 8
>> maintenance_work_mem = 8GB
>> max_wal_size = 4GB
>>
>> ./pgbench -i -s 500 -d postgres
>>
>> postgres=# create index pgb_acc_idx3 on pgbench_accounts(aid,
>> abalance,filler);
>> WARNING:  terminating connection because of crash of another server
>> process
>> DETAIL:  The postmaster has commanded this server process to roll back
>> the current transaction and exit, because another server process exited
>> abnormally and possibly corrupted shared memory.
>> HINT:  In a moment you should be able to reconnect to the database and
>> repeat your command.
>> server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>> before or while processing the request.
>> The connection to the server was lost. Attempting reset: Failed.
>> !>
>>
>
> That makes it look like perhaps one of the worker backends crashed.  Did
> you get a message in the logfile that might indicate the nature of the
> crash?  Something with PANIC or TRAP, perhaps?
>


I am not able to see any PANIC/TRAP in the log file;
here are its contents.

[edb@localhost bin]$ cat logsnew
2018-03-07 19:21:20.922 IST [54400] LOG:  listening on IPv6 address "::1",
port 5432
2018-03-07 19:21:20.922 IST [54400] LOG:  listening on IPv4 address
"127.0.0.1", port 5432
2018-03-07 19:21:20.925 IST [54400] LOG:  listening on Unix socket
"/tmp/.s.PGSQL.5432"
2018-03-07 19:21:20.936 IST [54401] LOG:  database system was shut down at
2018-03-07 19:21:20 IST
2018-03-07 19:21:20.939 IST [54400] LOG:  database system is ready to
accept connections
2018-03-07 19:24:44.263 IST [54400] LOG:  background worker "parallel
worker" (PID 54482) was terminated by signal 9: Killed
2018-03-07 19:24:44.286 IST [54400] LOG:  terminating any other active
server processes
2018-03-07 19:24:44.297 IST [54405] WARNING:  terminating connection
because of crash of another server process
2018-03-07 19:24:44.297 IST [54405] DETAIL:  The postmaster has commanded
this server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2018-03-07 19:24:44.297 IST [54405] HINT:  In a moment you should be able
to reconnect to the database and repeat your command.
2018-03-07 19:24:44.301 IST [54478] WARNING:  terminating connection
because of crash of another server process
2018-03-07 19:24:44.301 IST [54478] DETAIL:  The postmaster has commanded
this server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2018-03-07 19:24:44.301 IST [54478] HINT:  In a moment you should be able
to reconnect to the database and repeat your command.
2018-03-07 19:24:44.494 IST [54504] FATAL:  the database system is in
recovery mode
2018-03-07 19:24:44.496 IST [54400] LOG:  all server processes terminated;
reinitializing
2018-03-07 19:24:44.513 IST [54505] LOG:  database system was interrupted;
last known up at 2018-03-07 19:22:54 IST
2018-03-07 19:24:44.552 IST [54505] LOG:  database system was not properly
shut down; automatic recovery in progress
2018-03-07 19:24:44.554 IST [54505] LOG:  redo starts at 0/AB401A38
2018-03-07 19:25:14.712 IST [54505] LOG:  invalid record length at
1/818B8D80: wanted 24, got 0
2018-03-07 19:25:14.714 IST [54505] LOG:  redo done at 1/818B8D48
2018-03-07 19:25:14.714 IST [54505] LOG:  last completed transaction was at
log time 2018-03-07 19:24:05.322402+05:30
2018-03-07 19:25:16.887 IST [54400] LOG:  database system is ready to
accept connections



--

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company


Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2018-03-07 Thread Prabhat Sahu
Hi all,

While testing this feature I found a crash on PG HEAD with parallel CREATE
INDEX using pgbench tables.

-- GUCs under postgres.conf
max_parallel_maintenance_workers = 16
max_parallel_workers = 16
max_parallel_workers_per_gather = 8
maintenance_work_mem = 8GB
max_wal_size = 4GB

./pgbench -i -s 500 -d postgres

postgres=# create index pgb_acc_idx3 on pgbench_accounts(aid,
abalance,filler);
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>


--

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Corporation

The Postgres Database Company

On Thu, Feb 8, 2018 at 6:15 PM, Robert Haas  wrote:

> On Tue, Feb 6, 2018 at 3:53 PM, Robert Haas  wrote:
> > On Tue, Feb 6, 2018 at 2:11 PM, Tom Lane  wrote:
> >> Robert Haas  writes:
> >>> Unfortunately valgrind does not work at all on my laptop -- the server
> >>> appears to start, but as soon as you try to connect, the whole thing
> >>> dies with an error claiming that the startup process has failed.  So I
> >>> can't easily test this at the moment.  I'll try to get it working,
> >>> here or elsewhere, but thought I'd send the above reply first.
> >>
> >> Do you want somebody who does have a working valgrind installation
> >> (ie me) to take responsibility for pushing this patch?
> >
> > I committed it before seeing this.  It probably would've been better
> > if you had done it, but I assume Peter tested it, so let's see what
> > the BF thinks.
>
> skink and lousyjack seem happy now, so I think it worked.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>


Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2018-01-16 Thread Prabhat Sahu
Hi all,

I have been continuing to test the parallel CREATE INDEX patch. So far
I haven't come across any issue or regression with the patches.
Here are a few performance numbers from the latest round of testing, which
was performed on top of the 6th Jan patch submitted by Peter.

Testing was done on an OpenStack instance with:

CPU: 8
RAM: 16GB
HD: 640 GB

postgres=# select pg_size_pretty(pg_total_relation_size('lineitem'));
 pg_size_pretty
----------------
 93 GB
(1 row)

-- Test 1.
max_parallel_workers_maintenance = 2
max_parallel_workers = 16
max_parallel_workers_per_gather = 8
maintenance_work_mem = 1GB
max_wal_size = 4GB

-- Test 2.
max_parallel_workers_maintenance = 4
max_parallel_workers = 16
max_parallel_workers_per_gather = 8
maintenance_work_mem = 2GB
max_wal_size = 4GB

-- Test 3.
max_parallel_workers_maintenance = 8
max_parallel_workers = 16
max_parallel_workers_per_gather = 8
maintenance_work_mem = 4GB
max_wal_size = 4GB

NOTE: All the time taken entries are the median of 3 consecutive runs for
the same B-tree index creation query.
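
For reference, a sketch of how each measurement could be driven (it
assumes the GUC names from the configurations above; the index is dropped
between runs so every run builds from scratch):

\timing on
SET max_parallel_workers_maintenance = 2;  -- 4 and 8 for Tests 2 and 3
CREATE INDEX li_ordkey_idx1 ON lineitem(l_orderkey);
DROP INDEX li_ordkey_idx1;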

Time taken for parallel index creation (Test 1, 2, and 3 use
max_parallel_workers_maintenance = 2, 4, and 8 respectively, per the
configurations above):

Index on "bigint" column:
CREATE INDEX li_ordkey_idx1 ON lineitem(l_orderkey);
  Test 1: without patch 1062446.462 ms (17:42.446), with patch 1024972.273 ms (17:04.972), change 3.52%
  Test 2: without patch 1053468.945 ms (17:33.469), with patch 896375.543 ms (14:56.376), change 17.75%
  Test 3: without patch 1082920.703 ms (18:02.921), with patch 932550.058 ms (15:32.550), change 13.88%

Index on "integer" column:
CREATE INDEX li_lineno_idx2 ON lineitem(l_linenumber);
  Test 1: without patch 1538285.499 ms (25:38.285), with patch 1201008.423 ms (20:01.008), change 21.92%
  Test 2: without patch 1529837.023 ms (25:29.837), with patch 1014188.489 ms (16:54.188), change 33.70%
  Test 3: without patch 1642160.947 ms (27:22.161), with patch 978518.253 ms (16:18.518), change 40.41%

Index on "numeric" column:
CREATE INDEX li_qty_idx3 ON lineitem(l_quantity);
  Test 1: without patch 3968102.568 ms (01:06:08.103), with patch 2359304.405 ms (39:19.304), change 40.54%
  Test 2: without patch 4129510.930 ms (01:08:49.511), with patch 1680201.644 ms (28:00.202), change 59.31%
  Test 3: without patch 4348248.210 ms (01:12:28.248), with patch 1490461.879 ms (24:50.462), change 65.72%

Index on "character" column:
CREATE INDEX li_lnst_idx4 ON lineitem(l_linestatus);
  Test 1: without patch 1510273.931 ms (25:10.274), with patch 1240265.301 ms (20:40.265), change 17.87%
  Test 2: without patch 1516842.985 ms (25:16.843), with patch 995730.092 ms (16:35.730), change 34.35%
  Test 3: without patch 1580789.375 ms (26:20.789), with patch 984975.746 ms (16:24.976), change 37.69%

Index on "date" column:
CREATE INDEX li_shipdt_idx5 ON lineitem(l_shipdate);
  Test 1: without patch 1483603.274 ms (24:43.603), with patch 1189704.930 ms (19:49.705), change 19.80%
  Test 2: without patch 1498348.925 ms (24:58.349), with patch 1040421.626 ms (17:20.422), change 30.56%
  Test 3: without patch 1653651.499 ms (27:33.651), with patch 1016305.794 ms (16:56.306), change 38.54%

Index on "character varying" column:
CREATE INDEX li_comment_idx6 ON lineitem(l_comment);
  Test 1: without patch 6945953.838 ms (01:55:45.954), with patch 4329696.334 ms (01:12:09.696), change 37.66%
  Test 2: without patch 6818556.437 ms (01:53:38.556), with patch 2834034.054 ms (47:14.034), change 58.43%
  Test 3: without patch 6942285.711 ms (01:55:42.286), with patch 2648430.902 ms (44:08.431), change 61.85%

Composite index on "numeric", "character" columns:
CREATE INDEX li_qtylnst_idx34 ON lineitem (l_quantity, l_linestatus);
  Test 1: without patch 4961563.400 ms (01:22:41.563), with patch 2959722.178 ms (49:19.722), change 40.34%
  Test 2: without patch 5242809.501 ms (01:27:22.810), with patch 2077463.136 ms (34:37.463), change 60.37%
  Test 3: without patch 5576765.727 ms (01:32:56.766), with patch 1755829.420 ms (29:15.829), change 68.51%

Composite index on "date", "character varying" columns:
CREATE INDEX li_shipdtcomment_idx56 ON lineitem (l_shipdate, l_comment);
  Test 1: without patch 4693318.077 ms (01:18:13.318), with patch 3181494.454 ms (53:01.494), change 32.21%
  Test 2: without patch 4627624.682 ms (01:17:07.625), with patch 2613289.211 ms (43:33.289), change 43.52%
  Test 3: without patch 4719242.965 ms (01:18:39.243), with patch 2685516.832 ms (44:45.517), change 43.09%


*Thanks & Regards,*

*Prabhat Kumar Sahu*
Skype ID: prabhat.sahu1984

www.enterprisedb.com


On Tue, Jan 16, 2018 at 6:24 AM, Peter Geoghegan  wrote:

> On Fri, Jan 12, 2018 at 10:28 AM, Robert Haas 
> wrote:
> > More comments:
>
> Attached patch has all open issues worked through, including those
> that I respond to or comment on below, as well as the other feedback
> from your previous e-mails. Note also that I fixed the issue that Amit
> raised, as well as the GetOldestXmin()-argument bug that I noticed in
> passing when responding to Amit. I also worked on the attribution in
> the commit message.
>
> Before getting to my responses to your most recent round of feedback,
> I want to first talk about some refactoring that I decided to do. As
> you can see from the master branch, tuplesort_performsort() isn't
> necessarily reached for spool2, even when we start out with a spool2
> (that is, for many unique index builds, spool2 never even does a
> tuplesort_performsort()). We may instead decide to shut down spool2
> when it has no (dead) tuples. I made this work just as well for the
> parallel case in this latest revision. I had to teach tuplesort.c to
> accept an early tuplesort_end() for LEADER() -- it had to be prepared
> to release still-waiting workers in some cases, rather than dependi