Re: backend crash on DELETE, reproducible locally

2018-11-07 Thread Ondřej Bouda

On 3.11.2018 at 20:38, Tom Lane wrote:

Yeah, dump/reload would make the problem go away.

Same bug.


So we dumped and restored all our databases. After that, the crash on 
DELETE never occurred (before, it was several times a day). However, the 
crash on UPDATE still occurs on specific rows. We are quite certain no 
ALTER TABLE statement was executed on the table after the restore.
There are two AFTER INSERT OR UPDATE constraint triggers and one BEFORE 
INSERT OR UPDATE trigger on the table, all of which are implemented in 
plpgsql. Multiple physical servers, on separate databases with identical 
schema, crash on the same type of UPDATE query (differing only in the 
concrete values to be updated). The same code worked perfectly on 10.x.


See the attached backtrace below. Can we do something else to catch the 
bug? Or can we hope for this bug to be already fixed and released in the 
upcoming version?



Regards,
Ondřej Bouda



[New LWP 8307]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: 11/main: usr mydb 127.0.0.1(56829) 
UPDATE  '.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:793
#0  __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:793
No locals.
#1  0x7f6a6b993296 in datumIsEqual (value1=12, value2=12, 
typByVal=<optimized out>, typLen=16) at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/utils/adt/datum.c:249

size1 = 16
size2 = <optimized out>
s1 = 0xc
s2 = 0xc
res = <optimized out>
#2  0x7f6a6b6b89de in ProjIndexIsUnchanged (newtup=0x7f6a6d5ccdd0, 
oldtup=0x7ffc9c1c1270, relation=0x7f6a6b4c3680) at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/access/heap/heapam.c:4539

att = <optimized out>
indexOid = <optimized out>
indexDesc = 0x7f6a6b4c7cf0
indexInfo = 0x7f6a6d5cdbf8
i = 0
indexoidlist = <optimized out>
econtext = 0x7f6a6d4c2c10
new_isnull = {false, false, false, 156, 252, 127, false, false, 
44, 29, 128, 107, 106, 127, false, false, 203, 89, 76, 107, 106, 127, 
false, false, 123, false, false, false, false, false, false, false}

slot = 0x7f6a6d5cf8b8
equals = true
old_isnull = {false, false, false, 109, 106, 127, false, false, 
69, 46, 80, 109, 106, 127, false, false, 64, 46, 80, 109, 106, 127, 
false, false, 72, 46, 80, 109, 106, 127, false, false}
new_values = {12, 2, 140095078059546, 16, 140722927572064, 
140095077232192, 140095077233456, 140722927572079, 0, 0, 
140722927572048, 140095054460096, 140095077050704, 140095078061816, 5, 
4, 140722927572096, 140095046814123, 140722927572096, 140095046825162, 
140095043418528, 140095077051552, 140722927572224, 140095046865098, 
60129542543, 23274993280, 140095043418528, 140095077050704, 
140095043417728, 140095077050976, 140095078056288, 140095043422512}

indexno = 2
l = 0x7f6a6d503840
estate = 0x7f6a6d4c2a00
old_values = {12, 1, 140078631611970, 1, 140095077050704, 1024, 
140095077232192, 0, 140722927571760, 140095049517261, 140722927571808, 
140722927571776, 140722927571792, 140095046814329, 18, 140095077232200, 
140095076976896, 140095078056288, 22, 100, 140722927571856, 
140722927571920, 140095046831813, 140722927571856, 140095046842624, 0, 
140095077226392, 3, 140095077232192, 140095077232528, 140095077232192, 
140095078061816}
#3  heap_update (relation=relation@entry=0x7f6a6b4c3680, 
otid=otid@entry=0x7ffc9c1c1690, newtup=newtup@entry=0x7f6a6d5ccdd0, 
cid=0, crosscheck=<optimized out>, wait=wait@entry=true, 
hufd=hufd@entry=0x7ffc9c1c15b0, lockmode=lockmode@entry=0x7ffc9c1c15a4) 
at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/access/heap/heapam.c:4230

result = <optimized out>
xid = 245762845
hot_attrs = 0x7f6a6d503740
proj_idx_attrs = 0x7f6a6d503758
key_attrs = 0x7f6a6d503770
id_attrs = 0x7f6a6d503788
interesting_attrs = 0x7f6a6d5037a0
lp = <optimized out>
oldtup = {t_len = 85, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 
45}, ip_posid = 10}, t_tableOid = 557732818, t_data = 0x7f669913ee10}

heaptup = 0x7f6a6d5ccdd0
old_key_tuple = 0x0
old_key_copied = false
page = 0x7f669913d180 "]="
block = <optimized out>
mxact_status = <optimized out>
buffer = <optimized out>
newbuf = 14683
vmbuffer = 0
vmbuffer_new = 0
need_toast = <optimized out>
newtupsize = <optimized out>
pagefree = <optimized out>
have_tuple_lock = false
iscombo = false
use_hot_update = false
hot_attrs_checked = <optimized out>
key_intact = <optimized out>
all_visible_cleared = false
all_visible_cleared_new = false
checked_lockers = <optimized out>
locker_remains = <optimized out>
xmax_new_tuple = <optimized out>
xmax_old_tuple = 245762845
infomask_old_tuple = 0
infomask2_old_tuple = 8192
infomask_new_tuple = 144

Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Andres Freund
Hi,

On 2018-11-06 17:11:40 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > On 2018-11-06 16:47:20 -0500, Tom Lane wrote:
> >> Looks like somebody forgot to list
> >> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
> >> fault of commit c203d6cf8 or was it busted before?
> 
> > Looks new:
> > +   RELOPT_KIND_INDEX = 
> > RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> > there aren't any other "for all indexes" type options, so the whole
> > category didn't exist before.
> 
> > It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
> > wouldn't have been omitted: It breaks index am extensibility.
> 
> Hm, well, that enum already knows about all index types, doesn't it?

Not quite sure what you mean.

Before c203d6cf8 there weren't reloptions across index types. But after
it, if one adds a new index AM via an extension, one can't set
recheck_on_update = false for indexes using it, even though the feature
affects such indexes. We support adding index AMs at runtime these
days, including WAL logging (even though that's a bit
voluminous). There's even a contrib index AM...

The list of AMs is supposed to be extensible at runtime, cf
add_reloption_kind().

Greetings,

Andres Freund



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Andres Freund
Hi,

On 2018-11-06 23:11:29 +0100, Tomas Vondra wrote:
> On 11/6/18 10:54 PM, Andres Freund wrote:
> > Looks new:
> > +   RELOPT_KIND_INDEX = 
> > RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> > 
> > there aren't any other "for all indexes" type options, so the whole
> > category didn't exist before.
> > 
> > It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
> > wouldn't have been omitted: It breaks index am extensibility.
> > 
> 
> Does it? The RELOPT_KIND_* stuff is hard-coded in reloptions.h anyway,
> so I'm not sure how this particular thing makes it less extensible?

Well, you can create new index AMs in extensions these days, but given
the relopt design above, the feature cannot be disabled for them. Yes,
there's *currently* probably no great way to have reloptions across all
potential index types, but that's not an excuse for adding something
broken.

Greetings,

Andres Freund



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Andres Freund  writes:
> On 2018-11-06 16:47:20 -0500, Tom Lane wrote:
>> Looks like somebody forgot to list
>> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
>> fault of commit c203d6cf8 or was it busted before?

> Looks new:
> +   RELOPT_KIND_INDEX = 
> RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> there aren't any other "for all indexes" type options, so the whole
> category didn't exist before.

> It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
> wouldn't have been omitted: It breaks index am extensibility.

Hm, well, that enum already knows about all index types, doesn't it?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tomas Vondra
On 11/6/18 10:54 PM, Andres Freund wrote:
> On 2018-11-06 16:47:20 -0500, Tom Lane wrote:
>> Ondřej Bouda writes:
 Ondřej, as a short-term workaround you could prevent the crash
 by setting that index's recheck_on_update property to false.
>>
>>> Thanks for the tip. I am unsuccessful using it, though:
>>> # ALTER INDEX public.schedulecard_overlap_idx SET (recheck_on_update = 
>>> FALSE);
>>> ERROR:  unrecognized parameter "recheck_on_update"
>>
>> Oh, for crying out loud.  That's yet a different bug.
>> I'm not sure that it's the fault of the recheck_on_update
>> feature proper though; it might be a pre-existing bug in
>> the reloptions code.  Looks like somebody forgot to list
>> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
>> fault of commit c203d6cf8 or was it busted before?
> 
> Looks new:
> +   RELOPT_KIND_INDEX = 
> RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> 
> there aren't any other "for all indexes" type options, so the whole
> category didn't exist before.
> 
> It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
> wouldn't have been omitted: It breaks index am extensibility.
> 

Does it? The RELOPT_KIND_* stuff is hard-coded in reloptions.h anyway,
so I'm not sure how this particular thing makes it less extensible?

That being said, we also have RELOPT_KIND_BRIN, and that seems to be
missing from RELOPT_KIND_INDEX too (and AFAICS the optimization works
for all index types).

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Alvaro Herrera
On 2018-Nov-06, Tom Lane wrote:

> Ondřej Bouda writes:
> >> Ondřej, as a short-term workaround you could prevent the crash
> >> by setting that index's recheck_on_update property to false.
> 
> > Thanks for the tip. I am unsuccessful using it, though:
> > # ALTER INDEX public.schedulecard_overlap_idx SET (recheck_on_update = 
> > FALSE);
> > ERROR:  unrecognized parameter "recheck_on_update"
> 
> Oh, for crying out loud.  That's yet a different bug.
> I'm not sure that it's the fault of the recheck_on_update
> feature proper though; it might be a pre-existing bug in
> the reloptions code.  Looks like somebody forgot to list
> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
> fault of commit c203d6cf8 or was it busted before?

RELOPT_KIND_INDEX was invented by that commit, looks like :-(

-- 
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Ondřej Bouda writes:
>> Ondřej, as a short-term workaround you could prevent the crash
>> by setting that index's recheck_on_update property to false.

> Thanks for the tip. I am unsuccessful using it, though:
> # ALTER INDEX public.schedulecard_overlap_idx SET (recheck_on_update = 
> FALSE);
> ERROR:  unrecognized parameter "recheck_on_update"

Oh, for crying out loud.  That's yet a different bug.
I'm not sure that it's the fault of the recheck_on_update
feature proper though; it might be a pre-existing bug in
the reloptions code.  Looks like somebody forgot to list
RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
fault of commit c203d6cf8 or was it busted before?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
I wrote:
> Interestingly, it doesn't crash if I change the index type to btree,
> which I was not expecting because the crashing code seems pretty
> independent of the index type.

Oh ... duh.  The problem here is that ProjIndexIsUnchanged thinks that
the type of the index column is identical to the type of the source
datum for it, which is not true for any opclass making use of the
opckeytype property.

Ondřej, as a short-term workaround you could prevent the crash
by setting that index's recheck_on_update property to false.

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Ondřej Bouda writes:
>>> Hm, what are the data types of those columns?

> scheduletemplate_id bigint NOT NULL,
> period_day smallint NOT NULL,
> timerange timerange NOT NULL,

OK, so here's a minimal reproducer:

drop table schedulecard;

create table schedulecard (
  scheduletemplate_id bigint NOT NULL,
  period_day smallint NOT NULL
);

CREATE INDEX schedulecard_overlap_idx
 ON schedulecard USING gist
 (scheduletemplate_id, (period_day::integer % 7));

insert into schedulecard values(12, 1);

update schedulecard set period_day = period_day + 7;

Interestingly, it doesn't crash if I change the index type to btree,
which I was not expecting because the crashing code seems pretty
independent of the index type.

Haven't traced further than that yet.

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

> Hm, what are the data types of those columns?

scheduletemplate_id bigint NOT NULL,
period_day smallint NOT NULL,
timerange timerange NOT NULL,

where timerange is defined as follows:
CREATE TYPE public.timerange AS RANGE
(
SUBTYPE=time,
SUBTYPE_OPCLASS = time_ops
);


> And I assume you've got btree_gist installed?

Yes, version 1.5.


Regards,
Ondřej Bouda



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

> Foreign-key triggers too?

There are four trivial foreign keys from public.schedulecard like this:

ALTER TABLE public.schedulecard
    ADD CONSTRAINT fk_schedulecard_schedulecard FOREIGN KEY (schedulecard_id)
    REFERENCES public.schedulecard (id) MATCH SIMPLE
    ON UPDATE NO ACTION
    ON DELETE RESTRICT;

(two of them having ON UPDATE CASCADE, but all other characteristics are 
the same)


Also, several tables have foreign keys to public.schedulecard.id, some 
of which are ON UPDATE CASCADE, others NO ACTION.



> A stack trace would be really helpful.

Already sent earlier, maybe it got lost. Resending it:

[New LWP 8307]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: 11/main: usr mydb 127.0.0.1(56829) 
UPDATE  '.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:793
#0  __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:793
No locals.
#1  0x7f6a6b993296 in datumIsEqual (value1=12, value2=12, 
typByVal=<optimized out>, typLen=16) at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/utils/adt/datum.c:249

size1 = 16
size2 = <optimized out>
s1 = 0xc
s2 = 0xc
res = <optimized out>
#2  0x7f6a6b6b89de in ProjIndexIsUnchanged (newtup=0x7f6a6d5ccdd0, 
oldtup=0x7ffc9c1c1270, relation=0x7f6a6b4c3680) at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/access/heap/heapam.c:4539

att = <optimized out>
indexOid = <optimized out>
indexDesc = 0x7f6a6b4c7cf0
indexInfo = 0x7f6a6d5cdbf8
i = 0
indexoidlist = <optimized out>
econtext = 0x7f6a6d4c2c10
new_isnull = {false, false, false, 156, 252, 127, false, false, 
44, 29, 128, 107, 106, 127, false, false, 203, 89, 76, 107, 106, 127, 
false, false, 123, false, false, false, false, false, false, false}

slot = 0x7f6a6d5cf8b8
equals = true
old_isnull = {false, false, false, 109, 106, 127, false, false, 
69, 46, 80, 109, 106, 127, false, false, 64, 46, 80, 109, 106, 127, 
false, false, 72, 46, 80, 109, 106, 127, false, false}
new_values = {12, 2, 140095078059546, 16, 140722927572064, 
140095077232192, 140095077233456, 140722927572079, 0, 0, 
140722927572048, 140095054460096, 140095077050704, 140095078061816, 5, 
4, 140722927572096, 140095046814123, 140722927572096, 140095046825162, 
140095043418528, 140095077051552, 140722927572224, 140095046865098, 
60129542543, 23274993280, 140095043418528, 140095077050704, 
140095043417728, 140095077050976, 140095078056288, 140095043422512}

indexno = 2
l = 0x7f6a6d503840
estate = 0x7f6a6d4c2a00
old_values = {12, 1, 140078631611970, 1, 140095077050704, 1024, 
140095077232192, 0, 140722927571760, 140095049517261, 140722927571808, 
140722927571776, 140722927571792, 140095046814329, 18, 140095077232200, 
140095076976896, 140095078056288, 22, 100, 140722927571856, 
140722927571920, 140095046831813, 140722927571856, 140095046842624, 0, 
140095077226392, 3, 140095077232192, 140095077232528, 140095077232192, 
140095078061816}
#3  heap_update (relation=relation@entry=0x7f6a6b4c3680, 
otid=otid@entry=0x7ffc9c1c1690, newtup=newtup@entry=0x7f6a6d5ccdd0, 
cid=0, crosscheck=<optimized out>, wait=wait@entry=true, 
hufd=hufd@entry=0x7ffc9c1c15b0, lockmode=lockmode@entry=0x7ffc9c1c15a4) 
at 
/build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/access/heap/heapam.c:4230

result = <optimized out>
xid = 245762845
hot_attrs = 0x7f6a6d503740
proj_idx_attrs = 0x7f6a6d503758
key_attrs = 0x7f6a6d503770
id_attrs = 0x7f6a6d503788
interesting_attrs = 0x7f6a6d5037a0
lp = <optimized out>
oldtup = {t_len = 85, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 
45}, ip_posid = 10}, t_tableOid = 557732818, t_data = 0x7f669913ee10}

heaptup = 0x7f6a6d5ccdd0
old_key_tuple = 0x0
old_key_copied = false
page = 0x7f669913d180 "]="
block = <optimized out>
mxact_status = <optimized out>
buffer = <optimized out>
newbuf = 14683
vmbuffer = 0
vmbuffer_new = 0
need_toast = <optimized out>
newtupsize = <optimized out>
pagefree = <optimized out>
have_tuple_lock = false
iscombo = false
use_hot_update = false
hot_attrs_checked = <optimized out>
key_intact = <optimized out>
all_visible_cleared = false
all_visible_cleared_new = false
checked_lockers = <optimized out>
locker_remains = <optimized out>
xmax_new_tuple = <optimized out>
xmax_old_tuple = 245762845
infomask_old_tuple = 0
infomask2_old_tuple = 8192
infomask_new_tuple = 144
infomask2_new_tuple = 0
__func__ = "heap_update"
#4  0x7f6a6b82f2c9 in ExecUpdate 
(mtstate=mtstate@entry=0x7f6a6d4d81c0, tupleid=0x7ffc9c1c1690, 
oldtuple=0x0, slot=0x7f6a6d5cc160, 
planSlot=planSlot@entry=0x7f6a6d567578, 
epqstate=epqstate@entry=0x7f6a6d4d8280, 
estate=estate@entry=0x7f6a6d4d6a60, canSetTag=true) 

Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Ondřej Bouda writes:
> There are some simple btree indexes due to foreign keys, and this one, 
> which seems to be the cause of the crashes:

> CREATE INDEX schedulecard_overlap_idx
>  ON public.schedulecard USING gist
>  (scheduletemplate_id, (period_day::integer % 7), timerange)
>  TABLESPACE pg_default;

> When I drop it, the UPDATE statement gets executed without any problems.
> When I create the index again, the same UPDATE crashes the server.

Hm, what are the data types of those columns?

And I assume you've got btree_gist installed?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Alvaro Herrera  writes:
> What indexes are there in this table?  Indexes on expressions are
> particularly suspect.

I had not looked at the "projection index" code before, and now that
I have, I am desperately unhappy with it.  It's seriously buggy,
badly underdocumented, unbelievably inefficient, and it looks like
it creates a whole new set of reasons for unwanted recursion inside
the relcache.  The relcache has got NO business calling cost_qual_eval,
for example.

Having said that, I'm not real sure how it ends up with this crash.
It looks like ProjIndexIsUnchanged is getting the wrong value for
the index column's typbyval, but how could that be?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Tom Lane
Ondřej Bouda writes:
> I thought triggers should be suspected. However, even when all the three 
> triggers have been disabled (ALTER TABLE DISABLE TRIGGER), the UPDATE 
> crashed the server.

Foreign-key triggers too?

> What else could I try?

A stack trace would be really helpful.

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

On 6.11.2018 at 20:45, Alvaro Herrera wrote:

What indexes are there in this table?  Indexes on expressions are
particularly suspect.


There are some simple btree indexes due to foreign keys, and this one, 
which seems to be the cause of the crashes:


CREATE INDEX schedulecard_overlap_idx
ON public.schedulecard USING gist
(scheduletemplate_id, (period_day::integer % 7), timerange)
TABLESPACE pg_default;

When I drop it, the UPDATE statement gets executed without any problems.
When I create the index again, the same UPDATE crashes the server.


Regards,
Ondřej Bouda



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Alvaro Herrera
On 2018-Nov-06, Ondřej Bouda wrote:

> > Hmm, this one smells like c203d6cf81b4 -- haven't seen any fixes for
> > that one.  Can you share more details on this?  I think the failing
> > update is on table with oid=557732818, but I might be wrong.
> 
> That's exactly the table, public.schedulecard.
> We issue an UPDATE changing some of its columns. E.g.,
> 
> UPDATE public.schedulecard SET ext_ident=null,
> rotates=false,period_num=1,period_day=2 WHERE id=3817

What indexes are there in this table?  Indexes on expressions are
particularly suspect.

-- 
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Ondřej Bouda

> Hmm, this one smells like c203d6cf81b4 -- haven't seen any fixes for
> that one.  Can you share more details on this?  I think the failing
> update is on table with oid=557732818, but I might be wrong.

That's exactly the table, public.schedulecard.
We issue an UPDATE changing some of its columns. E.g.,

UPDATE public.schedulecard SET ext_ident=null, 
rotates=false,period_num=1,period_day=2 WHERE id=3817


crashes the server. See the main log:

2018-11-06 17:29:40.031 CET [30202] LOG:  server process (PID 29879) was 
terminated by signal 11: Segmentation fault
2018-11-06 17:29:40.031 CET [30202] DETAIL:  Failed process was running: 
UPDATE public.schedulecard SET ext_ident=null, 
rotates=false,period_num=1,period_day=2 WHERE id=3817;

select * from schedulecard where id = 3817
2018-11-06 17:29:40.031 CET [30202] LOG:  terminating any other active 
server processes


The query is reproducible - it always makes the server segfault. It 
crashes on multiple rows of that table -- actually, I haven't found any 
non-failing row yet.


I thought triggers should be suspected. However, even when all the three 
triggers have been disabled (ALTER TABLE DISABLE TRIGGER), the UPDATE 
crashed the server.


What else could I try?

Regards,
Ondřej Bouda



On 6.11.2018 at 19:52, Alvaro Herrera wrote:

On 2018-Nov-06, Ondřej Bouda wrote:


So we dumped and restored all our databases. After that, the crash on DELETE
never occurred (before, it was several times a day). However, the crash on
UPDATE still occurs on specific rows. We are quite certain no ALTER TABLE
statement was executed on the table after the restore.
There are two AFTER INSERT OR UPDATE constraint triggers and one BEFORE
INSERT OR UPDATE trigger on the table, all of which are implemented in
plpgsql. Multiple physical servers, on separate databases with identical
schema, crash on the same type of UPDATE query (different just in concrete
values to be updated). The same code worked perfectly on 10.x.

See the attached backtrace below. Can we do something else to catch the bug?
Or can we hope for this bug to be already fixed and released in the upcoming
version?


Hmm, this one smells like c203d6cf81b4 -- haven't seen any fixes for
that one.  Can you share more details on this?  I think the failing
update is on table with oid=557732818, but I might be wrong.





Re: backend crash on DELETE, reproducible locally

2018-11-06 Thread Alvaro Herrera
On 2018-Nov-06, Ondřej Bouda wrote:

> So we dumped and restored all our databases. After that, the crash on DELETE
> never occurred (before, it was several times a day). However, the crash on
> UPDATE still occurs on specific rows. We are quite certain no ALTER TABLE
> statement was executed on the table after the restore.
> There are two AFTER INSERT OR UPDATE constraint triggers and one BEFORE
> INSERT OR UPDATE trigger on the table, all of which are implemented in
> plpgsql. Multiple physical servers, on separate databases with identical
> schema, crash on the same type of UPDATE query (different just in concrete
> values to be updated). The same code worked perfectly on 10.x.
> 
> See the attached backtrace below. Can we do something else to catch the bug?
> Or can we hope for this bug to be already fixed and released in the upcoming
> version?

Hmm, this one smells like c203d6cf81b4 -- haven't seen any fixes for
that one.  Can you share more details on this?  I think the failing
update is on table with oid=557732818, but I might be wrong.

-- 
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: backend crash on DELETE, reproducible locally

2018-11-03 Thread Ron

On 11/03/2018 02:19 PM, obo...@email.cz wrote:

Hello,

we reached exactly the same problem after upgrading to PostgreSQL 11 - the 
server crashed on a DELETE statement with a trigger. We also observed an 
AFTER DELETE trigger receiving NULL values in OLD. Now I see the problem 
seems to be solved (theoretically). Unfortunately, we are not able to 
build the server with the patch, so we cannot confirm that. However, when 
we just copied the database (within the same server), the same DELETE 
executed on the copy with no problems.


I would like to ask, however: could the same problem arise from an UPDATE 
statement (also on a table with an AFTER trigger), or would that be 
another bug (technical details below)?
As the bug causes our production servers to segfault several times a day, 
we'd like to solve the problem as soon as possible. Do I understand it 
correctly that if we dump and restore the database, the bug should not 
occur (until the next ALTER TABLE ADD COLUMN - which we'll avoid until the 
patch is released)?


You can dump that single table, truncate the table (presuming no FKs) and 
then restore it.


--
Angular momentum makes the world go 'round.


Re: backend crash on DELETE, reproducible locally

2018-11-03 Thread Tom Lane
obo...@email.cz writes:
> we reached exactly the same problem after upgrading to PostgreSQL 11 - the
> server crashed on a DELETE statement with a trigger. We also observed an
> AFTER DELETE trigger receiving NULL values in OLD. Now I see the problem
> seems to be solved (theoretically). Unfortunately, we are not able to build
> the server with the patch, so we cannot confirm that. However, when we just
> copied the database (within the same server), the same DELETE executed on 
> the copy with no problems.

Yeah, dump/reload would make the problem go away.

> I would like to ask, however: could the same problem arise from an UPDATE
> statement (also on a table with an AFTER trigger), or would that be another
> bug (technical details below)?

Same bug.

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-03 Thread obouda
edulecard_id=NULL, schedulecardblock_id=
NULL, schedule"...) at /build/postgresql-11-oDwOcQ/postgresql-11-11.0/
build/../src/backend/tcop/postgres.c:1122

        parsetree = 0x7f6a6d3b6400

        portal = 0x7f6a6d47f230

        snapshot_set = <optimized out>

        commandTag = <optimized out>

        completionTag = "\000\032\034\234\374\177\000\000\341\356\247kj\177\
000\000\000\000\000\000\000\000\000\000\060\316\367kj\177\000\000\340&\360
kj\177\000\000\234U\357kj\177\000\000\240E;mj\177\000\000Q\000\000\000\000\
000\000"

        querytree_list = <optimized out>

        plantree_list = <optimized out>

        receiver = 0x7f6a6d5ed298

        format = 0

        dest = DestRemote

        parsetree_list = 0x7f6a6d3b6450

        parsetree_item = 0x7f6a6d3b6430

        save_log_statement_stats = false

        was_logged = false

        use_implicit_block = false

        msec_str = "\000\032\034\234\374\177\000\000\341\356\247kj\177\000\
000\000\000\000\000\000\000\000\000\060\316\367kj\177\000"

        __func__ = "exec_simple_query"

#13 0x7f6a6b95930a in PostgresMain (argc=<optimized out>, argv=argv@
entry=0x7f6a6d400220, dbname=0x7f6a6d4001b0 "mydb", username=<optimized out>)

    at /build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/
tcop/postgres.c:4159

        query_string = 0x7f6a6d3b45a0 "UPDATE public.schedulecard SET ext_
ident=NULL, finalreport_absence_cnt=NULL, period_day=4, period_num=1,
rotates=false, rotationgroup_id=NULL, schedulecard_id=NULL,
schedulecardblock_id=NULL, schedule"...

        firstchar = 81

        input_message = {

          data = 0x7f6a6d3b45a0 "UPDATE public.schedulecard SET ext_ident=
NULL, finalreport_absence_cnt=NULL, period_day=4, period_num=1, rotates=
false, rotationgroup_id=NULL, schedulecard_id=NULL, schedulecardblock_id=
NULL, schedule"..., len = 319, maxlen = 1024, cursor = 319}

        local_sigjmp_buf = {{__jmpbuf = {140095054460088,
7153736071902479882, 140095054460096, 1, 0, 140095076133088,
7153736071933937162, 7091693993451144714}, __mask_was_saved = 1, __saved_
mask = {__val = {0, 140095076133088, 

                140095004765576, 844424930131970, 206158430256,
140722927574360, 140722927574144, 293, 140095076133088, 0, 0,
140095075861648, 1024, 140722927574468, 0, 140722927574240

        send_ready_for_query = false

        disable_idle_in_transaction_timeout = false

        __func__ = "PostgresMain"

#14 0x7f6a6b67867d in BackendRun (port=0x7f6a6d3f68e0) at /build/
postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/postmaster/
postmaster.c:4361

        ac = 1

        secs = 594476990

        usecs = 933865

        i = 1

        av = 0x7f6a6d400220

        maxac = <optimized out>

#15 BackendStartup (port=0x7f6a6d3f68e0) at /build/postgresql-11-oDwOcQ/
postgresql-11-11.0/build/../src/backend/postmaster/postmaster.c:4033

        bn = <optimized out>

        pid = <optimized out>

#16 ServerLoop () at /build/postgresql-11-oDwOcQ/postgresql-11-11.0/
build/../src/backend/postmaster/postmaster.c:1706

        rmask = {fds_bits = {8, 0 <repeats 15 times>}}

        selres = <optimized out>

        now = <optimized out>

        readmask = {fds_bits = {200, 0 <repeats 15 times>}}

        last_lockfile_recheck_time = 1541161751

        last_touch_time = 1541161147

        __func__ = "ServerLoop"

#17 0x7f6a6b8e7644 in PostmasterMain (argc=5, argv=<optimized out>) at /
build/postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/
postmaster/postmaster.c:1379

        opt = <optimized out>

        status = <optimized out>

        userDoption = <optimized out>

        listen_addr_saved = true

        i = <optimized out>

        output_config_variable = <optimized out>

        __func__ = "PostmasterMain"

#18 0x7f6a6b679616 in main (argc=5, argv=0x7f6a6d3af1f0) at /build/
postgresql-11-oDwOcQ/postgresql-11-11.0/build/../src/backend/main/main.c:228

No locals.








Best regards, 
Ondřej Bouda


-- Original e-mail --
From: Tom Lane
To: Karsten Hilbert
Date: 3. 11. 2018 4:57:19
Subject: Re: backend crash on DELETE, reproducible locally
Karsten Hilbert writes:
> On Thu, Nov 01, 2018 at 11:43:56AM -0400, Tom Lane wrote:
>> Yeah, apparently we've passed a null OLD tuple to an RI_FKey_cascade_del
>> trigger, which surely shouldn't happen. It'd be interesting to look at 
>> the set of triggers on this table. I don't entirely trust psql's \d
>> to show us reality if there's something screwed up about the triggers, 
>> so in addition to \d output, could we see
>> select * from pg_trigger where tgrelid = 'ref.auto_hint'::regclass; 

> [ pretty normal-looking trigger entries ]

I was feeling baffled about this, but it suddenly occurs to me that maybe 
the bug fixed in 040a1df61/372102b81 explains this. If the particular
tuple you're trying to delete predates the last ALTER TABLE ADD COLUMN
operation on the table, then this bug would result in t_self getting
set to "invalid", which'd cause AfterTriggerSaveEvent to save "ate_ctid1" 
as "invalid", which'd lead to a null tuple getting passed when the trigger
eventually gets invoked.

regards, tom lane


Re: backend crash on DELETE, reproducible locally

2018-11-03 Thread Karsten Hilbert
On Sat, Nov 03, 2018 at 11:39:49AM -0400, Tom Lane wrote:

> Karsten Hilbert  writes:
> > On Fri, Nov 02, 2018 at 11:56:58PM -0400, Tom Lane wrote:
> >> I was feeling baffled about this, but it suddenly occurs to me that maybe
> >> the bug fixed in 040a1df61/372102b81 explains this.
> 
> > So, I guess I can work around the issue by the above
> > manoeuvre and report back once 040a1df61/372102b81 is
> > released.
> > Anything else you'd want me to look into ?
> 
> Don't suppose you'd want to build a server with 372102b81 applied
> and see if it works?

Yeah, I feared that answer ;-)

However, I lack the skills and the machine(s) to do so ...

I will, at any rate, report back when the existing fix is released.

Karsten
-- 
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B



Re: backend crash on DELETE, reproducible locally

2018-11-03 Thread Tom Lane
Karsten Hilbert  writes:
> On Fri, Nov 02, 2018 at 11:56:58PM -0400, Tom Lane wrote:
>> I was feeling baffled about this, but it suddenly occurs to me that maybe
>> the bug fixed in 040a1df61/372102b81 explains this.

> So, I guess I can work around the issue by the above
> manoeuvre and report back once 040a1df61/372102b81 is
> released.
> Anything else you'd want me to look into ?

Don't suppose you'd want to build a server with 372102b81 applied
and see if it works?

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-03 Thread Karsten Hilbert
On Fri, Nov 02, 2018 at 11:56:58PM -0400, Tom Lane wrote:

> > On Thu, Nov 01, 2018 at 11:43:56AM -0400, Tom Lane wrote:
> >> Yeah, apparently we've passed a null OLD tuple to an RI_FKey_cascade_del
> >> trigger, which surely shouldn't happen.  It'd be interesting to look at
> >> the set of triggers on this table.  I don't entirely trust psql's \d
> >> to show us reality if there's something screwed up about the triggers,
> >> so in addition to \d output, could we see
> >> select * from pg_trigger where tgrelid  = 'ref.auto_hint'::regclass;
> 
> > [ pretty normal-looking trigger entries ]
> 
> I was feeling baffled about this, but it suddenly occurs to me that maybe
> the bug fixed in 040a1df61/372102b81 explains this.  If the particular
> tuple you're trying to delete predates the last ALTER TABLE ADD COLUMN
> operation on the table, then this bug would result in t_self getting
> set to "invalid", which'd cause AfterTriggerSaveEvent to save "ate_ctid1"
> as "invalid", which'd lead to a null tuple getting passed when the trigger
> eventually gets invoked.

Now, that is a theory I can follow up on -- all the
bootstrapping SQL scripts are under version control so I can
pin down the exact sequence of events.

... goes off to do some checks ...

And sure enough there's an ALTER TABLE ADD COLUMN related to
that table:

alter table ref.auto_hint
add column recommendation_query text;

-- (audit log table needs to get the new column, too)
alter table audit.log_auto_hint
add column recommendation_query text;

before the DELETE of the pre-existing tuple.

When running a rigged upgrade that stops right before those
ALTER TABLEs and then doing the following:

begin;
-- instrument:
insert into ref.auto_hint (title, query, hint, source, lang) values (
'DELETE test',
'select 1;',
'insertion before ADD COLUMN',
'testing',
'en'
);
-- works:
DELETE from ref.auto_hint where title = 'DELETE test';
-- instrument, again:
insert into ref.auto_hint (title, query, hint, source, lang) values (
'DELETE test',
'select 1;',
'insertion before ADD COLUMN',
'testing',
'en'
);
alter table ref.auto_hint
add column recommendation_query text;
-- audit log table needs to get the message
alter table audit.log_auto_hint
add column recommendation_query text;
-- fails:
DELETE from ref.auto_hint where title = 'DELETE test';
rollback;

the expected segfault does indeed occur.
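
[Editor's note: for anyone wanting to confirm which tuples predate the ADD COLUMN, one hedged diagnostic, assuming superuser access and the contrib pageinspect extension (2047 is HEAP_NATTS_MASK, 0x07FF, from htup_details.h):

```sql
-- Inspect block 0 of the table: tuples whose header attribute count
-- (t_infomask2 & HEAP_NATTS_MASK) is smaller than the table's current
-- column count were written before the ALTER TABLE ... ADD COLUMN.
CREATE EXTENSION IF NOT EXISTS pageinspect;
SELECT t_ctid,
       t_infomask2 & 2047 AS header_natts
FROM heap_page_items(get_raw_page('ref.auto_hint', 0))
WHERE t_infomask2 IS NOT NULL;
```

Comparing header_natts against the attnum count in pg_attribute for the relation shows which rows still carry the old, shorter tuple layout.]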

Conversely, moving the offending

DELETE FROM ref.auto_hint WHERE title = 'Kontraindikation: ACE/Sartan 
<-> Schwangerschaft';

to right before the ALTER TABLEs makes the full upgrade run
through without further problems.

Looking at 040a1df61/372102b81 feels like it fits the bill.

So, I guess I can work around the issue by the above
manoeuvre and report back once 040a1df61/372102b81 is
released.

Anything else you'd want me to look into ?

Many thanks,
Karsten
-- 
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B



Re: backend crash on DELETE, reproducible locally

2018-11-02 Thread Tom Lane
Karsten Hilbert  writes:
> On Thu, Nov 01, 2018 at 11:43:56AM -0400, Tom Lane wrote:
>> Yeah, apparently we've passed a null OLD tuple to an RI_FKey_cascade_del
>> trigger, which surely shouldn't happen.  It'd be interesting to look at
>> the set of triggers on this table.  I don't entirely trust psql's \d
>> to show us reality if there's something screwed up about the triggers,
>> so in addition to \d output, could we see
>> select * from pg_trigger where tgrelid  = 'ref.auto_hint'::regclass;

> [ pretty normal-looking trigger entries ]

I was feeling baffled about this, but it suddenly occurs to me that maybe
the bug fixed in 040a1df61/372102b81 explains this.  If the particular
tuple you're trying to delete predates the last ALTER TABLE ADD COLUMN
operation on the table, then this bug would result in t_self getting
set to "invalid", which'd cause AfterTriggerSaveEvent to save "ate_ctid1"
as "invalid", which'd lead to a null tuple getting passed when the trigger
eventually gets invoked.

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Karsten Hilbert
On Thu, Nov 01, 2018 at 11:43:56AM -0400, Tom Lane wrote:

> Alvaro Herrera  writes:
> > Ah, now this is interesting.  Can you please supply the definition of
> > the table?  I'm wondering if there is a partitioned table with an FK to
> > this one.  I'm not quite seeing how come 'tup' is NULL there.
> 
> Yeah, apparently we've passed a null OLD tuple to an RI_FKey_cascade_del
> trigger, which surely shouldn't happen.  It'd be interesting to look at
> the set of triggers on this table.  I don't entirely trust psql's \d
> to show us reality if there's something screwed up about the triggers,
> so in addition to \d output, could we see
> 
> select * from pg_trigger where tgrelid  = 'ref.auto_hint'::regclass;

Output format is »wrapped«.
 tgrelid |            tgname             | tgfoid | tgtype | tgenabled | tgisinternal | tgconstrrelid | tgconstrindid | tgconstraint | tgdeferrable | tginitdeferred | tgnargs | tgattr | tgargs | tgqual | tgoldtable | tgnewtable
---------+-------------------------------+--------+--------+-----------+--------------+---------------+---------------+--------------+--------------+----------------+---------+--------+--------+--------+------------+------------
  111088 | RI_ConstraintTrigger_a_116970 |   1646 |      9 | O         | t            |        116789 |        111099 |       116969 | f            | f              |       0 |        | \x     |        |            |
  111088 | RI_ConstraintTrigger_a_116971 |   1649 |     17 | O         | t            |        116789 |        111099 |       116969 | f            | f              |       0 |        | \x     |        |            |
  111088 | zt_ins_auto_hint              | 141652 |      7 | O         | f            |             0 |             0 |            0 | f            | f              |       0 |        | \x     |        |            |
  111088 | zt_upd_auto_hint              | 141654 |     19 | O         | f            |             0 |             0 |            0 | f            | f              |       0 |        | \x     |        |            |
  111088 | zt_del_auto_hint              | 141656 |     11 | O         | f            |             0 |             0 |            0 | f            | f              |       0 |        | \x     |        |            |
(5 rows)


BTW, do you want me to attach text files or include into the
body (potential wrapping issues on display) ?

Thanks,
Karsten
-- 
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B



Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Karsten Hilbert
On Thu, Nov 01, 2018 at 12:27:23PM -0300, Alvaro Herrera wrote:

> > Program received signal SIGSEGV, Segmentation fault.
> > heap_attisnull (tup=0x0, attnum=5, tupleDesc=0xb2990ef4) at 
> > ./build/../src/backend/access/common/heaptuple.c:403
> > 403 ./build/../src/backend/access/common/heaptuple.c: No such file or 
> > directory.
> > (gdb) thread apply all bt full
> > 
> > Thread 1 (Thread 0xb4c429c0 (LWP 22367)):
> > #0  heap_attisnull (tup=0x0, attnum=5, tupleDesc=0xb2990ef4) at 
> > ./build/../src/backend/access/common/heaptuple.c:403
> > __func__ = "heap_attisnull"
> > #1  0x0087690d in ri_NullCheck (tupDesc=0xb2990ef4, tup=0x0, 
> > riinfo=0x2e1d548, rel_is_pk=true) at 
> > ./build/../src/backend/utils/adt/ri_triggers.c:2894
> > attnums = 0x2e1d5a4
> > i = 0
> > allnull = true
> > nonenull = true
> > #2  0x00879bf7 in RI_FKey_cascade_del (fcinfo=0xbfbda9e4) at 
> > ./build/../src/backend/utils/adt/ri_triggers.c:917
> 
>  Can you 'print trigdata' in frame 2?

Does this help at all ?

(gdb) frame 2
#2  0x00879bf7 in RI_FKey_cascade_del (fcinfo=0xbfbda9e4) at 
./build/../src/backend/utils/adt/ri_triggers.c:917
917 ./build/../src/backend/utils/adt/ri_triggers.c: No such file or 
directory.
(gdb) print trigdata
$1 = (TriggerData *) 0xbfbdaca4

It is from another run, however.

Karsten
-- 
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B



Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Tom Lane
Alvaro Herrera  writes:
> Ah, now this is interesting.  Can you please supply the definition of
> the table?  I'm wondering if there is a partitioned table with an FK to
> this one.  I'm not quite seeing how come 'tup' is NULL there.

Yeah, apparently we've passed a null OLD tuple to an RI_FKey_cascade_del
trigger, which surely shouldn't happen.  It'd be interesting to look at
the set of triggers on this table.  I don't entirely trust psql's \d
to show us reality if there's something screwed up about the triggers,
so in addition to \d output, could we see

select * from pg_trigger where tgrelid  = 'ref.auto_hint'::regclass;

regards, tom lane



Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Karsten Hilbert
On Thu, Nov 01, 2018 at 12:27:23PM -0300, Alvaro Herrera wrote:

> Ah, now this is interesting.  Can you please supply the definition of
> the table?

Attached.

> I'm wondering if there is a partitioned table with an FK to
> this one.

There is. Both ref.auto_hint and clin.suppressed_hint are
using inheritance (from audit.audit_fields). However, GNUmed
does not use inheritance for explicit partitioning but rather
similar to how classes and subclasses are used in OO languages.

>  I'm not quite seeing how come 'tup' is NULL there.  Can you
> 'print trigdata' in frame 2?

Sure, how ? :-)

(I can surely type "print trigdata" but does that already
auto-select from "frame 2" ?)
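
[Editor's note: gdb does not auto-select a frame; inspecting the locals of frame 2 takes an explicit frame command first. A rough sketch of the sequence:

```
(gdb) bt                 # list the frames of the current thread
(gdb) frame 2            # select frame #2 (RI_FKey_cascade_del)
(gdb) print trigdata     # the pointer itself
(gdb) print *trigdata    # dereference to see the TriggerData fields
```

The follow-up reply in this thread shows exactly this: frame 2 is selected before print trigdata is issued.]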

Karsten
-- 
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B
Output format is »wrapped«.

                          Table »audit.audit_fields«
    Column     |           Type           | Collation | Nullable |                       Default                        | Storage | Stats target |                       Description
---------------+--------------------------+-----------+----------+------------------------------------------------------+---------+--------------+---------------------------------------------------------
 pk_audit      | integer                  |           | not null | nextval('audit.audit_fields_pk_audit_seq'::regclass) | plain   |              |
 row_version   | integer                  |           | not null | 0                                                    | plain   |              | the version of the row; mainly just a count
 modified_when | timestamp with time zone |           | not null | CURRENT_TIMESTAMP                                    | plain   |              | when has this row been committed (created/modified)
 modified_by   | name                     |           | not null | CURRENT_USER                                         | plain   |              | by whom has this row been committed (created/modified)
Indexes:
    "audit_fields_pkey" PRIMARY KEY, btree (pk_audit)
Check constraints:
    "audit_audit_fields_sane_modified_when" CHECK ((modified_when <= 
clock_timestamp()) IS TRUE)
Rules:
    audit_fields_no_del AS
    ON DELETE TO audit.audit_fields DO INSTEAD NOTHING
    audit_fields_no_ins AS
    ON INSERT TO audit.audit_fields DO INSTEAD NOTHING
    audit_fields_no_upd AS
    ON UPDATE TO audit.audit_fields DO INSTEAD NOTHING
Child tables: bill.bill,
  bill.bill_item,
  blobs.doc_desc,
  blobs.doc_med,
  blobs.lnk_doc2hospital_stay,
  blobs.lnk_doc_med2episode,
  cfg.report_query,
  clin.allergy_state,
  clin.clin_diag,
  clin.clin_item_type,
  clin.clin_root_item,
  clin.encounter,
  clin.episode,
  clin.external_care,
  clin.fhx_relation_type,
  clin.form_data,
  clin.health_issue,
  clin.incoming_data_unmatchable,
  clin.incoming_data_unmatched,
  clin.lnk_code2item_root,
  clin.lnk_constraint2vacc_course,
  clin.lnk_pat2vaccination_course,
  clin.lnk_substance2episode,
  clin.lnk_tst2norm,
  clin.lnk_type2item,
  clin.lnk_vaccination_course2schedule,
  clin.lnk_vaccine2inds,
  clin.patient,
  clin.review_root,
  clin.suppressed_hint,
  clin.test_org,
  clin.test_panel,
  clin.test_type,
  clin.vaccination_course,
  clin.vaccination_course_constraint,
  clin.vaccination_definition,
  clin.vaccination_schedule,
  clin.vacc_indication,
  clin.vaccine,
  clin.vaccine_batches,
  clin.vacc_route,
  clin.waiting_list,
  de_de.beh_fall_typ,
  de_de.lab_test_gnr,
  de_de.prax_geb_paid,
  dem.address,
  dem.gender_label,
  dem.identity,
  dem.identity_tag,
  dem.inbox_item_category,
  dem.inbox_item_type,
  dem.lnk_identity2ext_id,
  dem.lnk_job2person,
  dem.lnk_org_unit2comm,
  dem.lnk_org_unit2ext_id,
  dem.lnk_person2relative,
  dem.message_inbox,
  dem.occupation,
  dem.org,
  dem.org_unit,
  dem.praxis_branch,
  dem.region,
  dem.relation_types,
  dem.staff,
  dem.street,
  dem.urb,
  gm.access_log,
  ref.auto_hint,
  ref.branded_drug,
  ref.consumable_substance,
  ref.data_source,
 

Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Alvaro Herrera
On 2018-Nov-01, Karsten Hilbert wrote:

> Program received signal SIGSEGV, Segmentation fault.
> heap_attisnull (tup=0x0, attnum=5, tupleDesc=0xb2990ef4) at 
> ./build/../src/backend/access/common/heaptuple.c:403
> 403   ./build/../src/backend/access/common/heaptuple.c: No such file or 
> directory.
> (gdb) thread apply all bt full
> 
> Thread 1 (Thread 0xb4c429c0 (LWP 22367)):
> #0  heap_attisnull (tup=0x0, attnum=5, tupleDesc=0xb2990ef4) at 
> ./build/../src/backend/access/common/heaptuple.c:403
> __func__ = "heap_attisnull"
> #1  0x0087690d in ri_NullCheck (tupDesc=0xb2990ef4, tup=0x0, 
> riinfo=0x2e1d548, rel_is_pk=true) at 
> ./build/../src/backend/utils/adt/ri_triggers.c:2894
> attnums = 0x2e1d5a4
> i = 0
> allnull = true
> nonenull = true
> #2  0x00879bf7 in RI_FKey_cascade_del (fcinfo=0xbfbda9e4) at 
> ./build/../src/backend/utils/adt/ri_triggers.c:917

Ah, now this is interesting.  Can you please supply the definition of
the table?  I'm wondering if there is a partitioned table with an FK to
this one.  I'm not quite seeing how come 'tup' is NULL there.  Can you
'print trigdata' in frame 2?

In general terms, this bug report would have been more actionable if you
had shown the definition of the tables involved right from the start.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Karsten Hilbert
On Thu, Nov 01, 2018 at 12:27:23PM -0300, Alvaro Herrera wrote:

> In general terms, this bug report would have been more actionable if you
> had shown the definition of the tables involved right from the start.

Sorry for that, will supply.

Karsten
-- 
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B



Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Karsten Hilbert
On Thu, Nov 01, 2018 at 04:11:33PM +0100, Karsten Hilbert wrote:

>> Will attempt to get a stack trace ...

Eventually, the stack trace (attached).

Thanks for insights,
Karsten
-- 
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B
Script started on 2018-11-01 16:16:02+01:00
root@hermes:~/tmp/pg# gdb --pid=22367 --readnow
GNU gdb (Debian 8.1-4+b1) 8.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 22367
Reading symbols from /usr/lib/postgresql/11/bin/postgres...Reading symbols from 
/usr/lib/debug/.build-id/c0/e1914ba590a236d6ee6e9ccdf7bee87a5c00da.debug...expanding
 to full symbols...done.
expanding to full symbols...done.
Reading symbols from /lib/i386-linux-gnu/libpthread.so.0...Reading symbols from 
/usr/lib/debug/.build-id/9b/6447528b2fffba536afc815a31304adde16f8b.debug...expanding
 to full symbols...done.
expanding to full symbols...done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
Reading symbols from /usr/lib/i386-linux-gnu/libxml2.so.2...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libpam.so.0...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libssl.so.1.1...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libcrypto.so.1.1...expanding to 
full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2...expanding to 
full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/librt.so.1...Reading symbols from 
/usr/lib/debug/.build-id/d1/de1cfe9b8c85c6195a28a26206906acd9083ff.debug...expanding
 to full symbols...done.
expanding to full symbols...done.
Reading symbols from /lib/i386-linux-gnu/libdl.so.2...Reading symbols from 
/usr/lib/debug/.build-id/65/c7a119514fa310ef4bd22bbf97f82b30511805.debug...expanding
 to full symbols...done.
expanding to full symbols...done.
Reading symbols from /lib/i386-linux-gnu/libm.so.6...Reading symbols from 
/usr/lib/debug/.build-id/59/1fca8f9efd1964477652a51e7936f88ce15440.debug...expanding
 to full symbols...done.
expanding to full symbols...done.
Reading symbols from /usr/lib/i386-linux-gnu/libldap_r-2.4.so.2...expanding to 
full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libicui18n.so.60...expanding to 
full symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libicuuc.so.60...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libsystemd.so.0...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libc.so.6...Reading symbols from 
/usr/lib/debug/.build-id/a2/03af6935d5ab00f79fe12b4098d27d3f70c19f.debug...expanding
 to full symbols...done.
expanding to full symbols...done.
Reading symbols from /lib/ld-linux.so.2...Reading symbols from 
/usr/lib/debug/.build-id/d3/4d111ccc131067fcbc126906e01374c678c06e.debug...expanding
 to full symbols...done.
expanding to full symbols...done.
Reading symbols from /usr/lib/i386-linux-gnu/libicudata.so.60...expanding to 
full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libz.so.1...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/liblzma.so.5...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libaudit.so.1...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libkrb5.so.3...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libk5crypto.so.3...expanding to 
full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libcom_err.so.2...expanding to full 
symbols...(no debugging symbols found)...done.
Reading symbols from /usr/lib/i386-linux-gnu/libkrb5support.so.0...expanding to 
full symbols...(no debugging symbols found)...done.
Reading symbols from /lib/i386-linux-gnu/libkeyutils.so.1...expanding to full 
symbols...(no debugging symbols 

Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Karsten Hilbert
On Thu, Nov 01, 2018 at 03:42:57PM +0100, Karsten Hilbert wrote:

> Will attempt to get a stack trace ...

Meanwhile, in case it helps, an strace of the crash.

epoll_wait(7, [{EPOLLIN, {u32=47607120, u64=35184419695952}}], 1, -1) = 
1
recv(13, "Q\0\0\0`DELETE FROM ref.auto_hint W"..., 8192, 0) = 97
gettimeofday({tv_sec=1541084179, tv_usec=197395}, NULL) = 0
gettimeofday({tv_sec=1541084179, tv_usec=197442}, NULL) = 0
write(2, "2018-11-01 14:56:19 GMT LOG:  00"..., 209) = 209
_llseek(45, 0, [16384], SEEK_END)   = 0
_llseek(47, 0, [16384], SEEK_END)   = 0
_llseek(48, 0, [16384], SEEK_END)   = 0
_llseek(49, 0, [16384], SEEK_END)   = 0
_llseek(45, 0, [16384], SEEK_END)   = 0
kill(10377, SIGUSR1)= 0
_llseek(45, 0, [0], SEEK_SET)   = 0
read(45, "\0\0\0\0\234\270\246F\10\373\0\0D\0x\n\0 \4 
I\337\1\0\0\236\0\4\0\200\1\0"..., 8192) = 8192
_llseek(43, 81920, [81920], SEEK_SET)   = 0
read(43, "\0\0\0\0x{pF8o\0\0\264\1\34\33\360\37\4 
\0\0\0\0\344\237\30\0\330\237\30\0"..., 8192) = 8192
_llseek(44, 925696, [925696], SEEK_SET) = 0
read(44, "\0\0\0\0\310\324\231FE\231\1\0L\0l\1\0 \4 
\0\0\0\0\24\234\326\7<\231\254\5"..., 8192) = 8192
openat(AT_FDCWD, "base/139580/139695", O_RDWR|O_LARGEFILE) = 56
read(56, "\0\0\0\0\350\347\220E`B\0\0<\0\360\37\360\37\4 
\0\0\0\0b1\5\0\3\0\0\0"..., 8192) = 8192
read(56, "\0\0\0\0h\347\220E\1,\0\0(\0\300\37\360\37\4 
\0\0\0\0\344\237\30\0\330\237\30\0"..., 8192) = 8192
openat(AT_FDCWD, "base/139580/2612", O_RDWR|O_LARGEFILE) = 57
read(57, "\0\0\0\0\200\334s\1a\6\4\0(\0@\36\0 \4 \0\0\0\0\220\237\340\0 
\237\340\0"..., 8192) = 8192
_llseek(43, 65536, [65536], SEEK_SET)   = 0
read(43, "\0\0\0\0\214e^F\215w\0\0H\7`\n\360\37\4 
\0\0\0\0x\212\30\0\344\237\30\0"..., 8192) = 8192
_llseek(44, 606208, [606208], SEEK_SET) = 0
read(44, "\0\0\0\0\20\206\237FVP\1\0p\0\230\3\0 \4 
\361\360\1\\237\232\1T\236\266\1"..., 8192) = 8192
stat64("/usr/lib/postgresql/11/lib/plpgsql", 0xbfbda4ec) = -1 ENOENT 
(No such file or directory)
stat64("/usr/lib/postgresql/11/lib/plpgsql.so", {st_mode=S_IFREG|0644, 
st_size=197944, ...}) = 0
stat64("/usr/lib/postgresql/11/lib/plpgsql.so", {st_mode=S_IFREG|0644, 
st_size=197944, ...}) = 0
futex(0xb78b904c, FUTEX_WAKE_PRIVATE, 2147483647) = 0
openat(AT_FDCWD, "/usr/lib/postgresql/11/lib/plpgsql.so", 
O_RDONLY|O_CLOEXEC) = 58
read(58, 
"\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300a\0\0004\0\0\0"..., 512) = 
512
fstat64(58, {st_mode=S_IFREG|0644, st_size=197944, ...}) = 0
mmap2(NULL, 201108, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 58, 0) = 
0xb293f000
mmap2(0xb2944000, 102400, PROT_READ|PROT_EXEC, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 58, 0x5000) = 0xb2944000
mmap2(0xb295d000, 69632, PROT_READ, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 58, 0x1e000) = 0xb295d000
mmap2(0xb296e000, 12288, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 58, 0x2e000) = 0xb296e000
close(58)   = 0
mprotect(0xb296e000, 8192, PROT_READ)   = 0
openat(AT_FDCWD, "global/139651", O_RDWR|O_LARGEFILE) = 58
read(58, "\0\0\0\0\320\200zE\324\16\0\0<\0\360\37\360\37\4 
\0\0\0\0b1\5\0\3\0\0\0"..., 8192) = 8192
read(58, "\0\0\0\0P\200zE.+\0\0\320\0\310\35\360\37\4 
\0\0\0\0\344\237\30\0\330\237\30\0"..., 8192) = 8192
openat(AT_FDCWD, "base/139580/139719", O_RDWR|O_LARGEFILE) = 59
read(59, "\0\0\0\0x\347\232E\226s\0\0<\0\360\37\360\37\4 
\0\0\0\0b1\5\0\3\0\0\0"..., 8192) = 8192
read(59, "\0\0\0\0l\250\16F\22\340\0\0h\0\0\37\360\37\4 
\0\0\0\0\344\237\30\0\330\237\30\0"..., 8192) = 8192
read(42, "\0\0\0\0`\235^\1\177\242\4\0\320\0\30\1\0 \4 
\0\0\0\0T\237X\1\250\236X\1"..., 8192) = 8192
openat(AT_FDCWD, "base/139580/139606", O_RDWR|O_LARGEFILE) = 60
read(60, "\0\0\0\0\244E{F\206q\0\0<\0\360\37\360\37\4 
\0\0\0\0b1\5\0\3\0\0\0"..., 8192) = 8192
_llseek(60, 24576, [24576], SEEK_SET)   = 0
read(60, "\0\0\0\0\224\272wE\re\0\0008\0\20\37\360\37\4 
\0\0\0\0\350\237\20\0\320\2370\0"..., 8192) = 8192
read(60, "\0\0\0\0tD{F\34S\0\0\220\3\324\6\360\37\4 
\0\0\0\0t\2078\0\300\237`\0"..., 8192) = 8192
brk(0x2e07000)  = 0x2e07000
_llseek(41, 32768, [32768], SEEK_SET)   = 0
read(41, "\0\0\0\0\320B{F\23I\0\0\230\6p\f\360\37\4 
\0\0\0\0p\214\30\0\344\237\30\0"..., 8192) = 8192
_llseek(42, 335872, [335872], SEEK_SET) = 0
read(42, "\0\0\0\0 \"|FB\177\5\0\340\0\320\7\0 \4 
Z\362\1\0\0\0\0\0\0\0\0\0"..., 8192) = 8192
openat(AT_FDCWD, "base/139580/139666", O_RDWR|O_LARGEFILE) = 61
read(61, "\0\0\0\0\244\326\212E\354\30\0\0<\0\360\37\360\37\4 

Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Karsten Hilbert

> I am running this delete statement via psql
> 
>   DELETE FROM ref.auto_hint WHERE title = 'Kontraindikation: ACE/Sartan 
> <-> Schwangerschaft';
> 
> and it crashes the backend it is running on.

For what it is worth I have identified the physical file

gnumed_v21=# select pg_relation_filepath('ref.auto_hint');
 pg_relation_filepath
--
 base/139580/111088
(1 row)

and physically copied that to another place (so there's no
read error on the disk - but the DELETE would perhaps ask for
a physical write test?).
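
[Editor's note: a read test of such a copy can be scripted; a minimal, self-contained sketch follows. FILE is a hypothetical stand-in for base/139580/111088 under the cluster's data directory, and the snippet falls back to a scratch file so it runs anywhere:

```shell
# Read the relation file block by block (8 kB pages) to rule out a
# media-level read error.
FILE="${FILE:-/tmp/relfile_read_test}"
# Create a zero-filled stand-in if the real relation file is unavailable.
[ -f "$FILE" ] || dd if=/dev/zero of="$FILE" bs=8192 count=3 2>/dev/null
dd if="$FILE" of=/dev/null bs=8192 2>/dev/null && echo "read OK"
```

Note that, as the report says, a clean read does not exercise the write path that the DELETE would take.]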

Attached the output of pg_filedump which, to me, doesn't seem
to show anything out of the ordinary either.

A corrupt index should not be the cause, because the database is
REINDEXed during the upgrade process before the failing
statement.

Will attempt to get a stack trace ...

Thanks,
Karsten
-- 
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B

***
* PostgreSQL File/Block Formatted Dump Utility - Version 11.0
*
* File: 111088_fsm
* Options used: -v -f -i -k 
*
* Dump created on: Thu Nov  1 15:37:59 2018
***

Block0 
 -
 Block Offset: 0x Offsets: Lower  24 (0x0018)
 Block: Size 8192  Version4Upper8192 (0x2000)
 LSN:  logid  0 recoff 0x  Special  8192 (0x2000)
 Items:0  Free Space: 8168
 Checksum: 0x6560  Prune XID: 0x  Flags: 0x ()
 Length (including item array): 24

  :   6065 1820  `e. 
  0010: 00200420 . . 

 -- 
 Empty block - no items listed 


Block1 
 -
 Block Offset: 0x2000 Offsets: Lower  24 (0x0018)
 Block: Size 8192  Version4Upper8192 (0x2000)
 LSN:  logid  0 recoff 0x  Special  8192 (0x2000)
 Items:0  Free Space: 8168
 Checksum: 0x655f  Prune XID: 0x  Flags: 0x ()
 Length (including item array): 24

  :   5f65 1820  _e. 
  0010: 00200420 . . 

 -- 
 Empty block - no items listed 


Block2 
 -
 Block Offset: 0x4000 Offsets: Lower  24 (0x0018)
 Block: Size 8192  Version4Upper8192 (0x2000)
 LSN:  logid  0 recoff 0x465b8208  Special  8192 (0x2000)
 Items:0  Free Space: 8168
 Checksum: 0xeb9b  Prune XID: 0x  Flags: 0x ()
 Length (including item array): 24

  :  08825b46 9beb 1820  ..[F... 
  0010: 00200420 . . 

 -- 
 Empty block - no items listed 


*** End of File Encountered. Last Block Read: 2 ***

***
* PostgreSQL File/Block Formatted Dump Utility - Version 11.0
*
* File: 111088
* Options used: -v -f -i -k 
*
* Dump created on: Thu Nov  1 15:37:24 2018
***

Block0 
 -
 Block Offset: 0x Offsets: Lower  68 (0x0044)
 Block: Size 8192  Version4Upper2680 (0x0a78)
 LSN:  logid  0 recoff 0x46a073a0  Special  8192 (0x2000)
 Items:   11  Free Space: 2612
 Checksum: 0x3086  Prune XID: 0x0001df49  Flags: 0x ()
 Length (including item array): 68

  :  a073a046 8630 4400780a  .s.F.0..D.x.
  0010: 00200420 49df0100 009e0004 00800100  . . I...
  0020: 00800100 e89a2a06 48994003 00800100  ..*.H.@.
  0030: 14956208 b892b604 b890fa03 3c8df606  ..b.<...
  0040: 788a8405 x...

 -- 
 Item   1 -- Length:  512  Offset: 7680 (0x1e00)  Flags: NORMAL
  XMIN: 119568  XMAX: 0  CID|XVAC: 0
  Block Id: 0  linp Index: 1   Attributes: 12   Size: 24
  infomask: 0x0902 (HASVARWIDTH|XMIN_COMMITTED|XMAX_INVALID) 

  1e00: 10d30100     
  1e10: 01000c00 02091800 a634   .4..
  1e20: 8f2bee84 971c0200 676d2d64 626f  .+..gm-dbo..
  1e30:      
  1e40:      
  1e50:      
  1e60:   0100 fb53454c  .SEL
  1e70: 45435420 45584953 54532853 454c4543  ECT EXISTS(SELEC
  1e80: 54203120 46524f4d 20636c69 6e2e765f  T 1 FROM clin.v_
  1e90: 66616d69 6c795f68 6973746f 72792057  family_history W
  1ea0: 48455245 20636f6e 64697469 6f6e207e  HERE condition ~
  1eb0: 2a202764 

Re: backend crash on DELETE, reproducible locally

2018-11-01 Thread Tom Lane
Karsten Hilbert  writes:
> I am running this delete statement via psql
>   DELETE FROM ref.auto_hint WHERE title = 'Kontraindikation: ACE/Sartan 
> <-> Schwangerschaft';

Doesn't look particularly special ...

> What else can I provide/test/look into ?

Stack trace would be helpful, self-contained test case even better.

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
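
[Editor's note: the wiki recipe boils down to roughly the following session; the PID and paths here are illustrative (the report's backend was 22367, and Debian installs the server binary under /usr/lib/postgresql/<major>/bin):

```
gnumed_v21=# SELECT pg_backend_pid();   -- in the psql session that will crash
root@host:~# gdb --pid=<backend pid>    -- attach from another shell, as root
(gdb) continue                          -- let the backend keep running
-- now run the crashing DELETE in the psql session; after SIGSEGV:
(gdb) bt full                           -- full backtrace with local variables
```

The attached gdb script later in this thread follows the same steps.]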

regards, tom lane



backend crash on DELETE, reproducible locally

2018-11-01 Thread Karsten Hilbert
Dear list members !

I am running this delete statement via psql

DELETE FROM ref.auto_hint WHERE title = 'Kontraindikation: ACE/Sartan 
<-> Schwangerschaft';

against a

name    | server_version
setting | 11.0 (Debian 11.0-1)

postgres@hermes:~$ psql -d gnumed_v21
Output format is »wrapped«.
psql (11.0 (Debian 11.0-1))

on Debian

Distributor ID: Debian
Description:Debian GNU/Linux testing (buster)
Release:testing
Codename:   buster

postgresql-11:
  Installed:   11.0-1
  Candidate:   11.0-1
  Version table:
 *** 11.0-1 990
990 http://httpredir.debian.org/debian buster/main i386 Packages
500 http://httpredir.debian.org/debian unstable/main i386 
Packages

and it crashes the backend it is running on.

The DELETE is part of a lengthy database upgrade process. If
I comment out that statement the entire process runs fine from
start to end (there are - succeeding - SQL statements after
the failing DELETE).

The cluster in question

Ver Cluster Port Status OwnerData directory  Log file
10  main5433 down   postgres /var/lib/postgresql/10/main 
/var/log/postgresql/postgresql-10-main.log
11  main5432 online postgres /var/lib/postgresql/11/main 
/var/log/postgresql/postgresql-11-main.log

was upgraded from PG10 to PG11 without a hitch earlier today
using pg_upgrade through Debian's pg_upgradecluster
(dump/restore method).

The backend crash is reproducible each time I run the above
DELETE. If I bump log_min_messages to debug5 the following
ensues:

2018-11-01 13:09:55 GMT LOG:  0: database system is ready to accept connections
2018-11-01 13:09:55 GMT LOCATION:  reaper, postmaster.c:2888
2018-11-01 13:19:03 GMT LOG:  0: received SIGHUP, reloading configuration files
2018-11-01 13:19:03 GMT LOCATION:  SIGHUP_handler, postmaster.c:2529
2018-11-01 13:19:04 GMT LOG:  0: parameter »log_min_messages« set to »debug5«
2018-11-01 13:19:04 GMT LOCATION:  ProcessConfigFileInternal, guc-file.l:456
2018-11-01 13:19:04 GMT DEBUG:  0: sending signal 1 to process 5703
2018-11-01 13:19:04 GMT LOCATION:  SignalSomeChildren, postmaster.c:3920
2018-11-01 13:19:04 GMT DEBUG:  0: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:04 GMT LOCATION:  ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:04 GMT DEBUG:  0: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:04 GMT LOCATION:  ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:04 GMT DEBUG:  0: checkpointer updated shared memory configuration values
2018-11-01 13:19:04 GMT LOCATION:  UpdateSharedMemoryConfig, checkpointer.c:1362
2018-11-01 13:19:04 GMT DEBUG:  0: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:04 GMT LOCATION:  ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:04 GMT DEBUG:  0: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:04 GMT LOCATION:  ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:04 GMT DEBUG:  0: received inquiry for database 0
2018-11-01 13:19:04 GMT LOCATION:  pgstat_recv_inquiry, pgstat.c:5640
2018-11-01 13:19:04 GMT DEBUG:  0: writing stats file "/run/postgresql/global.stat"
2018-11-01 13:19:04 GMT LOCATION:  pgstat_write_statsfiles, pgstat.c:4684
2018-11-01 13:19:04 GMT DEBUG:  0: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:04 GMT LOCATION:  ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:04 GMT DEBUG:  0: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2018-11-01 13:19:04 GMT LOCATION:  ShowTransactionStateRec, xact.c:5151
2018-11-01 13:19:25 GMT DEBUG:  0: forked new backend, pid=6264 socket=13
2018-11-01 13:19:25 GMT LOCATION:  BackendStartup, postmaster.c:4055
2018-11-01 13:19:25 GMT LOG:  0: connection received: host=[local]
2018-11-01 13:19:25 GMT LOCATION:  BackendInitialize, postmaster.c:4209
2018-11-01 13:19:25 GMT DEBUG:  0: postgres child[6264]: starting with (
2018-11-01 13:19:25 GMT LOCATION:  BackendRun, postmaster.c:4348
2018-11-01 13:19:25 GMT DEBUG:  0:  postgres
2018-11-01 13:19:25 GMT LOCATION:  BackendRun, postmaster.c:4351
2018-11-01 13:19:25 GMT DEBUG:  0: )
2018-11-01 13:19:25 GMT LOCATION:  BackendRun, postmaster.c:4353
2018-11-01 13:19:25 GMT DEBUG:  0: InitPostgres