Re: DDL support for logical replication

2019-10-10 Thread Jeremy Finzel
On Thu, Oct 10, 2019 at 3:09 PM Lev Kokotov  wrote:

> Hi Miles,
>
> One issue is keeping the subscriber and the publisher schema identical.
> Running migrations on both the publisher and subscriber does not seem
> atomic to me, therefore I don't have a way to enforce consistency between
> the two. The use case is simple: schemas change all the time, and keeping
> two databases (or more!) in sync manually is tough.
>

We had this requirement as well, and this is why I wrote pgl_ddl_deploy
on top of pglogical to handle the issue.  It's by no means an
all-encompassing solution, but it's worked really well for us.  It only
supports pglogical currently (though it could be expanded for in-core
logical replication... we just have not tried yet).

It basically uses event triggers, inspecting what is being executed, to
work out how to propagate the DDL.

However, I am very interested in the future of this feature for in-core
logical replication in a much more natively supported way.  That said, I
have not heard anything about it for a while.  Schema mapping for logical
replication is another feature we would really want, and it would also
relate very closely to DDL replication.  @Alvaro Herrera did a lot of
great work on support for in-core DDL replication a few years back.  Here
is that thread:
https://postgr.es/m/cacaco5qquav+n4gi+ya1jf_a+qenr6sjup8cydpsrxka+fh...@mail.gmail.com

Thanks,
Jeremy


Re: DDL support for logical replication

2019-10-10 Thread Cory Nemelka
Another use case is installations where there is heavy reliance on
temporary tables in queries.  Since you can't run queries that create
temporary tables on servers that are binary replicas, this leaves either
the master (not horizontally scalable) or logical replicas.

--cnemelka


On Thu, Oct 10, 2019 at 2:09 PM Lev Kokotov  wrote:

> Hi Miles,
>
> One issue is keeping the subscriber and the publisher schema identical.
> Running migrations on both the publisher and subscriber does not seem
> atomic to me, therefore I don't have a way to enforce consistency between
> the two. The use case is simple: schemas change all the time, and keeping
> two databases (or more!) in sync manually is tough.
>
> Another issue is migrating large databases (in the terabytes) to a new
> version of Postgres with minimal downtime. Say it takes a week or two to
> sync up the subscriber with the publisher, we have to enforce a migration
> freeze for that duration. That's often inconvenient.
>
> We often want to have a database with a subset of data of a particular
> table somewhere else, and logical replication is great, since it allows me
> to run write operations on the subscriber. Binary replication forces me to
> have two identical databases.
>
> Best,
> Lev
>
> On Thu, Oct 10, 2019 at 12:49 PM Miles Elam 
> wrote:
>
>> Hi Lev,
>>
>> While I don't have an answer to your roadmap question, you've raised a
>> different question for me.
>>
>> What are you expecting to get from logical replication of DDL commands
>> that is not served by binary replication?  I ask because typically someone
>> would want to use logical replication if they wanted triggers to fire on
>> the subscriber, if they wanted only a subset of all tables replicated, etc.
>>
>> Perhaps a better question would be "What problem are you trying to
>> solve?" rather than focus on how you expected to solve that problem.
>>
>>
>> Cheers,
>>
>> Miles Elam
>>
>> On Thu, Oct 10, 2019 at 11:08 AM Lev Kokotov 
>> wrote:
>>
>>> Hello,
>>>
>>> Is DDL support on a roadmap for logical replication?
>>>
>>> Thank you.
>>> - Lev
>>>
>>


Re: Is my lecturer wrong about PostgreSQL? I think he is!

2019-10-10 Thread Achilleas Mantzios



On 10/10/19 11:31 AM, Wim Bertels wrote:



3) Were there ever any problems with BSD?

as far as i understand BSD and variants are very solid,
so good for server use, not for desktop


Desktop software might be a little old, but that does not mean that it
is not usable.


I have been running a FreeBSD desktop for ages.


Re: DDL support for logical replication

2019-10-10 Thread Lev Kokotov
Hi Miles,

One issue is keeping the subscriber and the publisher schema identical.
Running migrations on both the publisher and subscriber does not seem
atomic to me, therefore I don't have a way to enforce consistency between
the two. The use case is simple: schemas change all the time, and keeping
two databases (or more!) in sync manually is tough.

Another issue is migrating large databases (in the terabytes) to a new
version of Postgres with minimal downtime. Say it takes a week or two to
sync up the subscriber with the publisher, we have to enforce a migration
freeze for that duration. That's often inconvenient.

We often want to have a database with a subset of data of a particular
table somewhere else, and logical replication is great, since it allows me
to run write operations on the subscriber. Binary replication forces me to
have two identical databases.

Best,
Lev

On Thu, Oct 10, 2019 at 12:49 PM Miles Elam 
wrote:

> Hi Lev,
>
> While I don't have an answer to your roadmap question, you've raised a
> different question for me.
>
> What are you expecting to get from logical replication of DDL commands
> that is not served by binary replication?  I ask because typically someone
> would want to use logical replication if they wanted triggers to fire on
> the subscriber, if they wanted only a subset of all tables replicated, etc.
>
> Perhaps a better question would be "What problem are you trying to solve?"
> rather than focus on how you expected to solve that problem.
>
>
> Cheers,
>
> Miles Elam
>
> On Thu, Oct 10, 2019 at 11:08 AM Lev Kokotov 
> wrote:
>
>> Hello,
>>
>> Is DDL support on a roadmap for logical replication?
>>
>> Thank you.
>> - Lev
>>
>


Re: The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Yessica Brinkmann
That is,
it worked after changing the code to:

if (idxcd == NULL) {
    elog(INFO, "idxcd IS NULL");
    continue; /* Or is that fatal enough to break instead? */
}

if (!idxcd->idxused)
    continue;

Many thanks,
Yessica Brinkmann

El jue., 10 oct. 2019 a las 16:43, Yessica Brinkmann (<
yessica.brinkm...@gmail.com>) escribió:

> Thank you very much for your answer.
> It helped me.
> The get_columnnames function is now working and ends cleanly.
> I get an error in the next function to be executed, which produces the
> same message: The connection to the server was lost. Attempting reset:
> Failed. I will try to solve it on my own, and if I cannot, I will start
> another mail thread,
> since this problem with get_columnnames is already solved.
> Many thanks,
> Best regards,
> Yessica Brinkmann
>
> On Thu, Oct 10, 2019 at 13:15, Alban Hertroys () wrote:
>
>>
>> > On 10 Oct 2019, at 17:55, Yessica Brinkmann <
>> yessica.brinkm...@gmail.com> wrote:
>> >
>> > I really thought a lot, but I don't understand why but the function
>> fails after the expression is executed:
>> appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);
>> > The error appears only to me when entering the cycle:
>> > foreach (cell, candidates) /* foreach cell in candidates */
>> > more than once, that is, when you have more than one candidate index.
>> If the cycle is entered only once, the function works correctly.
>> > The error that appears to me is that the connection to the PostgreSQL
>> server is directly lost. I proved that the error occurs in that statement,
>> printing some values.
>>
>> There is probably an error in the Postgres log-file providing you more
>> info.
>>
>> That said, at least the below bit in your code is dangerous:
>>
>> foreach( cell, candidates ) /* foreach cell in candidates */
>> {
>>
>> idxcd = (IndexCandidate*)lfirst( cell );
>>
>> if( !idxcd->idxused )
>> continue;
>>
>> if (idxcd!=NULL)
>> {
>>
>>
>> You should at least check for NULL before referencing an attribute of
>> that structure. Personally, I would invert the test like so (and then move
>> it before the idxused test):
>>
>> if (idxcd == NULL) {
>>     elog(INFO, "idxcd IS NULL");
>>     continue; /* Or is that fatal enough to break instead? */
>> }
>>
>> if (!idxcd->idxused)
>> continue;
>>
>>
>>
>> Alban Hertroys
>> --
>> There is always an exception to always.
>>
>>
>>
>>
>>


Re: DDL support for logical replication

2019-10-10 Thread Miles Elam
Hi Lev,

While I don't have an answer to your roadmap question, you've raised a
different question for me.

What are you expecting to get from logical replication of DDL commands that
is not served by binary replication?  I ask because typically someone would
want to use logical replication if they wanted triggers to fire on the
subscriber, if they wanted only a subset of all tables replicated, etc.

Perhaps a better question would be "What problem are you trying to solve?"
rather than focus on how you expected to solve that problem.


Cheers,

Miles Elam

On Thu, Oct 10, 2019 at 11:08 AM Lev Kokotov  wrote:

> Hello,
>
> Is DDL support on a roadmap for logical replication?
>
> Thank you.
> - Lev
>


Re: plpgsql copy import csv double quotes

2019-10-10 Thread pascal . crozet
Hoho 樂


Don't pay attention to the rest of the conversation that begins with "GRANT 
and REVOKE trigger on a ddl_command_end "


I've forwarded the email without deleting the previous message 




Salutations, Pascal







On Thu, Oct 10, 2019 at 9:20 PM +0200, "Adrian Klaver" 
 wrote:

On 10/10/19 8:30 AM, PASCAL CROZET wrote:
> Thanks,
> 
> 
> As it's written on the page, 
>  the file format is 
> more a convention than a standard.
> 
> 
> With colleagues, we've found the solution with a regex in sed
> 
> 
> sed -e 's/\([^,]\)"\([^,]\)/\1\2/g' -e  's/\([^,]\)"\([^,]\)/\1\2/g'
> 
> Because file contains values like
> 
> ,"hostname1.fqdn.ad|\"\"\"\"\"\"\"\"\"\"0Clean|OK"

The issue is less the file format than the data value format. I am 
struggling to figure out what the above is doing.

> 
> 
> But another csv file, which contains this value ","NAME=\"UBUNTU\"","| 
> works well. I find this value in the destination column |NAME=\UBUNTU\| in 
> the destination table.
> 
> The main lines in the plpgsql function are :

You might want to look at dollar quoting:

https://www.postgresql.org/docs/9.3/plpgsql-development-tips.html
40.11.1. Handling of Quotation Marks


If it were me I would separate out the commands below into distinct 
EXECUTEs; it would be easier to follow. That will still result in the 
TRUNCATE being rolled back as it is part of the function transaction and:

https://www.postgresql.org/docs/11/sql-truncate.html
"TRUNCATE is transaction-safe with respect to the data in the tables: 
the truncation will be safely rolled back if the surrounding transaction 
does not commit."

If you want to deal with errors then:

https://www.postgresql.org/docs/9.3/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING


> 
> CREATE OR REPLACE FUNCTION insert_into_db_sources_csv(
>      filename character varying,
>      tablename character varying,
>      delimiter character varying,
>      header character varying)
>    RETURNS void AS
> 
> request := 'TRUNCATE ' || tablename || '; COPY ' || tablename || ' FROM 
> ''' || filename || ''' CSV ' || header || ' DELIMITER ''' || delimiter 
> || ''' ENCODING ''UTF-8'';';
> EXECUTE request;
> 
> The function call :
> 
> select 
> insert_into_db_sources_csv('/DATA/input/files/Extract_software.csv', 
> 't_m03_software', ',', 'HEADER');
> 
> 
> If the import fails, the TRUNCATE isn't executed. The previous data that 
> was in the table remains the same.
> 
> 
> *_*
> 
> Cordialement, *Pascal CROZET**
> *
> 
> *DBA *
> 
> •www.qualis-consulting.com •04 78 22 
> 74 90
> 
> •Le Bois des Côtes 1 – Bâtiment A
> •300 Route Nationale 6 – 69760 LIMONEST
> *_*
> 
> 
> 
> 
> *De :* Tom Lane 
> *Envoyé :* jeudi 10 octobre 2019 00:31
> *À :* PASCAL CROZET
> *Cc :* PG-General Mailing List
> *Objet :* Re: plpgsql copy import csv double quotes
> PASCAL CROZET  writes:
>> I've experienced issues with double quotes (\34) inside fields in a csv file.
> 
>> Ex :
>> "value1","some text","other text with "double quotes" inside","last field"
> 
> I don't know of any definition of CSV format by which that's legal data.
> The typical rule is that double quotes that are data must be doubled;
> at least, that's what COPY expects by default.  You can also get COPY
> to handle variants like backslash-quote.
> 
>      regards, tom lane


-- 
Adrian Klaver
adrian.kla...@aklaver.com







Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh

On Thursday, October 10, 2019 at 21:32:38, Tom Lane  wrote: 
Andres Freund  writes:
 > On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
 >> Well, it shows that the failure is occurring while trying to evaluate
 >> a variable in a trigger's WHEN clause during
 >> "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id 
IN ($3)\nRETURNING entity_id"
 >> And I'd bet that the root cause is something to do with Andres' tuple slot
 >> work. But (at least to my eye) it's not apparent exactly what's wrong.

 > It looks like this could "just" be another report of #16036, which was
 > already fixed in:
 > commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
 > Author: Andres Freund 
 > Date: 2019-10-04 11:59:34 -0700
 > Fix crash caused by EPQ happening with a before update trigger present.

 Bingo. I can reproduce the crash (using concurrent updates of the same
 table row, in the schema Andreas sent off-list) on the predecessor of
 that commit, but on that commit it's fine. 


That's great! 


Andreas, that's a pretty simple patch if you're in a position to
 build from source ...

 regards, tom lane 


Yes, we've built a new .deb-package from 
f224c7c11ea7be2751e3342e11317070ffb5622d in REL_12_STABLE which we'll deploy 
tonight. 
Thanks! 


-- 
Andreas Joseph Krogh 


Re: The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Yessica Brinkmann
Thank you very much for your answer.
It helped me.
The get_columnnames function is now working and ends cleanly.
I get an error in the next function to be executed, which produces the
same message: The connection to the server was lost. Attempting reset:
Failed. I will try to solve it on my own, and if I cannot, I will start
another mail thread,
since this problem with get_columnnames is already solved.
Many thanks,
Best regards,
Yessica Brinkmann

On Thu, Oct 10, 2019 at 13:15, Alban Hertroys () wrote:

>
> > On 10 Oct 2019, at 17:55, Yessica Brinkmann 
> wrote:
> >
> > I really thought a lot, but I don't understand why but the function
> fails after the expression is executed:
> > appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);
> > The error appears only to me when entering the cycle:
> > foreach (cell, candidates) /* foreach cell in candidates */
> > more than once, that is, when you have more than one candidate index. If
> the cycle is entered only once, the function works correctly.
> > The error that appears to me is that the connection to the PostgreSQL
> server is directly lost. I proved that the error occurs in that statement,
> printing some values.
>
> There is probably an error in the Postgres log-file providing you more
> info.
>
> That said, at least the below bit in your code is dangerous:
>
> foreach( cell, candidates ) /* foreach cell in candidates */
> {
>
> idxcd = (IndexCandidate*)lfirst( cell );
>
> if( !idxcd->idxused )
> continue;
>
> if (idxcd!=NULL)
> {
>
>
> You should at least check for NULL before referencing an attribute of that
> structure. Personally, I would invert the test like so (and then move it
> before the idxused test):
>
> if (idxcd == NULL) {
>     elog(INFO, "idxcd IS NULL");
>     continue; /* Or is that fatal enough to break instead? */
> }
>
> if (!idxcd->idxused)
> continue;
>
>
>
> Alban Hertroys
> --
> There is always an exception to always.
>
>
>
>
>


Re: Segmentation fault with PG-12

2019-10-10 Thread Tom Lane
Andres Freund  writes:
> On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
>> Well, it shows that the failure is occurring while trying to evaluate
>> a variable in a trigger's WHEN clause during
>> "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id 
>> IN ($3)\nRETURNING entity_id"
>> And I'd bet that the root cause is something to do with Andres' tuple slot
>> work.  But (at least to my eye) it's not apparent exactly what's wrong.

> It looks like this could "just" be another report of #16036, which was
> already fixed in:
> commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
> Author: Andres Freund 
> Date:   2019-10-04 11:59:34 -0700
> Fix crash caused by EPQ happening with a before update trigger present.

Bingo.  I can reproduce the crash (using concurrent updates of the same
table row, in the schema Andreas sent off-list) on the predecessor of
that commit, but on that commit it's fine.

Andreas, that's a pretty simple patch if you're in a position to
build from source ...

regards, tom lane




Re: plpgsql copy import csv double quotes

2019-10-10 Thread Adrian Klaver

On 10/10/19 8:30 AM, PASCAL CROZET wrote:

Thanks,


As it's written on the page, 
 the file format is 
more a convention than a standard.



With colleagues, we've found the solution with a regex in sed


sed -e 's/\([^,]\)"\([^,]\)/\1\2/g' -e  's/\([^,]\)"\([^,]\)/\1\2/g'

Because file contains values like

,"hostname1.fqdn.ad|\"\"\"\"\"\"\"\"\"\"0Clean|OK"


The issue is less the file format than the data value format. I am 
struggling to figure out what the above is doing.





But another csv file, which contains this value ","NAME=\"UBUNTU\"","| 
works well. I find this value in the destination column |NAME=\UBUNTU\| in 
the destination table.


The main lines in the plpgsql function are :


You might want to look at dollar quoting:

https://www.postgresql.org/docs/9.3/plpgsql-development-tips.html
40.11.1. Handling of Quotation Marks


If it were me I would separate out the commands below into distinct 
EXECUTEs; it would be easier to follow. That will still result in the 
TRUNCATE being rolled back as it is part of the function transaction and:


https://www.postgresql.org/docs/11/sql-truncate.html
"TRUNCATE is transaction-safe with respect to the data in the tables: 
the truncation will be safely rolled back if the surrounding transaction 
does not commit."


If you want to deal with errors then:

https://www.postgresql.org/docs/9.3/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING




CREATE OR REPLACE FUNCTION insert_into_db_sources_csv(
     filename character varying,
     tablename character varying,
     delimiter character varying,
     header character varying)
   RETURNS void AS

request := 'TRUNCATE ' || tablename || '; COPY ' || tablename || ' FROM 
''' || filename || ''' CSV ' || header || ' DELIMITER ''' || delimiter 
|| ''' ENCODING ''UTF-8'';';

EXECUTE request;

The function call :

select 
insert_into_db_sources_csv('/DATA/input/files/Extract_software.csv', 
't_m03_software', ',', 'HEADER');



If the import fails, the TRUNCATE isn't executed. The previous data that 
was in the table remains the same.



*_*

Cordialement, *Pascal CROZET**
*

*DBA *

•www.qualis-consulting.com •04 78 22 
74 90


•Le Bois des Côtes 1 – Bâtiment A
•300 Route Nationale 6 – 69760 LIMONEST
*_*




*De :* Tom Lane 
*Envoyé :* jeudi 10 octobre 2019 00:31
*À :* PASCAL CROZET
*Cc :* PG-General Mailing List
*Objet :* Re: plpgsql copy import csv double quotes
PASCAL CROZET  writes:

I've experienced issues with double quotes (\34) inside fields in a csv file.



Ex :
"value1","some text","other text with "double quotes" inside","last field"


I don't know of any definition of CSV format by which that's legal data.
The typical rule is that double quotes that are data must be doubled;
at least, that's what COPY expects by default.  You can also get COPY
to handle variants like backslash-quote.

     regards, tom lane



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Yessica Brinkmann
Thank you so much for your answer. I will test what you indicated and then
get back to you.
Best regards,
Yessica Brinkmann

On Thu, Oct 10, 2019 at 15:25, Jaime Soler () wrote:

> Why don't you give gdb a try?
> https://wiki.postgresql.org/wiki/Developer_FAQ#What_debugging_features_are_available.3F
>
> It might be an extra free of memory or a null pointer access;
> gdb could help you.
>
> Regards
>
> On Thu, Oct 10, 2019 at 20:01, Yessica Brinkmann (<yessica.brinkm...@gmail.com>) wrote:
>
>> Thank you very much for the reply.
>> Well, really, resetStringInfo () is a function of the StringInfo data
>> structure.
>> What I used at the end was initStringInfo, which is a function of the
>> data structure StringInfoData, which is what I am using, although I don't
>> know if they are equivalent.
>> The code remained as follows:
>> if (cols.len > 0)
>> {
>>     initStringInfo(&cols);
>> } /* IF cols.len > 0 */
>> But it continues giving me the same error.
>> Best regards,
>> Yessica Brinkmann
>>
>> On Thu, Oct 10, 2019 at 13:33, Yessica Brinkmann (<yessica.brinkm...@gmail.com>) wrote:
>>
>>> Thank you so much for your answer. I will test what you indicated and
>>> then get back to you.
>>> Best regards,
>>> Yessica Brinkmann
>>>
>>> On Thu, Oct 10, 2019 at 13:14, Tom Lane () wrote:
>>>
 Yessica Brinkmann  writes:
 > I really thought a lot, but I don't understand why but the function fails
 > after the expression is executed:
 > appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);

 I think you're probably shooting yourself in the foot here:

 /* pfree() the memory allocated for the previous candidate.
  * FIXME: Avoid meddling with the internals of a StringInfo,
  * and try to use an API.
  */
 if (cols.len > 0)
 {
     pfree(cols.data);
     cols.data = NULL;
 } /* IF cols.len > 0 */

 Don't do that, use resetStringInfo() instead.

 regards, tom lane

>>>


Re: The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Jaime Soler
Why don't you give gdb a try?
https://wiki.postgresql.org/wiki/Developer_FAQ#What_debugging_features_are_available.3F

It might be an extra free of memory or a null pointer access;
gdb could help you.

Regards

On Thu, Oct 10, 2019 at 20:01, Yessica Brinkmann (<yessica.brinkm...@gmail.com>) wrote:

> Thank you very much for the reply.
> Well, really, resetStringInfo () is a function of the StringInfo data
> structure.
> What I used at the end was initStringInfo, which is a function of the data
> structure StringInfoData, which is what I am using, although I don't know
> if they are equivalent.
> The code remained as follows:
>    if (cols.len > 0)
>    {
>        initStringInfo(&cols);
>    } /* IF cols.len > 0 */
> But it continues giving me the same error.
> Best regards,
> Yessica Brinkmann
>
> On Thu, Oct 10, 2019 at 13:33, Yessica Brinkmann (<yessica.brinkm...@gmail.com>) wrote:
>
>> Thank you so much for your answer. I will test what you indicated and
>> then get back to you.
>> Best regards,
>> Yessica Brinkmann
>>
>> On Thu, Oct 10, 2019 at 13:14, Tom Lane () wrote:
>>
>>> Yessica Brinkmann  writes:
>>> > I really thought a lot, but I don't understand why but the function fails
>>> > after the expression is executed:
>>> > appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);
>>>
>>> I think you're probably shooting yourself in the foot here:
>>>
>>> /* pfree() the memory allocated for the previous candidate.
>>>  * FIXME: Avoid meddling with the internals of a StringInfo,
>>>  * and try to use an API.
>>>  */
>>> if (cols.len > 0)
>>> {
>>>     pfree(cols.data);
>>>     cols.data = NULL;
>>> } /* IF cols.len > 0 */
>>>
>>> Don't do that, use resetStringInfo() instead.
>>>
>>> regards, tom lane
>>>
>>


DDL support for logical replication

2019-10-10 Thread Lev Kokotov
Hello,

Is DDL support on a roadmap for logical replication?

Thank you.
- Lev


Re: The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Yessica Brinkmann
Thank you very much for the reply.
Well, really, resetStringInfo () is a function of the StringInfo data
structure.
What I used at the end was initStringInfo, which is a function of the data
structure StringInfoData, which is what I am using, although I don't know
if they are equivalent.
The code remained as follows:
    if (cols.len > 0)
    {
        initStringInfo(&cols);
    } /* IF cols.len > 0 */
But it continues giving me the same error.
Best regards,
Yessica Brinkmann

On Thu, Oct 10, 2019 at 13:33, Yessica Brinkmann (<yessica.brinkm...@gmail.com>) wrote:

> Thank you so much for your answer. I will test what you indicated and
> then get back to you.
> Best regards,
> Yessica Brinkmann
>
> On Thu, Oct 10, 2019 at 13:14, Tom Lane () wrote:
>
>> Yessica Brinkmann  writes:
>> > I really thought a lot, but I don't understand why but the function fails
>> > after the expression is executed:
>> > appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);
>>
>> I think you're probably shooting yourself in the foot here:
>>
>> /* pfree() the memory allocated for the previous candidate.
>>  * FIXME: Avoid meddling with the internals of a StringInfo,
>>  * and try to use an API.
>>  */
>> if (cols.len > 0)
>> {
>>     pfree(cols.data);
>>     cols.data = NULL;
>> } /* IF cols.len > 0 */
>>
>> Don't do that, use resetStringInfo() instead.
>>
>> regards, tom lane
>>
>


Re: SELECT d02name::bytea FROM ... && DBI::Pg

2019-10-10 Thread Matthias Apitz
On Thursday, October 10, 2019 at 06:46:02 PM +0200, Francisco Olarte 
wrote:

> Matthias:
> 
> On Thu, Oct 10, 2019 at 1:25 PM Matthias Apitz  wrote:
> > I can SELECT && print a column in hex with:
> > pos71=# select d02name::bytea from d02ben where d02bnr = '1048313' ;
> > ...
> >  
> > \x50c3a46461676f67697363686520486f6368736368756c65205765696e67617274656e2020
> >  ...
> >
> > but when I use the same in Perl DBI::Pg with:
> > $sth=$dbh->prepare( "select d02name::bytea from d02ben where d02bnr = 
> > '1048313'");
> ...
> > It prints the UTF-8 string and not the hex string:
> 
> May be because perl does not need the contents in hex DBI converts it
> to a native string, which is very similar to a "bytea" inside the db (
> not in the wire protocol, but it does a similar thing for numbers ).

Hmm. But *I* do need the content in hex, to see whether the varchar column
contains correctly encoded UTF-8 data. We're on the way to porting a huge
database application from Sybase to PostgreSQL and are facing every kind of
problem one can think of. Magically, strings expected to be coded in UTF-8
sometimes arrive in the Perl $variables coded in ISO-8859-1 and then cause
other problems when German umlauts should be translated into HTML
encodings like  etc. to be presented in the web browser.

Perl (and Java) sucks; it does magic things below the surface of
string (objects). That's why I like C :-)

matthias
-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub




Re: The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Yessica Brinkmann
Thank you so much for your answer. I will test what you indicated and then
get back to you.
Best regards,
Yessica Brinkmann

On Thu, Oct 10, 2019 at 13:15, Alban Hertroys () wrote:

>
> > On 10 Oct 2019, at 17:55, Yessica Brinkmann 
> wrote:
> >
> > I really thought a lot, but I don't understand why but the function
> fails after the expression is executed:
> > appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);
> > The error appears only to me when entering the cycle:
> > foreach (cell, candidates) /* foreach cell in candidates */
> > more than once, that is, when you have more than one candidate index. If
> the cycle is entered only once, the function works correctly.
> > The error that appears to me is that the connection to the PostgreSQL
> server is directly lost. I proved that the error occurs in that statement,
> printing some values.
>
> There is probably an error in the Postgres log-file providing you more
> info.
>
> That said, at least the below bit in your code is dangerous:
>
> foreach( cell, candidates ) /* foreach cell in candidates */
> {
>
> idxcd = (IndexCandidate*)lfirst( cell );
>
> if( !idxcd->idxused )
> continue;
>
> if (idxcd!=NULL)
> {
>
>
> You should at least check for NULL before referencing an attribute of that
> structure. Personally, I would invert the test like so (and then move it
> before the idxused test):
>
> if (idxcd == NULL) {
>     elog(INFO, "idxcd IS NULL");
>     continue; /* Or is that fatal enough to break instead? */
> }
>
> if (!idxcd->idxused)
> continue;
>
>
>
> Alban Hertroys
> --
> There is always an exception to always.
>
>
>
>
>


Re: The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Alban Hertroys


> On 10 Oct 2019, at 17:55, Yessica Brinkmann  
> wrote:
> 
> I really thought a lot, but I don't understand why but the function fails 
> after the expression is executed:
> appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);
> The error appears only to me when entering the cycle:
> foreach (cell, candidates) /* foreach cell in candidates */
> more than once, that is, when you have more than one candidate index. If the 
> cycle is entered only once, the function works correctly.
> The error that appears to me is that the connection to the PostgreSQL server 
> is directly lost. I proved that the error occurs in that statement, printing 
> some values.

There is probably an error in the Postgres log-file providing you more info.

That said, at least the below bit in your code is dangerous:

foreach( cell, candidates ) /* foreach cell in candidates */
{

idxcd = (IndexCandidate*)lfirst( cell );

if( !idxcd->idxused )
continue;

if (idxcd!=NULL)
{


You should at least check for NULL before referencing an attribute of that 
structure. Personally, I would invert the test like so (and then move it before 
the idxused test):

if (idxcd == NULL) {
    elog(INFO, "idxcd IS NULL");
    continue; /* Or is that fatal enough to break instead? */
}

if (!idxcd->idxused)
continue;



Alban Hertroys
--
There is always an exception to always.








Re: The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Tom Lane
Yessica Brinkmann  writes:
> I really thought a lot, but I don't understand why but the function fails
> after the expression is executed:
> appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);

I think you're probably shooting yourself in the foot here:

    /* pfree() the memory allocated for the previous candidate.
     * FIXME: Avoid meddling with the internals of a StringInfo,
     * and try to use an API.
     */
    if (cols.len > 0)
    {
        pfree(cols.data);
        cols.data = NULL;
    } /* IF cols.len > 0 */

Don't do that, use resetStringInfo() instead.

regards, tom lane




The connection to the server was lost. Attempting reset: Failed.

2019-10-10 Thread Yessica Brinkmann
Good afternoon,
I am doing a thesis from the University. My thesis is the modification of
the Gurjeet Index Adviser, to add some features.
At the moment I am modifying one of the .c files and adding a function
called get_columnnames that returns the names of the columns of a table,
passing it the numbers of the attributes.
I really thought a lot, but I don't understand why the function fails
after the expression is executed:
appendStringInfo(&cols, "%s a.attnum = %d", (i > 0 ? "OR" : ""), idxcd->varattno[i]);
The error appears only when entering the loop:
foreach (cell, candidates) /* foreach cell in candidates */
more than once, that is, when there is more than one candidate index. If
the loop is entered only once, the function works correctly.
The error is that the connection to the PostgreSQL server is simply
lost. I verified, by printing some values, that the error occurs in that
statement.
The error message I get is the following:
The connection to the server was lost. Attempting reset: Failed.
I attach the source code of the function (function.c) and the log file with
the output of the last executions.
Note that the foreach loop macro for C is defined in #include
"nodes/pg_list.h".
I am using PostgreSQL 8.3.23; I use that version because it is the one
compatible with the Index Adviser, which I am modifying for my university
thesis.
I have been reading about compiling without optimizations and with debug
symbols, and about using gdb. All very interesting.
My question now is whether all of this applies the same way when a Makefile
is used, because the Index Adviser comes with a Makefile that drives the
compilation. From what I read it is fairly easy to apply all of this when
compiling from the command line, but with the Makefile I honestly don't
know how to do it. I am new to Linux.
The Linux distribution I use is Debian 9.x, 64-bit, and the C compiler
(gcc) is version 6.3.0.
I would greatly appreciate an answer; I also attach the Index Adviser
Makefile. I really don't know how to move forward.
As I mentioned, my function works when the foreach loop is entered only
once, but when it is entered more than once it doesn't. That is, when there
is more than one candidate index, it gives me that error. For example, a
simple SELECT statement, or one with an AND, works for me; but a SELECT
statement with an OR doesn't, because then there is more than one candidate
index and the foreach loop is entered more than once.
I will be very grateful if anyone can help me.
I have thought hard about why the error could happen, but I really can't
find a reason.
Best regards,
Yessica Brinkmann
FATAL:  the database system is in recovery mode
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  database system was shut down at 2019-10-04 00:58:21 CEST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  server process (PID 1245) was terminated by signal 11: Segmentation fault
LOG:  terminating any other active server processes
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2019-10-04 02:08:57 CEST
LOG:  database system was not properly shut down; automatic recovery in progress
FATAL:  the database system is in recovery mode
LOG:  redo starts at 0/BEE6A00
LOG:  record with zero length at 0/BEFA848
LOG:  redo done at 0/BEFA818
LOG:  last completed transaction was at log time 2019-10-04 02:11:33.931119+02
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  server process (PID 1388) was terminated by signal 11: Segmentation fault
LOG:  terminating any other active server processes
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2019-10-04 02:11:44 CEST
LOG:  database system was not properly shut down; automatic recovery in progress
LOG:  redo starts at 0/BEFA890
LOG:  record with zero length at 0/BF0F940
LOG:  redo done at 0/BF0F910
LOG:  last completed transaction was at log time 2019-10-04 02:17:13.560035+02
FATAL:  the database system is in recovery mode
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  received smart shutdown request
LOG:  received SIGHUP, reloading configuration files
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
LOG:  database system was shut down at 2019-10-04 02:36:09 CEST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  server process (PID 1271) was terminated by signal 11: Segmentation fault
LOG:  terminating any other active server processes
LOG:  all server processes terminated; reinitializing
LOG:  

Re: Case Insensitive Comparison with Postgres 12

2019-10-10 Thread Igal @ Lucee.org

Thomas,

On 10/10/2019 6:22 AM, Thomas Kellerer wrote:


Igal @ Lucee.org schrieb am 10.10.2019 um 14:41:

Thank you all for replying.  I tried to use the locale suggested by
both Laurenz and Pavel, en-US-u-ks-level2, but I'm still getting
false for a simple comparison of 'Abc' = 'abc'.  I tried the locale
both as a 'string' and as an "identifier":


drop collation if exists case_insensitive;
create collation case_insensitive (

    provider=icu, locale="en-US-u-ks-level2", deterministic=false
);


select 'Abc' = 'abc' collate case_insensitive as is_equal;

is_equal|
|
false   |

What am I doing wrong here?

Check the version of libicu that your Linux is using.
That locale format requires version 54 or later.
(My up-to-date CentOS 7.7 for example is still using version 50 and the EDB 
Windows binaries include version 53).

In another thread about ICU problems, Daniel Verite explained that in more 
detail:


With ICU 53 or older, instead of the locale above, we must use the old-style 
syntax:

  locale = 'de-DE@colStrength=secondary'

In your case I guess, it should be

locale = 'en-US@colStrength=secondary'


That works, thank you!

I also have CentOS installed on that machine: CentOS Linux release 
7.7.1908 (Core), showing libicu Version 50.2 via `yum info libicu`.


Best,

Igal







RE: plpgsql copy import csv double quotes

2019-10-10 Thread PASCAL CROZET
Thanks,


As it's written on that page, 
the file format is more a convention than a standard.


With colleagues, we found a solution with a regex in sed:


sed -e 's/\([^,]\)"\([^,]\)/\1\2/g' -e  's/\([^,]\)"\([^,]\)/\1\2/g'

Because the file contains values like

,"hostname1.fqdn.ad|\"\"\"\"\"\"\"\"\"\"0Clean|OK"


But another csv file, that contains the value ,"NAME=\"UBUNTU\"", works 
well. I find the value NAME=\UBUNTU\ in the destination column of the 
destination table.

The main lines in the plpgsql function are :

CREATE OR REPLACE FUNCTION insert_into_db_sources_csv(
filename character varying,
tablename character varying,
delimiter character varying,
header character varying)
  RETURNS void AS

request := 'TRUNCATE ' || tablename || '; COPY ' || tablename || ' FROM ''' || 
filename || ''' CSV ' || header || ' DELIMITER ''' || delimiter || ''' ENCODING 
''UTF-8'';';
EXECUTE request;

The function call :

select insert_into_db_sources_csv('/DATA/input/files/Extract_software.csv', 
't_m03_software', ',', 'HEADER');


If the import fails, the TRUNCATE is rolled back, so the data that was in the 
table beforehand remains unchanged.

_

Cordialement, Pascal CROZET

DBA

• www.qualis-consulting.com • 04 78 22 74 90

• Le Bois des Côtes 1 – Bâtiment A
• 300 Route Nationale 6 – 69760 LIMONEST
_



De : Tom Lane 
Envoyé : jeudi 10 octobre 2019 00:31
À : PASCAL CROZET
Cc : PG-General Mailing List
Objet : Re: plpgsql copy import csv double quotes

PASCAL CROZET  writes:
> I’ve experience issues with double quotes \34 inside fields, in a csv file.

> Ex :
> "value1","some text","other text with "double quotes" inside","last field"

I don't know of any definition of CSV format by which that's legal data.
The typical rule is that double quotes that are data must be doubled;
at least, that's what COPY expects by default.  You can also get COPY
to handle variants like backslash-quote.

regards, tom lane


Re: Event Triggers and GRANT/REVOKE

2019-10-10 Thread Adrian Klaver

On 10/9/19 3:20 PM, Miles Elam wrote:
Using my example below from another thread, GRANTs and REVOKEs leave all 
fields NULL except for command_tag (set to 'GRANT' or 'REVOKE'), 
object_type (set to upper case target like 'TABLE'), and in_extension 
(set to whatever is appropriate, but typically false).


From the source:

https://doxygen.postgresql.org/event__trigger_8c.html#a4fc09f5b8231780b7f1abdd2a72f95a7

https://doxygen.postgresql.org/event__trigger_8c.html#a2b4e240a2c0aa9cfcc61f906af134913

Line 2185 case SCT_Grant

Looks like the information is never collected. The why is going to need 
an answer from a developer.



As to the case, it looks like GRANT/REVOKE uses a different method of deriving 
the string than other commands do, namely stringify_adefprivs_objtype().





-

CREATE TABLE IF NOT EXISTS ddl_info (
   classid oid,
   objid oid,
   objsubid integer,
   command_tag text,
   object_type text,
   schema_name text,
   object_identity text,
   in_extension bool,
   transaction_id bigint NOT NULL DEFAULT txid_current(),
   inserted timestamptz NOT NULL DEFAULT clock_timestamp()
);

CREATE OR REPLACE FUNCTION ddl_log()
RETURNS EVENT_TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
   INSERT INTO ddl_info (
     classid, objid, objsubid, command_tag, object_type,
     schema_name, object_identity, in_extension
   )
   SELECT
     classid, objid, objsubid, command_tag, object_type,
     schema_name, object_identity, in_extension
   FROM pg_event_trigger_ddl_commands();
END;
$$;

CREATE EVENT TRIGGER aa_ddl_info ON ddl_command_end
EXECUTE PROCEDURE ddl_log();

On Wed, Oct 9, 2019 at 2:27 PM Adrian Klaver > wrote:


On 10/9/19 1:56 PM, Miles Elam wrote:
 > GRANT and REVOKE trigger on a ddl_command_end event trigger but
don't
 > provide any information beyond whether it was a table, schema,
function,
 > etc. that was affected. No object IDs or the like are included. How
 > would you find out which table had its ACLs modified?

What is the code for trigger and function?

 >
 > Also, why do grants and revokes have an object_type of 'TABLE'
instead
 > of lower case names like 'table' for all other event types?
 >
 >
 > Thanks,
 >
 > Miles Elam
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: SOLVED Re: Recovering disk space

2019-10-10 Thread Melvin Davidson
>OK, after a period of time, I recovered my disk space.

Stan, for those of us that have lost the ability to long distance mind
read, would you please
inform us of the steps you took to solve the problem? It might be helpful
in the future to someone
else in the same situation. It might also be useful to include the O/S and
PostgreSQL version.


On Thu, Oct 10, 2019 at 4:01 AM stan  wrote:

> OK, after a period of time, I recovered my disk space.
>
> On Thu, Oct 10, 2019 at 03:57:00AM -0400, stan wrote:
> > Sorry to repeat this, I am still in trouble on it.
> >
> > I made the mistake of trying to import the mouse genome database in an
> > effort to do some testing with very large data sets.
> >
> > This import failed, probably because I ran out of disk space.  I deleted
> > the dump file, and a few other things, and now I have a bit of disk space.
> > Looks like I see the mad schema both in the postgres db, and in my own
> > stan db. I did a drop schema cascade on both as the db superuser, and ran
> > vacuum full on both. This command came back almost instantly, which
> > surprises me.
> >
> > However, my 50G disk is still 96% full. How can I recover the disk space
> I
> > seem to have used u doing this?
> >
> >
> > --
> > "They that would give up essential liberty for temporary safety deserve
> > neither liberty nor safety."
> >   -- Benjamin Franklin
> >
> >
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: psql \copy hanging

2019-10-10 Thread Adrian Klaver

On 10/10/19 12:00 AM, Arnaud L. wrote:

Le 08/10/2019 à 16:59, Arnaud L. a écrit :

Yes, I already did that, and unfortunately there's no activity.
There is absolutely no activity in procmon using psql.exe as a filter.
process-xp is not giving more information, processor usage is 0%.


My apologies, I obviously did something wrong last time I checked this 
process with process-xp and procmon.

Now I see that there IS activity on the problematic process !
100% CPU and some (but not much) disk activity.

I think I'll try to let it run for some time to let auto_explain do its 
work. If I kill the backend now, I won't see anything, I believe.


You can look at:

pg_locks
https://www.postgresql.org/docs/11/view-pg-locks.html

and

pg_stat_activity
https://www.postgresql.org/docs/11/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

in real time to see if there is any useful information.



I don't know if it'll ever complete this query though; it usually takes 
~100 seconds, and here it has already been running for 9 hours.


Regards
--
Arnaud






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Is my lecturer wrong about PostgreSQL? I think he is!

2019-10-10 Thread Stephen Frost
Greetings,

* Pól Ua Laoínecháin (lineh...@tcd.ie) wrote:
> > > 1) Is my lecturer full of it or does he really have a point?
> 
> > He's full of it, as far as I can tell anyway, based on what you've
> > shared with us.  Just look at the committers and the commit history to
> > PostgreSQL, and look at who the largest contributors are and who they
> > work for.  That alone might be enough to surprise your lecturer with.
> 
> The only non-PostgreSQL company that I could find was Fujitisu - where
> can I find a (list of) the others?

Not sure where you were looking...  The contributors list is here:

https://www.postgresql.org/community/contributors/

The committers list is here:

https://wiki.postgresql.org/wiki/Committers

The git tree is here:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary

Perhaps not the best stat, but you can view the contributions by
committer pretty easily, for 2018, here:

https://github.com/postgres/postgres/graphs/contributors?from=2018-01-01&to=2018-12-31&type=c

Note that this isn't very representative of the actual authors though-
we don't track those in the way git would prefer, instead we note who
the author of a given patch was in the commit message itself.

> > Databases that do direct I/O don't depend on fsync.  That said, I do
> > think this could have been an issue for Oracle if you ran it without
> > direct i/o.
> 
> I think that Oracle are big into asyncio? I know that you have to sudo
> dnf install some_library with a name like asio/asyncio or something
> like that?

Oracle supports both, but running with direct i/o is pretty popular,
yes.

> Anyway, why doesn't PostgreSQL use Direct I/O?

There's an awful lot that the kernel provides when it comes to things
like good read-ahead and dealing with disks and SSDs and such that we
(currently, at least) prefer to leverage instead of writing lots of new
code to deal with that ourselves, which would be required to use Direct
I/O (and not have it be completely terrible performance wise, anyway).

The whole issue behind fsync was because our expectation (and POSIX's,
if you ask me anyway) was different from what the Linux kernel was
providing (specifically, you could end up in a situation where an
fsync() call "worked" and didn't return an error, even though there
remained pages that were dirty and not written out).  Now, this is under
other error conditions typically and you'll get messages in the kernel
log about such failures usually, so if you're properly monitoring and
managing your systems there's a good chance you would have realized
there was a problem even though the Linux kernel was telling PG that
everything was fine (have backups!!).

Thanks,

Stephen


signature.asc
Description: PGP signature


websearch_to_tsquery() and handling of ampersand characters inside double quotes

2019-10-10 Thread Alastair McKinley
Hi all,

I have recently discovered an unexpected difference in behaviour using 
websearch_to_tsquery() with quoted strings containing ampersands.

These two queries are equivalent without surrounding double quotes

select websearch_to_tsquery('something and another') = 
websearch_to_tsquery('something & another');
 ?column?
--
 t
(1 row)

select websearch_to_tsquery('something and another');
 websearch_to_tsquery
--
 'someth' & 'anoth'
(1 row)


With surrounding double quotes they produce subtly different queries, with 
different positional information.


 select websearch_to_tsquery('"something and another"') = 
websearch_to_tsquery('"something & another"');
 ?column?
--
 f
(1 row)


select websearch_to_tsquery('"something and another"');
 websearch_to_tsquery
--
 'someth' <2> 'anoth'
(1 row)

select websearch_to_tsquery('"something & another"');
 websearch_to_tsquery
--
 'someth' <-> 'anoth'
(1 row)

I imagine the difference is due to the tsvector type recording different 
information for the underlying strings.

select to_tsvector('something & another');
 to_tsvector
--
 'anoth':2 'someth':1
(1 row)

chimera=# select to_tsvector('something and another');
 to_tsvector
--
 'anoth':3 'someth':1
(1 row)


This leads to quite different search results and my current workaround is to 
suggest to users to do both searches with an OR.  Is this the right solution?

Best regards,

Alastair


Re: websearch_to_tsquery() and apostrophe inside double quotes

2019-10-10 Thread Alastair McKinley
Hi Tom,

Thank you for looking at this.  You are right, I couldn't find anything in the 
docs that would explain this.

I can't think of any rationale for producing a query like this so it does look 
like a bug.

Best regards,

Alastair





From: Tom Lane 
Sent: 10 October 2019 14:35
To: Alastair McKinley 
Cc: pgsql-general@lists.postgresql.org ; 
teo...@sigaev.ru 
Subject: Re: websearch_to_tsquery() and apostrophe inside double quotes

Alastair McKinley  writes:
> I am a little confused about what is being generated by 
> websearch_to_tsquery() in the case of an apostrophe inside double quotes.
> ...

> select websearch_to_tsquery('"peter o''toole"');
>  websearch_to_tsquery
> --
>  'peter' <-> ( 'o' & 'tool' )
> (1 row)

> I am not quite sure what text this will actually match?

I believe it's impossible for that to match anything :-(.
It would require 'o' and 'tool' to match the same lexeme
(one immediately after a 'peter') which of course is impossible.

The underlying tsvector type seems to treat the apostrophe the
same as whitespace; it separates 'o' and 'toole' into
distinct words:

# select to_tsvector('peter o''toole');
   to_tsvector
--
 'o':2 'peter':1 'tool':3
(1 row)

So it seems to me that this is a bug: websearch_to_tsquery
should also treat "'" like whitespace.  There's certainly
not anything in its documentation that suggests it should
treat "'" specially.  If it didn't, you'd get

# select websearch_to_tsquery('"peter o toole"');
websearch_to_tsquery

 'peter' <-> 'o' <-> 'tool'
(1 row)

which would match this tsvector.

regards, tom lane


Re: syntax error with v12

2019-10-10 Thread Adrian Klaver

On 10/10/19 5:51 AM, Ivan Kabaivanov wrote:

Hi, everyone.

First time poster here.

This snippet (from ovirt-engine-dwh) used to work fine with 
postgresql-11.x, but emits a syntax error with postgresql-12.0:


CREATE TABLE history_configuration
(
var_name VARCHAR(50) NOT NULL,
var_value VARCHAR(255),
var_datetime TIMESTAMP WITH TIME ZONE,
CONSTRAINT PK_history_configuration PRIMARY KEY(var_name)
) WITH OIDS;

psql:/usr/share/ovirt-engine-dwh/dbscripts/create_tables.sql:7: ERROR: 
  syntax error at or near "OIDS"

LINE 7: ) WITH OIDS;

Is what I provided sufficient to troubleshoot the issue?  Let me know 
what else must be necessary for me to add.


https://www.postgresql.org/docs/12/release-12.html#id-1.11.6.5.3

"

Remove the special behavior of oid columns (Andres Freund, John Naylor)

Previously, a normally-invisible oid column could be specified during 
table creation using WITH OIDS; that ability has been removed. Columns 
can still be explicitly declared as type oid. Operations on tables that 
have columns created using WITH OIDS will need adjustment.


The system catalogs that previously had hidden oid columns now have 
ordinary oid columns. Hence, SELECT * will now output those columns, 
whereas previously they would be displayed only if selected explicitly.

"



Thanks,
IvanK.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: syntax error with v12

2019-10-10 Thread Ivan Kabaivanov
Thanks!

ivanK.

On Thu, Oct 10, 2019 at 3:56 PM Félix GERZAGUET 
wrote:

> Hi,
>
> On Thu, Oct 10, 2019 at 2:51 PM Ivan Kabaivanov 
> wrote:
>
>> psql:/usr/share/ovirt-engine-dwh/dbscripts/create_tables.sql:7: ERROR:
>>  syntax error at or near "OIDS"
>> LINE 7: ) WITH OIDS;
>>
>
> Indeed, OID were removed from 12 release as described in:
>
> https://www.postgresql.org/docs/release/12.0/
>
> You must adjust the statement.
>
> --
> Félix
>


Re: Case Insensitive Comparison with Postgres 12

2019-10-10 Thread Thomas Kellerer
Igal @ Lucee.org schrieb am 10.10.2019 um 14:41:
> Thank you all for replying.  I tried to use the locale suggested by
> both Laurenz and Pavel, en-US-u-ks-level2, but I'm still getting
> false for a simple comparison of 'Abc' = 'abc'.  I tried the locale
> both as a 'string' and as an "identifier":
> 
>> drop collation if exists case_insensitive;
> 
>> create collation case_insensitive (
>    provider=icu, locale="en-US-u-ks-level2", deterministic=false
> );
> 
>> select 'Abc' = 'abc' collate case_insensitive as is_equal;
> 
> is_equal|
> |
> false   |
> 
> What am I doing wrong here?

Check the version of libicu that your Linux is using. 
That locale format requires version 54 or later.
(My up-to-date CentOS 7.7 for example is still using version 50 and the EDB 
Windows binaries include version 53). 

In another thread about ICU problems, Daniel Verite explained that in more 
detail:

> With ICU 53 or older, instead of the locale above, we must use the old-style 
> syntax:
> 
>  locale = 'de-DE@colStrength=secondary'

In your case I guess, it should be 

   locale = 'en-US@colStrength=secondary'

Thomas




Re: logical replication - negative bitmapset member not allowed

2019-10-10 Thread Jehan-Guillaume de Rorthais
Hello,

On Thu, 4 Apr 2019 23:37:04 +0200
Peter Eisentraut  wrote:

> On 2019-04-01 23:43, Alvaro Herrera wrote:
> > Maybe the replica identity of a table got set to a unique index on oid?
> > Or something else involving system columns?  (If replication is
> > otherwise working, the I suppose there's a separate publication that's
> > having the error; the first thing to isolate would be to see what tables
> > are involved in that publication).  
> 
> Looking through the code, the bms_add_member() call in
> logicalrep_read_attrs() does not use the usual
> FirstLowInvalidHeapAttributeNumber offset, so that seems like a possible
> problem.
> 
> However, I can't quite reproduce this.  There are various other checks
> that prevent this scenario, but it's plausible that with a bit of
> whacking around you could hit this error message.

Here is a script to reproduce it under version 10, 11 and 12:


# env
PUB=/tmp/pub
SUB=/tmp/sub
unset PGPORT PGHOST PGDATABASE PGDATA
export PGUSER=postgres

# cleanup
kill %1
pg_ctl -w -s -D "$PUB" -m immediate stop; echo $?
pg_ctl -w -s -D "$SUB" -m immediate stop; echo $?
rm -r "$PUB" "$SUB"

# cluster
initdb -U postgres -N "$PUB" &>/dev/null; echo $?
initdb -U postgres -N "$SUB" &>/dev/null; echo $?
echo "wal_level=logical" >> "$PUB"/postgresql.conf
echo "port=5433" >> "$SUB"/postgresql.conf
pg_ctl -w -s -D $PUB -l "$PUB"-"$(date +%FT%T)".log start; echo $?
pg_ctl -w -s -D $SUB -l "$SUB"-"$(date +%FT%T)".log start; echo $?
pgbench -p 5432 -qi 
pg_dump -p 5432 -s | psql -qXp 5433

# fake activity
pgbench -p 5432 -T 300 -c 2 &

# replication setup
psql -p 5432 -Xc "CREATE PUBLICATION prov FOR ALL TABLES"
psql -p 5433 -Xc "CREATE SUBSCRIPTION sub
  CONNECTION 'port=5432'
  PUBLICATION prov"

# wait for the streaming
unset V;
while [ "$V" != "streaming" ]; do sleep 1
V=$(psql -AtXc "SELECT 'streaming'
FROM pg_stat_replication WHERE state='streaming'")
done

# trigger the error message
psql -p 5433 -Xc "ALTER SUBSCRIPTION sub DISABLE"
psql -p 5433 -Xc "ALTER TABLE pgbench_history ADD id SERIAL PRIMARY KEY"
psql -p 5432 -Xc "ALTER TABLE pgbench_history ADD id SERIAL PRIMARY KEY"
psql -p 5433 -Xc "ALTER SUBSCRIPTION sub ENABLE"


Regards,




Re: syntax error with v12

2019-10-10 Thread Félix GERZAGUET
Hi,

On Thu, Oct 10, 2019 at 2:51 PM Ivan Kabaivanov 
wrote:

> psql:/usr/share/ovirt-engine-dwh/dbscripts/create_tables.sql:7: ERROR:
>  syntax error at or near "OIDS"
> LINE 7: ) WITH OIDS;
>

Indeed, OID were removed from 12 release as described in:

https://www.postgresql.org/docs/release/12.0/

You must adjust the statement.

--
Félix


Re: syntax error with v12

2019-10-10 Thread Josef Šimánek
Hello, I think this change is mentioned in release notes
https://www.postgresql.org/docs/release/12.0/.

čt 10. 10. 2019 v 14:51 odesílatel Ivan Kabaivanov <
ivan.kabaiva...@gmail.com> napsal:

> Hi, everyone.
>
> First time poster here.
>
> This snippet (from ovirt-engine-dwh) used to work fine with
> postgresql-11.x, but emits a syntax error with postgresql-12.0:
>
> CREATE TABLE history_configuration
> (
> var_name VARCHAR(50) NOT NULL,
> var_value VARCHAR(255),
> var_datetime TIMESTAMP WITH TIME ZONE,
> CONSTRAINT PK_history_configuration PRIMARY KEY(var_name)
> ) WITH OIDS;
>
> psql:/usr/share/ovirt-engine-dwh/dbscripts/create_tables.sql:7: ERROR:
>  syntax error at or near "OIDS"
> LINE 7: ) WITH OIDS;
>
> Is what I provided sufficient to troubleshoot the issue?  Let me know what
> else must be necessary for me to add.
>
> Thanks,
> IvanK.
>


syntax error with v12

2019-10-10 Thread Ivan Kabaivanov
Hi, everyone.

First time poster here.

This snippet (from ovirt-engine-dwh) used to work fine with
postgresql-11.x, but emits a syntax error with postgresql-12.0:

CREATE TABLE history_configuration
(
var_name VARCHAR(50) NOT NULL,
var_value VARCHAR(255),
var_datetime TIMESTAMP WITH TIME ZONE,
CONSTRAINT PK_history_configuration PRIMARY KEY(var_name)
) WITH OIDS;

psql:/usr/share/ovirt-engine-dwh/dbscripts/create_tables.sql:7: ERROR:
 syntax error at or near "OIDS"
LINE 7: ) WITH OIDS;

Is what I provided sufficient to troubleshoot the issue?  Let me know what
else must be necessary for me to add.

Thanks,
IvanK.


Re: Case Insensitive Comparison with Postgres 12

2019-10-10 Thread Igal @ Lucee.org

On 10/9/2019 12:34 AM, Laurenz Albe wrote:

Igal Sapir wrote:

I am trying to test a simple case insensitive comparison.  Most likely the
collation that I chose is wrong, but I'm not sure how to choose the correct
one (for English/US?).  Here is my snippet:

create collation case_insensitive(
 provider=icu, locale='en-US-x-icu', deterministic=false
);
select 'Abc' = 'abc' collate case_insensitive;

I expected true but am getting false.

Any thoughts?

Yes, the LOCALE is wrong. Use

create collation case_insensitive (
provider=icu, locale='en-US-u-ks-level2', deterministic=false
);

The name of the locale defines it.

My blog post can give a simple introduction:
https://www.cybertec-postgresql.com/en/icu-collations-against-glibc-2-28-data-corruption/


Thank you all for replying.  I tried to use the locale suggested by both 
Laurenz and Pavel, en-US-u-ks-level2, but I'm still getting false for a 
simple comparison of 'Abc' = 'abc'.  I tried the locale both as a 
'string' and as an "identifier":


> select version();

version |
---|
PostgreSQL 12.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-39), 64-bit|


> drop collation if exists case_insensitive;

> create collation case_insensitive (
   provider=icu, locale="en-US-u-ks-level2", deterministic=false
);

> select 'Abc' = 'abc' collate case_insensitive as is_equal;

is_equal|
|
false   |

What am I doing wrong here?

Thanks,

Igal






Re: Minimum privilege for Backup and replication

2019-10-10 Thread Luca Ferrari
On Thu, Oct 10, 2019 at 1:26 PM Timmy Siu  wrote:
> Now, my question is -
> What is the Minimum Privilege of a pgsql Backup or Replication user?

It is not clear what your question really is. If you are thinking
about a "database user with backup grant", there is no thing like that
in PostgreSQL. If you asking what you need for replication, your user
must have "with replication" flag (create role/alter role).

Hope this helps.
Luca




websearch_to_tsquery() and apostrophe inside double quotes

2019-10-10 Thread Alastair McKinley
Hi all,

I am a little confused about what is being generated by websearch_to_tsquery() 
in the case of an apostrophe inside double quotes.

Here is an example of searching for a name containing an apostrophe.

The following works as expected:

select to_tsvector('peter o''toole') @@ websearch_to_tsquery('peter o''toole');
 ?column?
--
 t
(1 row)


When the name is in double quotes, the search fails:

select to_tsvector('peter o''toole') @@ websearch_to_tsquery('"peter 
o''toole"');
 ?column?
--
 f
(1 row)

In the first case, websearch_to_tsquery() returns:

select websearch_to_tsquery('peter o''toole');
  websearch_to_tsquery

 'peter' & 'o' & 'tool'
(1 row)

which makes sense to me.

In the second case websearch_to_tsquery() returns something that I can't quite 
understand:

select websearch_to_tsquery('"peter o''toole"');
 websearch_to_tsquery
--
 'peter' <-> ( 'o' & 'tool' )
(1 row)

I am not quite sure what text this will actually match?

Best regards,

Alastair




Minimum privilege for Backup and replication

2019-10-10 Thread Timmy Siu

Dear All Postgresql Users,
I am new to pgsql. I have an installation on my Ubuntu 18.04.

Now, my question is -
What is the Minimum Privilege of a pgsql Backup or Replication user?

I know how to do it for mysql but not pgsql. I began switching to pgsql 
v11 from mysql v5.7 after learning about its many nice features. Mysql seems 
very old-fashioned. Oracle and IBM DB2 are just too $$$ for me.


Thanks for your opinions.

Timmy





SELECT d02name::bytea FROM ... && DBI::Pg

2019-10-10 Thread Matthias Apitz


Hello,

I can SELECT && print a column in hex with:

pos71=# select d02name::bytea from d02ben where d02bnr = '1048313' ;
...
 \x50c3a46461676f67697363686520486f6368736368756c65205765696e67617274656e2020 
...

but when I use the same in Perl DBI::Pg with:

...
$sth=$dbh->prepare( "select d02name::bytea from d02ben where d02bnr = 
'1048313'");

$sth->execute or die "exec error\n".${DBI::errstr}."\n";

while ( (@row = $sth->fetchrow_array) )  {
   print "$row[0]\n";
}

It prints the UTF-8 string and not the hex string:

./utf8-01.pl  
DBI is version 1.642, DBD::Pg is version 3.8.0
client_encoding=UTF8, server_encoding=UTF8
Pädagogische Hochschule Weingarten

I checked with strace that the PG server delivers to DBI:Pg the
hex string, so it must be converted somewhere (or in print ...) again to
UTF-8. See below.

Any ideas?

matthias


recvfrom(3, "T\0\0\0
\0\1d02name\0\0\0\0\0\0\0\0\0\0\21\377\377\377\377\377\377\0\0D\0\0\0\376\0\1\0\0\0\364\\x50c3a46461676f67697363686520486f6368736368756c65205765696e67617274656e2020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020C\0\0\0\rSELECT
1\0Z\0\0\0\5I", 16384, 0, NULL, NULL) = 308
write(1, "P\303\244dagogische Hochschule Weingarten 


-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
May, 9: Спаси́бо освободители! Thank you very much, Russian liberators!




Re: Is my lecturer wrong about PostgreSQL? I think he is!

2019-10-10 Thread Simon Riggs
On Wed, 9 Oct 2019 at 20:06, Pól Ua Laoínecháin  wrote:


> One of my courses is "Advanced Databases" - yummy I thought - it's not
> even compulsory for me but I just *_had_* to take this module. The
> lecturer is a bit of an Oracle fan-boy (ACE director no less...
> hmmm...) and I want(ed) - becoming less enthusiasic by the minute - to
> do my dissertation with him.


It's a good thing that you have the opportunity to do that course and to
have a lecturer with strong real-world experience.


> So, we're having a chat and I make plain
> my love of good 'ol PostgreSQL as my RDBMS of choice and he tells me
> that there are problems with random block corruption with PostgreSQL.
> I said "really" and before that conversation could go any further,
> another student came over and asked a question.
>

I think it's true that there have been reported problems with block
corruption with both Oracle and PostgreSQL. The main difference is that the
PostgreSQL project is open enough for people to see much of that on public
record.

Given the efforts made on resilience and recovery, such as PITR and block
checksums, those problems are pretty much solved, based upon a
statistically sufficient sample of the real world: 2ndQuadrant customers.
Some block-level problems do still recur - the recent fsync() problems were
reported by us and have been handled (not resolved), but they were and are
rare. There are still many issues of other kinds.

Many Oracle customers I have worked with years ago experienced block
corruptions and it was very common to hear the reply "talk to your disk
vendor". Those happened probably 20-30 years earlier, so in many cases have
now been forgotten. There isn't an easy way to go back and check for trends
on that.

Having said all of that, it's easy to point at some of these things and use
them as FUD - fear, uncertainty and doubt. No doubt unintentionally.

I'd go and learn more from your lecturer. Your disagreement has made you
think, so he helped you. Learning from your own mistakes takes longer.

-- 
Simon Riggs
http://www.2ndQuadrant.com/

PostgreSQL Solutions for the Enterprise


Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh
We had another crash today, and it appears to be the same:

#0  slot_deform_heap_tuple (natts=26, offp=0x5598eba0b968,
    tuple=, slot=0x5598eba0b920)
    at ./build/../src/backend/executor/execTuples.c:895

-- 
Andreas Joseph Krogh

Re: Is my lecturer wrong about PostgreSQL? I think he is!

2019-10-10 Thread Geoff Winkless
On Thu, 10 Oct 2019 at 09:31, Wim Bertels  wrote:
> Sometimes people are really a fan of a certain product,
> sometimes in combination with the thought that all other products are
> bad; I don't know if this is the case. You could compare it with
> soccer: a Barcelona fan will never become a Real Madrid fan and vice
> versa; so "rational" decisions (at first, based on some reasoning) tend
> to get loaded with emotional feelings.

Yeah, this. Bear in mind it's possible that having made a sweeping
statement that he cannot back up and that he secretly knows was
unfounded, your lecturer will be defensive and uncomfortable. Chances
are after your conversation he will have gone away and done the same
research you did and may well have modified his opinion but will be
too embarrassed to admit that to you.

Geoff




Re: Version 10.7 of postgres

2019-10-10 Thread Magnus Hagander
On Thu, Oct 10, 2019 at 6:52 AM Shankar Bhaskaran 
wrote:

> Hi ,
>
> We are planning to use the postgres 10.7 version as that is the latest
> version supported on Aurora DB. Since we have an on-premise installation
> also, I was trying to download the same version of postgres for windows
> and linux.
> Unfortunately that version is not available on the download site or on
> the ftp site. Is there any reason why an installer for the 10.7 version
> for windows and linux is not added to the archives?
>
>
Note that Aurora is not PostgreSQL, so you cannot compare versions
straight. If you want an on premise installation that's identical to the
one in the cloud, you should use PostgreSQL on both sides.

You should be using 10.10, as it has important bug fixes including security
fixes. Whether Aurora contains those fixes or not you need to check with
the vendor, but if not you should be complaining to that vendor that they
are missing important fixes, rather than try to install a version with
known bugs and security issues.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Is my lecturer wrong about PostgreSQL? I think he is!

2019-10-10 Thread Wim Bertels


> 
> Now, I have four questions:
> 
> 1) Is my lecturer full of it or does he really have a point?

Hello Pól,

I don't know; I am also teaching databases, and
personally I have never experienced this.

Sometimes people are really a fan of a certain product,
sometimes in combination with the thought that all other products are
bad; I don't know if this is the case. You could compare it with
soccer: a Barcelona fan will never become a Real Madrid fan and vice
versa; so "rational" decisions (at first, based on some reasoning) tend
to get loaded with emotional feelings.

In these kinds of discussions both parties should have their say,
not just one (in this case you; we haven't heard the teacher).

There is no such thing as a perfect DBMS: not Postgres, not Oracle,
not ...

> 
> 2) The actual concrete acknowledged problem with fsync that affected
> PostgreSQL - why didn't it affect Oracle? Or MySQL? Or did it but it
> was so rare that it never became apparent - it wasn't that obvious
> with PostgreSQL either - one of those rare and intermittent problems?

You can set fsync to off (not the default) for more performance,
but it comes at the cost of the D in ACID: you no longer have durability.
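For concreteness, a sketch of how that trade-off is toggled (illustration only: disabling fsync risks unrecoverable corruption after a crash, so it is only sensible on throwaway test instances):

```sql
ALTER SYSTEM SET fsync = off;  -- requires superuser
SELECT pg_reload_conf();       -- fsync is reloadable, no restart needed
SHOW fsync;                    -- confirm the active setting
```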

> 
> 3) Were there ever any problems with BSD?

As far as I understand, BSD and its variants are very solid,
so good for server use, though not for desktop.

> 
> 4) What is the OS of choice for *_serious_* PostgreSQL installations?

It depends.

If transparency is important to you, choose an open-source OS.
If time is important to you, choose what you already know.
If you are willing to spend time on it, I would personally choose a
Linux, BSD or Solaris based OS.

If it helps: in my case, I had to make a decision about the DBMS for the
classes as well. The reasons I chose Postgres are, in a nutshell:
* free
* open
* runs well on servers that are comparable with an average desktop PC
or better
* close to the ISO SQL standard (the reason why I didn't choose MySQL/now
MariaDB)
* seems to have a future
* within all these, Postgres seems to have implemented the most features
* after using it for a while (18 years now), I should now add: a great
community

some links:
https://www.top500.org/
https://en.wikipedia.org/wiki/DB-Engines_ranking
https://db-engines.com/en/ranking

> 
> I hope that I have been clear, but should anyone require any
> clarification, please don't hesitate to ask me.
> 
> Tia and rgs,
> 
> Pól...
> 
> 
-- 
mvg,
Wim Bertels
--
Lector
UC Leuven-Limburg
--
The human race has one really effective weapon, and that is laughter.
-- Mark Twain




SOLVED Re: Recovering disk space

2019-10-10 Thread stan
OK, after a period of time, I recovered my disk space.

On Thu, Oct 10, 2019 at 03:57:00AM -0400, stan wrote:
> Sorry to repeat this, I am still in trouble on it.
> 
> I made the mistake of trying to import the mouse genome database in an
> effort to do some testing with very large data sets.
> 
> This import failed, probably because I ran out of disk space.  I deleted the
> dump file, and a few other things, and now I have a bit of disk space.
> Looks like I see the mad schema both in the postgres db, and in my own
> stan db. I did a drop schema cascade on both as the db superuser, and ran
> vacuum full on both. This command came back almost instantly, which
> surprises me.
> 
> However, my 50G disk is still 96% full. How can I recover the disk space I
> seem to have used up doing this?
> 
> 
> -- 
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
>   -- Benjamin Franklin
> 
> 

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Recovering disk space

2019-10-10 Thread stan
Sorry to repeat this, I am still in trouble on it.

I made the mistake of trying to import the mouse genome database in an
effort to do some testing with very large data sets.

This import failed, probably because I ran out of disk space.  I deleted the
dump file, and a few other things, and now I have a bit of disk space.
Looks like I see the mad schema both in the postgres db, and in my own
stan db. I did a drop schema cascade on both as the db superuser, and ran
vacuum full on both. This command came back almost instantly, which
surprises me.

However, my 50G disk is still 96% full. How can I recover the disk space I
seem to have used up doing this?
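When space does not come back after dropping objects, a first step is to ask the server where the space actually is. A hedged sketch using the standard size functions (assumes a running cluster and sufficient privileges):

```sql
-- Per-database totals:
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Ten largest tables (including indexes and TOAST) in the current database:
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

Note also that VACUUM FULL rewrites each table, so it temporarily needs roughly as much free space as the table it is compacting.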


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Segmentation fault with PG-12

2019-10-10 Thread Andres Freund
On 2019-10-10 09:05:06 +0200, Andreas Joseph Krogh wrote:
> (Tom: This mail is only viewable as text/html, so if you're reading the 
> text/plain version it will seem "hashed")

I'm totally not ok with that btw.




Re: Segmentation fault with PG-12

2019-10-10 Thread Andreas Joseph Krogh
On Thursday, 10 October 2019 at 07:25:26, Andres Freund <
and...@anarazel.de > wrote:
> On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
> > Andreas Joseph Krogh  writes:
> > > Attached is output from "bt full". Is this helpful?
> >
> > Well, it shows that the failure is occurring while trying to evaluate
> > a variable in a trigger's WHEN clause during
> > "UPDATE origo_email_delivery SET folder_id=$1, created=$2\nWHERE entity_id IN ($3)\nRETURNING entity_id"
> > And I'd bet that the root cause is something to do with Andres' tuple slot
> > work. But (at least to my eye) it's not apparent exactly what's wrong.
>
> It looks like this could "just" be another report of #16036, which was
> already fixed in:
>
> commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
> Author: Andres Freund 
> Date: 2019-10-04 11:59:34 -0700
>
>     Fix crash caused by EPQ happening with a before update trigger present.

(Tom: This mail is only viewable as text/html, so if you're reading the
text/plain version it will seem "hashed")

Aha, that would be 60e97d63e5d19098e11fa32431a20eea820e2ae9 in REL_12_STABLE.
We'll build and run HEAD of REL_12_STABLE, and report back.

> > This doesn't seem to correlate with your original report, btw,
> > as that claimed the crash was during COMMIT.
>
> That however, would be confusing, unless there's some deferred trigger
> that causes another update, which then fires a before update trigger
> causing the problem.
>
> Greetings,
>
> Andres Freund

We have a deferred trigger which updates origo_email_delivery:

CREATE OR REPLACE FUNCTION origo_index_email_props_tf() RETURNS TRIGGER AS $$
DECLARE
    v_prop origo_email_message_property;
BEGIN
    v_prop := NEW;
    UPDATE origo_email_delivery
       SET is_seen      = v_prop.is_seen,
           followup_id  = v_prop.followup_id,
           is_replied   = v_prop.is_replied,
           is_forwarded = v_prop.is_forwarded,
           is_draft     = v_prop.is_draft,
           is_done      = v_prop.is_done,
           is_flagged   = v_prop.is_flagged,
           modseq       = greatest(modseq, v_prop.modseq)
     WHERE message_id = v_prop.message_id
       AND owner_id   = v_prop.owner_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER origo_index_email_props_t
    AFTER INSERT OR UPDATE ON origo_email_message_property
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE origo_index_email_props_tf();

.. and that UPDATE then fires the following UPDATE-trigger:

CREATE TRIGGER origo_email_delivery_update_t
    BEFORE UPDATE ON origo_email_delivery
    FOR EACH ROW
    WHEN (OLD.folder_id <> NEW.folder_id OR NEW.is_deleted <> OLD.is_deleted)
    EXECUTE PROCEDURE origo_email_delivery_update_tf();

Maybe that will trigger the bug.

Thanks.

-- 
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com
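The COMMIT-time crash is consistent with how DEFERRABLE INITIALLY DEFERRED constraint triggers behave: their per-row work is queued during the transaction and only runs when it commits. A minimal, self-contained sketch with hypothetical names (t, t_notice), unrelated to the schema in this thread:

```sql
CREATE TABLE t (id int PRIMARY KEY, n int);

CREATE FUNCTION t_notice() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'deferred trigger fired for id=%', NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER t_deferred
    AFTER INSERT ON t
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE t_notice();

BEGIN;
INSERT INTO t VALUES (1, 10);  -- nothing fires yet
COMMIT;                        -- the NOTICE is raised here, at commit time
```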
  

Re: psql \copy hanging

2019-10-10 Thread Arnaud L.

On 08/10/2019 at 16:59, Arnaud L. wrote:

Yes, I already did that, and unfortunately there's no activity.
There is absolutely no activity in procmon using psql.exe as a filter.
process-xp is not giving more information; processor usage is 0%.


My apologies, I obviously did something wrong last time I checked this 
process with process-xp and procmon.

Now I see that there IS activity on the problematic process !
100% CPU and some (but not much) disk activity.

I think I'll try to let it run for some time to let auto_explain do 
its work. If I kill the backend now, I won't see anything, I believe.


I don't know if it'll ever complete this query though; it usually takes 
~100 seconds, and here it has already been running for 9 hours.


Regards
--
Arnaud