Re: [GENERAL] Query Using Massive Temp Space

2017-11-20 Thread legrand legrand
Hi, 
there are many Hash and Merge Joins that may generate a lot of temp space;
it could also be a problem of a forgotten column in a join condition.

Could you also provide the index definitions (PK, UK and others)
along with the EXPLAIN (ANALYZE, BUFFERS) output?

(You can limit the scope of the query, to help it finish, by limiting the
number of rows taken from candidates:

...
FROM match_candidate_address WHERE account_id NOT IN (:exclude_ids)
LIMIT 100
)
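
For instance, a minimal sketch (the table name and the :exclude_ids
placeholder are taken from the query in this thread; the placeholder is
assumed to be substituted by the client):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM match_candidate_address
    WHERE account_id NOT IN (:exclude_ids)
    LIMIT 100;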

Regards
PAscal








Re: [GENERAL] Query Using Massive Temp Space

2017-11-20 Thread Semler Miloslav
Hello,
I have one question… Why are you using such a huge number of grouping columns?
Is there some reason for it? It is definitely not a fast method.  I would first
build a named query grouped only by the ids (account_id, candidate_id,
parent_id) and then, if possible, left join the candidates table (to get the
rest of the requested columns) to that named query on those ids.
You would also avoid hashing these nine columns, saving a certain amount of
memory.
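
Something along these lines, as a sketch (the join columns and the extra
candidates columns are assumptions based on the description above):

    WITH grouped AS (
        SELECT account_id, candidate_id, parent_id, count(*) AS cnt
        FROM match_candidate_address
        GROUP BY account_id, candidate_id, parent_id
    )
    SELECT g.*, c.first_name, c.last_name   -- remaining columns from the join
    FROM grouped g
    LEFT JOIN candidates c
           ON c.account_id = g.account_id
          AND c.candidate_id = g.candidate_id
          AND c.parent_id = g.parent_id;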

Miloslav

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Cory Tucker
Sent: Monday, November 20, 2017 7:17 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Query Using Massive Temp Space

Hello,

I have a query that is using a tremendous amount of temp disk space given the
overall size of the dataset.  I'd love for someone to try to explain what PG is
doing and why it's using so much space for the query.

First off, the system is PG 9.6 on Ubuntu with 4 cores and 28 GB of RAM.  The 
query in question is a fairly large join of several tables (6) including some 
aggregations.  The overall dataset size of the 6 tables in question is about 
20GB and the largest table is about 15M rows.  The query is essentially a dump 
of most of the data from these tables joined together to be used in another 
system.

When the query runs it begins to use an aggressive amount of temp space on the 
volume over the course of many hours, until it reaches about 95% capacity and 
then tapers off.  Never completes though.  The temp space it uses is around 
1.5TB out of a 2TB volume.  Again, the total size of the relations in question 
is only 20GB.

Can anyone explain how the query could possibly use so much temp space?

Query and query plan are attached.  Please let me know if any more info is 
needed.

thanks
--Cory


Re: [GENERAL] unnest-like pg_stats.most_common_values and pg_stats.most_common_freqs

2017-11-20 Thread Luca Ferrari
On Fri, Nov 17, 2017 at 9:41 PM, Justin Pryzby  wrote:
> I think you want something like this ?
>
> postgres=# SELECT schemaname, tablename, attname, 
> unnest(histogram_bounds::text::text[]), histogram_bounds FROM pg_stats LIMIT 
> 9;
>  pg_catalog | pg_pltemplate | tmplname| plperl  | 
> {plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}

I don't think that is what I'm looking for; I would like something like:

 select unnest( histogram_bounds::text::text[] ), unnest(
most_common_freqs ) from pg_stats

but with correlation between the two array indexes. Is it something
achievable in SQL? Or should I use a plpgsql loop with an index?

Luca
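
For reference, one way to keep the two arrays aligned, assuming PostgreSQL
9.4 or later, is the multi-array form of unnest() in the FROM clause, which
expands the arrays in parallel (pairing most_common_vals with
most_common_freqs here, since those two are positionally correlated):

    SELECT s.tablename, s.attname, mc.val, mc.freq
    FROM pg_stats s,
         unnest(s.most_common_vals::text::text[], s.most_common_freqs)
           AS mc(val, freq)
    LIMIT 9;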




Re: [GENERAL] Query Using Massive Temp Space

2017-11-19 Thread Laurenz Albe
Cory Tucker wrote:
> I have a query that is using a tremendous amount of temp disk space given the 
> overall size of the dataset.
> I'd love for someone to try to explain what PG is doing and why it's using so
> much space for the query.

It could be a sort or a hash operation.

To determine what is going on, EXPLAIN (ANALYZE, BUFFERS) output
would be useful.

Yours,
Laurenz Albe




Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread Tom Lane
brahmesr  writes:
> SELECT ARRAY (SELECT ROW (COL1, COL2, COUNT(*) AS txn_cnt)::
> ap.validate_crtr_line_items$inv_lines_rt FROM Distinct_Records GROUP BY
> COL1, COL2  HAVING COUNT(*) > 1) INTO L_INV_LINES;

> ERROR:  syntax error at or near "AS"
> LINE 73: COL1,COL2, COUNT(*) AS txn_cnt...

> Why is "AS" throwing an error?

"AS" is part of SELECT-list syntax, not ROW(...) syntax.

Even if it were allowed in ROW(), it would be totally pointless in
this context, because when you cast the ROW() result to the
ap.validate_crtr_line_items$inv_lines_rt composite type, that type
is what determines the column names.
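
That is, a sketch of the corrected statement (identifiers as given in the
thread; the alias is simply dropped):

    SELECT ARRAY(SELECT ROW(col1, col2, count(*))::ap.validate_crtr_line_items$inv_lines_rt
                 FROM distinct_records
                 GROUP BY col1, col2
                 HAVING count(*) > 1)
      INTO l_inv_lines;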

regards, tom lane




Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread brahmesr
I already defined the composite type as
"validate_crtr_line_items$inv_lines_rt" with the selected
columns (COL1, COL2, COL3).

DECLARE block:

INV_LINES_T  validate_crtr_line_items$inv_lines_rt ARRAY;
L_INV_LINES INV_LINES_T%TYPE;
L_INV_LINES$temporary_record ap.validate_crtr_line_items$inv_lines_rt;

Collecting the records into L_INV_LINES:

SELECT ARRAY (SELECT ROW (COL1, COL2, COUNT(*) AS txn_cnt)::
ap.validate_crtr_line_items$inv_lines_rt FROM Distinct_Records GROUP BY
COL1, COL2  HAVING COUNT(*) > 1) INTO L_INV_LINES;

ERROR:  syntax error at or near "AS"
LINE 73: COL1,COL2, COUNT(*) AS txn_cnt...

Why is "AS" throwing an error?







Re: [GENERAL] parallel query in 9.6.6 + oracle_fdw -> ERROR: invalid cache ID: 41

2017-11-19 Thread Chris Mair

Hi,


> So I'd call this an oracle_fdw bug.  It needs to postpone what it's
> doing here to the first normal FDW function call in a session.


Thanks a lot for looking so quickly into this!

I've opened an issue with oracle_fdw:

https://github.com/laurenz/oracle_fdw/issues/215

Thanks,
Chris.





Re: [GENERAL] parallel query in 9.6.6 + oracle_fdw -> ERROR: invalid cache ID: 41

2017-11-19 Thread Tom Lane
Chris Mair  writes:
> Whenever a session has performed a query on a foreign table, any subsequent
> query on a local table big enough to use the parallel query feature exits with
> an error:
>  ERROR:  invalid cache ID: 41
>  CONTEXT:  parallel worker

Hm, syscache 41 is PROCNAMEARGSNSP in 9.6, if I counted right.

> (gdb) bt
> #0  0x7f16a0f4d1f7 in raise () from /lib64/libc.so.6
> #1  0x7f16a0f4e8e8 in abort () from /lib64/libc.so.6
> #2  0x008094b4 in errfinish (dummy=dummy@entry=0) at elog.c:557
> #3  0x0080aea2 in elog_finish (elevel=elevel@entry=22, 
> fmt=fmt@entry=0x9d9965 "invalid cache ID: %d") at elog.c:1378
> #4  0x007ffd82 in SearchSysCacheList (cacheId=cacheId@entry=41, 
> nkeys=nkeys@entry=2, key1=key1@entry=139734905138463, key2=, 
> key3=key3@entry=0, key4=key4@entry=0) at syscache.c:1210
> #5  0x7f169161a59f in _PG_init () at oracle_fdw.c:709
> #6  0x0080c476 in internal_load_library (libname=, 
> libname@entry=0x7f16a1bfdde8 ) at 
> dfmgr.c:276
> #7  0x0080c708 in RestoreLibraryState (start_address=0x7f16a1bfdde8 
> ) at dfmgr.c:741
> #8  0x004e72cf in ParallelWorkerMain (main_arg=) at 
> parallel.c:1069

Apparently, oracle_fdw is trying to do a procedure lookup in its _PG_init
function.  This is a horrible idea: it assumes that _PG_init is invoked
inside a transaction, which is wrong if the library is preloaded, for
example.  (I'd bet that adding oracle_fdw to shared_preload_libraries
would fail badly, though perhaps not with this exact error message.)

So I'd call this an oracle_fdw bug.  It needs to postpone what it's
doing here to the first normal FDW function call in a session.

regards, tom lane




Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread Tom Lane
Pavel Stehule  writes:
> 2017-11-19 18:57 GMT+01:00 Brahmam Eswar :
>> How to collect multiple columns into an array which is a composite data
>> type of all selected columns?

> SELECT ARRAY(SELECT ROW(col1, col2, ...) INTO

You probably need an explicit cast to the rowtype.  That is,

declare myarray rowtypename[];
...
select array(select row(col1, ...)::rowtypename from ...) into myarray;

regards, tom lane




Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread Pavel Stehule
2017-11-19 18:57 GMT+01:00 Brahmam Eswar :

> Hi ,
>
> System is migrating from Oracle to PostgreSQL.
> Oracle is providing BULK COLLECT INTO function to collect the multiple
> records from table .
>
>  Select COL1,COL2 ,COL3  BULK COLLECT INTO LINES from Distinct_Records.
>
> LINES IS TABLE OF TABLE1  (Defined lines as IS TABLE OF type).
>
> In Postgres:
>
>INV_LINES_T validate_crtr_line_items$inv_lines_rt ARRAY;
> L_INV_LINES INV_LINES_T%TYPE;
> L_INV_LINES$temporary_record ap.validate_crtr_line_items$inv_lines_rt;
>
>  I'm trying to collect the records in L_INV_LINES
>
> SELECT ARRAY (SELECT COL1,COL2,COL3  FROM Distinct_Records) INTO
> L_INV_LINES;
>
> It seems selecting multiple columns into an array doesn't work in
> PL/pgSQL.
>
> How to collect multiple columns into an array which is a composite data
> type of all selected columns?
>

SELECT ARRAY(SELECT ROW(col1, col2, ...) INTO

>
> --
> Thanks & Regards,
> Brahmeswara Rao J.
>


Re: [GENERAL] mild modification to pg_dump

2017-11-18 Thread Matt Zagrabelny
On Fri, Nov 17, 2017 at 3:58 PM, marcelo  wrote:

> Again: knowing of .pgpass (thank you Scott) this is what I will do.
>
>
Just in case you might not know: the permissions on the .pgpass file must
disallow any access to group or others, or libpq will ignore it. For instance:

chmod 0600 .pgpass
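
Each line of the file has the form below (the values here are placeholders):

    hostname:port:database:username:password
    # e.g.
    db.example.com:5432:mydb:backup_user:s3cret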

-m


Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Andres Freund
On 2017-11-17 18:56:45 -0300, marcelo wrote:
> Truly, I'm caught up in a very big app, so I have no time to read all
> the docs.

People on this list also have jobs.

- Andres




Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo

Again: knowing of .pgpass (thank you Scott) this is what I will do.

On 17/11/17 17:49, Ron Johnson wrote:

On 11/17/2017 02:23 PM, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump
code, I need to develop a daemon which can receive a TCP message
(from a privileged app) containing some elements: the database to
dump, the user under which to do that, and his password. (My apps are
using that same data, of course, encrypted to the common users.)



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.




Specifically, do you mean to write a simple daemon which forks pg_dump 
at the appropriate time?









Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo

Thank you, Scott.
That's happening to me because of incomplete reading of the docs.
Truly, I'm caught up in a very big app, so I have no time to read all the
docs.



On 17/11/17 18:31, Scott Mead wrote:


On Fri, Nov 17, 2017 at 4:06 PM, marcelo wrote:


I need to "emulate" the pg_dump code because of the password prompt.
Years ago I wrote a program (for the QNX environment) that caught
some prompt and emulated the standard input. I don't want to do
that again.


pg_dump can use an environment variable "PGPASSWORD" upon execution
(actually, all libpq programs can).  You could have a wrapper that
sets the environment variable and then executes pg_dump; this would
get you around that prompt.  Similarly, you could use the .pgpass file.


https://www.postgresql.org/docs/9.5/static/libpq-envars.html
https://www.postgresql.org/docs/9.5/static/libpq-pgpass.html

--Scott



On 17/11/17 17:23, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on
pg_dump code, I need to develop a daemon which can receive
a TCP message (from a privileged app) containing some
elements: the database to dump, the user under which to do
that, and his password. (My apps are using that same data,
of course, encrypted to the common users.)



I would just fork pg_dump to do the actual dump rather than
try and incorporate its source code into your app.










--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com




Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Scott Mead
On Fri, Nov 17, 2017 at 4:06 PM, marcelo  wrote:

> I need to "emulate" the pg_dump code because of the password prompt. Years
> ago I wrote a program (for the QNX environment) that caught some prompt
> and emulated the standard input. I don't want to do that again.


pg_dump can use an environment variable "PGPASSWORD" upon execution
(actually, all libpq programs can).  You could have a wrapper that sets the
environment variable and then executes pg_dump; this would get you around
that prompt.  Similarly, you could use the .pgpass file.

https://www.postgresql.org/docs/9.5/static/libpq-envars.html
https://www.postgresql.org/docs/9.5/static/libpq-pgpass.html
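
For example, a minimal sketch of such a wrapper (database, user, and paths
are placeholders):

    #!/bin/sh
    # PGPASSWORD is honored by all libpq-based tools, pg_dump included.
    PGPASSWORD="$DB_PASSWORD" \
        pg_dump -h localhost -U backup_user -Fc mydb > /backups/mydb.dump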

--Scott



>
>
> On 17/11/17 17:23, John R Pierce wrote:
>
>> On 11/17/2017 12:19 PM, marcelo wrote:
>>
>>> Sorry, I was not exact.
>>> I don't need nor like to change pg_dump. Rather, based on pg_dump code,
>>> I need to develop a daemon which can receive a TCP message (from a
>>> privileged app) containing some elements: the database to dump, the user
>>> under which to do that, and his password. (My apps are using that same
>>> data, of course, encrypted to the common users.)
>>>
>>
>>
>> I would just fork pg_dump to do the actual dump rather than try and
>> incorporate its source code into your app.
>>
>>
>>
>
>



-- 
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com


Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Ron Johnson


What about the pgpass file?

https://www.postgresql.org/docs/9.2/static/libpq-pgpass.html

On 11/17/2017 03:06 PM, marcelo wrote:
I need to "emulate" the pg_dump code because of the password prompt. Years
ago I wrote a program (for the QNX environment) that caught some prompt
and emulated the standard input. I don't want to do that again.


On 17/11/17 17:23, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump code,
I need to develop a daemon which can receive a TCP message (from a
privileged app) containing some elements: the database to dump, the user
under which to do that, and his password. (My apps are using that same
data, of course, encrypted to the common users.)



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.







--
World Peace Through Nuclear Pacification





Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo

I will give expect a try, but with the source code embedded in my daemon.

On 17/11/17 17:49, Ron Johnson wrote:

On 11/17/2017 02:23 PM, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump
code, I need to develop a daemon which can receive a TCP message
(from a privileged app) containing some elements: the database to
dump, the user under which to do that, and his password. (My apps are
using that same data, of course, encrypted to the common users.)



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.




Specifically, do you mean to write a simple daemon which forks pg_dump 
at the appropriate time?









Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Vick Khera
pg_dump is a libpq client, and thus will read the environment for a
variable with the password. No need to emulate any command-prompt tty
operations.

On Fri, Nov 17, 2017 at 4:06 PM, marcelo  wrote:

> I need to "emulate" the pg_dump code because of the password prompt. Years
> ago I wrote a program (for the QNX environment) that caught some prompt
> and emulated the standard input. I don't want to do that again.
>
> On 17/11/17 17:23, John R Pierce wrote:
>
>> On 11/17/2017 12:19 PM, marcelo wrote:
>>
>>> Sorry, I was not exact.
>>> I don't need nor like to change pg_dump. Rather, based on pg_dump code,
>>> I need to develop a daemon which can receive a TCP message (from a
>>> privileged app) containing some elements: the database to dump, the user
>>> under which to do that, and his password. (My apps are using that same
>>> data, of course, encrypted to the common users.)
>>>
>>
>>
>> I would just fork pg_dump to do the actual dump rather than try and
>> incorporate its source code into your app.
>>
>>
>>


Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo
I need to "emulate" the pg_dump code because the password prompt. Years 
ago I write a program (for the QnX environment) that catched some prompt 
and emulates the standard input. I don't like to do that again.


On 17/11/17 17:23, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump
code, I need to develop a daemon which can receive a TCP message
(from a privileged app) containing some elements: the database to
dump, the user under which to do that, and his password. (My apps are
using that same data, of course, encrypted to the common users.)



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.









Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Ron Johnson

On 11/17/2017 02:23 PM, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump code, I
need to develop a daemon which can receive a TCP message (from a
privileged app) containing some elements: the database to dump, the user
under which to do that, and his password. (My apps are using that same
data, of course, encrypted to the common users.)



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.




Specifically, do you mean to write a simple daemon which forks pg_dump at 
the appropriate time?



--
World Peace Through Nuclear Pacification





Re: [GENERAL] unnest-like pg_stats.most_common_values and pg_stats.most_common_freqs

2017-11-17 Thread Justin Pryzby
On Fri, Nov 17, 2017 at 09:32:23PM +0100, Luca Ferrari wrote:
> Hi all,
> maybe this is trivial, but I need a hint on a way to see a table form
> of the MCVs and MCFs out of pg_stats with a query. Is it possible to
> get a set of rows, each with a most common value in one column and the
> corresponding frequency in the other? (assuming I can cast the array
> of MCVs to the right type of array)

I think you want something like this ?

postgres=# SELECT schemaname, tablename, attname, 
unnest(histogram_bounds::text::text[]), histogram_bounds FROM pg_stats LIMIT 9;
 pg_catalog | pg_pltemplate | tmplname| plperl  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plperlu | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpgsql | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpython2u  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpython3u  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpythonu   | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| pltcl   | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| pltclu  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplhandler | plperl_call_handler | 
{plperl_call_handler,plperlu_call_handler,plpgsql_call_handler,plpython2_call_handler,plpython3_call_handler,plpython_call_handler,pltcl_cal
l_handler,pltclu_call_handler}

Justin




Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread John R Pierce

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump
code, I need to develop a daemon which can receive a TCP message (from
a privileged app) containing some elements: the database to dump, the
user under which to do that, and his password. (My apps are using that
same data, of course, encrypted to the common users.)



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump code,
I need to develop a daemon which can receive a TCP message (from a
privileged app) containing some elements: the database to dump, the user
under which to do that, and his password. (My apps are using that same
data, of course, encrypted to the common users.)

Thank you, Scott.

On 17/11/17 10:49, Scott Mead wrote:



On Fri, Nov 17, 2017 at 7:51 AM, marcelo wrote:


I would need to do a mild change to pg_dump, working against a 9.4
server on linux.
Which source tree do I need? Have gcc 4.9.2 in my Lubuntu
installation.
TIA


What exactly do you need to change?  Most likely, there is a quick and 
easy fix for whatever you're doing without modifying pg_dump itself.


That being said, if you really want to modify the source, download the 
source tarball: https://www.postgresql.org/ftp/source/











--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com




Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Scott Mead
On Fri, Nov 17, 2017 at 7:51 AM, marcelo  wrote:

> I would need to do a mild change to pg_dump, working against a 9.4 server
> on linux.
> Which source tree do I need? Have gcc 4.9.2 in my Lubuntu installation.
> TIA


What exactly do you need to change?  Most likely, there is a quick and easy
fix for whatever you're doing without modifying pg_dump itself.

That being said, if you really want to modify the source, download the
source tarball: https://www.postgresql.org/ftp/source/





-- 
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com


Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> It doesn't seem impossible to get into a situation where syslogger is
> the source of the OOM. Just enabling a lot of logging in a workload with
> many large query strings might do it.  So making it less likely to be
> killed might make the problem worse...

Hm, so that's another angle David didn't report on: is it possible that
his workload could have resulted in a very large volume of incomplete
in-progress log messages?

regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Andres Freund


On November 16, 2017 7:06:23 PM PST, Tom Lane  wrote:
>Andres Freund  writes:
>> On 2017-11-16 21:39:49 -0500, Tom Lane wrote:
>>> What might be worth thinking about is allowing the syslogger process to
>>> inherit the postmaster's OOM-kill-proofness setting, instead of dropping
>>> down to the same vulnerability as the postmaster's other child processes.
>
>> Hm. I'm a bit scared about that - it doesn't seem that inconceivable
>> that various backends log humongous multi-line messages, leading to
>> syslogger *actually* taking up a fair amount of memory. Note that we're
>> using plain stringinfos that ereport(ERROR) out of memory situations,
>> rather than failing more gracefully.
>
>True, but there's no hard limits on the postmaster's memory consumption
>either ...

Is there a credible scenario where it'd allocate many gigabytes of memory?

> and if the syslogger does get killed on such a basis, we
>have at the least lost a bunch of log output.  On the whole I think we'd be
>better off trying to prevent OOM kills on the syslogger.  (That doesn't
>preclude other mitigation measures.)

It doesn't seem impossible to get into a situation where syslogger is the 
source of the OOM. Just enabling a lot of logging in a workload with many large 
query strings might do it.  So making it less likely to be killed might make 
the problem worse...

Andres




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> On 2017-11-16 21:39:49 -0500, Tom Lane wrote:
>> What might be worth thinking about is allowing the syslogger process to
>> inherit the postmaster's OOM-kill-proofness setting, instead of dropping
>> down to the same vulnerability as the postmaster's other child processes.

> Hm. I'm a bit scared about that - it doesn't seem that inconceivable
> that various backends log humongous multi-line messages, leading to
> syslogger *actually* taking up a fair amount of memory. Note that we're
> using plain stringinfos that ereport(ERROR) out of memory situations,
> rather than failing more gracefully.

True, but there's no hard limits on the postmaster's memory consumption
either ... and if the syslogger does get killed on such a basis, we have
at the least lost a bunch of log output.  On the whole I think we'd be
better off trying to prevent OOM kills on the syslogger.  (That doesn't
preclude other mitigation measures.)

regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Andres Freund
On 2017-11-16 21:39:49 -0500, Tom Lane wrote:
> > We could work around a situation like that if we made postmaster use a
> > *different* pipe as stderr than the one we're handing to normal
> > backends. If postmaster created a new pipe and closed the read end
> > whenever forking a syslogger, we should get EPIPEs when writing after
> > syslogger died and could fall back to proper stderr or such.
> 
> I think that's nonsense, unfortunately.

Nice phrasing.


> If the postmaster had its own pipe, that would reduce the risk of this
> deadlock because only the postmaster would be filling that pipe, not
> the postmaster and all its other children --- but it wouldn't
> eliminate the risk.

The deadlock happens because postmaster is waiting for syslogger to accept
a message, and syslogger waits for postmaster to restart it. To resolve
the deadlock, postmaster needs to not wait for a dead syslogger, even if it
hasn't yet received & processed the SIGCLD - what other postmaster
children do or don't do doesn't matter for resolving that cycle. The
reason postmaster currently blocks on writing to the pipe, instead of
getting EPIPE, is that both ends of the pipe still exist. Which in turn
is the case because we need to be able to restart syslogger without
passing a new file descriptor to all subprocesses.  If postmaster instead
uses a different pipe to write to, it'll not block anymore, instead
getting EPIPE, and can continue towards starting a new syslogger.  So I
don't think the described deadlock exists if we were to apply my
proposed fix.


What this obviously would *not* guarantee is being able to start a new
syslogger, but it seems fairly impossible to guarantee that. So sure,
other processes would still block until syslogger has successfully
restarted - but it's a resolvable situation rather than a hard deadlock,
which the described situation appears to be.


Note that there are plenty of cases where you could run into this even
without being unable to fork new processes. You could, e.g., also run
into this while logging the exit of some other subprocess or such;
there are enough ereports in postmaster.


> I doubt the increase in reliability would be enough to justify the
> extra complexity and cost.

I'm doubtful about that too.


> What might be worth thinking about is allowing the syslogger process to
> inherit the postmaster's OOM-kill-proofness setting, instead of dropping
> down to the same vulnerability as the postmaster's other child processes.
> That presumes that this was an otherwise-unjustified OOM kill, which
> I'm not quite sure of ... but it does seem like a situation that could
> arise from time to time.

Hm. I'm a bit scared about that - it doesn't seem that inconceivable
that various backends log humongous multi-line messages, leading to
syslogger *actually* taking up a fair amount of memory. Note that we're
using plain stringinfos that ereport(ERROR) out of memory situations,
rather than failing more gracefully.

- Andres




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
>> David Pacheco  writes:
>>> I ran into what appears to be a deadlock in the logging subsystem.  It
>>> looks like what happened was that the syslogger process exited because it
>>> ran out of memory.  But before the postmaster got a chance to handle the
>>> SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
>>> That also failed, and the postmaster went to log a message about it, but
>>> it's blocked on the pipe that's normally connected to the syslogger,
>>> presumably because the pipe is full because the syslogger is gone and
>>> hasn't read from it.

>> Ugh.

> I'm somewhat inclined to say that one has to live with this if the
> system is so resource constrained that processes barely using memory
> get killed.

David's report isn't too clear: did the syslogger process actually run
out of memory and exit of its own volition after an ENOMEM, or did it get
killed by the dreaded OOM killer?  In either case, it's unclear whether
it was really using an excessive amount of memory.  We have not heard
reports suggesting a memory leak in the syslogger, but maybe there is
one under unusual circumstances?

I think you're probably right that the real cause here is the OOM
killer just randomly seizing on the syslogger as a victim process;
although since the syslogger disconnects from shared memory, it's
not very clear why it would score high on the OOM killer's metrics.
The whole thing is definitely odd.

> We could work around a situation like that if we made postmaster use a
> *different* pipe as stderr than the one we're handing to normal
> backends. If postmaster created a new pipe and closed the read end
> whenever forking a syslogger, we should get EPIPEs when writing after
> syslogger died and could fall back to proper stderr or such.

I think that's nonsense, unfortunately.  If the postmaster had its
own pipe, that would reduce the risk of this deadlock because only
the postmaster would be filling that pipe, not the postmaster and
all its other children --- but it wouldn't eliminate the risk.
I doubt the increase in reliability would be enough to justify the
extra complexity and cost.

What might be worth thinking about is allowing the syslogger process to
inherit the postmaster's OOM-kill-proofness setting, instead of dropping
down to the same vulnerability as the postmaster's other child processes.
That presumes that this was an otherwise-unjustified OOM kill, which
I'm not quite sure of ... but it does seem like a situation that could
arise from time to time.

regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Michael Paquier
On Fri, Nov 17, 2017 at 11:14 AM, Andres Freund  wrote:
> On 2017-11-17 11:09:56 +0900, Michael Paquier wrote:
>> when redirection_done is switched to true because the first process
>> generating a message to the syslogger pipe needs to open it first if
>> not done yet?
>
> I can't follow. The syslogger pipe is created when the first syslogger
> is started (before it's forked!). Which happens before other processes
> are created, because they all need to inherit that file descriptor.

Ah, OK. I didn't recall this dependency. Sorry for the confusion.
-- 
Michael




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Andres Freund
On 2017-11-17 11:09:56 +0900, Michael Paquier wrote:
> On Fri, Nov 17, 2017 at 10:50 AM, Andres Freund  wrote:
> > On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
> >> David Pacheco  writes:
> >> > I ran into what appears to be a deadlock in the logging subsystem.  It
> >> > looks like what happened was that the syslogger process exited because it
> >> > ran out of memory.  But before the postmaster got a chance to handle the
> >> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
> >> > That also failed, and the postmaster went to log a message about it, but
> >> > it's blocked on the pipe that's normally connected to the syslogger,
> >> > presumably because the pipe is full because the syslogger is gone and
> >> > hasn't read from it.
> >>
> >> Ugh.
> >
> > I'm somewhat inclined to say that one has to live with this if the
> > system is so resource constrained that processes barely using memory
> > get killed.
> >
> > We could work around a situation like that if we made postmaster use a
> > *different* pipe as stderr than the one we're handing to normal
> > backends. If postmaster created a new pipe and closed the read end
> > whenever forking a syslogger, we should get EPIPEs when writing after
> > syslogger died and could fall back to proper stderr or such.
> 
> I don't have the code on top of my mind, but isn't a custom fd causing
> a small penalty

Sure, there's some minor overhead because every process would need to
close another fd after forking.


> when redirection_done is switched to true because the first process
> generating a message to the syslogger pipe needs to open it first if
> not done yet?

I can't follow. The syslogger pipe is created when the first syslogger
is started (before it's forked!). Which happens before other processes
are created, because they all need to inherit that file descriptor.


> So you'd need proper locking to save from race conditions.

I completely fail to see why this'd be the case. All I'm talking about
is using another pipe between syslogger and postmaster than between
other-processes and syslogger.

Greetings,

Andres Freund




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Michael Paquier
On Fri, Nov 17, 2017 at 10:50 AM, Andres Freund  wrote:
> On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
>> David Pacheco  writes:
>> > I ran into what appears to be a deadlock in the logging subsystem.  It
>> > looks like what happened was that the syslogger process exited because it
>> > ran out of memory.  But before the postmaster got a chance to handle the
>> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
>> > That also failed, and the postmaster went to log a message about it, but
>> > it's blocked on the pipe that's normally connected to the syslogger,
>> > presumably because the pipe is full because the syslogger is gone and
>> > hasn't read from it.
>>
>> Ugh.
>
> I'm somewhat inclined to say that one has to live with this if the
> system is so resource constrained that processes barely using memory
> get killed.
>
> We could work around a situation like that if we made postmaster use a
> *different* pipe as stderr than the one we're handing to normal
> backends. If postmaster created a new pipe and closed the read end
> whenever forking a syslogger, we should get EPIPEs when writing after
> syslogger died and could fall back to proper stderr or such.

I don't have the code on top of my mind, but isn't a custom fd causing
a small penalty when redirection_done is switched to true, because the
first process generating a message to the syslogger pipe needs to open
it first if not done yet? So you'd need proper locking to save yourself
from race conditions. Or is the first redirected message always
generated by the postmaster or the syslogger? I don't recall that this
is actually true...
-- 
Michael




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Andres Freund
On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
> David Pacheco  writes:
> > I ran into what appears to be a deadlock in the logging subsystem.  It
> > looks like what happened was that the syslogger process exited because it
> > ran out of memory.  But before the postmaster got a chance to handle the
> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
> > That also failed, and the postmaster went to log a message about it, but
> > it's blocked on the pipe that's normally connected to the syslogger,
> > presumably because the pipe is full because the syslogger is gone and
> > hasn't read from it.
> 
> Ugh.

I'm somewhat inclined to say that one has to live with this if the
system is so resource constrained that processes barely using memory
get killed.

We could work around a situation like that if we made postmaster use a
*different* pipe as stderr than the one we're handing to normal
backends. If postmaster created a new pipe and closed the read end
whenever forking a syslogger, we should get EPIPEs when writing after
syslogger died and could fall back to proper stderr or such.

Greetings,

Andres Freund
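
For reference, the EPIPE mechanism this proposal relies on, as a standalone
sketch (plain POSIX, not PostgreSQL source; with SIGPIPE ignored, writing to
a pipe with no reader fails with EPIPE instead of killing the writer):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];

        signal(SIGPIPE, SIG_IGN);   /* get EPIPE back instead of dying */
        if (pipe(fds) != 0)
            return 1;
        close(fds[0]);              /* no reader left, as if the sole reader
                                       (the syslogger) had died */
        if (write(fds[1], "log line\n", 9) < 0 && errno == EPIPE)
            fprintf(stderr, "EPIPE: fall back to plain stderr\n");
        return 0;
    }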




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread David Pacheco
On Mon, Nov 6, 2017 at 12:35 PM, Tom Lane  wrote:

> David Pacheco  writes:
> > I ran into what appears to be a deadlock in the logging subsystem.  It
> > looks like what happened was that the syslogger process exited because it
> > ran out of memory.  But before the postmaster got a chance to handle the
> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
> > That also failed, and the postmaster went to log a message about it, but
> > it's blocked on the pipe that's normally connected to the syslogger,
> > presumably because the pipe is full because the syslogger is gone and
> > hasn't read from it.
>
> Ugh.
>


Should I file a bug on this issue?

Thanks,
Dave


Re: [GENERAL] pg_restore load data

2017-11-16 Thread Ron Johnson

On 11/16/2017 03:13 PM, bricklen wrote:


On Thu, Nov 16, 2017 at 1:07 PM, Ron Johnson wrote:


v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump,
when is the data actually loaded?  I've looked in the list output and
don't see any "load" statements.


Look for COPY lines, that's how the data is restored.


$ pg_restore -l CDSHA01.dump > CDSHA01.txt
$ grep --color -i copy CDSHA01.txt
$ echo $?
1

There are lots of "restoring data", though.  I should have thought to grep 
for that.


One thing that puzzles me is how fast the tables (even large ones) loaded 
compared to how slow the pg_dump -Fc was.  Granted, I'm running -j4 but 
still, these were some really large, poorly compressible tables (the dump 
file was about as big as du -mc data/base).


--
World Peace Through Nuclear Pacification



Re: [GENERAL] pg_restore load data

2017-11-16 Thread melvin6925
The command is COPY, not load.


-------- Original message --------
From: Ron Johnson
Date: 11/16/17 16:07 (GMT-05:00)
To: pgsql-general@postgresql.org
Subject: [GENERAL] pg_restore load data
Hi,

v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump, when is 
the data actually loaded?  I've looked in the list output and don't see any 
"load" statements.

Thanks

-- 
World Peace Through Nuclear Pacification





Re: [GENERAL] pg_restore load data

2017-11-16 Thread bricklen
On Thu, Nov 16, 2017 at 1:07 PM, Ron Johnson  wrote:

> v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)
>
> During a "whole database" restore using pg_restore of a custom dump, when
> is the data actually loaded?  I've looked in the list output and don't see
> any "load" statements.
>

Look for COPY lines, that's how the data is restored.
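
For instance (a sketch using the dump name from earlier in the thread; in
the archive's table of contents the data items are labeled "TABLE DATA",
while the COPY statements only appear in the SQL that pg_restore emits):

    $ pg_restore -l CDSHA01.dump | grep "TABLE DATA"   # data items in the TOC
    $ pg_restore CDSHA01.dump | grep -c "^COPY "       # COPY statements in the SQL output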


Re: [GENERAL] expanded auto and header linestyle double

2017-11-16 Thread Pavel Stehule
2017-11-16 21:16 GMT+01:00 Matt Zagrabelny :

> Thanks for the reply, Pavel!
>
> On Thu, Nov 16, 2017 at 1:01 AM, Pavel Stehule 
> wrote:
>
>> Hi
>>
>> 2017-11-15 23:37 GMT+01:00 Matt Zagrabelny :
>>
>>> Greetings,
>>>
>>> Using PG 10.1.
>>>
>>> In my .psqlrc I have:
>>>
>>> \x auto
>>> \pset linestyle 'unicode'
>>> \pset unicode_header_linestyle double
>>>
>>> and when the output is expanded, I do not see a double line for the
>>> first record, but I do for all subsequent records. For example:
>>>
>>> % select  * from artist;
>>> ─[ RECORD 1 ]─
>>> artistid │ 1
>>> name │ AC/DC
>>> ═[ RECORD 2 ]═
>>> artistid │ 2
>>> name │ Accept
>>> ═[ RECORD 3 ]═
>>> artistid │ 3
>>> name │ Aerosmith
>>> ═[ RECORD 4 ]═
>>> artistid │ 4
>>> name │ Alanis Morissette
>>> ═[ RECORD 5 ]═
>>> artistid │ 5
>>>
>>> I would like to have the initial "RECORD 1" line have the same "double"
>>> linestyle as the other records.
>>>
>>> Am I missing a config item?
>>>
>>
>> yes - it is border line
>>
>> use \pset border 2
>>
>>
> Hmmm I didn't use the "border" setting.
>
>
>> and you'll understand - you are missing
>>
>> \pset unicode_border_linestyle double
>>
>
> But I did use the above setting.
>
> So my .psqlrc looks like:
>
> \pset linestyle 'unicode'
> \pset unicode_border_linestyle double
> \pset unicode_header_linestyle double
>
> ═[ RECORD 1 ]
> artistid │ 1
> name │ AC/DC
> ═[ RECORD 2 ]
> artistid │ 2
> name │ Accept
> ═[ RECORD 3 ]
> artistid │ 3
> name │ Aerosmith
>
> Thanks for helping me get it corrected.
>
> It still feels strange that when using "expanded" display the first record
> separator would be categorized under "border_linestyle" and the remaining
> record separators would be categorized under "header_linestyle".
>

There is some simplification - this line is both a header and a border. It
makes sense when you use border 2.

Regards

Pavel


> Cheers!
>
> -m
>


Re: [GENERAL] expanded auto and header linestyle double

2017-11-16 Thread Matt Zagrabelny
Thanks for the reply, Pavel!

On Thu, Nov 16, 2017 at 1:01 AM, Pavel Stehule 
wrote:

> Hi
>
> 2017-11-15 23:37 GMT+01:00 Matt Zagrabelny :
>
>> Greetings,
>>
>> Using PG 10.1.
>>
>> In my .psqlrc I have:
>>
>> \x auto
>> \pset linestyle 'unicode'
>> \pset unicode_header_linestyle double
>>
>> and when the output is expanded, I do not see a double line for the first
>> record, but I do for all subsequent records. For example:
>>
>> % select  * from artist;
>> ─[ RECORD 1 ]─
>> artistid │ 1
>> name │ AC/DC
>> ═[ RECORD 2 ]═
>> artistid │ 2
>> name │ Accept
>> ═[ RECORD 3 ]═
>> artistid │ 3
>> name │ Aerosmith
>> ═[ RECORD 4 ]═
>> artistid │ 4
>> name │ Alanis Morissette
>> ═[ RECORD 5 ]═
>> artistid │ 5
>>
>> I would like to have the initial "RECORD 1" line have the same "double"
>> linestyle as the other records.
>>
>> Am I missing a config item?
>>
>
> yes - it is border line
>
> use \pset border 2
>
>
Hmmm I didn't use the "border" setting.


> and you'll understand - you are missing
>
> \pset unicode_border_linestyle double
>

But I did use the above setting.

So my .psqlrc looks like:

\pset linestyle 'unicode'
\pset unicode_border_linestyle double
\pset unicode_header_linestyle double

═[ RECORD 1 ]
artistid │ 1
name │ AC/DC
═[ RECORD 2 ]
artistid │ 2
name │ Accept
═[ RECORD 3 ]
artistid │ 3
name │ Aerosmith

Thanks for helping me get it corrected.

It still feels strange that when using "expanded" display the first record
separator would be categorized under "border_linestyle" and the remaining
record separators would be categorized under "header_linestyle".

Cheers!

-m


Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread Michael Nolan
On Thu, Nov 16, 2017 at 1:56 AM, Nick Dro  wrote:

> I believe that every information system needs to send emails.
> Currently PostgreSQL doesn't have a function which gets TEXT and returns
> true if it's a valid email address (x...@yyy.com / .co.ZZ).
> Do you believe such a function should exist in PostgreSQL, or is it best
> to let every user implement his own function?
>

There's a world of difference between an email address that is well-formed
and one that actually works.

In the systems I administer there's a lot of time spent dealing with
bounced mail to make sure that the email addresses we have actually reach
someone, hopefully the intended target.  And in the US, bulk emailers also
have to deal with the CAN-SPAM act, which specifies procedures that must be
in place to allow easy administrative options to remove one's email address
from mailing lists.

Procedures to verify that an email address works and to administer its use
under rules like CAN-SPAM cannot exist solely within the database itself.
And as others have noted, what makes for a 'well-formed' email address has
always been a bit complicated.
--
Mike Nolan


Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread Andreas Joseph Krogh
On Thursday, 16 November 2017 at 09:05:00, Pavel Stehule
<pavel.steh...@gmail.com> wrote:

> Hi
>
> 2017-11-16 8:56 GMT+01:00 Nick Dro:
>
>> I believe that every information system needs to send emails.
>> Currently PostgreSQL doesn't have a function which gets TEXT and returns
>> true if it's a valid email address (x...@yyy.com / .co.ZZ).
>> Do you believe such a function should exist in PostgreSQL, or is it best
>> to let every user implement his own function?
>
> I don't think this functionality should be in upstream - but it is a good
> use for some extension, placing it in PGXN or the PostgreSQL community
> repository.
>
> Postgres has good regexp support and this case can be implemented by one
> regexp.
>
> you can use PLPerlu and use some modules from CPAN
>
> http://search.cpan.org/~rjbs/Email-Valid-1.202/lib/Email/Valid.pm
>
> Regards
>
> Pavel

In general, I see no reason for a modern RDBMS not to provide an
email datatype. IMV that's no different from other types which also could
have been plain text but are convenient to have datatypes for.
Being an open-source project, I guess one must show initiative and start a
discussion on -hackers to see what interest there is in having one in core.
I for one hope there will be.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com
 


 


Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread bto...@computer.org


----- Original Message -----
From: "Nick Dro"
To: pgsql-general@postgresql.org
Sent: Thursday, November 16, 2017 2:56:42 AM
Subject: [GENERAL] Build in function to verify email addresses

I believe that every information system needs to send emails.
Currently PostgreSQL doesn't have a function which gets TEXT and returns true
if it's a valid email address (x...@yyy.com / .co.ZZ).
Do you believe such a function should exist in PostgreSQL, or is it best to
let every user implement his own function?




I don't mean to sound snarky, but did you even try a Google search?

https://pgxn.org/dist/email/ 

https://github.com/asotolongo/email 

or 

https://github.com/petere/pgemailaddr 



-- B 






Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread Chris Withers
I'm pretty sure this isn't as simple as you think it is; I'd suggest
having a good read of:


https://stackoverflow.com/a/201378/216229

Chris


On 16/11/2017 07:56, Nick Dro wrote:

I believe that every information system needs to send emails.
Currently PostgreSQL doesn't have a function which gets TEXT and
returns true if it's a valid email address (x...@yyy.com / .co.ZZ).
Do you believe such a function should exist in PostgreSQL, or is it best
to let every user implement his own function?






Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread Pavel Stehule
Hi

2017-11-16 8:56 GMT+01:00 Nick Dro :

> I believe that every information system needs to send emails.
> Currently PostgreSQL doesn't have a function which gets TEXT and returns
> true if it's a valid email address (x...@yyy.com / .co.ZZ).
> Do you believe such a function should exist in PostgreSQL, or is it best
> to let every user implement his own function?
>

I don't think this functionality should be in upstream - but it is a good
use for some extension, placing it in PGXN or the PostgreSQL community
repository.

Postgres has good regexp support and this case can be implemented by one
regexp.

you can use PLPerlu and use some modules from CPAN

http://search.cpan.org/~rjbs/Email-Valid-1.202/lib/Email/Valid.pm

Regards

Pavel
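
As an illustration of the one-regexp approach (a deliberately simplistic
pattern - see the Stack Overflow link elsewhere in this thread for why full
RFC-compliant validation is much harder; the table and column names in the
constraint are hypothetical):

    -- quick shape check
    SELECT 'user@example.com' ~* '^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}$';

    -- or as a column constraint on a hypothetical users table
    ALTER TABLE users ADD CONSTRAINT email_shape_chk
      CHECK (email ~* '^[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}$');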


Re: [GENERAL] expanded auto and header linestyle double

2017-11-15 Thread Pavel Stehule
Hi

2017-11-15 23:37 GMT+01:00 Matt Zagrabelny :

> Greetings,
>
> Using PG 10.1.
>
> In my .psqlrc I have:
>
> \x auto
> \pset linestyle 'unicode'
> \pset unicode_header_linestyle double
>
> and when the output is expanded, I do not see a double line for the first
> record, but I do for all subsequent records. For example:
>
> % select  * from artist;
> ─[ RECORD 1 ]─
> artistid │ 1
> name │ AC/DC
> ═[ RECORD 2 ]═
> artistid │ 2
> name │ Accept
> ═[ RECORD 3 ]═
> artistid │ 3
> name │ Aerosmith
> ═[ RECORD 4 ]═
> artistid │ 4
> name │ Alanis Morissette
> ═[ RECORD 5 ]═
> artistid │ 5
>
> I would like to have the initial "RECORD 1" line have the same "double"
> linestyle as the other records.
>
> Am I missing a config item?
>

yes - it is border line

use \pset border 2

and you'll understand - you are missing

\pset unicode_border_linestyle double

Regards

Pavel


> Thanks!
>
> -m
>


Re: [GENERAL] pg_basebackup running from a remote machine

2017-11-15 Thread Jeff Janes
On Tue, Nov 14, 2017 at 8:28 AM, Laurenz Albe 
wrote:

> rakeshkumar464 wrote:
> > If pg_basebackup is run from a remote machine with compress option
> --gzip ,
> > compress level 9,
> > will the compression occur prior to the data being sent on the network or
> > after it has been received
> > at the remote machine.
>
> That only means that the output TAR file will be compressed, it has
> nothing to do with the data transfered from the server.
>
> If you want to compress the data sent over the network, use
> pg_basebackup over an SSL connection with SSL compression enabled.
>

But ssl compression is disabled by default on most systems, and not even
supported at all on many without doing a custom compilation of the ssl
library.

Cheers,

Jeff


Re: [GENERAL] Setting up replication slave on remote high latency host

2017-11-15 Thread Rory Falloon
Thank you for that. Back to the drawing board!

On Wed, Nov 15, 2017 at 9:30 PM, John R Pierce  wrote:

> On 11/15/2017 6:02 PM, Rory Falloon wrote:
>
>>
>> Right now I am trying to dump the database, gzip, move across, and import
>> into the new slave (which is configured as a master to perform the initial
>> setup). Ideally I do this dump, move and import during a period of
>> inactivity on the master so the new server will come up and immediately be
>> able to catch up on replication due to lack of activity. However, I have
>> been importing the current db as a test and after 90 minutes it seems to
>> have only got 2/3 of the way. I am not confident this will work but it
>> seems like the most efficient way to start.
>>
>
>
> you can't use pg_dump to create a slave, as it won't have the same
> timeline.
>
> I would use pg_basebackup, but in general streaming replication over a
> high-latency, erratic link will never work really well.
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>


Re: [GENERAL] Setting up replication slave on remote high latency host

2017-11-15 Thread John R Pierce

On 11/15/2017 6:02 PM, Rory Falloon wrote:


Right now I am trying to dump the database, gzip, move across, and 
import into the new slave (which is configured as a master to perform 
the initial setup). Ideally I do this dump, move and import during a 
period of inactivity on the master so the new server will come up and 
immediately be able to catch up on replication due to lack of 
activity. However, I have been importing the current db as a test and 
after 90 minutes it seems to have only got 2/3 of the way. I am not 
confident this will work but it seems like the most efficient way to 
start.



you can't use pg_dump to create a slave, as it won't have the same timeline.

I would use pg_basebackup, but in general streaming replication over a 
high latency erratic link will never work real well.
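
For reference, seeding a standby that way might look like this sketch (host,
user and data directory are placeholders):

# run on the standby; -R writes a recovery.conf pointing at the master,
# -X stream ships the WAL needed during the copy over the same connection
pg_basebackup -h master.example.com -U replication \
    -D /var/lib/postgresql/9.6/main -X stream -R -P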



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGPool in Master-Master mode, is it possible?

2017-11-15 Thread Tatsuo Ishii
I am not familiar with swarm cluster nor repmgr. So I wonder why you
wouldn't be able to get to the goal you are trying to achieve without
repmgr. Can you please elaborate more?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

> I am also interested in this set-up
> My idea is to run pgpool in a docker swarm cluster and to have two instances 
> of pgpool on different nodes (but not in watchdog mode). In this case it is 
> not possible to give the responsibility to pgpool to do the failover otherwise 
> both instances would try to do the failover (they don't know about each 
> other, no watchdog mode). So I have to set failover_command parameter to an 
> empty string and let repmgr do the automatic failover. I have tested and I 
> believe it is ok: in case of master failure repmgr does the promote, pgpool 
> will not do the failover but will constantly try to reconnect to a new master 
> and as soon as repmgr has done the promotion it is OK.
> The important thing is that when pgpool starts I have to read the state of 
> the cluster in the repmgr metadata table (repl_nodes) and generate the 
> /tmp/pgpool_status file based on that. (I even have to do a promote_node in 
> the edge case that the master is down and was brought down when pgpool also 
> was down).  
> But the inconvenience is that if pgpool fails, let's say because the docker 
> node of pgpool is stopped, then docker swarm restarts the service on another 
> node but this can take a few seconds. So I would prefer to have multiple 
> pgpool. I did not have the time right now to test such a scenario but I am 
> interested to know if it is possible.
> Pierre 
> 
> On Wednesday, November 15, 2017, 11:02:31 AM GMT+1, Vikas Sharma 
>  wrote:  
>  
>  Thank you Tatsuo for the Reply, I will post this in list you mentioned.
> 
> By Master-Master, I meant two pgpool servers both active and accepting 
> connections from the Application: if one Pgpool Server becomes unavailable, the 
> other still accepts connections, and in normal operation the Pgpool instances 
> on both servers work actively.  The database in this case will be 1 master 
> and 2 slaves, replicated by streaming replication.
> 
> Regards
> VS
> 
> On 14 November 2017 at 22:56, Tatsuo Ishii  wrote:
> 
>> Hello There,
>>
>> I need to setup two PGPool Servers in Master-Master mode, First thing I
>> want to know, is it possible?
>>
>> I know we can setup 2 PGPool servers in master-slave mode using watchdog.
>> http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html
>>
>> Could anyone please enlighten me and any workaround for this?
>>
>> Regards
>> VS
> 
>> I need to setup two PGPool Servers in Master-Master mode, First thing I
>> want to know, is it possible?
>>
>> I know we can setup 2 PGPool servers in master-slave mode using watchdog.
>> http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html
>>
>> Could anyone please enlighten me and any workaround for this?
> 
> This is not an appropriate list. You should go to the Pgpool mailing
> list:
> 
> https://www.pgpool.net/mailman/listinfo/pgpool-general
> 
> Anyway... I am not sure what you mean by "Master-Master mode" but if
> that means you want to issue write queries by connecting to any of
> Pgpool-II port, it's already possible in the set up you have created.
> 
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
> 
> 
>   


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem changing jsonb attribute to null value

2017-11-15 Thread Dmitry Dolgov
> On 15 November 2017 at 22:54, RODRIGUEZ CORTES MARIO IGNACIO <
ignacio.cor...@inegi.org.mx> wrote:
>
> I have a problem with a record in a table with a jsonb column: I'm trying
> to change the value of an attribute to null, but it leaves all the
> content null and not just that value
>
> prueba=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}', to_jsonb(null));
> ERROR:  could not determine polymorphic type because input has type
> «unknown»
>
> defining null value as a text type:
>
> prueba=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}',
> to_jsonb(null::text));
>  jsonb_set
> ---
>
> (1 row)
>
> it leaves the record null, when I expect it to leave it with the null
> value in attribute "v": {"v": null}

I think something like this should work:

=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}', 'null'::jsonb);
  jsonb_set
-
 {"v": null}
(1 row)

is that what you want?


Re: [GENERAL] pg_basebackup running from a remote machine

2017-11-15 Thread rakeshkumar464
Does pg_basebackup on a remote machine follow the standard libpq protocol?  I
am not able to force it to use ssl, despite having an entry in pg_hba.conf:

hostnossl all all all reject

From the same remote machine, psql is forced to use ssl.

Makes me wonder whether pg_basebackup has a different protocol.
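
One thing worth checking (an observation based on the documented pg_hba.conf
matching rules, since the base backup runs over a replication connection):
the database field "all" does not match replication connections, so a reject
rule has to name them explicitly, e.g.:

# "all" in the database column does not cover replication connections;
# they are only matched by the "replication" keyword
hostnossl replication all all reject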



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query on pg_settings view

2017-11-15 Thread Stephen Frost
Greetings,

* Laurenz Albe (laurenz.a...@cybertec.at) wrote:
> Ronen Nofar wrote:
> > I have a weird case when running a query on the pg_settings view.
> > I have two users, the first one is the default user - postgres, which is a 
> > superuser,
> > and another one is a role which I had created, I called it test_role, and 
> > it's not a superuser.
> > When I run a select on pg_settings with these two users I receive different 
> > results.
> 
> That is intentional, because some settings should only be
> visible for superusers, for example everything that has to
> do with the operating system (location of configuration file
> or socket directories).

This isn't quite correct any longer - with PG10, we have a default role
called 'pg_read_all_settings' which can be GRANT'd to other roles to
allow viewing of all settings, even those previously restricted to
superuser-only.
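
For example:

-- PG 10: let a non-superuser role see all settings in pg_settings
GRANT pg_read_all_settings TO test_role;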

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Query on pg_settings view

2017-11-15 Thread Laurenz Albe
Ronen Nofar wrote:
> I have a weird case when running a query on the pg_settings view.
> I have two users, the first one is the default user - postgres, which is a 
> superuser,
> and another one is a role which I had created, I called it test_role, and it's 
> not a superuser.
> When I run a select on pg_settings with these two users I receive different 
> results.

That is intentional, because some settings should only be
visible for superusers, for example everything that has to
do with the operating system (location of configuration file
or socket directories).

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGPool in Master-Master mode, is it possible?

2017-11-15 Thread Vikas Sharma
Thank you Tatsuo for the Reply, I will post this in the list you mentioned.

By Master-Master, I meant two pgpool servers both active and accepting
connections from the Application: if one Pgpool Server becomes unavailable, the
other still accepts connections, and in normal operation the Pgpool instances
on both servers work actively.  The database in this case will be
1 master and 2 slaves, replicated by streaming replication.

Regards
VS

On 14 November 2017 at 22:56, Tatsuo Ishii  wrote:

> > Hello There,
> >
> > I need to setup two PGPool Servers in Master-Master mode, First thing I
> > want to know, is it possible?
> >
> > I know we can setup 2 PGPool servers in master-slave mode using watchdog.
> > http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_
> master_slave_3.3/en.html
> >
> > Could anyone please enlighten me and any workaround for this?
> >
> > Regards
> > VS
>
> > I need to setup two PGPool Servers in Master-Master mode, First thing I
> > want to know, is it possible?
> >
> > I know we can setup 2 PGPool servers in master-slave mode using watchdog.
> > http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_
> master_slave_3.3/en.html
> >
> > Could anyone please enlighten me and any workaround for this?
>
> This is not an appropriate list. You should go to the Pgpool mailing
> list:
>
> https://www.pgpool.net/mailman/listinfo/pgpool-general
>
> Anyway... I am not sure what you mean by "Master-Master mode" but if
> that means you want to issue write queries by connecting to any of
> Pgpool-II port, it's already possible in the set up you have created.
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>


Re: [GENERAL] archive_command not being executed

2017-11-14 Thread Eric D
Hi Jeremy,


Thanks for the info on 9.3 vs 9.5.  While searching for Paul's suggestion I'd 
seen the same thing, but didn't "reply all" so my response didn't make it into 
the mailing list.


Regarding your question, as I understand the process, you need to get the WAL 
files being shipped over to the standby before you start streaming replication. 
 The sequence of events I think is supposed to happen is:


  1.  Start WAL file shipping from master->standby
  2.  Run pg_basebackup from master->standby
  3.  With a recovery.conf in place, start the standby.  The recovery.conf 
needs the connection info for the master for the streaming replication, but it 
also needs the location of the WAL files, to replay what's been going on during 
the pg_basebackup.

I don't think streaming replication has a way to pick up the activity covered 
in the WAL files.
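
For reference, the recovery.conf from step 3 might look like this sketch
(host and archive path are placeholders):

# recovery.conf on SB2
standby_mode = 'on'
primary_conninfo = 'host=sb1.example.com user=replication'
restore_command = 'cp /path/to/wal_archive/%f %p'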



From: Jeremy Schneider <schnei...@ardentperf.com>
Sent: Monday, November 13, 2017 3:56 PM
To: eric...@hotmail.com
Cc: PostgreSQL General; Paul Jungwirth
Subject: Re: [GENERAL] archive_command not being executed

Hi Eric,

Thanks for using PostgreSQL!

On Fri, Nov 10, 2017 at 9:26 AM, Paul Jungwirth
<p...@illuminatedcomputing.com> wrote:
> Oh this has happened to me before. :-) On SB1 you need to set
> archive_mode to always (not on). Otherwise it is ignored when running as a
> standby.

It looks to me like this feature was not added until 9.5 and Eric is
running 9.3  :(

> On 11/10/2017 09:10 AM, Eric D wrote:
>> I have a standby db server (SB1) that will soon become the master.  SB1
>> is set up with streaming replication from the current master.  I'm
>> trying to set up a third server (SB2) as a slave/standby to SB1, so that
>> when SB1 becomes the master, there will be a standby for it.  First step
>> is to get WAL files shipped from SB1->SB2.

Eric,

From my reading of the docs and commit logs, standby databases
couldn't archive their WALs until 9.5.

https://www.postgresql.org/message-id/cankgpbs7qgakgq-opzy0esam6+wue5mgpyehcgo_eoq7tjv...@mail.gmail.com
http://paquier.xyz/postgresql-2/postgres-9-5-feature-highlight-archive-mode-always/

There is a section in the 9.3 docs about cascading replication:
https://www.postgresql.org/docs/9.3/static/warm-standby.html#CASCADING-REPLICATION

It seems to me this is based on streaming replication only.  Looks
like it's not designed (in 9.3) to start archiving until it becomes a
primary.  "You will also need to set primary_conninfo in the
downstream standby to point to the cascading standby."

Are you able to configure a cascading replica by using streaming
replication on your 9.3 system, without WAL archiving on the standby?

-Jeremy

--
http://about.me/jeremy_schneider


Re: [GENERAL] jsonb

2017-11-14 Thread John R Pierce

On 11/14/2017 2:30 PM, hmidi slim wrote:

I'm trying to use the function to_jsonb and create the name:
to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');
But after that I used Objection.js ORM to get data using the query:
Product.query().where('id',1).then(prod => {console.log(prod)})
I think that the problem maybe with the usage of to_jsonb function, 
maybe I miss something. But when I fetch the data with the ORM I found 
that the type was a string and not a jsonb




never heard of your ORM... does it even know what postgres jsonb is?
do you know what actual SQL query that piece of ORMism generates?




--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] jsonb

2017-11-14 Thread Johannes Graën
On 11/14/2017 11:30 PM, hmidi slim wrote:
> I'm trying to use the function to_jsonb and create the name:
> to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');

This query converts a string into a JSON value that consists of just that
string. I guess what you intend to accomplish is rather:

select jsonb_build_object('key1', 'text1', 'key2', 'text2');




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] jsonb

2017-11-14 Thread Merlin Moncure
On Tue, Nov 14, 2017 at 4:30 PM, hmidi slim  wrote:
> I'm trying to use the function to_jsonb and create the name:
> to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');
> But after that I used Objection.js ORM to get data using the query:
> Product.query().where('id',1).then(prod => {console.log(prod)})
> I think the problem may be with the usage of the to_jsonb function; maybe I
> missed something. But when I fetch the data with the ORM I find that the type
> is a string and not jsonb

Still not quite following. My advice would be to:

#1) work out the SQL you want the database to be running and verify
the results are correct

#2) figure out how to get the ORM to send that SQL

If you need help figuring out that SQL the ORM is actually running,
try turning on statement logging in postgresql.conf and watching the
log.
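
For example, in postgresql.conf (verbose, so for debugging only):

log_statement = 'all'    # log every SQL statement the server receives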

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGPool in Master-Master mode, is it possible?

2017-11-14 Thread Tatsuo Ishii
> Hello There,
> 
> I need to setup two PGPool Servers in Master-Master mode, First thing I
> want to know, is it possible?
> 
> I know we can setup 2 PGPool servers in master-slave mode using watchdog.
> http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html
> 
> Could anyone please enlighten me and any workaround for this?
> 
> Regards
> VS

> I need to setup two PGPool Servers in Master-Master mode, First thing I
> want to know, is it possible?
> 
> I know we can setup 2 PGPool servers in master-slave mode using watchdog.
> http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html
> 
> Could anyone please enlighten me and any workaround for this?

This is not an appropriate list. You should go to the Pgpool mailing
list:

https://www.pgpool.net/mailman/listinfo/pgpool-general

Anyway... I am not sure what you mean by "Master-Master mode" but if
that means you want to issue write queries by connecting to any of
Pgpool-II port, it's already possible in the set up you have created.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] jsonb

2017-11-14 Thread hmidi slim
I'm trying to use the function to_jsonb and create the name:
to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');
But after that I used Objection.js ORM to get data using the query:
Product.query().where('id',1).then(prod => {console.log(prod)})
I think the problem may be with the usage of the to_jsonb function; maybe I
missed something. But when I fetch the data with the ORM I find that the
type is a string and not jsonb


2017-11-14 23:09 GMT+01:00 Merlin Moncure :

> On Tue, Nov 14, 2017 at 3:32 PM, hmidi slim  wrote:
> > I have a column name of type 'jsonb' on my table named product. The
> format
> > of the column:
> > name: {"key1": "text1", "key2": "text2"}
> >
> > When I make a query to fetch data from the table I got this format:
> > name: '{"key1": "text1", "key2": "text2"}'
> >
> > Why does postgresql returns the name such as string type and not jsonb?
> is
> > it a bug or is there something else to add?
>
> not quite following.  Can you paste the query you are trying to
> execute along with the results vs. expectation?  thanks
>
> merlin
>


Re: [GENERAL] jsonb

2017-11-14 Thread Merlin Moncure
On Tue, Nov 14, 2017 at 3:32 PM, hmidi slim  wrote:
> I have a column name of type 'jsonb' on my table named product. The format
> of the column:
> name: {"key1": "text1", "key2": "text2"}
>
> When I make a query to fetch data from the table I got this format:
> name: '{"key1": "text1", "key2": "text2"}'
>
> Why does postgresql returns the name such as string type and not jsonb? is
> it a bug or is there something else to add?

not quite following.  Can you paste the query you are trying to
execute along with the results vs. expectation?  thanks

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] missing public on schema public

2017-11-14 Thread Stephen Frost
Tom, all,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
Bo Thorbjørn Jensen  writes:
> > I have some additional info and a fix.
> > Firstly steps to reproduce:
> 
> Yeah, I can reproduce this.  I suspect it got broken by Stephen's hacking
> around with default ACLs.  A simple example is

Yes, it's related to the work I did with pg_dump's ACL handling, because
we're no longer just always including the whole revoke/grant set of ACLs
for everything in the output.

> $ pg_dump -c -U postgres postgres | grep -i public
> DROP SCHEMA public;
> -- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
> CREATE SCHEMA public;
> ALTER SCHEMA public OWNER TO postgres;
> -- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
> COMMENT ON SCHEMA public IS 'standard public schema';
> -- Name: public; Type: ACL; Schema: -; Owner: postgres
> GRANT ALL ON SCHEMA public TO PUBLIC;
> 
> That's fine, but if I shove it through an archive file:

This works because I added into pg_dump.c a check based on if the output
is clean (and therefore the public schema is being recreated or not).

In hindsight, that wasn't really the right thing to do because it ends
up only working when pg_dump is run with -c and doesn't consider the
case where pg_dump is run without -c but pg_restore is.

> $ pg_dump -f p.dump -Fc -U postgres postgres
> 
> $ pg_restore -c p.dump | grep -i public

This doesn't work because pg_dump isn't run with -c, while pg_restore
is.  If the archive is created with pg_dump -c (as the above was), then
the results match up between the two runs.  Note also that if pg_dump is
run with -c then a pg_restore without -c would actually still include
the GRANT statement, which isn't really correct either.

That's obviously a change from what we had before and wasn't
intentional.

> This is *REALLY BAD*.  Quite aside from the restore being wrong,
> those two sequences should never ever give different results.
> Stephen, you put some filtering logic in the wrong place in pg_dump.

I do wish it was that simple.

Unfortunately, the public schema is just ridiculously special, both in
the way it's a 'user' object but is created by initdb and that it's got
special non-default ACLs on it and how it has explicit special code to
skip over it when a restore is happening, unless -c is used.

What I'm afraid we need to do here is basically continue to hack on that
code in pg_backup_archiver.c's _printTocEntry() to teach it to issue the
default GRANT ALL ON SCHEMA public TO PUBLIC; when we are processing the
TOC entry for CREATE SCHEMA public;.

That would make the recreation of the public schema when pg_dump or
pg_restore is being run with -c actually match how the public schema is
created by initdb, and the rest would end up falling into place, I
think.

One complication, however, is what happens when a user drops and
recreates the public schema.  If that's done, we'll end up not dumping
out the delta from the public schema's initial ACLs, which wouldn't be
correct if you're restoring into a newly initdb'd cluster.  I'm thinking
that we need to forcibly look at the delta from
public-as-installed-by-initdb and whatever-public-is-now, regardless of
if the public schema was recreated by the user or not, because on
restore we are expecting a newly initdb'd cluster with the public schema
as originally installed (or as installed by pg_dump/pg_restore following
the logic above).

I'll play around with this approach and see if things end up working out
in a better fashion with it.  Baking this knowledge into
pg_backup_archiver.c is certainly ugly, but handling of public has
always been hard-coded into that, and we even added more special
handling to that code 10 years ago to deal with the COMMENT on the
public schema, so this is really just more of the same.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] pg 10 crashes on int8_avg_combine

2017-11-14 Thread Dingyuan Wang
ok, I just avoided it by using sum(cnt::int) since cnt is small.

2017-11-15 00:25, Tom Lane:
> Dingyuan Wang  writes:
>> PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 7.2.0-8)
>> 7.2.0, 64-bit
>>
>> (gdb) bt
>> #0  int8_avg_combine (fcinfo=0x55bdb92472d8) at
>> ./build/../src/backend/utils/adt/numeric.c:4285
> 
> I think this is the same issue being discussed at
> 
> https://www.postgresql.org/message-id/flat/20171110185747.31519.28038%40wrigleys.postgresql.org
> 
>   regards, tom lane
> 
> 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_basebackup running from a remote machine

2017-11-14 Thread Laurenz Albe
rakeshkumar464 wrote:
> If pg_basebackup is run from a remote machine with compress option --gzip ,
> compress level 9,
> will the compression occur prior to the data being sent on the network or
> after it has been received
> at the remote machine.

That only means that the output TAR file will be compressed, it has
nothing to do with the data transferred from the server.

If you want to compress the data sent over the network, use
pg_basebackup over an SSL connection with SSL compression enabled.

Yours,
Laurenz Albe



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg 10 crashes on int8_avg_combine

2017-11-14 Thread Tom Lane
Dingyuan Wang  writes:
> PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 7.2.0-8)
> 7.2.0, 64-bit
>
> (gdb) bt
> #0  int8_avg_combine (fcinfo=0x55bdb92472d8) at
> ./build/../src/backend/utils/adt/numeric.c:4285

I think this is the same issue being discussed at

https://www.postgresql.org/message-id/flat/20171110185747.31519.28038%40wrigleys.postgresql.org

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] missing public on schema public

2017-11-14 Thread Tom Lane
Bo Thorbjørn Jensen  writes:
> I have some additional info and a fix.
> Firstly steps to reproduce:

Yeah, I can reproduce this.  I suspect it got broken by Stephen's hacking
around with default ACLs.  A simple example is

$ pg_dump -c -U postgres postgres | grep -i public
DROP SCHEMA public;
-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
CREATE SCHEMA public;
ALTER SCHEMA public OWNER TO postgres;
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
COMMENT ON SCHEMA public IS 'standard public schema';
-- Name: public; Type: ACL; Schema: -; Owner: postgres
GRANT ALL ON SCHEMA public TO PUBLIC;

That's fine, but if I shove it through an archive file:

$ pg_dump -f p.dump -Fc -U postgres postgres

$ pg_restore -c p.dump | grep -i public
DROP SCHEMA public;
-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
CREATE SCHEMA public;
ALTER SCHEMA public OWNER TO postgres;
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
COMMENT ON SCHEMA public IS 'standard public schema';

This is *REALLY BAD*.  Quite aside from the restore being wrong,
those two sequences should never ever give different results.
Stephen, you put some filtering logic in the wrong place in pg_dump.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Retrieve the server's time zone

2017-11-14 Thread Thomas Kellerer
Tom Lane schrieb am 14.11.2017 um 15:36:
>> is there any way (short of writing a function in an untrusted PL)
>> to determine the actual time zone (or time) of the server OS?
> 
> AFAIK that would only be true if some part of your client stack
> is issuing a SET TIMEZONE command.  (libpq will do that if it finds
> a PGTZ environment variable set, but not in response to plain TZ.)

Ah, interesting. I do that through JDBC, so apparently that's the part to 
blame. 
 
> If that's true, and you can't/don't want to change it, you could try
> 
> select reset_val from pg_settings where name = 'TimeZone';

Hmm, this does not seem to work. 

I am connected to a server with Asia/Bangkok but through JDBC 
that query still returns Europe/Berlin (which is my client's time zone)

So apparently the JDBC driver somehow "persists" this setting. 

I will take this to the JDBC mailing list then, thanks.

Thomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Retrieve the server's time zone

2017-11-14 Thread Tom Lane
Thomas Kellerer  writes:
> is there any way (short of writing a function in an untrusted PL) to 
> determine the actual time zone (or time) of the server OS? 

The default value of the timezone parameter is as close as you'll get
in modern versions of PG.

> "show timezone" always returns the client's time zone. 

AFAIK that would only be true if some part of your client stack
is issuing a SET TIMEZONE command.  (libpq will do that if it finds
a PGTZ environment variable set, but not in response to plain TZ.)

If that's true, and you can't/don't want to change it, you could try

select reset_val from pg_settings where name = 'TimeZone';

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Because PostgreSQL is compiling in old versions of OS?

2017-11-14 Thread Jose Maria Terry Jimenez

El 11/11/17 a las 0:48, DrakoRod escribió:

Oh!!


Jose Maria TJ wrote

You're wrong, that are gcc versions, not OS versions.

For example in my CentOS 6 Box

cat /etc/redhat-release
CentOS release 6.9 (Final)

gcc -v
[...trimmed...]
gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)

You're right! It's the GCC version, not the OS version

Great! I think that compiling with a GCC 4.X version is good for most OS
distributions, right?

Thanks!



You're welcome!


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] missing public on schema public

2017-11-14 Thread Bo Thorbjørn Jensen
I have some additional info and a fix.

Firstly steps to reproduce:

1.  create database:
CREATE DATABASE test WITH ENCODING='UTF8' OWNER=postgres CONNECTION LIMIT=-1;
-- here public has access to public

2. dump:
pg_dump -f testfile.dump -F c -h localhost -U postgres test

3. restore:
pg_restore -c -d test -h localhost -U postgres testfile.dump
-- here public no longer has access to schema public

It is easily fixable with:
GRANT ALL ON SCHEMA public TO public;

And the issue goes away.. (privilege stays after next dump/restore)

So. What am I missing?
Is this intentional functionality ?

Kind regards and again thank you for your time

Bo Thorbjørn Jensen


Re: [GENERAL] archive_command not being executed

2017-11-13 Thread Eric D
I'd come to the same conclusion Jeremy had about 9.5 being the first version 
with the 'always' option for archive_mode.   Looking at pg_receivexlog, that 
might work, but with me being a total noob I'm wary of the various steps I'd 
have to take in going from:


Master -> streaming replication to -> SB1 -> pg_receivexlog to -> SB2


to:


New Master (Old SB1) -> streaming replication to -> SB2


And whether or not the conversion from pg_receivexlog to normal streaming 
replication would maintain data integrity.  I need to skew this towards 
simplicity or I'll likely screw it up.


My current thought is to cut off master, promote SB1, set up WAL file shipping 
to SB2, start a pg_basebackup, make SB1 live, then run for a couple days with 
no backup as the pg_basebackup runs.   Far from ideal but at least I have gone 
through most of this before.



From: Michael Paquier <michael.paqu...@gmail.com>
Sent: Monday, November 13, 2017 6:01 PM
To: Jeremy Schneider
Cc: eric...@hotmail.com; PostgreSQL General; Paul Jungwirth
Subject: Re: [GENERAL] archive_command not being executed

On Tue, Nov 14, 2017 at 8:56 AM, Jeremy Schneider
<schnei...@ardentperf.com> wrote:
> From my reading of the docs and commit logs, standby databases
> couldn't archive their WALs until 9.5.

pg_receivexlog is available in 9.3. You could leverage your archives
with it easily, by for example connecting it to a standby you'd like
to get the archives from.
--
Michael


Re: [GENERAL] PostgreSQL walsender process doesn't exist after "pg_ctl stop -m fast"

2017-11-13 Thread Michael Paquier
On Tue, Nov 14, 2017 at 12:05 AM, Tom Lane  wrote:
> y39chen  writes:
>> We encounter one problem that the PostgreSQL walsender process doesn't exit
>> after "pg_ctl stop -m fast".
>> Uses PostgreSQL 9.6.2
>
> There was a fix in 9.6.4 that's at least related to this problem.
> It would be interesting to see if you can still reproduce it on
> current 9.6.

Commit that may matter here:
commit: e9d4aa594f2caa8c28d55c41c9926420b1efdb79
author: Tom Lane 
date: Fri, 30 Jun 2017 12:00:03 -0400
Fix walsender to exit promptly if client requests shutdown.

It's possible for WalSndWaitForWal to be asked to wait for WAL that doesn't
exist yet.  That's fine, in fact it's the normal situation if we're caught
up; but when the client requests shutdown we should not keep waiting.
The previous coding could wait indefinitely if the source server was idle.

In passing, improve the rather weak comments in this area, and slightly
rearrange some related code for better readability.

Back-patch to 9.4 where this code was introduced.

Discussion: https://postgr.es/m/14154.1498781...@sss.pgh.pa.us
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] archive_command not being executed

2017-11-13 Thread Michael Paquier
On Tue, Nov 14, 2017 at 8:56 AM, Jeremy Schneider
 wrote:
> From my reading of the docs and commit logs, standby databases
> couldn't archive their WALs until 9.5.

pg_receivexlog is available in 9.3. You could leverage your archives
with it easily, by for example connecting it to a standby you'd like
to get the archives from.
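
Something along these lines (a sketch; host and directory are placeholders):

# stream WAL continuously from the standby into a local archive directory
pg_receivexlog -h standby1.example.com -U replication -D /wal_archive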
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] archive_command not being executed

2017-11-13 Thread Jeremy Schneider
Hi Eric,

Thanks for using PostgreSQL!

On Fri, Nov 10, 2017 at 9:26 AM, Paul Jungwirth
 wrote:
> Oh this has happened to me before. :-) On SB1 you need to set
> archive_mode to always (not on). Otherwise it is ignored when running as a
> standby.

It looks to me like this feature was not added until 9.5 and Eric is
running 9.3  :(

> On 11/10/2017 09:10 AM, Eric D wrote:
>> I have a standby db server (SB1) that will soon become the master.  SB1
>> is set up with streaming replication from the current master.  I'm
>> trying to set up a third server (SB2) as a slave/standby to SB1, so that
>> when SB1 becomes the master, there will be a standby for it.  First step
>> is to get WAL files shipped from SB1->SB2.

Eric,

From my reading of the docs and commit logs, standby databases
couldn't archive their WALs until 9.5.

https://www.postgresql.org/message-id/cankgpbs7qgakgq-opzy0esam6+wue5mgpyehcgo_eoq7tjv...@mail.gmail.com
http://paquier.xyz/postgresql-2/postgres-9-5-feature-highlight-archive-mode-always/

There is a section in the 9.3 docs about cascading replication:
https://www.postgresql.org/docs/9.3/static/warm-standby.html#CASCADING-REPLICATION

It seems to me this is based on streaming replication only.  Looks
like it's not designed (in 9.3) to start archiving until it becomes a
primary.  "You will also need to set primary_conninfo in the
downstream standby to point to the cascading standby."

Are you able to configure a cascading replica by using streaming
replication on your 9.3 system, without WAL archiving on the standby?

-Jeremy

-- 
http://about.me/jeremy_schneider


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Andreas Joseph Krogh
On Tuesday, 14 November 2017 at 00:44:11, Peter Geoghegan wrote:
On Mon, Nov 13, 2017 at 2:05 PM, Andreas Joseph Krogh
  wrote:
 > When sorting on text, we're usually doing so using a multi-column index,
 > like for instance "CREATE INDEX xxx ON my_table (lower(name) ASC, created
 > ASC)". Will abbreviated keys help here?

 Yes, they'll help with that, even though the leading column might be
 low cardinality.
 
Nice to know, thanks.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Peter Geoghegan
On Mon, Nov 13, 2017 at 2:05 PM, Andreas Joseph Krogh
 wrote:
> When sorting on text, we're usually doing so using a multi-column index, 
> like for instance "CREATE INDEX xxx ON my_table (lower(name) ASC, created 
> ASC)". Will abbreviated keys help here?

Yes, they'll help with that, even though the leading column might be
low cardinality.

-- 
Peter Geoghegan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Andreas Joseph Krogh
On Monday, 13 November 2017 at 22:28:40, Peter Geoghegan wrote:
On Mon, Nov 13, 2017 at 12:48 PM, Andreas Joseph Krogh
  wrote:
 > Thanks.

 As the person that worked on abbreviated keys, I'd like to hear about
 how you get with this. How much faster is it for you?

 I don't usually get to hear about this, because most users don't
 notice that anything in particular gets faster, because there are many
 performance enhancements added to a release.
 
We haven't migrated any of our databases to v10 yet, so I really can't tell. 
I'm evaluating ICU usage as the last step before we decide on moving to v10. Being 
a per-column setting means a plain pg_dump/reload won't cut it (AFAIU), so I'm 
not sure we'll take that route, as it involves much manual tweaking which we're 
really not interested in spending time on.
 
When sorting on text, we're usually doing so using a multi-column index, like 
for instance "CREATE INDEX xxx ON my_table (lower(name) ASC, created ASC)". 
Will abbreviated keys help here?
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Peter Geoghegan
On Mon, Nov 13, 2017 at 12:48 PM, Andreas Joseph Krogh
 wrote:
> Thanks.

As the person that worked on abbreviated keys, I'd like to hear about
how you get with this. How much faster is it for you?

I don't usually get to hear about this, because most users don't
notice that anything in particular gets faster, because there are many
performance enhancements added to a release.

-- 
Peter Geoghegan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Andreas Joseph Krogh
On Monday, 13 November 2017 at 21:46:08, Peter Geoghegan wrote:
On Mon, Nov 13, 2017 at 12:40 PM, Peter Geoghegan  wrote:
 >> Do I have to explicitly specify collation when using ORDER by on that
 >> column for index and abbreviated keys to be used?
 >
 > Only if you didn't define the column with a per-column collation initially.

 BTW, if you specifically want to quickly verify whether or not
 abbreviated keys were used, you can do that by setting "trace_sort =
 on", and possibly setting "client_min_messages = LOG", too.

 There should be quite a bit of debug output from that that
 specifically mentions abbreviated keys.
 
Thanks.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Andreas Joseph Krogh
On Monday, 13 November 2017 at 21:40:47, Peter Geoghegan wrote:
On Mon, Nov 13, 2017 at 11:48 AM, Andreas Joseph Krogh
  wrote:
 > Ok, so I have to explicitly specify like this:
 >
 > create table test(id serial primary key, name varchar collate "nb_NO" not
 > null);

 That doesn't look like an ICU locale. You may mean "nb-NO-x-icu". But
 otherwise, yes.
 
 
Ok, is there a way I can get a list of ICU-collations?
 
 
> Will ICU be used here as long as PG is compiled with ICU support, as the
> debian-packages are, or do I have to specify collation-provider?

 If you did initdb with a version with ICU support, the ICU collations
 should be there.

 > Do I have to explicitly specify collation when using ORDER by on that
 > column for index and abbreviated keys to be used?

 Only if you didn't define the column with a per-column collation initially.
 
Ok, thanks.
 
Looking forward to this being a per-database setting so it's (hopefully) more 
transparent.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Peter Geoghegan
On Mon, Nov 13, 2017 at 12:40 PM, Peter Geoghegan  wrote:
>> Do I have to explicitly specify collation when using ORDER by on that column 
>> for index and abbreviated keys to be used?
>
> Only if you didn't define the column with a per-column collation initially.

BTW, if you specifically want to quickly verify whether or not
abbreviated keys were used, you can do that by setting "trace_sort =
on", and possibly setting "client_min_messages = LOG", too.

There should be quite a bit of debug output from that that
specifically mentions abbreviated keys.
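
A quick way to try it (a sketch; table and column names are placeholders):

SET trace_sort = on;
SET client_min_messages = log;
-- the LOG lines emitted for this sort mention abbreviated keys if used
SELECT name FROM my_table ORDER BY name;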

-- 
Peter Geoghegan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Peter Geoghegan
On Mon, Nov 13, 2017 at 11:48 AM, Andreas Joseph Krogh
 wrote:
> Ok, so I have to explicitly specify like this:
>
> create table test(id serial primary key, name varchar collate "nb_NO" not 
> null);

That doesn't look like an ICU locale. You may mean "nb-NO-x-icu". But
otherwise, yes.
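
So the corrected definition would be something like:

create table test(id serial primary key,
                  name varchar collate "nb-NO-x-icu" not null);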

> Will ICU be used here as long as PG is compiled with ICU support, as the 
> debian-packages are, or do I have to specify collation-provider?

If you did initdb with a version with ICU support, the ICU collations
should be there.
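
(To see which ICU collations are available, a sketch against the PG 10
catalogs - ICU collations carry provider 'i':)

select collname from pg_collation where collprovider = 'i';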

> Do I have to explicitly specify collation when using ORDER by on that column 
> for index and abbreviated keys to be used?

Only if you didn't define the column with a per-column collation initially.

-- 
Peter Geoghegan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Andreas Joseph Krogh
On Monday, 13 November 2017 at 19:07:53, Peter Geoghegan wrote:
On Mon, Nov 13, 2017 at 12:27 AM, Andreas Joseph Krogh
  wrote:
 > In PG-10, with ICU enabled, are abbreviated keys now enabled?

 Yes. ICU will use abbreviated keys on every platform, including Windows.

 > If so, using locale=nb_NO.UTF-8, do I have to use an ICU-specific locale to
 > take advantage of abbreviated keys?

 You need to use an ICU collation. It must be a per-column collation,
 as you cannot currently use ICU for an entire database. (This
 limitation should be removed in the next release or two.)
 
Ok, so I have to explicitly specify like this:
 
create table test(id serial primary key, name varchar collate "nb_NO" not 
null);
  
Will ICU be used here as long as PG is compiled with ICU support, as the 
debian-packages are, or do I have to specify collation-provider?
 
Do I have to explicitly specify collation when using ORDER by on that column 
for index and abbreviated keys to be used?
 
Thanks.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Peter Geoghegan
On Mon, Nov 13, 2017 at 12:27 AM, Andreas Joseph Krogh
 wrote:
> In PG-10, with ICU enabled, are abbreviated keys now enabled?

Yes. ICU will use abbreviated keys on every platform, including Windows.

> If so, using locale=nb_NO.UTF-8, do I have to use an ICU-specific locale to 
> take advantage of abbreviated keys?

You need to use an ICU collation. It must be a per-column collation,
as you cannot currently use ICU for an entire database. (This
limitation should be removed in the next release or two.)

-- 
Peter Geoghegan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL walsender process doesn't exist after "pg_ctl stop -m fast"

2017-11-13 Thread Tom Lane
y39chen  writes:
> We encounter one problem that the PostgreSQL walsender process doesn't exit
> after "pg_ctl stop -m fast".
> Uses PostgreSQL 9.6.2
 
There was a fix in 9.6.4 that's at least related to this problem.
It would be interesting to see if you can still reproduce it on
current 9.6.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sync the data's from catalog table

2017-11-13 Thread Dinesh kumar
Hi,

Whenever the postgres user tries to modify a user account's password
column in the pg_authid table, we need a trigger on the catalog table
(pg_authid) that raises the message "the password column has been restricted
and should not be modified". Is the above scenario possible in Postgres?
If it is, please guide me on how to proceed with this.
And one more question: can we restrict the postgres user from modifying the
pg_authid table?  Please share your thoughts on it.

Thanks


On Mon, Nov 13, 2017 at 2:12 PM, Laurenz Albe 
wrote:

> Dinesh kumar wrote:
> > How can I sync the data from pg_authid to a manually created table (user
> > table) whenever an update or insert happens on the pg_authid table?
>
> You cannot do this, because you cannot define triggers on catalog tables.
>
> The question is:
> Why do you want to do this? What are you trying to achieve?
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Adrien Nayrat
On 11/13/2017 09:27 AM, Andreas Joseph Krogh wrote:
>  
> In PG-10, with ICU enabled, are abbreviated keys now enabled?
>  

Hello,


I think yes :

src/backend/utils/adt/varlena.c

1876 /*
1877  * Unfortunately, it seems that abbreviation for non-C collations is
1878  * broken on many common platforms; testing of multiple versions of glibc
1879  * reveals that, for many locales, strcoll() and strxfrm() do not return
1880  * consistent results, which is fatal to this optimization.  While no
1881  * other libc other than Cygwin has so far been shown to have a problem,
1882  * we take the conservative course of action for right now and disable
1883  * this categorically.  (Users who are certain this isn't a problem on
1884  * their system can define TRUST_STRXFRM.)
1885  *
1886  * Even apart from the risk of broken locales, it's possible that there
1887  * are platforms where the use of abbreviated keys should be disabled at
1888  * compile time.  Having only 4 byte datums could make worst-case
1889  * performance drastically more likely, for example.  Moreover, macOS's
1890  * strxfrm() implementation is known to not effectively concentrate a
1891  * significant amount of entropy from the original string in earlier
1892  * transformed blobs.  It's possible that other supported platforms are
1893  * similarly encumbered.  So, if we ever get past disabling this
1894  * categorically, we may still want or need to disable it for particular
1895  * platforms.
1896  */
1897 #ifndef TRUST_STRXFRM
1898 if (!collate_c && !(locale && locale->provider == COLLPROVIDER_ICU))
1899     abbreviate = false;
1900 #endif


But I did not do any test to compare performances.

Regards,

-- 
Adrien NAYRAT

http://dalibo.com - http://dalibo.org



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] pg on Debian servers

2017-11-13 Thread Mark Morgan Lloyd

On 12/11/17 19:15, Karsten Hilbert wrote:

On Sat, Nov 11, 2017 at 01:03:18PM +, Mark Morgan Lloyd wrote:


Several legacy programs written in Delphi ground to a halt this morning,
which turned out to be because a Debian system had updated its copy of
PostgreSQL and restarted the server, which broke any live connections.

At least some versions of Delphi, not to mention other IDE/RAD tools with
database-aware components, don't automatically try to reestablish a database
session that's been interrupted. In any event, an unexpected server restart
(irrespective of all investment in UPSes etc.) has the potential of playing
havoc on a clustered system.

Is there any way that either the package maintainer or a site
administrator/programmer such as myself can mark the Postgres server
packages as "manual upgrade only" or similar? Or since I'm almost certainly
not the first person to be bitten by this, is there a preferred hack in
mitigation?


Apart from that (putting packages on hold), PostgreSQL
updates on Debian don't upgrade existing clusters
automatically. They do create a new cluster but the old one
is kept around and stays running, IIRC even on the very same
port.

(Having gone all the way from PG 7.1 to PG 10 on Debian :)


With the caveat that Debian has only comparatively-recently introduced 
unattended updates as the default... I think only with Stretch. If 
you're still on Jessie you can yet be saved :-)



What did

pg_lsclusters

say ?


I don't have it from the time of the problem, but currently it gives me

Ver Cluster Port Status Owner    Data directory               Log file
9.6 main    5432 online postgres /var/lib/postgresql/9.6/main /var/log/postgresql/postgresql-9.6-main.log


i.e. a single-server system, although I've since done a manual restart 
so that I could change some DIMMs.


However syslog and postgresql-9.6-main.log show me this:

Nov 11 06:27:38 postgres1 systemd[1]: Starting Daily apt upgrade and 
clean activities...

Nov 11 06:28:05 postgres1 systemd[1]: Reloading.
Nov 11 06:28:07 postgres1 systemd[1]: Reloading.
Nov 11 06:28:07 postgres1 systemd[1]: Stopped PostgreSQL RDBMS.
Nov 11 06:28:07 postgres1 systemd[1]: Stopping PostgreSQL Cluster 
9.6-main...

Nov 11 06:28:08 postgres1 systemd[1]: Stopped PostgreSQL Cluster 9.6-main.
Nov 11 06:28:10 postgres1 systemd[1]: Reloading.

2017-11-11 06:28:07.587 UTC [675] LOG:  received fast shutdown request
2017-11-11 06:28:07.587 UTC [675] LOG:  aborting any active transactions
[Session names here]
2017-11-11 06:28:07.607 UTC [730] LOG:  autovacuum launcher shutting down
[More session names here]
2017-11-11 06:28:07.680 UTC [727] LOG:  shutting down
2017-11-11 06:28:07.984 UTC [675] LOG:  database system is shut down
2017-11-11 06:28:13.039 UTC [11122] LOG:  database system was shut down 
at 2017-11-11 06:28:07 UTC
2017-11-11 06:28:13.081 UTC [11122] LOG:  MultiXact member wraparound 
protections are now enabled

2017-11-11 06:28:13.085 UTC [11126] LOG:  autovacuum launcher started
2017-11-11 06:28:13.085 UTC [11121] LOG:  database system is ready to 
accept connections
2017-11-11 06:28:13.371 UTC [11128] [unknown]@[unknown] LOG:  incomplete 
startup packet


All live applications saw that as a loss of database connectivity, yet 
when I was alerted by their squeals of anguish (MIDI on app servers has 
its uses :-) I found the database server running and accepting connections.



There must have been something additional at play.


The apps are written in Delphi, I admit not a very recent version and 
they're due to be converted to Lazarus which is an open-source and 
portable clone. I'll defend my choice of language since it is, 
basically, the best "4GL" you'll find.


However one flaw of Delphi etc. is that they assume that they can safely 
hold a database session open for an extended period. I can't speak for 
Delphi any more since it has, basically, priced itself out of our league 
particularly taking into account its lack of portability, but 
FPC/Lazarus appears to have something which is intended to reconnect a 
lost session, although it's so far unimplemented.


So I've got multiple options for fixing this at the application level: 
either fill in the unimplemented bit of the database control in the 
Lazarus Class Library, or prevent apps from holding database connections 
open. But the real problem, I feel, is that Debian is enabling 
unattended upgrades without checking with the user, and while an 
attended upgrade normally asks for confirmation before restarting a 
daemon, an unattended one doesn't.
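
(In the meantime the server packages can be put on hold, as mentioned
upthread - a sketch, with the package names as examples:)

# tell apt (and unattended-upgrades) to leave these packages alone
sudo apt-mark hold postgresql-9.6 postgresql-common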


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sync the data's from catalog table

2017-11-13 Thread Laurenz Albe
Dinesh kumar wrote:
> How can I sync the data from pg_authid to a manually created table (user 
> table) whenever an update or insert happens on the pg_authid table?

You cannot do this, because you cannot define triggers on catalog tables.

The question is:
Why do you want to do this? What are you trying to achieve?

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Migrating plattaform

2017-11-12 Thread John R Pierce

On 11/8/2017 11:38 AM, Valdir Kageyama wrote:


I need to migrate Postgres from Linux on IBM Power to Oracle Linux 
on SPARC.


Is it possible to copy the datafiles to the new environment, or do I 
need to use other means of copying the data?

For example: pg_dump/pg_restore.



pretty sure you can't copy binary database files between architectures, 
as various data structures have different binary representations.


sure, pg_dump  -Fc | pg_restore, that works fine across architectures.
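
A sketch of that pipe, with placeholder hosts and database name:

# dump on one architecture, restore on the other; no intermediate file needed
pg_dump -Fc -h old-power-host mydb | pg_restore -h new-sparc-host -d mydb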


--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-12 Thread Tom Lane
Andres Freund  writes:
> we could really do better than just wonder whether our signal to
> shutdown was received or not.  There probably should be a quite short
> timeout for the server to change status, and then a much longer one for
> that shutdown to finish.

While I don't want to just raise the timeout, I could get behind a more
thorough rethinking of the behavior there.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple unnests in query

2017-11-12 Thread Tom Lane
Aron Widforss  writes:
> Is this first query expected behavior? If so, what is the rationale?

The short answer is "because it's always worked that way".  You
might find the last half of section 37.4.8 illuminating:

https://www.postgresql.org/docs/devel/static/xfunc-sql.html#XFUNC-SQL-FUNCTIONS-RETURNING-SET

but if you're on a pre-v10 release, pay close attention to what it says
about the difference between v10 and pre-v10 behavior.

> I would have expected nine rows returned (as in my second example).

Your second example has approximately nothing to do with your first.
It has only one SRF in the SELECT list, so there's not much doubt
about what ought to happen.
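
For what it's worth, the behavioral difference is easy to see with SRFs of
unequal length (a sketch):

-- v10 runs the SRFs in lockstep: 3 rows, the shorter array padded with NULLs;
-- pre-v10 returned LCM(2,3) = 6 rows
select unnest(array[1,2]) as a, unnest(array[3,4,5]) as b;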

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-12 Thread Andres Freund
On 2017-11-12 14:26:42 -0500, Tom Lane wrote:
> Christoph Berg  writes:
> > The default systemd timeout seems to be 90s. I have already changed
> > the systemd timeout to infinity (start) and 1h (stop), so only the
> > default pg_ctl timeout remains (60s), which I'd rather not override
> > unilaterally.
> 
> > That said, isn't 60s way too small for shutting down larger clusters?
> > And likewise for starting?
> 
> Well, that's tied into the fact that pg_ctl doesn't disturb the server's
> state if it gives up waiting.  If it did, we would certainly use a larger
> timeout or none at all.

Hm. So partially that's also related to the fact that we didn't have a
good way to know whether the server reacted to the shutdown request or
not. With the infrastructure from

commit f13ea95f9e473a43ee4e1baeb94daaf83535d37c
Author: Tom Lane 
Date:   2017-06-28 17:31:24 -0400

Change pg_ctl to detect server-ready by watching status in postmaster.pid.

we could really do better than just wonder whether our signal to
shutdown was received or not.  There probably should be a quite short
timeout for the server to change status, and then a much longer one for
that shutdown to finish.


> I don't feel a big need to change that default,
> but if you have a surrounding script that is going to take adverse action
> after a timeout then you need to use a larger value ...

Didn't we have to fiddle with this a bunch in the regression tests, to
get things to work properly on slow animals? If we had to do that, other
people had to do so as well. Not the friendliest experience...

Greetings,

Andres Freund




Re: [GENERAL] pg on Debian servers

2017-11-12 Thread rob stone


On Sat, 2017-11-11 at 14:30 +0100, Magnus Hagander wrote:
> The init.d script is not used with systemd.
> 
Hello Magnus,

Many months ago, on a bog-standard Debian set-up, I did a re-boot and
ended up with postmasters running for 9.2, 9.4, 9.5 and 9.6, all started
one after the other. There was a script in init.d which read through
/usr/lib/postgresql and started Postgres for each version it found.
Fortunately, they were all listening on different ports.

The fix was to disable that script as well as the systemd service.
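
Disabling both might look something like this (the cluster names are
illustrative; Debian's unit template is postgresql@.service):

    sudo systemctl disable postgresql@9.2-main postgresql@9.4-main
    sudo update-rc.d postgresql disable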

Doing the upgrade to 10 in a few weeks. Will let you know how it goes.

I assume you are aware of this DSA:


Debian Security Advisory DSA-4029-1

---

Package: postgresql-common
CVE ID : CVE-2017-8806

It was discovered that the pg_ctlcluster, pg_createcluster and
pg_upgradecluster commands handled symbolic links insecurely which
could result in local denial of service by overwriting arbitrary files.

For the oldstable distribution (jessie), this problem has been fixed
in version 165+deb8u3.

For the stable distribution (stretch), this problem has been fixed in
version 181+deb9u1.


Cheers,
Rob






Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-12 Thread Tom Lane
Christoph Berg  writes:
> The default systemd timeout seems to be 90s. I have already changed
> the systemd timeout to infinity (start) and 1h (stop), so only the
> default pg_ctl timeout remains (60s), which I'd rather not override
> unilaterally.

> That said, isn't 60s way too small for shutting down larger clusters?
> And likewise for starting?

Well, that's tied into the fact that pg_ctl doesn't disturb the server's
state if it gives up waiting.  If it did, we would certainly use a larger
timeout or none at all.  I don't feel a big need to change that default,
but if you have a surrounding script that is going to take adverse action
after a timeout then you need to use a larger value ...
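
For illustration, a larger per-invocation wait can be requested with
pg_ctl's -t option (the data directory path here is a placeholder):

    pg_ctl stop -D /var/lib/postgresql/10/main -m fast -t 3600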

regards, tom lane




Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-12 Thread Christoph Berg
Re: Tom Lane 2017-11-12 <20802.1510513...@sss.pgh.pa.us>
> Agreed, but I think Peter has a point: why is there a timeout at all,
> let alone one as short as 30 seconds?  Since systemd doesn't serialize
> service starts unnecessarily, there seems little value in giving up
> quickly.  And we know that cases such as crash recovery may take more
> than that.

The default systemd timeout seems to be 90s. I have already changed
the systemd timeout to infinity (start) and 1h (stop), so only the
default pg_ctl timeout remains (60s), which I'd rather not override
unilaterally.

https://anonscm.debian.org/cgit/pkg-postgresql/postgresql-common.git/tree/systemd/postgresql@.service#n18
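
For reference, such overrides can live in a drop-in rather than the
packaged unit file; a sketch, assuming the Debian postgresql@.service
template:

    # /etc/systemd/system/postgresql@.service.d/timeout.conf
    [Service]
    TimeoutStartSec=infinity
    TimeoutStopSec=1h

followed by systemctl daemon-reload.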

That said, isn't 60s way too small for shutting down larger clusters?
And likewise for starting?

Christoph




Re: [GENERAL] pg on Debian servers

2017-11-12 Thread Karsten Hilbert
On Sat, Nov 11, 2017 at 01:03:18PM +, Mark Morgan Lloyd wrote:

> Several legacy programs written in Delphi ground to a halt this morning,
> which turned out to be because a Debian system had updated its copy of
> PostgreSQL and restarted the server, which broke any live connections.
> 
> At least some versions of Delphi, not to mention other IDE/RAD tools with
> database-aware components, don't automatically try to reestablish a database
> session that's been interrupted. In any event, an unexpected server restart
> (irrespective of all investment in UPSes etc.) has the potential of playing
> havoc on a clustered system.
> 
> Is there any way that either the package maintainer or a site
> administrator/programmer such as myself can mark the Postgres server
> packages as "manual upgrade only" or similar? Or since I'm almost certainly
> not the first person to be bitten by this, is there a preferred hack in
> mitigation?

Apart from that (putting packages on hold), PostgreSQL
updates on Debian don't upgrade existing clusters
automatically. They do create a new cluster but the old one
is kept around and stays running, IIRC even on the very same
port.
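
Putting the packages on hold would look something like this (package
names depend on the installed major version):

    sudo apt-mark hold postgresql-9.6 postgresql-common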

(Having gone all the way from PG 7.1 to PG 10 on Debian :)

What did pg_lsclusters say?
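
(Its output lists one line per cluster with version, cluster name, port,
status, owner, data directory, and log file, so any unexpected extra
clusters show up immediately.)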

There must have been something additional at play.

Regards,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-12 Thread Tom Lane
Christoph Berg  writes:
> Re: Peter J. Holzer 2017-11-12 <20171112173559.m6chmbyf4vz6f...@hjp.at>
>> Wouldn't it be better to remove the timeout?

> If you don't want to block, don't depend on the database service. That
> question is independent from the timeout.

Agreed, but I think Peter has a point: why is there a timeout at all,
let alone one as short as 30 seconds?  Since systemd doesn't serialize
service starts unnecessarily, there seems little value in giving up
quickly.  And we know that cases such as crash recovery may take more
than that.

regards, tom lane




Re: [GENERAL] Combine multiple text search configuration

2017-11-12 Thread Johannes Graën
Hi,


On 2017-11-06 09:17, hmidi slim wrote:
> Hi,
> I want to know if I can combine multiple text search configurations
> when using FTS.
> Is there an option like this:
> to_tsvector(['english', 'french'], document)
>
> Trying to create a new text configuration:
> Create text search configuration test (copy=simple)
> Alter text search configuration test
> add mapping for asciiword with english_stem,french_stem
>
> This query doesn't work. How can I combine multiple text search
> configurations if I need more than one in my query to search for a word?

what about using two indexes, one for each language? If your documents
can either be English OR French, the English OR the French vector should
match an English OR French tsquery.
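
A sketch of that approach, with table and column names assumed:

    CREATE INDEX docs_fts_en ON docs USING gin (to_tsvector('english', document));
    CREATE INDEX docs_fts_fr ON docs USING gin (to_tsvector('french', document));

    SELECT * FROM docs
    WHERE to_tsvector('english', document) @@ to_tsquery('english', 'word')
       OR to_tsvector('french', document) @@ to_tsquery('french', 'word');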

It is not clear to me how combining two stemmers should practically work
since each word can only have one stem. If you have multilingual
documents or texts with code switching, you could also try combining the
two vectors both for the documents and the query:

(to_tsvector('english', document) || to_tsvector('french', document)) @@
(to_tsquery('english', query) || to_tsquery('french', query))





Re: [GENERAL] Combine multiple text search configuration

2017-11-12 Thread Johannes Graën
On 2017-11-07 08:27, hmidi slim wrote:
> Hi,
> Thanks for your proposition, but when using this query:
> (to_tsvector('english', document) || to_tsvector('french', document)) @@
> (to_tsquery('english', query) || to_tsquery('french', query))
> I think the performance would decrease, and it is not a good solution
> for a big amount of data. Is it?

You have more lexemes when you combine two languages, but not twice as
many, as there will be some overlap. That means your index will also be
bigger than a single-language index. Anyhow, I would expect this
variant to perform better than querying two single-language columns
simultaneously. Maybe one of the FTS developers could comment on this?
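
If it helps, the combined expression itself can be indexed, since the
two-argument to_tsvector form is immutable (table and column names
assumed):

    CREATE INDEX docs_fts_en_fr ON docs USING gin
        ((to_tsvector('english', document) || to_tsvector('french', document)));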



