Re: PostgreSQL Kerberos Authentication

2018-01-30 Thread Peter Eisentraut
On 1/30/18 11:13, HIRTZ Jorge Alberto TENARIS wrote:
> [postgres@hostname data]$ psql  -h hostname -U usern...@domain.com postgres
> 
> psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code
> may provide more information
> 
> GSSAPI continuation error: Server not found in Kerberos database

Check that your DNS resolves everything correctly.

You can find some ideas about this error on the internet.  It's not a
problem specific to PostgreSQL.  It looks like you have things set up
correctly.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: session_replication_role meaning?

2018-01-30 Thread Peter Eisentraut
On 1/30/18 02:48, Luca Ferrari wrote:
> Hi all,
> now this should be trivial, but I cannot understand what is the
> purpose of session_replication_role
> 
> or better, when I should use it in a way different from 'origin'.

The documentation was recently updated in the master branch, so maybe
you will find this explanation a bit more detailed:
https://www.postgresql.org/docs/devel/static/runtime-config-client.html#GUC-SESSION-REPLICATION-ROLE

> I've a logical replication setup and both master and client
> connections default to 'origin', so it has to be specified manually.

The global setting for an instance is not affected by whether you are
replicating.  The replication system is supposed to set the parameter
when it is applying changes, e.g.,
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/replication/logical/worker.c;h=eedc3a8816bc2f157e62a664bcc57b5f87530be9;hb=958fe549884928cd3bdf009993e9a05df5fd6cee#l1521
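
For example, to apply a batch of changes by hand without firing ordinary
triggers (a sketch; the table is made up):

    BEGIN;
    SET LOCAL session_replication_role = replica;  -- ordinary (ENABLE) triggers will not fire
    UPDATE accounts SET balance = balance + 10 WHERE id = 42;
    COMMIT;  -- a SET LOCAL setting ends with the transaction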

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg10 logical replication set schema

2018-01-30 Thread Peter Eisentraut
On 1/30/18 08:47, Gabriel Furstenheim Milerud wrote:
>     create table test_table_replication (a int);
>     create publication test_table_replication_pub for table
> test_table_replication;

>   create table some_schema.test_table_replication (a int);
>   create subscription test_table_replication
>         connection 'host=localhost dbname=postgres user=standby_user
> pass=pass port=$master_port'
> 
> And I get the error relation "public.test_table_replication" does not exist.
> 
> Is it possible to tell the subscription that it should use the table
> some_schema.test_table_replication instead of
> public.test_table_replication?

No.

> I can't find any reference in the docs
> about tables or schemas in subscriptions.

Here:
https://www.postgresql.org/docs/10/static/logical-replication-subscription.html
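
The subscriber-side table has to have the same schema-qualified name as on
the publisher, so for this example (a sketch; connection details are
placeholders):

    -- on the subscriber: match the publisher's schema and table name
    CREATE TABLE public.test_table_replication (a int);
    CREATE SUBSCRIPTION test_table_replication
        CONNECTION 'host=localhost dbname=postgres user=standby_user password=secret port=5432'
        PUBLICATION test_table_replication_pub;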

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: How to Optimize pg_trgm Performance

2018-01-30 Thread Ivan E. Panchenko

Hi Igal,

On 29.01.2018 02:42, Igal @ Lucee.org wrote:


I want to use pg_trgm for auto-suggest functionality.  I created a 
Materialized View with the information that I need, with the relevant 
columns being (keywords text, rank int). keywords is the column from 
which I build the tri-grams, and rank is some popularity factor so 
that popular results will show up higher than less popular results 
given the same tri-gram distance.


I want to return results in the order of [distance], [distance_word], 
[rank].  The input comes from the user and is not known in advance.  
My query is as follows:


    SELECT title
          ,id
          ,(input <-> keywords) AS distance
          ,(input <<-> keywords) AS distance_word
          ,rank
    FROM (VALUES (cast('red pill' AS text))) consts(input)
        ,mv_autosuggest
    ORDER BY 3, 4, 5
    LIMIT 20;

This gives me pretty good results, but it takes too long and is not 
likely to scale well.


I have created two indexes but neither seems to be used:

CREATE INDEX mv_autosuggest_keywords_tgrm_gist ON 
staging.mv_autosuggest USING gist (keywords gist_trgm_ops);


CREATE INDEX mv_autosuggest_keywords_tgrm_gin ON 
staging.mv_autosuggest USING gin (keywords gin_trgm_ops);


This is the result of explain analyze:

QUERY PLAN
----------
Limit  (cost=356.41..356.46 rows=20 width=51) (actual time=163.132..163.135 rows=20 loops=1)
  ->  Sort  (cost=356.41..372.96 rows=6619 width=51) (actual time=163.130..163.131 rows=20 loops=1)
        Sort Key: (('red pill'::text <-> mv_autosuggest.keywords)), (('red pill'::text <<-> mv_autosuggest.keywords)), mv_autosuggest.rank
        Sort Method: top-N heapsort  Memory: 28kB
        ->  Seq Scan on mv_autosuggest  (cost=0.00..180.29 rows=6619 width=51) (actual time=0.263..161.289 rows=6619 loops=1)
Planning time: 0.139 ms
Execution time: 163.174 ms

How can I improve the performance here?


This can be improved if you sort only by the distances
(try ORDER BY 3, 4
or ORDER BY 3
or ORDER BY 4).

Then you should get a plan like

Index Scan using mv_autosuggest_keywords_tgrm_gist on mv_autosuggest
  Order By: ((keywords <-> 'red pill'::text) AND (keywords <->> 'red pill'::text))


which means that KNN index search is enabled:
http://www.sai.msu.su/~megera/postgres/talks/pgcon-2010-1.pdf


If you want to sort it also by rank, you can make a two-level 
construction like:


SELECT * FROM (
   SELECT ... ORDER BY 3, 4 LIMIT <n>  /* make some empirical
redundant limit here */

) foo ORDER BY 5 LIMIT <m>
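
A concrete version of that sketch, using the columns from the query above
(the inner limit of 200 is just an assumed empirical value):

    SELECT * FROM (
        SELECT title
              ,id
              ,('red pill' <-> keywords) AS distance
              ,('red pill' <<-> keywords) AS distance_word
              ,rank
        FROM mv_autosuggest
        ORDER BY 3, 4
        LIMIT 200  /* empirical redundant limit */
    ) foo
    ORDER BY 5
    LIMIT 20;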


Regards,
Ivan Panchenko





Thank you,

Igal Sapir
Lucee Core Developer
Lucee.org 





Re: Working with JSONB data having node lists

2018-01-30 Thread David G. Johnston
On Tue, Jan 30, 2018 at 2:47 PM, geoff hoffman  wrote:

> JSONB fields are very attractive for our current use, particularly as
> straight key-value pairs in the JSONB data;
>
> but we are having trouble finding documentation on how to query lists (of
> scalars or objects) in nodes of the JSONB data.
>

https://www.postgresql.org/docs/10/static/functions-json.html


>  '{"first”:"Phil","last”:"Peters”,"subscriptions”:[101,202,303]}')
> ​​
>
>
> How do I craft a query to find all subscribers to program 202?
>

SELECT data->'subscriptions' ? '202'

The docs speak of "top-level keys" but that also includes array element
values.
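
One caveat worth double-checking: the ? operator matches string array
elements, so if the subscriptions array holds numbers (as in the example)
jsonb containment is the safer test (a sketch against the contacts table
from the original post):

    SELECT email
    FROM contacts
    WHERE data->'subscriptions' @> '202'::jsonb;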

>  '{"first”:"Bob","last”:"Baker”,"downloads":[{"date":"2018-01-01
> 00:00:00","pubid”:123},{"date":"2018-02-02 00:00:00","pubid”:456}]}')
>
> How do I craft a query to find all contacts who have downloaded pubid 123?
>

If you can do this one without unnesting the downloads array I do not know
how.  Having done that: (WHERE dlarray->>'pubid' = '123')
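
Spelled out (a sketch, again against the contacts table from the original
post):

    SELECT c.email
    FROM contacts c,
         jsonb_array_elements(c.data->'downloads') AS dl
    WHERE dl->>'pubid' = '123';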

David J.


Re: Alter view with dependence without drop view!

2018-01-30 Thread Thomas Kellerer

Elson Vaz wrote on 30.01.2018 at 14:40:

Hello!

I want to make a change in one view that has dependent views, so when I try
to make the change I'm blocked because of this. What is the best solution?



This can easily be dealt with when using a schema management tool.

We use Liquibase for this and the main migration script simply drops all views 
before running any migration (so that table columns can be removed or renamed 
without having to think about view dependencies).

Once the table migrations are done, all views are re-created automatically.

The Liquibase migration script and the SQL source for the views are stored 
together in Git.

Thomas







Re: ERROR: invalid memory alloc request size 1073741824

2018-01-30 Thread Tomas Vondra


On 01/30/2018 10:43 PM, Jan Wieck wrote:
> 
> 
> On Tue, Jan 30, 2018 at 12:35 PM, Stefan Blanke
> mailto:stefan.bla...@framestore.com>> wrote:
> 
> Hello,
> 
> We've tripped over an error when doing a "COPY.. TO STDOUT WITH
> BINARY" query.
> 
> "ERROR:  invalid memory alloc request size 1073741824"
> (exactly 1GB)
> 
> 
> I have my money on a corrupted TOAST entry. Is this happening on
> trustworthy hardware or beige box with no ECC or RAID?
> 

I'll bet you it's not that. It's quite unlikely that would fail with
exactly 1GB request size. It seems much more like a buffer that we keep
at a power of 2. The question is which one.


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Alter view with dependence without drop view!

2018-01-30 Thread Ken Tanzer
>
> I want to make a change in one view that has dependent views, so when I try
> to make the change I'm blocked because of this. What is the best solution?


Just to be clear, there are some changes you can make, some you can't.
Basically you can't change the definition of existing columns in the view.
But if you want to change the logic that generates those columns, or add
new columns at the end, you can do that with CREATE OR REPLACE VIEW ...
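
For example (a sketch with made-up names): the replacement keeps the
existing columns' names and types in order and only appends a new column,
so it is accepted:

    CREATE VIEW active_orders AS
        SELECT id, total FROM orders WHERE status = 'active';

    -- changed logic, same leading columns, one new column appended:
    CREATE OR REPLACE VIEW active_orders AS
        SELECT id, total, created_at
        FROM orders
        WHERE status IN ('active', 'pending');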



On Tue, Jan 30, 2018 at 10:12 AM, Melvin Davidson 
wrote:

>
>
> On Tue, Jan 30, 2018 at 12:48 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Tue, Jan 30, 2018 at 8:34 AM, bto...@computer.org <
>> bto...@broadstripe.net> wrote:
>>
>>>
>>> When this procedure got old, I started using a script created using
>>> pg_dump and pg_restore, as initially outlined here:
>>>
>>>
>> ​Yeah, the short answer is PostgreSQL doesn't make it possible to edit
>> "middle" views without having the code on hand for all dependent views​ so
>> you can recreate them.  You either maintain those views and order manually
>> or you rely on pg_dump to figure it out for you (the former, with version
>> control, is highly recommended).
>>
>>
I agree it's best to keep the definitions of the views yourself--among other
things it lets you keep comments.  But you can also generate a
perfectly functional definition of a view with \d+.
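
pg_get_viewdef() gives the same thing in a scriptable form, e.g. (the view
name is hypothetical):

    SELECT pg_get_viewdef('public.my_view'::regclass, true);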



> *Just a side note, it is not a good practice to create views based on
> other views. *
>
> *Multiple reasons are stated here:*
>
>
>
> *https://dba.stackexchange.com/questions/5487/is-nested-view-a-good-database-design
> *
>
> *To summarize, you suffer performance degradation, column names and the
> tables referenced become obscured, and you incur the problems you are now
> experiencing.*
>
>
That link you pointed to includes arguments both for and against.  It seems
to me that performance weighs against nested views, clarity can cut both
ways, and nested views can help centralize and encapsulate business logic,
avoiding duplication and maintaining consistency.  Whether they are a
good idea or not depends on your goals, priorities and specific situation.

Cheers,
Ken




-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Working with JSONB data having node lists

2018-01-30 Thread geoff hoffman
JSONB fields are very attractive for our current use, particularly as straight 
key-value pairs in the JSONB data;

but we are having trouble finding documentation on how to query lists (of 
scalars or objects) in nodes of the JSONB data.

~~~

I have the table as follows:

CREATE TABLE public.contacts
(
id integer NOT NULL DEFAULT nextval('contacts_id_seq'::regclass),
uuid uuid NOT NULL DEFAULT gen_random_uuid(),
vertical_id integer NOT NULL,
inboundlog_id integer NOT NULL,
email character varying(255) COLLATE pg_catalog."default" NOT NULL,
data jsonb NOT NULL,
created_at timestamp(0) without time zone,
updated_at timestamp(0) without time zone,
CONSTRAINT contacts_pkey PRIMARY KEY (id)
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;

~~~

I have a record as follows:

INSERT INTO contacts
(uuid, vertical_id, inboundlog_id, email, data)
VALUES
(gen_random_uuid(), 1, 1, 'p...@site.com', 
 '{"first":"Phil","last":"Peters","subscriptions":[101,202,303]}')


How do I craft a query to find all subscribers to program 202?

~~~

I have another record as follows:

INSERT INTO contacts
(uuid, vertical_id, inboundlog_id, email, data)
VALUES
(gen_random_uuid(), 1, 1, 'b...@domain.com', 
 '{"first":"Bob","last":"Baker","downloads":[{"date":"2018-01-01 
00:00:00","pubid":123},{"date":"2018-02-02 00:00:00","pubid":456}]}')

How do I craft a query to find all contacts who have downloaded pubid 123?

TIA -
Geoff





Re: ERROR: invalid memory alloc request size 1073741824

2018-01-30 Thread Jan Wieck
On Tue, Jan 30, 2018 at 12:35 PM, Stefan Blanke <
stefan.bla...@framestore.com> wrote:

> Hello,
>
> We've tripped over an error when doing a "COPY.. TO STDOUT WITH BINARY"
> query.
>
> "ERROR:  invalid memory alloc request size 1073741824"
> (exactly 1GB)
>

I have my money on a corrupted TOAST entry. Is this happening on
trustworthy hardware or beige box with no ECC or RAID?


Regards, Jan





>
> So a palloc() call is failing on the AllocSizeIsValid() check.
>
> Does anyone know if this is a safety catch we are tripping with a bad query -
> or whether this check is something that should never be hit (i.e. a bug).
>
> There has been some discussion before about making a change before the
> 9.5beta1:
>
> https://www.postgresql.org/message-id/flat/9A28C8860F777E439AA12E8AEA7694F8010F6F3F%40BPXM15GP.gisp.nec.co.jp
>
> https://www.postgresql.org/message-id/flat/CAKJS1f_sxPyBHF6%3DnEYwPJdKKDNjVDP91b_EJApfyGTei%3DViDw%40mail.gmail.com
>
> postgresql 9.5.4 (2PGDG.rhel6)
> work_mem = 256MB
>
> Thanks,
> Stefan
>
>


-- 
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info


Re: ERROR: invalid memory alloc request size 1073741824

2018-01-30 Thread Tomas Vondra

On 01/30/2018 06:35 PM, Stefan Blanke wrote:
> Hello,
> 
> We've tripped over an error when doing a "COPY.. TO STDOUT WITH BINARY"
> query.
> 
> "ERROR:  invalid memory alloc request size 1073741824"
> (exactly 1GB)
> 
> So a palloc() call is failing on the AllocSizeIsValid() check.
> 
> Does anyone know if this is a safety catch we are tripping with a bad
> query - or whether this check is something that should never be hit
> (i.e. a bug).
> 

That very much depends on where exactly the failure happens - it might
be both (or relying on an assumption that changed over time). It's not clear
if it fails during query execution or while formatting the output (I seem to
remember there were/are issues with rows containing multiple large bytea
values, for example).

Can you attach gdb to the backend, set breakpoint on that elog and share
the backtrace? Then we can investigate further.

In case you don't have experience with doing that, here's a howto on the
community wiki:

https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD

> There has been some discussion before about making a change before the
> 9.5beta1:
> 
> https://www.postgresql.org/message-id/flat/9A28C8860F777E439AA12E8AEA7694F8010F6F3F%40BPXM15GP.gisp.nec.co.jp
> 
> https://www.postgresql.org/message-id/flat/CAKJS1f_sxPyBHF6%3DnEYwPJdKKDNjVDP91b_EJApfyGTei%3DViDw%40mail.gmail.com
> 

Both of those threads are about a hashjoin issue, which should have been
fixed in 9.5. Moreover, you haven't shared the query and its query
plan, so it's unclear if it's doing hash joins at all. That would be
useful for investigating this issue, BTW.
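
(EXPLAIN does not accept COPY, so the usual trick is to plan the query
wrapped by the COPY; a sketch with a hypothetical query:)

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM some_table;   -- the query inside the COPY

    COPY (SELECT * FROM some_table) TO STDOUT WITH BINARY;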

> 
> postgresql 9.5.4 (2PGDG.rhel6)
> work_mem = 256MB
> 

FWIW you're missing more than a year of bugfixes.


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Streaming replication: replicant server not starting (9.4.4, Win 2008)

2018-01-30 Thread Ibrahim Edib Kokdemir
Hi Tim,
You have to enable the hot_standby parameter on the replica server.  See
the link below:
https://wiki.postgresql.org/wiki/Hot_Standby
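
Concretely, that means at minimum the following in the replica's
postgresql.conf (a sketch based on the config posted below; a restart is
required):

hot_standby = on          # accept read-only connections during recovery
wal_level = hot_standby   # worth matching the master; matters if the replica is promoted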

Regards,
Ibrahim.

2018-01-30 21:16 GMT+03:00 Tim Bowden :

> I've inherited a PG 9.4.4 install on Win 2008 that I'm wanting to
> stream from (abt 80Gb on disk).  Everything seems to be working from
> the "master" side.
>
> I have an AMI of the master host (a vmware instance in our soon to
> close data centre) that has been spun up in AWS (replicant).  In its
> "default" configuration (ie, exactly the same as master) it worked
> fine.
>
> I ran pg_basebackup on the master (from memory- details on work pc):
> pg_basebackup -D "e:\\mybasebackup\\" -F t -R -X f -z -c fast \
>  -h  -U 
>
> Base backup is created fine.  I've copied it up to AWS and extracted it
> into the correct location (after ensuring it is empty) on the replicant
> windows host (tar -xzvf base.tgz from a linux box with the pg data
> drive mounted).  File metadata is changed during the extraction (ie,
> file timestamps- could this possibly be an issue?) but permissions on
> the win2008 replicant host have been set & double checked.
>
> When I try to start the pg server on replicant, the log streaming from
> master works (and keeps working till the server is rebooted), but the
> server can't start for some reason.
>
> Copy of pg log file on replicant:
>
> 2018-01-30 17:18:02 AWST LOG:  database system was shut down in
> recovery at 2018-01-30 17:16:59 AWST
> 2018-01-30 17:18:02 AWST LOG:  entering standby mode
> 2018-01-30 17:18:02 AWST LOG:  redo starts at C26/84018A30
> 2018-01-30 17:18:02 AWST LOG:  consistent recovery state reached at
> C26/8401C6D0
> 2018-01-30 17:18:02 AWST LOG:  invalid record length at C26/8401C6D0
> 2018-01-30 17:18:03 AWST LOG:  started streaming WAL from primary at
> C26/8400 on timeline 1
> 2018-01-30 17:18:03 AWST FATAL:  the database system is starting up
> 2018-01-30 17:18:04 AWST FATAL:  the database system is starting up
> 2018-01-30 17:18:05 AWST FATAL:  the database system is starting up
> 2018-01-30 17:18:06 AWST FATAL:  the database system is starting up
> 2018-01-30 17:18:07 AWST FATAL:  the database system is starting up
> 2018-01-30 17:18:08 AWST FATAL:  the database system is starting up
>
> The last line is repeated for about a minute till it gives up.  It
> leaves behind a bunch of processes that keep streaming log files as
> master creates new wal records.  This keeps going till the replicant
> box is rebooted (easiest way to clean up the processes and free up port
> 5432).
>
> I can keep rebooting the replicant host and on retrying to start the pg
> server, it will apply any new wal files that have been streamed, then
> fail to start the server but keep streaming again.
>
> The Windows service start command for PG:
>
> "C:\Program Files (x86)\PostgreSQL\9.4.4\bin\pg_ctl.exe" runservice -N
> "postgresql-9.4" -D "E:\PostgreSQL\9.4.4\data" -w
>
> Config files:
>
> Master postgresql.conf:
>
> dynamic_shared_memory_type = windows
> wal_level = hot_standby # Was "archive", but we'd like to do
> hot...
> archive_mode = on
> archive_command = 'copy "%p"
> "e:\\PostgreSQL\\9.4.4\\wal_archive\\%f"'  # Windows
> max_wal_senders = 8
> wal_keep_segments = 900 # Sick of losing old segments while I fix this
> max_replication_slots = 8
> log_line_prefix = '%t '
> log_timezone = 'Australia/Perth'
> datestyle = 'iso, dmy'
> timezone = 'Australia/Perth'
> lc_messages = 'English_Australia.1252'
> lc_monetary = 'English_Australia.1252'
> lc_numeric = 'English_Australia.1252'
> lc_time = 'English_Australia.1252'
> default_text_search_config = 'pg_catalog.english'
> listen_addresses = '*'
> port = 5432
> max_connections = 300
> shared_buffers = 500MB
> work_mem = 32MB
> maintenance_work_mem = 128MB
> checkpoint_segments = 100
> random_page_cost = 2.0
> effective_cache_size = 1500MB
> log_destination = 'stderr'
> logging_collector = on
> log_min_duration_statement = 500
> log_line_prefix = '%t '
>
> Replicant postgresql.conf (essentially a copy from master with minimal
> changes):
>
> dynamic_shared_memory_type = windows
> wal_level = archive
> max_standby_streaming_delay = 30s
> wal_receiver_status_interval = 30s
> log_line_prefix = '%t '
> log_timezone = 'Australia/Perth'
> datestyle = 'iso, dmy'
> timezone = 'Australia/Perth'
> lc_messages = 'English_Australia.1252'
> lc_monetary = 'English_Australia.1252'
> lc_numeric = 'English_Australia.1252'
> lc_time = 'English_Australia.1252'
> default_text_search_config = 'pg_catalog.english'
> listen_addresses = '*'
> port = 5432
> max_connections = 300
> shared_buffers = 500MB
> work_mem = 32MB
> maintenance_work_mem = 128MB
> checkpoint_segments = 100
> random_page_cost = 2.0
> effective_cache_size = 1500MB
> log_destination = 'stderr'
> logging_collector = on
> log_min_duration_statement = 500
> log_line_prefix = '%t '
>
> replicant recovery.conf:
>
> standby_mode = 'on'
> primary_conninfo = 'host= user=
> password= connect_timeout=10 application_name=replicant_3'

Streaming replication: replicant server not starting (9.4.4, Win 2008)

2018-01-30 Thread Tim Bowden
I've inherited a PG 9.4.4 install on Win 2008 that I'm wanting to
stream from (abt 80Gb on disk).  Everything seems to be working from
the "master" side.

I have an AMI of the master host (a vmware instance in our soon to
close data centre) that has been spun up in AWS (replicant).  In its
"default" configuration (ie, exactly the same as master) it worked
fine.

I ran pg_basebackup on the master (from memory- details on work pc):
pg_basebackup -D "e:\\mybasebackup\\" -F t -R -X f -z -c fast \
 -h  -U 

Base backup is created fine.  I've copied it up to AWS and extracted it
into the correct location (after ensuring it is empty) on the replicant
windows host (tar -xzvf base.tgz from a linux box with the pg data
drive mounted).  File metadata is changed during the extraction (ie,
file timestamps- could this possibly be an issue?) but permissions on
the win2008 replicant host have been set & double checked.

When I try to start the pg server on replicant, the log streaming from
master works (and keeps working till the server is rebooted), but the
server can't start for some reason.

Copy of pg log file on replicant:

2018-01-30 17:18:02 AWST LOG:  database system was shut down in
recovery at 2018-01-30 17:16:59 AWST
2018-01-30 17:18:02 AWST LOG:  entering standby mode
2018-01-30 17:18:02 AWST LOG:  redo starts at C26/84018A30
2018-01-30 17:18:02 AWST LOG:  consistent recovery state reached at
C26/8401C6D0
2018-01-30 17:18:02 AWST LOG:  invalid record length at C26/8401C6D0
2018-01-30 17:18:03 AWST LOG:  started streaming WAL from primary at
C26/8400 on timeline 1
2018-01-30 17:18:03 AWST FATAL:  the database system is starting up
2018-01-30 17:18:04 AWST FATAL:  the database system is starting up
2018-01-30 17:18:05 AWST FATAL:  the database system is starting up
2018-01-30 17:18:06 AWST FATAL:  the database system is starting up
2018-01-30 17:18:07 AWST FATAL:  the database system is starting up
2018-01-30 17:18:08 AWST FATAL:  the database system is starting up

The last line is repeated for about a minute till it gives up.  It
leaves behind a bunch of processes that keep streaming log files as
master creates new wal records.  This keeps going till the replicant
box is rebooted (easiest way to clean up the processes and free up port
5432).

I can keep rebooting the replicant host and on retrying to start the pg
server, it will apply any new wal files that have been streamed, then
fail to start the server but keep streaming again.

The Windows service start command for PG: 

"C:\Program Files (x86)\PostgreSQL\9.4.4\bin\pg_ctl.exe" runservice -N
"postgresql-9.4" -D "E:\PostgreSQL\9.4.4\data" -w

Config files:

Master postgresql.conf:

dynamic_shared_memory_type = windows
wal_level = hot_standby # Was "archive", but we'd like to do
hot...
archive_mode = on
archive_command = 'copy "%p"
"e:\\PostgreSQL\\9.4.4\\wal_archive\\%f"'  # Windows
max_wal_senders = 8
wal_keep_segments = 900 # Sick of losing old segments while I fix this
max_replication_slots = 8
log_line_prefix = '%t '
log_timezone = 'Australia/Perth'
datestyle = 'iso, dmy'
timezone = 'Australia/Perth'
lc_messages = 'English_Australia.1252'
lc_monetary = 'English_Australia.1252'
lc_numeric = 'English_Australia.1252'
lc_time = 'English_Australia.1252'
default_text_search_config = 'pg_catalog.english'
listen_addresses = '*'
port = 5432
max_connections = 300
shared_buffers = 500MB
work_mem = 32MB
maintenance_work_mem = 128MB
checkpoint_segments = 100
random_page_cost = 2.0
effective_cache_size = 1500MB
log_destination = 'stderr'
logging_collector = on
log_min_duration_statement = 500
log_line_prefix = '%t '

Replicant postgresql.conf (essentially a copy from master with minimal
changes):

dynamic_shared_memory_type = windows
wal_level = archive
max_standby_streaming_delay = 30s
wal_receiver_status_interval = 30s
log_line_prefix = '%t '
log_timezone = 'Australia/Perth'
datestyle = 'iso, dmy'
timezone = 'Australia/Perth'
lc_messages = 'English_Australia.1252'
lc_monetary = 'English_Australia.1252'
lc_numeric = 'English_Australia.1252'
lc_time = 'English_Australia.1252'
default_text_search_config = 'pg_catalog.english'
listen_addresses = '*'
port = 5432
max_connections = 300
shared_buffers = 500MB
work_mem = 32MB
maintenance_work_mem = 128MB
checkpoint_segments = 100
random_page_cost = 2.0
effective_cache_size = 1500MB
log_destination = 'stderr'
logging_collector = on
log_min_duration_statement = 500
log_line_prefix = '%t '

replicant recovery.conf:

standby_mode = 'on'
primary_conninfo = 'host= user=
password= connect_timeout=10 application_name=replicant_3'
primary_slot_name = 'replicant_3'
recovery_min_apply_delay = 10
#restore_command = 'copy
"e:\\PostgreSQL\\9.4.4\\basebackup\\wal_archive\\%f" "%p" ' 
trigger_file = 'recovery_done.txt'

The restore_command was uncommented at first and worked fine. All new
updates now are provided by streaming.

Any ideas?  I'm tearing my hair out with this. The boxes are about as
identical as you can get, apart from t

Re: Alter view with dependence without drop view!

2018-01-30 Thread Melvin Davidson
On Tue, Jan 30, 2018 at 12:48 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Jan 30, 2018 at 8:34 AM, bto...@computer.org <
> bto...@broadstripe.net> wrote:
>
>>
>> When this procedure got old, I started using a script created using
>> pg_dump and pg_restore, as initially outlined here:
>>
>>
> ​Yeah, the short answer is PostgreSQL doesn't make it possible to edit
> "middle" views without having the code on hand for all dependent views​ so
> you can recreate them.  You either maintain those views and order manually
> or you rely on pg_dump to figure it out for you (the former, with version
> control, is highly recommended).
>
> I could see it being possible to program the server to be more helpful
> here - by, say, allowing it to drop but remember view definitions and then
> re-create them from the remembered versions by name - but no one has seen
> the motivation to do so; I suspect partially in light of the fact that
> "version control" is a recommended practice.
>
> David J.
>
>

*Just a side note, it is not a good practice to create views based on other
views. *

*Multiple reasons are stated here:*



*https://dba.stackexchange.com/questions/5487/is-nested-view-a-good-database-design
*

*To summarize, you suffer performance degradation, column names and the
tables referenced become obscured, and you incur the problems you are now
experiencing.*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


RE: PostgreSQL Kerberos Authentication

2018-01-30 Thread HIRTZ Jorge Alberto TENARIS
Thanks Poul,

According to the official documentation, parameters like ldap and suffix in 
pg_hba.conf are for LDAP authentication, not for Kerberos/GSSAPI. In fact 
authentication through LDAP works fine in our environment, but Kerberos does not…

Do you know if the principal in Active Directory KDC must be in uppercase or 
lowercase? POSTGRES or postgres? Just to confirm.

Thanks
Jorge





From: EXTERNAL:Poul Kristensen [mailto:bcc5...@gmail.com]
Sent: martes, 30 de enero de 2018 01:50 p.m.
To: HIRTZ Jorge Alberto TENARIS 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: PostgreSQL Kerberos Authentication

you need to tell Postgresql/pg_hba.conf the AD kerberos server name, ldap = 
kerberos.domain.com, and suffix
@domain.com

Then create the users (each is in fact a role) as the owner of a database.
Hereafter the user can just run psql after login, and after password 
authentication the user/role is logged into the database.

It has been tested and works!

Hope it is useful.

regards
Poul

2018-01-30 17:13 GMT+01:00 HIRTZ Jorge Alberto TENARIS 
mailto:jhi...@tenaris.com>>:
Hello All,

I am trying to configure PostgreSQL9.6 (On Centos 7.4) with Kerberos (Active 
Directory) via GSSAPI authentication and I’m getting the following error:

[postgres@hostname data]$ psql  -h hostname -U 
usern...@domain.com postgres
psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may 
provide more information
GSSAPI continuation error: Server not found in Kerberos database

I did the following configuration:

1.- Create KeyTab in Active Directory:
ktpass -out postgres_instance.keytab -princ 
postgres/hostnamename.domain@domain.com
 -mapUser svcPostgres -pass  -crypto all -ptype KRB5_NT_PRINCIPAL

2.- Copy the keytab to Linux Server on $PGDATA and change the privileges to 
postgres:postgres
3.- Configure postgresql.conf
krb_server_keyfile = '//data/postgres_instance.keytab'

4.- Configure /etc/krb5.conf

5.- Request a ticket to the KDC server using kinit (this works OK!)

[postgres@hostname ~]$ klist
Ticket cache: KEYRING:persistent:26:krb_ccache_AO0Y1kx
Default principal: usern...@domain.com

Valid starting   Expires  Service principal
01/30/2018 11:01:59  01/30/2018 21:01:59  
krbtgt/domain@domain.com
renew until 02/06/2018 11:01:55


6.- Configure pg_hba.conf
host    all             all             0.0.0.0/0               gss include_realm=1
7.- Create user in PG to test:
create user "usern...@domain.com" WITH SUPERUSER;

8.- Testing
[postgres@hostname data]$ psql  -h hostname -U 
usern...@domain.com postgres
psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may 
provide more information
GSSAPI continuation error: Server not found in Kerberos database

I tried generating the keytab with "postgres" and "POSTGRES" user as a SPN but I 
get the same error.

Any suggestion is welcome!

Thanks in advance for your help!

Jorge




--
Med venlig hilsen / Best regards
Poul Kristensen
Linux-OS/Virtualization expert and Oracle DBA


Re: Alter view with dependence without drop view!

2018-01-30 Thread David G. Johnston
On Tue, Jan 30, 2018 at 8:34 AM, bto...@computer.org  wrote:

>
> When this procedure got old, I started using a script created using
> pg_dump and pg_restore, as initially outlined here:
>
>
​Yeah, the short answer is PostgreSQL doesn't make it possible to edit
"middle" views without having the code on hand for all dependent views​ so
you can recreate them.  You either maintain those views and order manually
or you rely on pg_dump to figure it out for you (the former, with version
control, is highly recommended).

I could see it being possible to program the server to be more helpful here
- by, say, allowing it to drop but remember view definitions and then
re-create them from the remembered versions by name - but no one has seen
the motivation to do so; I suspect partially in light of the fact that
"version control" is a recommended practice.

David J.


Re: Alter view with dependence without drop view!

2018-01-30 Thread bto...@computer.org
One way I have approached this problem is: 

1) Use PgAdmin to attempt the change. 

2) Examine the error report PgAdmin displays that identifies which dependent 
views are preventing your progress. 

3) Wrap your original DDL from step 1 within the DROP and CREATE DDL associated 
with the closest dependent view. 

4) Return to step 1 and repeat until step 1 succeeds. 

With multiple iterations of this procedure, you will incrementally grow a DDL 
script that drops dependent views in the correct order, eliminating 
dependencies, and then recreates them in the proper order, respecting 
dependencies. 



When this procedure got old, I started using a script created using pg_dump and 
pg_restore, as initially outlined here: 

https://www.postgresql.org/message-id/55c3f0b4.5010...@computer.org 

and with a correction noted here: 

https://www.postgresql.org/message-id/0456dfda-4623-1331-7dca-e3cff914357b%40computer.org
 




-- B 


- Original Message -


From: "Elson Vaz"  
To: pgsql-general@lists.postgresql.org 
Sent: Tuesday, January 30, 2018 8:40:45 AM 
Subject: Alter view with dependence without drop view! 

Hello! 

I want to make a change in one view that has dependent views, so when I try to 
make the change I'm blocked because of this. What is the best solution? 

thank you!! 

best regard 

Elson Vaz 






ERROR: invalid memory alloc request size 1073741824

2018-01-30 Thread Stefan Blanke

Hello,

We've tripped over an error when doing a "COPY.. TO STDOUT WITH BINARY" 
query.


"ERROR:  invalid memory alloc request size 1073741824"
(exactly 1GB)

So a palloc() call is failing on the AllocSizeIsValid() check.

Does anyone know if this is a safety catch we are tripping with a bad query 
- or whether this check is something that should never be hit (i.e. a bug).


There has been some discussion before about making a change before the 
9.5beta1:


https://www.postgresql.org/message-id/flat/9A28C8860F777E439AA12E8AEA7694F8010F6F3F%40BPXM15GP.gisp.nec.co.jp

https://www.postgresql.org/message-id/flat/CAKJS1f_sxPyBHF6%3DnEYwPJdKKDNjVDP91b_EJApfyGTei%3DViDw%40mail.gmail.com

postgresql 9.5.4 (2PGDG.rhel6)
work_mem = 256MB

Thanks,
Stefan



Re: pgAdmin 4 loading shapefiles

2018-01-30 Thread Adrian Klaver

On 01/30/2018 03:32 AM, Findlay, Thomas C wrote:

Hello,

How do you load GIS shapefiles into a pgAdmin database, using pgAdmin 4?


AFAIK there is no pgAdmin database; you are loading into a Postgres 
database.




I have done this in pgAdmin 3 using the PostGIS Shapefile loader, but 
can’t seem to find this in version 4.


?:
https://gis.stackexchange.com/questions/217456/shapefile-loader-in-pgadmin-4#2250797


Thank you,

Thomas




--
Adrian Klaver
adrian.kla...@aklaver.com



Many Backends stuck in wait event IPC/ParallelFinish

2018-01-30 Thread Steven Winfield
Hi,

We just had an incident on one of our non-production databases where 14 
unrelated queries were all hung in wait event IPC / ParallelFinish. We had 
systematically called pg_cancel/terminate_backend on all other backends except 
these (and the autovacuum process mentioned below) to make sure there wasn't 
some other resource that they were deadlocked on.

We attached gdb to a number of the backends, and found their backtraces to look 
like this:

#0  0x7f9ea3e77903 in __epoll_wait_nocancel () from /lib64/libc.so.6
#1  0x0077cb5e in WaitEventSetWait ()
#2  0x0077d149 in WaitLatch ()
#3  0x004f1d75 in WaitForParallelWorkersToFinish ()
#4  0x006294e7 in ExecParallelFinish ()
#5  0x0063a57d in ExecShutdownGather ()
...
#6  0x00629978 in ExecShutdownNode ()   <-- Then zero or more of
#7  0x00676c01 in planstate_tree_walker ()  <-- this pair
...
#10 0x00629925 in ExecShutdownNode ()
#11 0x0062494e in standard_ExecutorRun ()
#12 0x7f9e99d73f5d in pgss_ExecutorRun () from 
/remote/install/sw/external/20180117-4-64/lib/pg_stat_statements.so
#13 0x007a5c24 in PortalRunSelect ()
#14 0x007a7316 in PortalRun ()
#15 0x007a2b49 in exec_simple_query ()
#16 0x007a4157 in PostgresMain ()
#17 0x0047926f in ServerLoop ()
#18 0x007200cc in PostmasterMain ()
#19 0x0047af97 in main ()

We also sent one of the backends a SIGABRT, so we have a core dump to play 
with. The only other backend running at the time was an autovacuum process, 
which may also have been hung - it didn't have a wait event in 
pg_stat_activity, but I didn't get a chance to strace it or attach gdb as the 
database restarted itself after we sent the SIGABRT.

The host is running Postgres v10.1 on RHEL7.4.

Any ideas what could have caused this, or what we could do to investigate this 
further?
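
(For reference, the stuck backends show up in pg_stat_activity along these
lines; a sketch against the v10 catalog:)

    SELECT pid, wait_event_type, wait_event, state, query
    FROM pg_stat_activity
    WHERE wait_event_type = 'IPC'
      AND wait_event = 'ParallelFinish';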

Thanks,
Steve.


Re: PostgreSQL Kerberos Authentication

2018-01-30 Thread Poul Kristensen
you need to tell Postgresql/pg_hba.conf the AD kerberos server name, ldap
= kerberos.domain.com, and suffix @domain.com

Then create the users (each is in fact a role) as the owner of a database.
Hereafter the user can just run psql after login, and after password
authentication the user/role is logged into the database.

It has been tested and works!

Hope it is useful.

regards
Poul

2018-01-30 17:13 GMT+01:00 HIRTZ Jorge Alberto TENARIS :

> Hello All,
>
>
>
> I am trying to configure PostgreSQL9.6 (On Centos 7.4) with Kerberos
> (Active Directory) via GSSAPI authentication and I’m getting the following
> error:
>
>
>
> [postgres@hostname data]$ psql  -h hostname -U usern...@domain.com
> postgres
>
> psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may
> provide more information
>
> GSSAPI continuation error: Server not found in Kerberos database
>
>
>
> I did the following configuration:
>
>
>
> 1.- Create KeyTab in Active Directory:
>
> ktpass -out postgres_instance.keytab -princ postgres/hostnamename.domain.
> c...@domain.com -mapUser svcPostgres -pass  -crypto all -ptype
> KRB5_NT_PRINCIPAL
>
>
>
> 2.- Copy the keytab to Linux Server on $PGDATA and change the privileges
> to postgres:postgres
>
> 3.- Configure postgresql.conf
>
> krb_server_keyfile = '//data/postgres_instance.keytab'
>
>
>
> 4.- Configure /etc/krb5.conf
>
>
>
> 5.- Request a ticket to the KDC server using kinit (this works OK!)
>
>
>
> [postgres@hostname ~]$ klist
>
> Ticket cache: KEYRING:persistent:26:krb_ccache_AO0Y1kx
>
> Default principal: usern...@domain.com
>
>
>
> Valid starting   Expires  Service principal
>
> 01/30/2018 11:01:59  01/30/2018 21:01:59  krbtgt/domain@domain.com
>
> renew until 02/06/2018 11:01:55
>
>
>
>
>
> 6.- Configure pg_hba.conf
>
> host    all             all             0.0.0.0/0               gss include_realm=1
>
> 7.- Create user in PG to test:
>
> create user "usern...@domain.com" WITH SUPERUSER;
>
>
>
> 8.- Testing
>
> [postgres@hostname data]$ psql  -h hostname -U usern...@domain.com
> postgres
>
> psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may
> provide more information
>
> GSSAPI continuation error: Server not found in Kerberos database
>
>
>
> I tried generating the keytab with "postgres" and "POSTGRES" user as a SPN
> but I get the same error.
>
>
>
> Any suggestion is welcome!
>
>
>
> Thanks in advance for your help!
>
>
>
> Jorge
>
>
>



-- 
Med venlig hilsen / Best regards
Poul Kristensen
Linux-OS/Virtualization expert and Oracle DBA


PostgreSQL Kerberos Authentication

2018-01-30 Thread HIRTZ Jorge Alberto TENARIS
Hello All,

I am trying to configure PostgreSQL9.6 (On Centos 7.4) with Kerberos (Active 
Directory) via GSSAPI authentication and I'm getting the following error:

[postgres@hostname data]$ psql  -h hostname -U usern...@domain.com postgres
psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may 
provide more information
GSSAPI continuation error: Server not found in Kerberos database

I did the following configuration:

1.- Create KeyTab in Active Directory:
ktpass -out postgres_instance.keytab -princ 
postgres/hostnamename.domain@domain.com -mapUser svcPostgres -pass 
 -crypto all -ptype KRB5_NT_PRINCIPAL

2.- Copy the keytab to Linux Server on $PGDATA and change the privileges to 
postgres:postgres
3.- Configure postgresql.conf
krb_server_keyfile = '//data/postgres_instance.keytab'

4.- Configure /etc/krb5.conf

5.- Request a ticket to the KDC server using kinit (this works OK!)

[postgres@hostname ~]$ klist
Ticket cache: KEYRING:persistent:26:krb_ccache_AO0Y1kx
Default principal: usern...@domain.com

Valid starting   Expires  Service principal
01/30/2018 11:01:59  01/30/2018 21:01:59  krbtgt/domain@domain.com
renew until 02/06/2018 11:01:55


6.- Configure pg_hba.conf
host    all             all             0.0.0.0/0               gss include_realm=1
7.- Create user in PG to test:
create user "usern...@domain.com" WITH SUPERUSER;

8.- Testing
[postgres@hostname data]$ psql  -h hostname -U usern...@domain.com postgres
psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may 
provide more information
GSSAPI continuation error: Server not found in Kerberos database

I tried generating the keytab with "postgres" and "POSTGRES" user as a SPN but I 
get the same error.

Any suggestion is welcome!

Thanks in advance for your help!

Jorge



Re: Information on savepoint requirement within transctions

2018-01-30 Thread David G. Johnston
On Tue, Jan 30, 2018 at 8:25 AM, Rakesh Kumar 
wrote:

> >
> > I'm not sure about the terminology here, though, because the Transaction
> > Tutorial (https://www.postgresql.org/docs/9.6/static/tutorial-transactions.html)
> > speaks of "aborted" transactions, while you use the term "failed" here.
>
> Purely from a user point of view, shouldn't "aborted" mean a ROLLBACK
> issued by the application due to a violation of a business rule, whereas
> "failed" should mean a ROLLBACK issued by PG due to a constraint violation
> or something like disk full?
>

​I was using failed because I hadn't done sufficient research and wasn't
aware of "aborted" being used in this context.  The error in psql itself
says "current transaction is aborted ..."

As far as the transaction is concerned, there is no distinction as to why
the statements failed once it is in an aborted state.

David J.


Re: Information on savepoint requirement within transctions

2018-01-30 Thread Rakesh Kumar
> 
> I'm not sure about the terminology here, though, because the Transaction
> Tutorial 
> (https://www.postgresql.org/docs/9.6/static/tutorial-transactions.html)
> speaks of "aborted" transactions, while you use the term "failed" here.

Purely from a user point of view, shouldn't "aborted" mean a ROLLBACK issued by 
the application due to a violation of a business rule, whereas "failed" should 
mean a ROLLBACK issued by PG due to a constraint violation or something like 
disk full?


Re: session_replication_role meaning?

2018-01-30 Thread Jan Wieck
On Tue, Jan 30, 2018 at 3:36 AM, Laurenz Albe 
wrote:

> Luca Ferrari wrote:
> > now this should be trivial, but I cannot understand what is the
> > purpose of session_replication_role
> > or better, when I should use it in a way different from 'origin'.
>
> It is used to enable or disable triggers.
>
> By default, tables are created with all triggers enabled, which means
> that they fire with the default setting "session_replication_role =
> origin".
>
> You can change "session_replication_role" to "replica" to disable the
> firing
> of triggers (unless they are set ENABLE REPLICA or ENABLE ALWAYS).
> This is done by the logical replication apply worker, but you can also
> use it to bypass triggers, e.g. to speed up operation, if you know what
> you are doing.
>
> What is confusing is that there are three settings for
> "session_replication_role",
> but the two settings "local" and "origin" have the same meaning.
> Maybe that was meant to change at some point, but I see no explanation in
> the original discussion.
>

All of the above also applies to referential integrity triggers. That
means that under session_replication_role='replica' your replication system
can replicate things out of order with respect to foreign keys. It also
means that if you don't replicate the primary key table, you can end up
with an inconsistent target database.

The setting of 'local' has indeed the same meaning for everything in stock
PostgreSQL. The Slony log and deny-access triggers react to it by
suppressing their actions. An application working under 'local' can modify
the origin without the changes being replicated and modify the replica
without the deny-access trigger aborting the transaction. The Slony engine
uses that mode when running SQL scripts through the EXECUTE DDL feature.
That way you can perform bulk operations like pruning without the
individual row changes being replicated.

The setting of 'replica' is very important if you have triggers that, for
example, do auditing, or that stamp created and last-update
timestamps or session users. You certainly don't want to overwrite the real
last update timestamp or session user with the replication engine's user
and time.


Regards, Jan



>
> Yours,
> Laurenz Albe
>
>


-- 
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info


Re: PG Sharding

2018-01-30 Thread Thomas Boussekey
Using citusdb enterprise, you can replicate the table shards.

Here is the link to the documentation:
https://docs.citusdata.com/en/v7.2/reference/user_defined_functions.html#replicate-table-shards

Regards,
Thomas


2018-01-30 12:18 GMT+01:00 Matej :

> As already said, it's missing 2-level sharding and is restricted by a
> SPOF.
>
> BR
>
> Matej
>
> 2018-01-30 12:05 GMT+01:00 Rakesh Kumar :
>
>>
>>
>>
>> >We are looking for multi tenancy but at scale. That's why the sharding
>> and partitioning. It depends how you look at the distributed part.
>>
>> Citusdb.
>>
>
>


pg10 logical replication set schema

2018-01-30 Thread Gabriel Furstenheim Milerud
Hi,
I've been following
https://hackernoon.com/postgresql-logical-replication-86df5b51cc5a to set
up logical replication.

I've created the table in master and publication

create table test_table_replication (a int);
create publication test_table_replication_pub for table
test_table_replication;

In the standby I attempt to create a table and a subscription

  create table some_schema.test_table_replication (a int);
  create subscription test_table_replication
connection 'host=localhost dbname=postgres user=standby_user
pass=pass port=$master_port'

And I get the error relation "public.test_table_replication" does not exist.

Is it possible to tell the subscription that it should use the table
some_schema.test_table_replication instead of
public.test_table_replication? I can't find any reference in the docs about
tables or schemas in subscriptions.

Thanks

Gabriel Fürstenheim


pgadmin 4 shapefile loader

2018-01-30 Thread Findlay, Thomas C
Hello,

How do you load GIS shapefiles into a pgAdmin database, using pgAdmin 4?

I have done this in pgAdmin 3 using the PostGIS Shapefile loader, but can't 
seem to find this in version 4.

Thank you,
Thomas


Alter view with dependence without drop view!

2018-01-30 Thread Elson Vaz
Hello!

I want to make a change in one view that has dependent views, so when I try
to make the change I'm blocked because of this. What is the best solution?

thank you!!

best regard

Elson Vaz


Re: CannotAcquireResourceException in Junit

2018-01-30 Thread rob stone


On Tue, 2018-01-30 at 15:41 +0530, Abhra Kar wrote:
> Hi,
>Sorry for late reply. Below options I tried --- 
> 1>pg_terminate_backend --> Successful execution.
> 2> Increase max_connections to 900 in
> /var/lib/pgsql/9.5/data/postgresql.conf. After modifying, restart the
> postgresql service--- 
> service postgresql-9.5 start
> service postgresql-9.5 stop
> 
> Then execute select * from pg_stat_activity it shows 2 connections.
> 
> 
> 3> set c3p0 options    
> com.mchange.v2.c3p0.ComboPooledDataSource cpds = new
> ComboPooledDataSource();
>   cpds.setMaxPoolSize(1000);
> cpds.setMaxStatements(0);
> cpds.setInitialPoolSize(50);
> cpds.setAcquireIncrement(5);
> cpds.setAcquireRetryAttempts(5);
> 
> 
> But the problem still persists. Please provide suggestions.
> 
> Regards,
> Abhra
> 



The stack trace you sent with your original post shows hibernate
crashing as it has no connection string.
You are using hibernate as the go-between for the RDBMS and your app.
The problem is more than likely in your c3p0 package.
I haven't a clue what it is supposed to do, but my guess is that the
faulty code is there, not in your configuration parameters.
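
One way to confirm that from the database side is to watch the connection
counts while the tests run, e.g. (a sketch; works on 9.5):

    SELECT usename, state, count(*)
    FROM pg_stat_activity
    GROUP BY usename, state
    ORDER BY count(*) DESC;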

My 0.02 worth.
HTH,
Rob



pgAdmin 4 loading shapefiles

2018-01-30 Thread Findlay, Thomas C
Hello,

How do you load GIS shapefiles into a pgAdmin database, using pgAdmin 4?

I have done this in pgAdmin 3 using the PostGIS Shapefile loader, but can't 
seem to find this in version 4.

Thank you,
Thomas


Re: PG Sharding

2018-01-30 Thread Matej
As already said, it's missing 2-level sharding and is restricted by a SPOF.

BR

Matej

2018-01-30 12:05 GMT+01:00 Rakesh Kumar :

>
>
>
> >We are looking for multi tenancy but at scale. That's why the sharding
> and partitioning. It depends how you look at the distributed part.
>
> Citusdb.
>


Re: PG Sharding

2018-01-30 Thread Rakesh Kumar







>We are looking for multi tenancy but at scale. That's why the sharding and 
>partitioning. It depends how you look at the distributed part. 


Citusdb.




Re: CannotAcquireResourceException in Junit

2018-01-30 Thread Abhra Kar
Hi,
   Sorry for the late reply. Below are the options I tried ---
1>pg_terminate_backend --> Successful execution.
2> Increase max_connections to 900 in
/var/lib/pgsql/9.5/data/postgresql.conf. After modifying, restart the
postgresql service---

service postgresql-9.5 start

service postgresql-9.5 stop


Then execute select * from pg_stat_activity it shows 2 connections.



3> set c3p0 options 
com.mchange.v2.c3p0.ComboPooledDataSource cpds = new
ComboPooledDataSource();
  cpds.setMaxPoolSize(1000);
cpds.setMaxStatements(0);
cpds.setInitialPoolSize(50);
cpds.setAcquireIncrement(5);
cpds.setAcquireRetryAttempts(5);


But the problem still persists. Please provide suggestions.

Regards,
Abhra

On Thu, Jan 25, 2018 at 10:28 AM, Rob Sargent  wrote:

>
> > On Jan 24, 2018, at 9:32 PM, Abhra Kar  wrote:
> >
> > Thanks Rob.
> >
> > OS -- RHEL 6.7(Santiago)
> > Postgres Version -- 9.5
> > Location --/usr/pgsql-9.5/bin/psql
> >
> > Can you please guide me how to stop all those connections manually (from
> psql prompt or linux shell or any .sh file within the postgres directory)
> >
> > Regards,
> > Abhra
> >
> > On Thu, Jan 25, 2018 at 9:36 AM, Rob Sargent 
> wrote:
> >
> > > On Jan 24, 2018, at 9:57 AM, Abhra Kar  wrote:
> > >
> > >
> > > Hi,
> > >  I am getting the following error while running bunch of Junit
> test cases through “ant test” command
> > >  Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException:
> A ResourcePool could not acquire a resource from its primary factory or
> source.
> > > PFA full stack trace.
> > >
> > > After execution of 300 test cases I got this error. The setup() method
> code snippet is ---
> > >
> > > @Before
> > > public void setUp() throws Exception {
> > > entityManager = entityManagerFactory.
> createEntityManager();//javax.persistence.EntityManager
> > > entityManager.getTransaction().begin();
> > > }
> > >
> > > this method ran for all 300 test cases but didn’t get error .
> > >
> > > Another 900 test cases are remaining. I tried running single test
> cases from those test cases, getting same error for all.
> > > What is root cause of this error.
> > >
> > > Thanks and Regards,
> > > Abhra
> > > 
> > Apparently Merlin is busy elsewhere.  You need to name the OS, the
> postgres version, and the stack you’re using.
> >
> > But my guess is you’re not closing those connections and have hit a
> configuration limit for maximum connection.
> >
> >
> >
> Please don’t top post in the forum.
>
> Something like this from https://stackoverflow.com/questions/5408156/how-to-drop-a-postgresql-database-if-there-are-active-connections-to-it
>
>  You might want a bunch of
> select pg_terminate_backend(<pid>);
>
>
> WARNING: this kills all connections.
> SELECT pg_terminate_backend(pg_stat_activity.pid)
> FROM
>  pg_stat_activity
> WHERE pg_stat_activity.datname = 'TARGET_DB'
> AND pid <> pg_backend_pid();
>
>


Re: PG Sharding

2018-01-30 Thread Matej
We are looking for multi tenancy but at scale. That's why the sharding and
partitioning. It depends how you look at the distributed part.

BR

Matej

On 29 Jan 2018 at 17:50, "Rakesh Kumar" 
wrote:

>
>
> > On Jan 29, 2018, at 09:34 , Matej  wrote:
> >
> > Hi Everyone.
> >
> > We are looking at a rather large fin-tech installation. But as
> scalability requirements are high we look at sharding, of course.
> >
> > I have looked at many sources for Postgresql sharding, but we are a
> little confused as to whether to shard with schemas or databases or both.
> >
> >
> > So far our understanding:
> >
> > SCHEMA.
> >
> > PROS:
> > - seems native to PG
> > - backup seems easier
> > - connection pooling seems easier, as you can use the same connection
> > between shards.
> >
> > CONS:
> > - schema changes seem a little more complicated
> > - heard of backup and maintenance problems
> > - also some caching problems.
> >
> > DATABASE:
> >
> > PROS:
> > - schema changes a little easier
> > - backup and administration seems more robust
> >
> > CONS:
> > - heard of vacuum problems
> > - connection pooling is hard, as 100 shards would mean 100 pools
> >
> >
> > So what is actually the right approach? If anyone could  shed some light
> on my issue.
>
> From your description it seems your requirement is more of multi tenancy
> in a non-distributed env, rather than a distributed sharding env.
>
>
>


Re: Information on savepoint requirement within transctions

2018-01-30 Thread Robert Zenz
On 30.01.2018 03:07, David G. Johnston wrote:
 > ​So, my first pass at this.

Nice, thank you.

 > + These are of particular use for client software to use when executing
 > + user-supplied SQL statements and want to provide try/catch behavior
 > + where failures are ignored.

Personally, I'd reword this to something like this:

 > These are of particular use for client software which is executing
 > user-supplied SQL statements and wants to provide try/catch behavior
 > with the ability to continue to use the transaction after a failure.

Or maybe something like this:

 > These are of particular use for client software which requires
 > fine-grained support over failure behavior within a transaction.
 > They allow to provide a try/catch behavior with the ability
 > to continue to use a transaction after a failure.

Also I'd like to see something like this in the docs at roughly the same 
position:

 > If a failure occurs during a transaction, the transaction enters
 > an aborted state. An aborted or failed transaction cannot be used
 > anymore to issue more commands; ROLLBACK or ROLLBACK TO must be used
 > to regain control of the aborted transaction. A commit issued while
 > the transaction is aborted is automatically converted into a
 > ROLLBACK.

I'm not sure about the terminology here, though, because the Transaction
Tutorial (https://www.postgresql.org/docs/9.6/static/tutorial-transactions.html)
speaks of "aborted" transactions, while you use the term "failed" here.

Re: session_replication_role meaning?

2018-01-30 Thread Laurenz Albe
Luca Ferrari wrote:
> now this should be trivial, but I cannot understand what is the
> purpose of session_replication_role
> or better, when I should use it in a way different from 'origin'.

It is used to enable or disable triggers.

By default, tables are created with all triggers enabled, which means
that they fire with the default setting "session_replication_role = origin".

You can change "session_replication_role" to "replica" to disable the firing
of triggers (unless they are set ENABLE REPLICA or ENABLE ALWAYS).
This is done by the logical replication apply worker, but you can also
use it to bypass triggers, e.g. to speed up operation, if you know what
you are doing.
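
For example (a sketch; table and trigger names are invented):

    ALTER TABLE accounts ENABLE ALWAYS TRIGGER audit_trg;  -- fires even under 'replica'

    SET session_replication_role = replica;
    -- ordinary (ENABLE) triggers on accounts are now skipped; audit_trg still fires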

What is confusing is that there are three settings for 
"session_replication_role",
but the two settings "local" and "origin" have the same meaning.
Maybe that was meant to change at some point, but I see no explanation in
the original discussion.

Yours,
Laurenz Albe



Re: session_replication_role meaning?

2018-01-30 Thread Achilleas Mantzios

On 30/01/2018 09:48, Luca Ferrari wrote:

Hi all,
now this should be trivial, but I cannot understand what is the
purpose of session_replication_role

or better, when I should use it in a way different from 'origin'.
I've a logical replication setup and both master and client
connections default to 'origin', so it has to be specified manually.
What is its meaning?


This has an effect on dictating how triggers are fired.
For instance, if you have built your proprietary replication mechanism based on triggers (e.g. Slony, DBmirror), then you might want on certain occasions (e.g. when reading xactions from a master) to avoid 
firing triggers, in order e.g. to prevent bounce-back messages to the originator. If your replication trigger is a simply enabled trigger, then setting session_replication_role to replica will 
prevent the trigger from firing. This could be useful in the code where you execute SQL originating from the master, in order not to send those changes back and cause an endless loop.


Thanks,
Luca



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Information on savepoint requirement within transctions

2018-01-30 Thread Laurenz Albe
David G. Johnston wrote:
> > > It may be worth updating the docs here...
> > 
> So, my first pass at this.  I'm probably going a bit outside what would 
> normally be covered in the SQL
> Command section but it does feel right at first blush.
> 
> Also attached; not compiled.
> 
> As a bug fix I've updated the description of "COMMIT" here since it can cause 
> a ROLLBACK to be issued and that isn't documented.
> 
> "(pseudo) sub-transaction" seemed like a reasonable choice of terminology to 
> introduce rather than just "mark".
> Having it mentioned in context in the BEGIN docs, instead of just a "see 
> also", should aid in understanding
> how the whole transaction system fits together.  The advanced features of the 
> tutorial cover this to some degree
> (I didn't re-read it prior to writing this up) but I'm inclined to believe 
> people wanting to understand transactions,
> lacking a top-level chapter on the topic, will know of BEGIN and start their 
> discovery there.

I think that it is a good idea to explain the behavior of aborted transactions.
Shouldn't that go to the hackers list though?

I don't like the term "pseudo sub-transaction".  What's pseudo about it?

> +
> +  
> +psql makes use of savepoints to implment its
> +ON_ERROR_ROLLBACK behavior.
> +  

s/implment/implement/

Yours,
Laurenz Albe