Re: [GENERAL] ERROR: functions in index expression must be marked IMMUTABLE

2017-02-26 Thread Adrian Klaver

On 02/26/2017 09:42 AM, Tom Lane wrote:

Adrian Klaver <adrian.kla...@aklaver.com> writes:

On 02/26/2017 08:50 AM, Tom Lane wrote:

I'm not entirely sure why the OP feels he needs an index on this
expression.  If he's willing to restrict the column to have the
exact format 'YYYY-MM-DD', then a regular textual index would sort
the same anyway.  Perhaps what's needed is just to add a CHECK
constraint verifying that the column has that format.



The OP is trying to create an index on the value of a jsonb key. Would
the above still apply or am I misunderstanding the reference to column?


Sure, I was using "column" loosely to refer to the meta->>'birthdate'
expression.


Alright, thanks. It is just that with array/hstore/json(b) I see, in 
effect, a table inside a column inside a table, and I need to be clear 
in my mind what is being referred to.





The below works:
test=> create index docs_birthdate_idx ON docs using btree
((meta->>'birthdate'));
CREATE INDEX
So if the text values of 'birthdate' are consistent the index would work
without the cast?


Yeah, seems to me you could do things like
... WHERE meta->>'birthdate' > '2017-02-26'
and it would Just Work, though I'd admit there's a deficiency of sanity
checking for the RHS constant in this example.
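
A sketch of the two suggestions combined (a plain text index plus a CHECK
constraint to supply the missing sanity checking). The docs table and the
'birthdate' key come from the thread; the constraint name and the regex are
illustrative assumptions, not from the discussion:

```sql
-- Sketch only: enforce strict ISO-format dates so that lexical order on
-- the text value equals chronological order.
ALTER TABLE docs
    ADD CONSTRAINT docs_birthdate_format_chk
    CHECK (meta->>'birthdate' ~ '^\d{4}-\d{2}-\d{2}$');

-- A plain text index on the expression, no cast needed:
CREATE INDEX docs_birthdate_idx ON docs ((meta->>'birthdate'));

-- With the format guaranteed, a textual range comparison works as expected:
SELECT * FROM docs WHERE meta->>'birthdate' > '2017-02-26';
```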

        regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: functions in index expression must be marked IMMUTABLE

2017-02-26 Thread Adrian Klaver

On 02/26/2017 08:50 AM, Tom Lane wrote:

Geoff Winkless <pgsqlad...@geoff.dj> writes:

On 26 February 2017 at 16:09, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

On 02/26/2017 07:56 AM, Geoff Winkless wrote:

On 26 February 2017 at 10:09, Sven R. Kunze <srku...@mail.de
<mailto:srku...@mail.de>>wrote:

# create index docs_birthdate_idx ON docs using btree
(((meta->>'birthdate')::date));
ERROR:  functions in index expression must be marked IMMUTABLE



Date functions are inherently not immutable because of timezones.



Isn't the point that casting to ::timestamp will still keep the
timezone?  Hence casting to "without timezone".


There are multiple reasons why the text-to-datetime conversion functions
are not immutable:

* some of them depend on the current timezone (but I don't believe date_in
does);

* all of them depend on the current datestyle setting, eg to resolve
'02/03/2017';

* all of them accept strings with time-varying values, such as 'now'
or 'today'.

You could get around the second and third points with to_timestamp(),
but since the only variant of that is one that yields timestamptz and
hence is affected by the timezone setting, it's still not immutable.

I'm not entirely sure why the OP feels he needs an index on this
expression.  If he's willing to restrict the column to have the
exact format 'YYYY-MM-DD', then a regular textual index would sort
the same anyway.  Perhaps what's needed is just to add a CHECK
constraint verifying that the column has that format.


The OP is trying to create an index on the value of a jsonb key. Would 
the above still apply or am I misunderstanding the reference to column?


or

The below works:

test=> create index docs_birthdate_idx ON docs using btree 
((meta->>'birthdate'));

CREATE INDEX

So if the text values of 'birthdate' are consistent the index would work 
without the cast?




        regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] ERROR: functions in index expression must be marked IMMUTABLE

2017-02-26 Thread Adrian Klaver

On 02/26/2017 08:15 AM, Geoff Winkless wrote:

On 26 February 2017 at 16:09, Adrian Klaver <adrian.kla...@aklaver.com
<mailto:adrian.kla...@aklaver.com>>wrote:

On 02/26/2017 07:56 AM, Geoff Winkless wrote:
> On 26 February 2017 at 10:09, Sven R. Kunze <srku...@mail.de 
<mailto:srku...@mail.de>
> <mailto:srku...@mail.de <mailto:srku...@mail.de>>>wrote:
>
> >>># create index docs_birthdate_idx ON docs using btree
> (((meta->>'birthdate')::date));
> ERROR:  functions in index expression must be marked IMMUTABLE
>
> So, what is the problem here?
>
>
> Date functions are inherently not immutable because of timezones. Your
> solution of using to_timestamp doesn't help because it automatically
> returns a value in WITH TIMESTAMP. Do you get anywhere by using
> "::timestamp without time zone" instead, as suggested here?

Of course I meant "WITH TIMEZONE" here, finger slippage.


That does not work either:

test=> create index docs_birthdate_idx ON docs using btree 
(((meta->>'birthdate')::timestamptz));

ERROR:  functions in index expression must be marked IMMUTABLE




My attempts at working the OP's problem passed through that:

Apologies, I don't have that reply in the thread in my mailbox.


No apologies needed I had not posted my attempts at that point. It was 
more me thinking out loud.




test=> create index docs_birthdate_idx ON docs using btree
(((meta->>'birthdate')::timestamp));
ERROR:  functions in index expression must be marked IMMUTABLE


Isn't the point that casting to ::timestamp will still keep the
timezone?  Hence casting to "without timezone".

This works:

test=> create index docs_birthdate_idx ON docs using btree
((meta->>'birthdate'));
CREATE INDEX

It is the act of casting that fails. Other than the OP's own suggestion
of creating a function that wraps the operation and marks it immutable,
I don't have a solution at this time.


I can imagine that without a cast, depending on the way birthdate is
stored, it may behave differently from a cast index for ordering.

Geoff



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] ERROR: functions in index expression must be marked IMMUTABLE

2017-02-26 Thread Adrian Klaver
On 02/26/2017 07:56 AM, Geoff Winkless wrote:
> On 26 February 2017 at 10:09, Sven R. Kunze <srku...@mail.de
> <mailto:srku...@mail.de>>wrote:
> 
> >>># create index docs_birthdate_idx ON docs using btree
> (((meta->>'birthdate')::date));
> ERROR:  functions in index expression must be marked IMMUTABLE
> 
> So, what is the problem here?
> 
> 
> Date functions are inherently not immutable because of timezones. Your
> solution of using to_timestamp doesn't help because it automatically
> returns a value in WITH TIMESTAMP. Do you get anywhere by using
> "::timestamp without time zone" instead, as suggested here?

My attempts at working the OP's problem passed through that:

test=> create index docs_birthdate_idx ON docs using btree 
(((meta->>'birthdate')::timestamp));
ERROR:  functions in index expression must be marked IMMUTABLE

This works:

test=> create index docs_birthdate_idx ON docs using btree 
((meta->>'birthdate'));
CREATE INDEX

It is the act of casting that fails. Other than the OP's own suggestion
of creating a function that wraps the operation and marks it immutable,
I don't have a solution at this time.
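
The wrapper-function workaround mentioned above could be sketched like this.
The function name is invented for illustration, and marking it IMMUTABLE is
a promise to the planner that only holds if every stored value is a strict,
fixed-format date (no 'now', no datestyle-dependent forms), so pairing it
with a CHECK constraint on the format would be advisable:

```sql
-- Hypothetical sketch: wrap the cast and declare it IMMUTABLE.
-- Safe only if every meta->>'birthdate' value is strict YYYY-MM-DD.
CREATE FUNCTION jsonb_birthdate(j jsonb) RETURNS date
LANGUAGE sql IMMUTABLE AS
$$ SELECT (j->>'birthdate')::date $$;

CREATE INDEX docs_birthdate_date_idx ON docs (jsonb_birthdate(meta));
```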

> 
> https://www.postgresql.org/message-id/4E039D16.20704%40pinpointresearch.com
> 
> Geoff
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Cavium ThunderX Processors used for PostgreSQL?

2017-02-25 Thread Adrian Klaver

On 02/25/2017 08:33 AM, Arya F wrote:

I was shopping around for a dedicated server and I noticed a plan which
uses 2X Cavium ThunderX processors which gives me a total of 96 cores.

I use PostgreSQL + PgBouncer, which accepts many connections at a time. I
have my current setup accepting a maximum of 1000 connections, but it never
goes above 200 active connections. The traffic to the system is always
increasing, though, and I want hardware that can handle it.

This is the first time I have seen the Cavium ThunderX name. How do these
compare to a machine that has 2 × E5-2640 v3? I noticed the Cavium ThunderX
is a lot cheaper, but it's not a well-known name.


Probably because it is an ARM processor trying to break into the high-end 
server market. A search on Cavium ThunderX found a lot of references. 
The most recent benchmark I could find was:


https://www.servethehome.com/exclusive-first-cavium-thunderx-dual-48-core-96-core-total-arm-benchmarks/



Which would give better results: 2X Cavium ThunderX processors with
96 cores, or 2 × E5-2640 v3 with 16 cores?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] New 9.6.2 installation lacks /usr/lib/postgresql/

2017-02-25 Thread Adrian Klaver

On 02/25/2017 02:52 PM, Rich Shepard wrote:

  Just installed postgresql-9.6.2 on Slackware-14.2/x86_64 and initialized
the database. However, there's no /usr/lib/postgresql/ directory with its
subdirectories, and another application I'm building wants that location
during configuration.


What application?



  Postgres was installed via the SlackBuilds.org build script which I've
used for many years so I don't know where to start looking for the reason
there's no postgres lib/ directory.


There is, it is just not at /usr/lib/postgresql. Search for plpgsql.so.

As to the build, you will either need to specify where the lib/ directory 
is during the configuration process or create a symlink.




  Diagnostic help appreciated.

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Re: GMT FATAL: remaining connection slots are reserved for non-replication superuser connections, but I'm using pgBouncer for connection pooling

2017-02-25 Thread Adrian Klaver

On 02/25/2017 07:29 AM, lisandro wrote:

Thanks for the quick answer.

superuser_reserved_connections is set to 3

Actually, it's not set (the line is commented) but the default
for superuser_reserved_connections is 3:
https://www.postgresql.org/docs/9.2/static/runtime-config-connection.html#GUC-SUPERUSER-RESERVED-CONNECTIONS


So much for that idea.

See more comments inline below.



2017-02-25 12:17 GMT-03:00 Adrian Klaver-4 [via PostgreSQL] <[hidden
email] >:

On 02/25/2017 04:19 AM, lisandro wrote:

> Hi there! Please tell me if this isn't the place to post my
question, I'm new
> in the list.
>
> I'm using PostgreSQL 9.3, I have around 150 databases, and I use
pgBouncer


Just to be clear, all 150 databases are on one Postgres 
server/instance, correct?



> for connection pooling.
> My server is a VPS with 8cpus and 24gb of RAM.
>
> My current postgreSQL configuration (resumed) is this:
>
> listen_addresses = '*'
> port = 6543
> max_connections = 250
> shared_buffers = 2GB
> effective_cache_size = 6GB
> work_mem = 10485kB
> maintenance_work_mem = 512MB
> checkpoint_segments = 32
> checkpoint_completion_target = 0.7
> wal_buffers = 16MB
> default_statistics_target = 100
What is superuser_reserved_connections set to?

>
>
> On the other hand, my pgBouncer configuration (resumed) is this:
>
> listen_addr = localhost
> listen_port = 5432
> pool_mode = transaction
> server_reset_query = DISCARD ALL
> max_client_conn = 1
> default_pool_size = 10
> min_pool_size = 2
> server_idle_timeout = 30
>
>
> However, for the last couple of months (total db number has been
increasing)
> I have these sporadic errors where pgbouncer can't connect to
postgresql.
> They occur every day with variable frequency. Every time the
error appears,
> it does in a different database. Even in those where the activity
is almost


Well, max_connections is server-wide, so the connection that exceeds it 
could come from an attempt to connect to any of the databases.



> none.
>
> Every time the error is triggered, I check the total connections
number and
> it never goes beyond ~130.
> This is how I check, from psql:
> select count(*) from pg_stat_activity;
>
> Also I check for inactive connections with this:
> select count(*) from pg_stat_activity where (state = 'idle in
transaction')
> and xact_start is not null;
> ... but this number is always low, ~8 idle connections.


The question is: are you looking at a reality that is different from the 
one that triggered the FATAL message?


The message is saying at some point the connections are exceeding:

max_connections(250) - superuser_reserved_connections(3) = 247

I would believe Postgres is correct on that, so it is a matter of 
finding out what is triggering the message.


Have you logged into the pgBouncer Admin to see what it reports:
http://pgbouncer.github.io/usage.html
Admin console

Are the logs below following the same event?

I ask because the timestamps differ by ~1 minute.



>
>
>
> When the error triggers, I check the postgresql log and I see this:
>
> 2017-02-25 09:13:31 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
> 2017-02-25 09:13:31 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
> 2017-02-25 09:13:31 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
> 2017-02-25 09:13:46 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
> 2017-02-25 09:13:46 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
> 2017-02-25 09:13:46 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
> 2017-02-25 09:13:47 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
> 2017-02-25 09:13:48 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
> 2017-02-25 09:13:49 GMT FATAL:  remaining connection slots are
reserved for
> non-replication superuser connections
>
>
>
> And if I check the pgbouncer log I see this:
>
> 2017-02-25 09:12:37.354 4080 LOG Stats: 24 req/s, in 387979 b/s,
out 2657772
> b/s,query 146363 us
> 2017-02-25 09:13:37.355 4080 LOG Stats: 23 req/s, in 382191 b/s,
out 2594329

Re: [GENERAL] GMT FATAL: remaining connection slots are reserved for non-replication superuser connections, but I'm using pgBouncer for connection pooling

2017-02-25 Thread Adrian Klaver

On 02/25/2017 04:19 AM, lisandro wrote:

Hi there! Please tell me if this isn't the place to post my question, I'm new
in the list.

I'm using PostgreSQL 9.3, I have around 150 databases, and I use pgBouncer
for connection pooling.
My server is a VPS with 8cpus and 24gb of RAM.

My current postgreSQL configuration (resumed) is this:

listen_addresses = '*'
port = 6543
max_connections = 250
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 10485kB
maintenance_work_mem = 512MB
checkpoint_segments = 32
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100


What is superuser_reserved_connections set to?




On the other hand, my pgBouncer configuration (resumed) is this:

listen_addr = localhost
listen_port = 5432
pool_mode = transaction
server_reset_query = DISCARD ALL
max_client_conn = 1
default_pool_size = 10
min_pool_size = 2
server_idle_timeout = 30


However, for the last couple of months (total db number has been increasing)
I have these sporadic errors where pgbouncer can't connect to postgresql.
They occur every day with variable frequency. Every time the error appears,
it does in a different database. Even in those where the activity is almost
none.

Every time the error is triggered, I check the total connections number and
it never goes beyond ~130.
This is how I check, from psql:
select count(*) from pg_stat_activity;

Also I check for inactive connections with this:
select count(*) from pg_stat_activity where (state = 'idle in transaction')
and xact_start is not null;
... but this number is always low, ~8 idle connections.
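
Since connections can spike and drain between manual samples, a per-database,
per-state breakdown taken at the moment the errors fire is more informative
than a single total. A possible diagnostic query, assuming nothing beyond
pg_stat_activity as it exists in 9.3:

```sql
-- Break the connection total down by database and state.
SELECT datname, state, count(*) AS conns
FROM pg_stat_activity
GROUP BY datname, state
ORDER BY conns DESC;
```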



When the error triggers, I check the postgresql log and I see this:

2017-02-25 09:13:31 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections
2017-02-25 09:13:31 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections
2017-02-25 09:13:31 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections
2017-02-25 09:13:46 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections
2017-02-25 09:13:46 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections
2017-02-25 09:13:46 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections
2017-02-25 09:13:47 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections
2017-02-25 09:13:48 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections
2017-02-25 09:13:49 GMT FATAL:  remaining connection slots are reserved for
non-replication superuser connections



And if I check the pgbouncer log I see this:

2017-02-25 09:12:37.354 4080 LOG Stats: 24 req/s, in 387979 b/s, out 2657772
b/s,query 146363 us
2017-02-25 09:13:37.355 4080 LOG Stats: 23 req/s, in 382191 b/s, out 2594329
b/s,query 144827 us
2017-02-25 09:14:29.687 4080 ERROR S: login failed: FATAL: remaining
connection slots are reserved for non-replication superuser connections
2017-02-25 09:14:37.355 4080 LOG Stats: 28 req/s, in 383614 b/s, out 2596947
b/s,query 124098 us
2017-02-25 09:14:44.985 4080 ERROR S: login failed: FATAL: remaining
connection slots are reserved for non-replication superuser connections
2017-02-25 09:14:46.290 4080 ERROR S: login failed: FATAL: remaining
connection slots are reserved for non-replication superuser connections
2017-02-25 09:15:37.355 4080 LOG Stats: 26 req/s, in 378113 b/s, out 2717657
b/s,query 164167 us


What am I missing? I will appreciate any tip or suggestion.
Thanks in advance!



--
View this message in context: 
http://www.postgresql-archive.org/GMT-FATAL-remaining-connection-slots-are-reserved-for-non-replication-superuser-connections-but-I-m-g-tp5946245.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] could not translate host name

2017-02-24 Thread Adrian Klaver

On 02/24/2017 01:37 PM, Tom Ekberg wrote:

I'm running postgres 9.6.2 (it also happened on 9.3.14) and have an hourly 
cron job that runs a program doing mostly Postgres SELECTs against a 
different host. Occasionally I get email (not hourly) from the cron daemon 
that contains a stack trace that ends with this:

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host 
name "db3.labmed.uw.edu" to address: Name or service not known


As Steve said this is falling over before it ever gets to the Postgres 
server.




It has happened about 9 times so far this month. I had one of our network 
people look into this, but there is no real answer. I could use the IP 
address, but I'd rather not. This problem only happens on one host. 
I moved the data from db2 to db3.
This problem only happens on one host. I moved the data from db2 to db3.
I was getting similar emails regarding db2 which runs the older postgres.


The above has me confused. Earlier you mention db3.labmed.uw.edu which 
is a host name. You say the problem is only on one host and then mention 
it also happened on db2. So db2 as in db2.labmed.uw.edu which would 
imply more than one host is affected, or db2 as a second instance of 
Postgres on db3.labmed.uw.edu?


Also what was the network people's 'no real answer'?

If there was no answer, what is the gist of the network setup between 
the machine that runs the cron jobs and the machine(s) that run the 
database servers?




Any ideas on how to proceed?

Tom Ekberg
Senior Computer Specialist, Lab Medicine
University of Washington Medical Center
1959 NE Pacific St, MS 357110
Seattle WA 98195
work: (206) 598-8544
email: tekb...@uw.edu







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Foreign key references a unique index instead of a primary key

2017-02-23 Thread Adrian Klaver
On 02/23/2017 03:40 AM, Ivan Voras wrote:
> Hello,
> 
> I've inherited a situation where:
> 
>   * a table has both a primary key and a unique index on the same field. 
>   * at some time, a foreign key was added which references this table
> (actually, I'm not sure about the sequence of events), which has
> ended up referencing the unique index instead of the primary key.
> 
> Now, when I've tried dropping the unique index, I get an error that the
> foreign key references this index (with a hint I use DROP...CASCADE).
> 
> This drop index is a part of an automated plpgsql script which deletes
> duplicate indexes, so I'm interested in two things:
> 
>  1. How to detect if a foreign key depends on an index I'm about to
> drop, so I can skip it

Trap the error and move on?:

https://www.postgresql.org/docs/9.6/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

So something like:

drop index skill_code_u ;
ERROR:  cannot drop index skill_code_u because constraint skill_code_u on table 
skill_codes requires it
HINT:  You can drop constraint skill_code_u on table skill_codes instead.


CREATE OR REPLACE FUNCTION public.exception_test()
 RETURNS void
 LANGUAGE plpgsql
AS $function$
DECLARE
state_text varchar;
BEGIN

DROP INDEX skill_code_u;
EXCEPTION
WHEN OTHERS THEN
GET STACKED DIAGNOSTICS   state_text =  RETURNED_SQLSTATE;
RAISE NOTICE '%', state_text;

END;
$function$

Where OTHERS is a special catchall condition.

select exception_test();
NOTICE:  2BP01
 exception_test 
----------------
 
(1 row)

Looking up 2BP01 here:

https://www.postgresql.org/docs/9.6/static/errcodes-appendix.html

shows that it is:

dependent_objects_still_exist

You could narrow the exception to:

EXCEPTION
WHEN dependent_objects_still_exist THEN


>  2. Is there a way to get around this situation, maybe modify the
> pg_constraint table or other tables to reference the index / primary
> key I want

I don't know if that would be wise; it would seem to skip the step where the 
FK verifies that the column it is pointing at actually has unique values. 
In general, the idea of directly modifying system tables makes me nervous.

> 
> ?
> 
> This is on PostgreSQL 9.3.
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Latest PostgreSQL on Raspbian Jessie

2017-02-23 Thread Adrian Klaver

On 02/23/2017 06:08 AM, Ertan Küçükoğlu wrote:

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Adrian Klaver
Sent: Thursday, February 23, 2017 3:55 PM
To: Ertan Küçükoğlu <ertan.kucuko...@1nar.com.tr>;
pgsql-general@postgresql.org
Subject: Re: [GENERAL] Latest PostgreSQL on Raspbian Jessie


Hi,

Sorry about not indenting. Have to use Outlook as e-mail app.

I cannot show any work at the moment, I am just researching right now,
before I start actual job. I have a single form making connection to local
PostgreSQL and remote one. Local seems to be OK in all respects. Remote is a
problem.

Below you can see some psql output.

postgres@raspberrypi:~$ psql -d test -U postgres -p 5432 -h 192.168.1.105
Password for user postgres:
DEBUG:  CommitTransaction
DEBUG:  name: unnamed; blockState:   STARTED; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children:
psql (9.4.10, server 9.6.1)
WARNING: psql major version 9.4, server major version 9.6.
 Some psql features might not work.
Type "help" for help.

test=# select count(*) from sale;
DEBUG:  StartTransactionCommand
DEBUG:  StartTransaction
DEBUG:  name: unnamed; blockState:   DEFAULT; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children:
DEBUG:  CommitTransactionCommand
DEBUG:  CommitTransaction
DEBUG:  name: unnamed; blockState:   STARTED; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children:
 count
--------
 176588
(1 row)

test=#

Btw, I am surprised that psql can make a connection *and* run a query just
fine.


Yeah with the caveat:

"Some psql features might not work."

The older version of psql does not 'know' about new features in 9.6.



My main problem with my application is running a query. The connection seems
to be OK. When I try to run a query, my application simply freezes.


So the issue is with the Postgres library that Lazarus/FreePascal is using.

What would that library be?



PgAdmin3 gives lots of error messages: some objects missing, assertion
failures, etc. After all these messages PgAdmin3 seems to establish a
connection to the PostgreSQL 9.6 database on Windows OS. However, PgAdmin3
cannot run a query either. When I try to do a select, PgAdmin3 freezes.
Waiting only kills my application, or PgAdmin3. No log messages that I can find.


Not all that surprising as pgAdmin3 is no longer supported:

https://www.pgadmin.org/download/source.php

As you found out, I don't think the last version of pgAdmin3 is 
compatible with 9.6.




Thanks.
-Ertan







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Latest PostgreSQL on Raspbian Jessie

2017-02-23 Thread Adrian Klaver

On 02/23/2017 01:56 AM, Ertan Küçükoğlu wrote:

Hello,

I could not decide which forum is more appropriate. I end up posting my
question here.

For a small but important Project, I need to develop a GUI application on
Raspberry Pi with PostgreSQL using Lazarus. PostgreSQL 9.4.10 server will be
running on Raspberry Pi 3 Model B and another PostgreSQL 9.6.2 will be
running on a Windows OS. Lazarus installed on Raspberry Pi is version 1.7
(built from latest subversion trunk sources) using with FreePascal 3.1.1
(also built from latest subversion trunk sources).

Raspberry Pi application will save some data in PostgreSQL running on Pi and
very same application also needs to save some *other* data on PostgreSQL
9.6.2 running on Windows OS. Both OSes will be running in same LAN.

Saving everything on Raspberry Pi and later copying using another
application/method is not allowed by design.

I could not find 9.6.x version of the PostgreSQL to install on Raspberry Pi
in packages. Most recent I can install using APT package system is 9.4.10. I
do not mind that it is rather old.

My problem starts when I try to reach a PostgreSQL 9.6.2 using 9.4.10 pglib
library (I think).


Can you show what it is you are doing?

If not what happens if you use the Postgres command line client(psql) to 
connect to the 9.6.2 database?



- The connection may hang and the application stops responding.
- If the connection succeeds, a simple select * query hangs, the application
stops responding, and after a while it terminates for good.

I installed PgAdmin3 on Raspberry Pi to make some tests. I get lots of
warnings and some error messages when I try to connect that database on
Windows OS.


What are the warnings/errors?



I am new to PostgreSQL and am not sure whether there may be connection
problems from old-version libraries to new-version PostgreSQL servers. My
limited experience suggests I need the same version on both systems for best
stability.

I would like to know;
1- Is it possible to install 9.6.2 with same version pglib on Raspberry Pi?
If possible to do on Raspberry Pi, I think I can build my own version from
sources assuming I am provided a good detailed how to document for doing so.
2- If above is not possible, is it possible to build just the latest version
pglib and overcome my connection problem?
3- If I need to go with solution 2 above, will the latest version of the
library talk to the old version without any problems?


Yes:

aklaver@arkansas:~$ psql -d postgres -U aklaver -p 5434
Password for user aklaver:
psql (9.6.2, server 9.4.11)
Type "help" for help.

postgres=> select 'test';
 ?column?
----------
 test
(1 row)




4- I am always open to other suggestions.

Thanks.

Regards,
Ertan Küçükoğlu







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Latest PostgreSQL on Raspbian Jessie

2017-02-23 Thread Adrian Klaver
On 02/23/2017 02:57 AM, Moreno Andreo wrote:
> Il 23/02/2017 10:56, Ertan Küçükoğlu ha scritto:
>> I would like to know;
>> 1- Is it possible to install 9.6.2 with same version pglib on
>> Raspberry Pi?
> In the subject you call it Raspbian Jessie, so I assume it's somewhat
> related to Debian Jessie.
> The Debian repository does not (at least, not in the last few weeks) have
> Postgresql 9.5 and 9.6.
> To get these via apt, you should add the Postgres Development Group
> (PGDG) repo in your sources.list
> 
> https://www.tqhosting.com/kb/446/How-to-install-PostgreSQL-95-on-Debian-8-Jessie.html

I think the issue will be the architecture:

https://wiki.postgresql.org/wiki/Apt

Architectures: amd64 (64-bit x86), i386 (32-bit x86), ppc64el (little-endian 
64-bit POWER; not on wheezy/precise) 

The Raspberry Pi is ARM.

> 
> 
> (it's for Postgresql 9.5 and Debian Jessie, but if my assumption is
> right, it would fit your case, since PGDG contains 9.6 also)
> 
>> If possible to do on Raspberry Pi, I think I can build my own version
>> from
>> sources assuming I am provided a good detailed how to document for
>> doing so.
>> 2- If above is not possible, is it possible to build just the latest
>> version
>> pglib and overcome my connection problem?
>> 3- If I need to select number 2 solution above, is that latest version
>> library will talk to old version without any problems?
>> 4- I am always open to other suggestions.
>>
>> Thanks.
>>
>> Regards,
>> Ertan Küçükoğlu
>>
>>
>>
>>
> 
> 
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Strange Errors...

2017-02-22 Thread Adrian Klaver

On 02/22/2017 09:16 AM, Jerry LeVan wrote:






There is a setting in the Mac System Preferences that will allow you to turn
off smart quotes.


Aah, Apple trying to be 'helpful'. It assumes everyone wants print-ready 
output everywhere.




I did this and the program is acting properly once more :)

Thanks for the tip.

Jerry





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Strange Errors...

2017-02-22 Thread Adrian Klaver

On 02/22/2017 07:32 AM, Jerry LeVan wrote:



On Feb 22, 2017, at 9:20 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 02/22/2017 07:09 AM, Jerry LeVan wrote:

Sorry I accidentally posted this to pgsql-general-owners earlier today…

How can this happen ( from the postgresql.log file)

LOG:  statement: select * from knives where manufacturer=‘Boker’
ERROR:  column "‘boker’" does not exist at character 41

Some background:
I am trying to move from 32 bit apps to 64 bit apps. I still
have a couple of 32 bit apps that talk to postgresql.

env:
 MacOS Sierra 10.12.3

 postgresql 9.6.2 compiled as a 64 bit program. I also
 compiled it as a 32 bit program and glued the 64 bit
 and 32 bit libpq libraries together and replaced the
 installed 64 bit libpq library with the fat version.

 python 2.7.10 Apple’s version

 psycopg2 2.6.1

 wxpython '3.0.3.0.b20160914’ installed from the wxpython.org site.

My main interaction with my database is a wxpython app. I type
sql into a text window and can send the contents of the window
or the selection or the line containing the cursor to postgresql

It appears that every single quote ( ‘ ) gets translated somehow to


No:

test=# select ‘Boker’;
ERROR:  column "‘boker’" does not exist
LINE 1: select ‘Boker’;

test=# select 'Boker';
?column?
--
Boker
(1 row)

It is because they are ‘ ’ not ' '.

So you need to find out what is causing your program to introduce ‘ ’.


The mail program is displaying the straight quote as a curly quote…
In the terminal program I see a straight quote.

Perhaps wxpython is somehow confusing things… tain’t clear on
how to test.


I have seen this issue when I cut and pasted data from word 
processing/spreadsheet programs and did not pay attention to the fact 
that they have settings that convert ' --> ’ and '' --> ‘‘ ’’.
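One band-aid on the client side is to normalize typographic quotes before the SQL string reaches the server. The sketch below is a minimal Python example of that idea; note that blind replacement also rewrites curly quotes that legitimately appear inside data values, so turning off the OS-level smart-quote conversion, as discussed above, remains the real fix.

```python
# Map the common typographic ("smart") quotes back to their ASCII
# equivalents; extend the table if your input source emits more.
SMART_QUOTES = {
    "\u2018": "'",  # left single quote
    "\u2019": "'",  # right single quote
    "\u201c": '"',  # left double quote
    "\u201d": '"',  # right double quote
}

def normalize_quotes(sql: str) -> str:
    """Replace curly quotes in a query string with straight ASCII quotes."""
    return sql.translate(str.maketrans(SMART_QUOTES))

if __name__ == "__main__":
    broken = "select * from knives where manufacturer=\u2018Boker\u2019"
    print(normalize_quotes(broken))
    # select * from knives where manufacturer='Boker'
```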










--
Adrian Klaver
adrian.kla...@aklaver.com






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Strange Errors...

2017-02-22 Thread Adrian Klaver

On 02/22/2017 07:32 AM, Jerry LeVan wrote:



On Feb 22, 2017, at 9:20 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 02/22/2017 07:09 AM, Jerry LeVan wrote:

Sorry I accidentally posted this to pgsql-general-owners earlier today…

How can this happen ( from the postgresql.log file)

LOG:  statement: select * from knives where manufacturer=‘Boker’
ERROR:  column "‘boker’" does not exist at character 41

Some background:
I am trying to move from 32 bit apps to 64 bit apps. I still
have a couple of 32 bit apps that talk to postgresql.

env:
 MacOS Sierra 10.12.3

 postgresql 9.6.2 compiled as a 64 bit program. I also
 compiled it as a 32 bit program and glued the 64 bit
 and 32 bit libpq libraries together and replaced the
 installed 64 bit libpq library with the fat version.

 python 2.7.10 Apple’s version

 psycopg2 2.6.1

 wxpython '3.0.3.0.b20160914’ installed from the wxpython.org site.

My main interaction with my database is a wxpython app. I type
sql into a text window and can send the contents of the window
or the selection or the line containing the cursor to postgresql

It appears that every single quote ( ‘ ) gets translated somehow to


No:

test=# select ‘Boker’;
ERROR:  column "‘boker’" does not exist
LINE 1: select ‘Boker’;

test=# select 'Boker';
?column?
--
Boker
(1 row)

It is because they are ‘ ’ not ' '.

So you need to find out what is causing your program to introduce ‘ ’.


The mail program is displaying the straight quote as a curly quote…
In the terminal program I see a straight quote.


I thought the issue was with the wxPython program not the terminal, or 
are you saying they are the same thing?


So you are see this(with straight quotes):

test=# select "'Boker'";
ERROR:  column "'Boker'" does not exist
LINE 1: select "'Boker'";

So why is the mail program not converting the double quotes to 
typographical quotes also? :


ERROR:  column "‘boker’" does not exist at character 41

or converting the single quotes I sent?




Perhaps wxpython is somehow confusing things… tain’t clear on
how to test.




a double quote ( “ ). SQL statements that do not use quotes return
properly and I can display the results in a grid.

I have modified one of the test programs in the psycopg2 distribution
that sends the selection shown in  the log statement and it works
properly. i.e. I can send the select statement and retrieve the results
from a the psycopg2 python program.

I can run basically the same wxpython program on my Fedora system without
any problems (talking to the Mac).

Any suggestions would be appreciated.

Jerry




--
Adrian Klaver
adrian.kla...@aklaver.com






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Strange Errors...

2017-02-22 Thread Adrian Klaver

On 02/22/2017 07:09 AM, Jerry LeVan wrote:

Sorry I accidentally posted this to pgsql-general-owners earlier today…

How can this happen ( from the postgresql.log file)

LOG:  statement: select * from knives where manufacturer=‘Boker’
ERROR:  column "‘boker’" does not exist at character 41

Some background:
I am trying to move from 32 bit apps to 64 bit apps. I still
have a couple of 32 bit apps that talk to postgresql.

env:
  MacOS Sierra 10.12.3

  postgresql 9.6.2 compiled as a 64 bit program. I also
  compiled it as a 32 bit program and glued the 64 bit
  and 32 bit libpq libraries together and replaced the
  installed 64 bit libpq library with the fat version.

  python 2.7.10 Apple’s version

  psycopg2 2.6.1

  wxpython '3.0.3.0.b20160914’ installed from the wxpython.org site.

My main interaction with my database is a wxpython app. I type
sql into a text window and can send the contents of the window
or the selection or the line containing the cursor to postgresql

It appears that every single quote ( ‘ ) gets translated somehow to


No:

test=# select ‘Boker’;
ERROR:  column "‘boker’" does not exist
LINE 1: select ‘Boker’;

test=# select 'Boker';
 ?column?
--
 Boker
(1 row)

It is because they are ‘ ’ not ' '.

So you need to find out what is causing your program to introduce ‘ ’.


a double quote ( “ ). SQL statements that do not use quotes return
properly and I can display the results in a grid.

I have modified one of the test programs in the psycopg2 distribution
that sends the selection shown in  the log statement and it works
properly. i.e. I can send the select statement and retrieve the results
from a the psycopg2 python program.

I can run basically the same wxpython program on my Fedora system without
any problems (talking to the Mac).

Any suggestions would be appreciated.

Jerry




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Multiply ON CONFLICT ON CONSTRAINT

2017-02-22 Thread Adrian Klaver

On 02/21/2017 02:49 PM, Arnold Somogyi wrote:

Ccing list.


I want OR.


I do not think that is possible; then again, I have not used this feature 
enough to know everything that is possible.






On Tue, Feb 21, 2017 at 12:33 AM, Adrian Klaver
<adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:

On 02/17/2017 04:53 PM, Arnold Somogyi wrote:
> Hi,
>
> I wonder if there is a way to add more then one ON CONSTRAINT value.
> I have many different unique constraints on my table and I would like to
> catch and handle two of them with ON CONFLICT ON CONSTRAINT. And the
> rest errors will be handled by the client application.
>
> INSERT INTO table_1 (id, name, value, user_id, description)
> VALUES (1, 'name', 'value', null, null)
> ON CONFLICT ON CONSTRAINT *table1_pkey, table1_name_key* DO UPDATE
> SET value = EXCLUDED.value, user_id = EXCLUDED.user_id, description =
> EXCLUDED.description

https://www.postgresql.org/docs/9.6/static/sql-insert.html
<https://www.postgresql.org/docs/9.6/static/sql-insert.html>

"where conflict_target can be one of:

( { index_column_name | ( index_expression ) } [ COLLATE
collation ] [ opclass ] [, ...] ) [ WHERE index_predicate ]
ON CONSTRAINT constraint_name
"

I read that as only one constraint_name.

The question then becomes whether you want:

table1_pkey, table1_name_key

    to OR or AND?

>
> Regards,
> Arnold


--
Adrian Klaver
adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Move rows from one database to other

2017-02-22 Thread Adrian Klaver

On 02/22/2017 04:51 AM, Thomas Güttler wrote:

I have other concerns: atomar transaction. Movement should happen
completely or not all.
I don't think you can do this reliable (atomic transaction) with
"copy table_name".


You can if you wrap it in a transaction:


I want to **move** the data. The data should get deleted on the
satellite after transfer.


Well the replication suggestion is out.



I don't know how to delete the data which was copied, since inserts can
happen during the copy statement.


However you end up doing this, I think you will probably need some sort 
of flag on the rows on the satellites. It could be a timestamp field 
recording when the rows were inserted on the satellite, or a boolean 
field (copied). My first instinct is to use an insert timestamp and a 
tracking table that stores the last timestamp used to move rows, where 
the timestamp is only written on a successful transfer. To improve the 
chances of a successful transfer, use more, smaller transfer batches 
rather than fewer large ones.
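The flag-and-track loop above can be sketched in miniature. The sketch below is a pure-Python simulation of the control flow only: lists stand in for the satellite table, the central table, and the tracking watermark, and the names and batch size are illustrative, not a real implementation.

```python
# Simulation of the tracked-timestamp transfer loop: rows are
# (inserted_at, payload) tuples; last_moved is the watermark stored in
# the tracking table. In a real deployment each step would be a SQL
# statement, and the watermark would only be advanced after a
# successful COMMIT.
def transfer_batch(satellite, central, last_moved, batch_size=2):
    """Move up to batch_size rows newer than last_moved; return the new watermark."""
    batch = sorted(r for r in satellite if r[0] > last_moved)[:batch_size]
    if not batch:
        return last_moved          # nothing new; watermark unchanged
    central.extend(batch)          # INSERT the batch into the central table
    for row in batch:              # DELETE the transferred rows on the satellite
        satellite.remove(row)
    return batch[-1][0]            # advance the watermark only on success

satellite = [(1, "abc"), (2, "def"), (3, "ghi")]
central = []
watermark = transfer_batch(satellite, central, 0)
# Rows inserted on the satellite mid-transfer are untouched: they carry
# a newer timestamp than the watermark and are picked up next round.
```

If the link can fail between the central insert and the satellite delete, keying rows by UUID (as discussed above) and making the insert idempotent lets a retry simply re-send the batch.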




Regards,
  Thomas Güttler






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] bloat indexes - opinion

2017-02-21 Thread Adrian Klaver
On 02/21/2017 03:41 PM, Patrick B wrote:
> 2017-02-22 11:11 GMT+13:00 Patrick B <patrickbake...@gmail.com
> <mailto:patrickbake...@gmail.com>>:
> 
> 2017-02-22 10:59 GMT+13:00 Adrian Klaver <adrian.kla...@aklaver.com
> <mailto:adrian.kla...@aklaver.com>>:
> 
> On 02/21/2017 01:44 PM, Patrick B wrote:
> > Hi guys,
> >
> > I've got a lot of bloat indexes on my 4TB database.
> >
> > Let's take this example:
> >
> > Table: seg
> > Index: ix_filter_by_tree
> > Times_used: 1018082183
> > Table_size: 18 GB -- wrong. The table is mostly on pg_toast 
> table.
> > Its real size is 2TB
> 
> How do you know one number is right and the other is wrong?
> 
> 
> 
> 1. on that table (seg) i store binary data. It is impossible to have
> only 18GB of it.
> 2. 
> 
> SELECT schema_name, 
> 
>pg_size_pretty(sum(table_size)::bigint),
> 
>(sum(table_size) /
> pg_database_size(current_database())) * 100
> 
> FROM (
> 
>   SELECT pg_catalog.pg_namespace.nspname as schema_name,
> 
>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
> 
>   FROM   pg_catalog.pg_class
> 
>  JOIN pg_catalog.pg_namespace ON relnamespace =
> pg_catalog.pg_namespace.oid
> 
> ) t
> 
> GROUP BY schema_name
> 
> ORDER BY schema_name
> 
> 
> pg_toast2706 GB82.6211283887724086 <-- this belongs to the seg
> table.
> 
>  
> 
> 
> Have you looked at the functions here?:
> 
> https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT
> 
> <https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT>
> 
> > Index_size: 17 GB
> > Num_writes 16245023
> > Index definition: CREATE INDEX ix_filter_by_tree ON seg USING 
> btree
> > (full_path varchar_pattern_ops) WHERE (full_path IS NOT NULL)
> >
> >
> >
> > What is the real impact of a bloat index? If I reindex it, queries 
> will
> > be faster?
> >
> > Thanks
> > Patrick
> 
> 
> 
> 
> I ran the query before and after the reindex, and it seems it did not
> help on performance.
> 
> *The query I used:*
> 
> explain analyze select * from seg where full_path = '/userfile/123';

The table schema would be useful.

> 
> 
> *Before reindex:*
> 
> Index Scan using ix_filter_by_tree on seg  (cost=0.00..144.87
> rows=215 width=8) (actual time=0.047..0.047 rows=1 loops=1)
>   Index Cond: (full_path = '/userfile/123')
> Total runtime: 0.059 ms
> (3 rows)
> 
> 
> *After reindex:*
> 
> Index Scan using ix_filter_by_tree on seg  (cost=0.00..141.83
> rows=220 width=8) (actual time=0.021..0.021 rows=1 loops=1)
>   Index Cond: (full_path = '/userfile/123')
> Total runtime: 0.036 ms
> (3 rows)

Not showing the complete explain analyze makes the above not all that 
enlightening.
 
> 
> 
> Note that the '*/cost/*' is pretty much the same.
> 
> *My question is:*
> If I have a bloat index. Why do I need to reindex it if I got none
> performance improvements? 

Because it is an indication that you may not have index bloat?

Not sure a runtime going from 0.059 to 0.036 ms over a 2TB table is symptomatic of a 
problem.

Might be worth taking a look at:

https://www.postgresql.org/docs/9.6/static/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW

> 
> Cheers
> Patrick
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] NOTIFY command impact

2017-02-21 Thread Adrian Klaver

On 02/21/2017 02:43 PM, Rob Brucks wrote:

That's a great thought!

Comparing between systems (one where I don't issue "notify" and one where I do every 
minute) yields the same thing:  one 8k file named "".

If that's truly where notifications are stored, then it looks like I should be 
good, at least for storage.  But I wonder if that file is only used to store 
notify commands during shutdown/startup?


From the source of async.c:

During start or reboot, clean out the pg_notify directory.

I would guess the file is like a WAL file: a pre-initialized 
file filled with 0(?)'s.




Or if there are any considerations for memory usage…

--Rob

On 2/21/17, 4:38 PM, "Adrian Klaver" <adrian.kla...@aklaver.com> wrote:

On 02/21/2017 02:19 PM, Rob Brucks wrote:
> I did find a post a while back saying they were discarded, but I wanted
> to double-check.
>
>
>
> I performed some tests to see if listens worked AFTER the notify was
> issued, they were not.  This leads me to believe that the messages are
> discarded when a listen does not yet exist.

Seems the thing to do would be to monitor the size of :

$PG_DATA/pg_notify/

>
>
>
> --Rob
>
>
>
> *From: *"David G. Johnston" <david.g.johns...@gmail.com>
> *Date: *Tuesday, February 21, 2017 at 3:38 PM
> *To: *Adrian Klaver <adrian.kla...@aklaver.com>
> *Cc: *Rob Brucks <rob.bru...@rackspace.com>,
> "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
> *Subject: *Re: [GENERAL] NOTIFY command impact
>
>
>
> On Tue, Feb 21, 2017 at 2:17 PM, Adrian Klaver
> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:
>
> On 02/21/2017 01:07 PM, Rob Brucks wrote:
>
>
> Do you see any long-term problems with constantly issuing "NOTIFY"
> commands every 30 seconds without an associated "LISTEN" command?
>
>
>
> Depending on how long 'long term' is:
>
> https://www.postgresql.org/docs/9.6/static/sql-notify.html
>
> "There is a queue that holds notifications that have been sent but
> not yet processed by all listening sessions
>
>
>
> It's not clear in the OP that this is the case (it seems to be) but the
> documentation is non-specific as to what happens when "# of listeners" =
> 0; I suspect that said messages are created and then immediately
> discarded - though apparently they do make it over to the standby server
> too - and likely also immediately discarded there as well.
>
>
>
> David J.
>
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] NOTIFY command impact

2017-02-21 Thread Adrian Klaver

On 02/21/2017 02:19 PM, Rob Brucks wrote:

I did find a post a while back saying they were discarded, but I wanted
to double-check.



I performed some tests to see if listens worked AFTER the notify was
issued, they were not.  This leads me to believe that the messages are
discarded when a listen does not yet exist.


Seems the thing to do would be to monitor the size of :

$PG_DATA/pg_notify/





--Rob



*From: *"David G. Johnston" <david.g.johns...@gmail.com>
*Date: *Tuesday, February 21, 2017 at 3:38 PM
*To: *Adrian Klaver <adrian.kla...@aklaver.com>
*Cc: *Rob Brucks <rob.bru...@rackspace.com>,
"pgsql-general@postgresql.org" <pgsql-general@postgresql.org>
*Subject: *Re: [GENERAL] NOTIFY command impact



On Tue, Feb 21, 2017 at 2:17 PM, Adrian Klaver
<adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:

On 02/21/2017 01:07 PM, Rob Brucks wrote:


Do you see any long-term problems with constantly issuing "NOTIFY"
commands every 30 seconds without an associated "LISTEN" command?



Depending on how long 'long term' is:

https://www.postgresql.org/docs/9.6/static/sql-notify.html

"There is a queue that holds notifications that have been sent but
not yet processed by all listening sessions



It's not clear in the OP that this is the case (it seems to be) but the
documentation is non-specific as to what happens when "# of listeners" =
0; I suspect that said messages are created and then immediately
discarded - though apparently they do make it over to the standby server
too - and likely also immediately discarded there as well.



David J.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] bloat indexes - opinion

2017-02-21 Thread Adrian Klaver
On 02/21/2017 01:44 PM, Patrick B wrote:
> Hi guys,
> 
> I've got a lot of bloat indexes on my 4TB database.
> 
> Let's take this example:
> 
> Table: seg
> Index: ix_filter_by_tree
> Times_used: 1018082183
> Table_size: 18 GB -- wrong. The table is mostly on pg_toast table.
> Its real size is 2TB

How do you know one number is right and the other is wrong?

Have you looked at the functions here?:
https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT

> Index_size: 17 GB
> Num_writes 16245023
> Index definition: CREATE INDEX ix_filter_by_tree ON seg USING btree
> (full_path varchar_pattern_ops) WHERE (full_path IS NOT NULL)
> 
> 
> 
> What is the real impact of a bloat index? If I reindex it, queries will
> be faster?
> 
> Thanks
> Patrick


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] NOTIFY command impact

2017-02-21 Thread Adrian Klaver

On 02/21/2017 01:07 PM, Rob Brucks wrote:

Hi All,



I just wanted to check with you guys to make sure that constantly
issuing "NOTIFY" commands without corresponding "LISTEN" commands will
not cause any harm or excessive work for the PostgreSQL cluster. Nothing
that would put my cluster at risk.



The reason I ask is because I was trying to implement a reliable method
of monitoring replication lag for streaming replication on 9.2+ systems
using the following SQL on slaves:

select extract(epoch from now() - pg_last_xact_replay_timestamp());



This SQL provides me with a time-based measure of replication lag
instead of a byte-based measure. Time-based lag measurement is more
meaningful for us in time-sensitive applications.
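The query above can feed a small threshold helper on the monitoring side. Below is a minimal sketch; `lag_status` and its thresholds are hypothetical, and actually running `LAG_SQL` against the slave is left to whatever client (psycopg2, a shell check, etc.) is already in use.

```python
# The time-based lag query from the thread; run it on the slave and pass
# the resulting seconds (or None, if pg_last_xact_replay_timestamp()
# returned NULL because no WAL has been replayed yet) to lag_status().
LAG_SQL = "select extract(epoch from now() - pg_last_xact_replay_timestamp());"

def lag_status(lag_seconds, warn_after=60.0, crit_after=300.0):
    """Map a replication-lag measurement in seconds to a monitoring state."""
    if lag_seconds is None:
        return "unknown"
    if lag_seconds >= crit_after:
        return "critical"
    if lag_seconds >= warn_after:
        return "warning"
    return "ok"
```

With something keeping the replay timestamp fresh on a quiet master, a reading that keeps climbing then genuinely means the slave is behind rather than merely idle.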



During my testing I noticed that if the database went "quiet" (no update
activity on the master) for a period of time, then the last replay
timestamp remained unchanged. Having little or no update activity
after-hours is very common on our smaller systems.



This made the monitoring of replication lag inconsistent because,
despite the slave being "caught up" with the master, it was reporting an
increasing time lag.  And I didn't want our DBAs to get false alerts
from our monitoring.



So I went on the hunt for a method of forcing replay to occur without
actually performing any database updates. I also did not want to grant
any kind of update capability on the database to my monitoring role, for
tighter security.



I discovered that the monitoring role, despite not having any update
permissions, could successfully issue a "NOTIFY" command to a bogus
channel and that this command actually forced the log to replay on the
slave, updating the replay timestamp. This seems like a viable solution
to my problem.



My plan is to have the monitoring role issue a "NOTIFY" every 30 seconds
to ensure the timestamp is updated at least that frequently. But there
will not be an associated "LISTEN" for these notifications.



However, I don't want to cause any problems for the PostgreSQL cluster
itself by having messages inserted with nobody to listen for them, which
is why I'm posting here.



Do you see any long-term problems with constantly issuing "NOTIFY"
commands every 30 seconds without an associated "LISTEN" command?


Depending on how long 'long term' is:

https://www.postgresql.org/docs/9.6/static/sql-notify.html

"There is a queue that holds notifications that have been sent but not 
yet processed by all listening sessions. If this queue becomes full, 
transactions calling NOTIFY will fail at commit. The queue is quite 
large (8GB in a standard installation) and should be sufficiently sized 
for almost every use case. However, no cleanup can take place if a 
session executes LISTEN and then enters a transaction for a very long 
time. Once the queue is half full you will see warnings in the log file 
pointing you to the session that is preventing cleanup. In this case you 
should make sure that this session ends its current transaction so that 
cleanup can proceed."






Thank you,

Rob Brucks




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Move rows from one database to other

2017-02-21 Thread Adrian Klaver

On 02/21/2017 08:06 AM, Thomas Güttler wrote:



Am 21.02.2017 um 15:27 schrieb William Ivanski:

You can try OmniDB: http://www.omnidb.com.br/en_index.aspx

OmniDB has a Convert feature, where you can set a data transfer, even
if the target table exists.


I am unsure if omnidb is the right tool here.

I don't need a GUI. The movement of the rows should happen in background.


Given the versions of Postgres you are using there is logical replication:

https://www.postgresql.org/docs/9.5/static/protocol-replication.html

https://2ndquadrant.com/en/resources/pglogical/
"AGGREGATE - Accumulate changes from sharded database servers into a 
Data Warehouse"


I have not used this capability yet, so others would have to comment on 
its applicability.




Regards,
  Thomas Güttler




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Move rows from one database to other

2017-02-21 Thread Adrian Klaver

On 02/21/2017 07:53 AM, Thomas Güttler wrote:


Am 21.02.2017 um 15:12 schrieb Adrian Klaver:

On 02/21/2017 12:53 AM, Thomas Güttler wrote:

I want to move table rows from one database to an central database.


You are actually talking about moving from ~100 databases to the central
database, correct?



Both run PostgreSQL.


Are all the Postgres instances the same version and what is the
version or versions?


Yes, all run postgres, but the version can be different (but not much).
Satellite-DB 9.5 and 9.6 and central 9.6.






My use case looks like this:

There are N satellite databases in different data centers. N is about
100 at the moment.

There is one central database.

I need a way to reliably move rows from the satellite databases to the
central one


Two ways I can think of:

https://www.postgresql.org/docs/9.6/static/dblink.html

https://www.postgresql.org/docs/9.6/static/postgres-fdw.html



Since  postgres_fdw is newer, I would focus on it, right?

If I understood it correctly, then there would be N (about 100) tables
in the central DB.

What happens if there is a network outage (for about 3 minutes) during
accessing a remote table?


I misunderstood your original intent, I thought this was a one time 
process to move data to the central database. Given that it is to be a 
continuous process a FDW may not be the answer, one of the reasons being 
the above question. You will be denied the data in the remote table 
during the outage. Also not sure what you will be doing with the data in 
the central database and how often? In any case it will involve reaching 
out to all the satellites each time you want to query the latest data. 
Looks more like some kind of push mechanism from the satellites to the 
central database is in order. Then once the data is on the central 
database it is 'captured'. A question that comes to mind: is it a 
problem if the data transfer from one or more satellites lags behind 
that of the others?






Is there a Primary Key on the satellite tables or some way of
determining unique rows?


The concrete schema is not specified up to now. But I guess UUID as
primary key would be the best fit.
Or am I wrong?



Is there any existing overlap between the data in the central database
and the satellite databases?


No, there won't be overlaps. Every satellite system creates its own rows.


How much data are you talking about moving from each database?
How active are the satellite databases?


100k rows per day per satellite. Each row has only few bytes.

Moving of rows should happen every ten minutes.


 - inserts can happen during syncing.


Can UPDATEs happen?


No, rows get created and moved and later deleted.

Thank you Adrian for your questions. It helped me to narrow down my
problem.

Regards,
  Thomas





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Move rows from one database to other

2017-02-21 Thread Adrian Klaver

On 02/21/2017 12:53 AM, Thomas Güttler wrote:

I want to move table rows from one database to an central database.

Both run PostgreSQL.





How to solve this with PostgreSQL?


Should have added earlier. This is a specific instance of the more 
general problem of ETL (Extract/Transform/Load). There are a host of 
tools out there that do this. For instance I use Python, and the 
following is available:


http://petl.readthedocs.io/en/latest/index.html

What tool you choose comes down to what you are comfortable with:

1) Writing your own programs/scripts and in what language?

2) Using a GUI that sets things up for you.



Regards,
  Thomas Güttler







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Move rows from one database to other

2017-02-21 Thread Adrian Klaver

On 02/21/2017 12:53 AM, Thomas Güttler wrote:

I want to move table rows from one database to an central database.


You are actually talking about moving from ~100 databases to the central 
database, correct?




Both run PostgreSQL.


Are all the Postgres instances the same version and what is the version 
or versions?




My use case looks like this:

There are N satellite databases in different data centers. N is about
100 at the moment.

There is one central database.

I need a way to reliably move rows from the satellite databases to the
central one


Two ways I can think of:

https://www.postgresql.org/docs/9.6/static/dblink.html

https://www.postgresql.org/docs/9.6/static/postgres-fdw.html



Example

The rows of host1 look like this:

 host1, 2017-02-21, abc
 host1, 2017-02-20, def
 host1, 2017-02-19, ghi

The rows of host2 look like this:

 host2, 2017-02-21, foo
 host2, 2017-02-20, bar
 host2, 2017-02-19, blu

After syncing, all lines which were transferred should be deleted on the
satellite databases.

The central table should look like this (it has the same schema)

 host1, 2017-02-21, abc
 host1, 2017-02-20, def
 host1, 2017-02-19, ghi
 host2, 2017-02-21, foo
 host2, 2017-02-20, bar
 host2, 2017-02-19, blu


Is there a Primary Key on the satellite tables or some way of 
determining unique rows?


Is there any existing overlap between the data in the central database 
and the satellite databases?






I don't want to code this myself, since there a tons of possible race
conditions:


How much data are you talking about moving from each database?

How active are the satellite databases?



 - inserts can happen during syncing.


Can UPDATEs happen?


 - Network can break during syncing.
 - inserts into the central table can break (e.g. disk full): No loss at
the satellite database must happen.
 - ...

How to solve this with PostgreSQL?

Regards,
  Thomas Güttler







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Multiply ON CONFLICT ON CONSTRAINT

2017-02-20 Thread Adrian Klaver
On 02/17/2017 04:53 PM, Arnold Somogyi wrote:
> Hi,
> 
> I wonder if there is a way to add more then one ON CONSTRAINT value.
> I have many different unique constraints on my table and I would like to
> catch and handle two of them with ON CONFLICT ON CONSTRAINT. And the
> rest errors will be handled by the client application.
> 
> INSERT INTO table_1 (id, name, value, user_id, description)
> VALUES (1, 'name', 'value', null, null)
> ON CONFLICT ON CONSTRAINT *table1_pkey, table1_name_key* DO UPDATE
> SET value = EXCLUDED.value, user_id = EXCLUDED.user_id, description =
> EXCLUDED.description

https://www.postgresql.org/docs/9.6/static/sql-insert.html

"where conflict_target can be one of:

( { index_column_name | ( index_expression ) } [ COLLATE collation ] [ 
opclass ] [, ...] ) [ WHERE index_predicate ]
ON CONSTRAINT constraint_name
"

I read that as only one constraint_name.

The question then becomes whether you want:

table1_pkey, table1_name_key

to OR or AND?

> 
> Regards,
> Arnold


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Autovacuum stuck for hours, blocking queries

2017-02-18 Thread Adrian Klaver

On 02/17/2017 11:54 PM, Michael Paquier wrote:

On Sat, Feb 18, 2017 at 1:32 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:

Yes it can. Truncate has been rollbackable for a while now.


Per the docs:
https://www.postgresql.org/docs/9.6/static/sql-truncate.html
"TRUNCATE is transaction-safe with respect to the data in the tables:
the truncation will be safely rolled back if the surrounding
transaction does not commit."
In short yes a transaction doing a truncate can be rollbacked.



I think the part that confuses people into thinking it cannot be 
rolled back is this:


"TRUNCATE is not MVCC-safe. After truncation, the table will appear 
empty to concurrent transactions, if they are using a snapshot taken 
before the truncation occurred. See Section 13.5 for more details."



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] PostgreSQL mirroring from RPM install to RPM install-revisited

2017-02-17 Thread Adrian Klaver

On 02/17/2017 12:34 PM, Richard Brosnahan wrote:

Thanks for the response Adrian,

Both servers are pretty much identical.

uname -a
master
Linux devtmbm178 2.6.32-642.6.2.el6.x86_64 #1 SMP Tue Oct 25 13:37:48
PDT 2016 x86_64 x86_64 x86_64 GNU/Linux

slave
Linux devtmbm176 2.6.32-642.11.1.el6.x86_64 #1 SMP Tue Nov 15 09:40:59
PST 2016 x86_64 x86_64 x86_64 GNU/Linux

Since the last message, I've downgraded PostgreSQL to 9.4.1 on the
slave, using
rpm -Uvh --oldpackage [file names]

I had wisely kept copies of the rpm files for PostgreSQL 9.4.1 for OEL6
and used those. rpm did the downgrade without issue, and I tested the
9.4.1 PostgreSQL installation. The minimal testing I did after the
install worked fine. initdb, start the server, psql, etc.

I then stopped the new slave PostgreSQL instance, and proceeded with the
instructions for creating a slave.
I again used pg_basebackup

postgres $ pg_basebackup -D /var/lib/pgsql/9.4/data
--write-recovery-conf -h devtmbm178.unix.gsm1900.org -U pgrepuser -p 5432 -W


NOTICE:  pg_stop_backup complete, all required WAL segments have been
archived


This executed without incident.


After verifying, and modifying postgresql.conf, recovery.conf I
attempted to start postgresql. This was again, not successful.


postgres $ pg_ctl start

server starting

-bash-4.1$ < 2017-02-17 12:13:53.176 PST >FATAL:  incorrect checksum in
control file


postgres $ pg_controldata

WARNING: Calculated CRC checksum does not match value stored in file.

Either the file is corrupt, or it has a different layout than this program

is expecting.  The results below are untrustworthy.


Segmentation fault (core dumped)


Now I'm really unhappy. Same server architecture, same PostgreSQL
versions. No joy!


Well something is different about the two Postgres instances. I have 
lost track of where they came from, but can you parse out the compile 
options for each? Suspicion is one is compiled with:


https://www.postgresql.org/docs/9.4/static/install-procedure.html

--disable-integer-datetimes


and one is not, which is the default. You can usually use pg_controldata 
to find that:


Date/time type storage:   64-bit integers

Can you use pg_controldata or is it still seg faulting?

If not that then some other compile option.
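On an instance that does start (the master here), that particular compile-time choice is also visible from SQL, without running pg_controldata:

```sql
SHOW integer_datetimes;   -- 'on' means 64-bit integer datetimes (the default)
```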

Just had another thought.

Is there more than one version of Postgres installed on the slave server?





--

Richard Brosnahan



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] PostgreSQL mirroring from RPM install to RPM install-revisited

2017-02-17 Thread Adrian Klaver

On 02/16/2017 04:39 PM, Richard Brosnahan wrote:

Hi all,

Way back in December I posted a question about mirroring from an RPM
installed PostgreSQL (binary) to a source built PostgreSQL, with the
same version (9.4.1 --> 9.4.1). Both servers are running OEL6.


I went back to the previous threads and I could not find whether you ever 
said if the two systems are using the same hardware architecture or 
not. Vincent Veyron asked but I can't find a response.




I won't copy the entire thread from before, as the situation has changed
a bit. The biggest changes are that I have root on the slave,
temporarily, and I've installed PostgreSQL on the slave using yum (also
binary).

I've followed all the instructions found here:

https://www.postgresql.org/docs/9.4/static/warm-standby.html#STREAMING-REPLICATION


The slave is running PostgreSQL 9.4.11 and was installed using yum.
It runs fine after I've run initdb and set things up. The master was
also installed from rpm binaries, but the installers used Puppet. That
version is 9.4.1. Yes, I know I should be using the exact same version,
but I couldn't find 9.4.1 in the PostgreSQL yum repo.


When I replace its data directory with a pg_basebackup copy, as part of
the mirroring instructions, PostgreSQL won't start.


I get a checksum error, from pg_ctl.

2016-12-15 08:27:14.520 PST >FATAL: incorrect checksum in control file


Previously, Tom Lane suggested I try this:

You could try using pg_controldata to compare the pg_control contents;

it should be willing to print field values even if it thinks the checksum

is bad. It would be interesting to see (a) what the master's

pg_controldata prints about its pg_control, (b) what the slave's

pg_controldata prints about pg_control from a fresh initdb there, and

(c) what the slave's pg_controldata prints about the copied pg_control.


For Tom's requests (a and b), I can provide good output from
pg_controldata from the master with production data, and from the slave
right after initdb. I'll provide that on request.


for Tom's request (c) I get this from the slave, after data is copied.

$ pg_controldata

WARNING: Calculated CRC checksum does not match value stored in file.

Either the file is corrupt, or it has a different layout than this program

is expecting.  The results below are untrustworthy.


Segmentation fault (core dumped)


With this new installation on the slave, same result. core dump


Tom Lane then suggested:

$ gdb path/to/pg_controldata

gdb> run /apps/database/postgresql-data

(wait

for it to report segfault)

gdb> bt


Since I now have gdb, I can do that:

$ gdb /usr/pgsql-9.4/bin/pg_controldata

-bash: gdb: command not found

-bash-4.1$ gdb /usr/pgsql-9.4/bin/pg_controldata

GNU gdb (GDB) Red Hat Enterprise Linux (7.2-90.el6)

Copyright (C) 2010 Free Software Foundation, Inc.

License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.

There is NO WARRANTY, to the extent permitted by law.  Type "show copying"

and "show warranty" for details.

This GDB was configured as "x86_64-redhat-linux-gnu".

For bug reporting instructions, please see:

<http://www.gnu.org/software/gdb/bugs/>...

Reading symbols from /usr/pgsql-9.4/bin/pg_controldata...(no debugging
symbols found)...done.

Missing separate debuginfos, use: debuginfo-install
postgresql94-server-9.4.11-1PGDG.rhel6.x86_64

(gdb) run /var/lib/pgsql/9.4/data

Starting program: /usr/pgsql-9.4/bin/pg_controldata /var/lib/pgsql/9.4/data

WARNING: Calculated CRC checksum does not match value stored in file.

Either the file is corrupt, or it has a different layout than this program

is expecting.  The results below are untrustworthy.



Program received signal SIGSEGV, Segmentation fault.

0x0033d20a3a15 in __strftime_internal () from /lib64/libc.so.6

(gdb) bt

#0  0x0033d20a3a15 in __strftime_internal () from /lib64/libc.so.6

#1  0x0033d20a5a36 in strftime_l () from /lib64/libc.so.6

#2  0x004015c7 in ?? ()

#3  0x0033d201ed1d in __libc_start_main () from /lib64/libc.so.6

#4  0x00401349 in ?? ()

#5  0x7fffe518 in ?? ()

#6  0x001c in ?? ()

#7  0x0002 in ?? ()

#8  0x7fffe751 in ?? ()

#9  0x7fffe773 in ?? ()

#10 0x in ?? ()

(gdb)


pg_controldata shouldn't be core dumping.


Should I give up trying to use 9.4.1 and 9.4.11 as master/slave?

My options appear to be

1. Upgrade the master to 9.4.11, which will be VERY DIFFICULT given its
Puppet install, and the difficulty I have getting root access to our
servers.

2. Downgrade the slave. This is easier than option 1, but I would need to
find a yum repo that has that version.

3. Make what I have work, somehow.

Any assistance would be greatly appreciated!

--

Richard Brosnahan




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: [GENERAL] "Database does not exist" weirdness

2017-02-17 Thread Adrian Klaver

On 02/16/2017 11:18 PM, Michael Tyson wrote:

Hi folks,

Please excuse the question if I'm missing something stupid, but I seem to be stuck. I've 
created a postgres database, via an Elixir project setup, and it's showing up via psql's 
\l command, but I'm seeing "database does not exist" messages.

Transcript of a session showing this follows below.

Anything I should be looking at, here? What am I missing?


In addition to what has already been asked, I am assuming from the 
system prompts this is happening on a Raspberry Pi, correct?


If so what is being used for storage and have there been any issues with 
said storage, eg someone pulling a SD card out at the wrong time?





Many thanks in advance,
Michael



pi@raspi ~ $ sudo -u postgres psql
psql (9.4.10)
Type "help" for help.

postgres=# \l
                                    List of databases
    Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
------------+----------+----------+-------------+-------------+-----------------------
 testdb     | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =Tc/postgres         +
            |          |          |             |             | postgres=CTc/postgres+
            |          |          |             |             | testdb=CTc/postgres
 postgres   | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 |
 template0  | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres          +
            |          |          |             |             | postgres=CTc/postgres
 template1  | postgres | UTF8     | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres          +
            |          |          |             |             | postgres=CTc/postgres
(4 rows)

postgres=# \q
pi@raspi ~ $ sudo -u postgres psql testdb
psql: FATAL:  database "testdb" does not exist
pi@raspi ~ $ sudo -u postgres createdb testdb
createdb: database creation failed: ERROR:  duplicate key value violates unique 
constraint "pg_database_datname_index"
DETAIL:  Key (datname)=(testdb) already exists.








Sent from my iPhone







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Access privileges /yyyy -- role that granted this privilege.

2017-02-17 Thread Adrian Klaver

On 02/17/2017 05:44 AM, Jean-Michel Scheiwiler wrote:

Hello,

I'm delving into access privileges and I have a problem (or a
misunderstanding) when I type \l, \dn+ or \dp, with the "/" part: the
"role that granted this privilege".

( https://www.postgresql.org/docs/current/static/sql-grant.html )

\l for instance

[postgres:~]$psql
psql (9.6.2)
Type "help" for help.

postgres=# create role superman login superuser;
CREATE ROLE
postgres=# create role user01 login ;
CREATE ROLE
postgres=# create role user02 login ;
CREATE ROLE
postgres=# create database db001;
CREATE DATABASE
postgres=# \l db001
  List of databases
 Name  |  Owner   | Encoding |   Collate   |Ctype| Access
privileges
---+--+--+-+-+---
 db001 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
(1 row)

postgres=# grant connect on database db001 to user01 ;
GRANT
postgres=# \l db001
                                List of databases
 Name  |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-------+----------+----------+-------------+-------------+-----------------------
 db001 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
       |          |          |             |             | postgres=CTc/postgres+
       |          |          |             |             | user01=c/postgres

postgres=# \q
[postgres:~]$psql -U superman postgres
psql (9.6.2)
Type "help" for help.

postgres=# grant connect on database db001 to user02;
GRANT
postgres=# \l db001
                                List of databases
 Name  |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-------+----------+----------+-------------+-------------+-----------------------
 db001 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
       |          |          |             |             | postgres=CTc/postgres+
       |          |          |             |             | user01=c/postgres    +
       |          |          |             |             | user02=c/postgres
(1 row)


I thought I would get user02=c/superman but instead I get
user02=c/postgres, i.e. I don't get the "role that granted this
privilege" but the owner of the database.


As a practical matter it does not matter, since postgres and superman are 
both superusers. Still, for an explanation of why it happens:


https://www.postgresql.org/docs/9.6/static/sql-grant.html

"If a superuser chooses to issue a GRANT or REVOKE command, the command 
is performed as though it were issued by the owner of the affected 
object. In particular, privileges granted via such a command will appear 
to have been granted by the object owner. (For role membership, the 
membership appears to have been granted by the containing role itself.)"
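The grantor recorded for each privilege can also be read straight out of the catalog; a sketch using the database from the example (each aclitem prints as grantee=privileges/grantor):

```sql
SELECT datname, datacl
FROM pg_database
WHERE datname = 'db001';
-- e.g. {=Tc/postgres,postgres=CTc/postgres,user01=c/postgres,user02=c/postgres}
```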




The problem is the same with schemas or tables access privileges.

Can you help me figure this out?


What are you trying to achieve?



Thank you in advance

Jean-Michel Scheiwiler



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Load multiple CSV file in Postgres using COPY

2017-02-17 Thread Adrian Klaver

On 02/17/2017 05:55 AM, Murtuza Zabuawala wrote:

Thank you all for your suggestions, but I was looking for a solution
built around the COPY command only.


Sort of a cheat:

https://www.postgresql.org/docs/9.6/static/sql-copy.html

PROGRAM

A command to execute. In COPY FROM, the input is read from standard 
output of the command, and in COPY TO, the output is written to the 
standard input of the command.


Note that the command is invoked by the shell, so if you need to 
pass any arguments to shell command that come from an untrusted source, 
you must be careful to strip or escape any special characters that might 
have a special meaning for the shell. For security reasons, it is best 
to use a fixed command string, or at least avoid passing any user input 
in it.
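Putting that together for the 50-file case: one COPY that shells out and strips each file's header row. This is a sketch only; it assumes a Unix-like server, a server-side path (PROGRAM runs on the server, and in 9.6 requires superuser), and that every file carries a header line:

```sql
COPY prdxgdat
FROM PROGRAM 'awk ''FNR > 1'' /data/data-2016-04-*.csv'
WITH (FORMAT csv);
-- awk's "FNR > 1" drops the first (header) line of each file, so no
-- HEADER option is needed and interior header rows never reach COPY.
```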





--
Regards,
Murtuza Zabuawala
EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>
The Enterprise PostgreSQL Company

On Fri, Feb 17, 2017 at 3:06 PM, Alexander Shchapov
<alexande...@gmail.com <mailto:alexande...@gmail.com>> wrote:

You might want to look into pgloader: http://pgloader.io/

On Fri, Feb 17, 2017 at 7:26 AM, Murtuza Zabuawala
<murtuza.zabuaw...@enterprisedb.com
<mailto:murtuza.zabuaw...@enterprisedb.com>> wrote:
> Hi,
>
> Is there any way to load multiple CSV files at once using single COPY
> command?
>
> I have scenario where I have to load multiple files,
>
> COPY prdxgdat FROM 'Z:/data-2016-04-01.csv' WITH DELIMITER ',' CSV
HEADER Y
> COPY prdxgdat FROM 'Z:/data-2016-04-02.csv' WITH DELIMITER ',' CSV
HEADER Y
> COPY prdxgdat FROM 'Z:/data-2016-04-03.csv' WITH DELIMITER ',' CSV
HEADER Y
> COPY prdxgdat FROM 'Z:/data-2016-04-04.csv' WITH DELIMITER ',' CSV
HEADER Y
> COPY prdxgdat FROM 'Z:/data-2016-04-05.csv' WITH DELIMITER ',' CSV
HEADER Y
> COPY prdxgdat FROM 'Z:/data-2016-04-06.csv' WITH DELIMITER ',' CSV
HEADER Y
> ..
> ..
> ..
> ..
> COPY prdxgdat FROM 'Z:/data-2016-04-50.csv' WITH DELIMITER ',' CSV
HEADER Y
>
> 50 files -> 50 COPY command, In my use case I think this is not a
good way
> to load data, Can you suggest any better way to do this?
>
> I can always write external script (eg: shell script) but is there
any other
> way to do this using single COPY command?
>
> --
> Regards,
> Murtuza Zabuawala
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company



--
Alexander Shchapov





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] disk writes within a transaction

2017-02-16 Thread Adrian Klaver

On 02/16/2017 11:33 AM, 2xlp - ListSubscriptions wrote:

Can someone enlighten me as to how Postgres handles disk writes? I've read some 
generic remarks about buffers, but that's about it.

We have a chunk of code that calls Postgres in a less-than-optimal way within a 
transaction block.  I'm wondering where to prioritize fixing it, as the traffic 
on the wire isn't an issue.

Basically the code looks like this:

begin;
update foo set foo.a='1' where foo.bar = 1;
...
update foo set foo.b='2' where foo.bar = 1;
...
update foo set foo.c='3' where foo.bar = 1;
commit;

If the updates are likely to be a memory based operation, consolidating them 
can wait.  If they are likely to hit the disk, I should schedule refactoring 
this code sooner than later.


I would suggest taking a look at:

https://www.postgresql.org/docs/9.6/static/wal-configuration.html
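For what it's worth, the three statements can also be collapsed into a single UPDATE, producing one new row version instead of three (note that SET targets cannot be table-qualified in Postgres, so "foo.a" must be written as "a"):

```sql
BEGIN;
-- One row version and one set of WAL records instead of three:
UPDATE foo SET a = '1', b = '2', c = '3' WHERE bar = 1;
COMMIT;
```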



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Service configuration file and password security

2017-02-16 Thread Adrian Klaver

On 02/16/2017 05:57 AM, JP Jacoupy wrote:

Hello,

This might seem a pretty novice question but I can't find an answer.

Can the password be stored in an encrypted way inside a service
configuration file?


To be clear you are talking about this, correct?:

https://www.postgresql.org/docs/9.6/static/libpq-pgservice.html

If so then no as I understand it. Assuming you have password 
authentication set up to md5, libpq takes care of doing the md5-hash 
before sending it to the server. You do have the following options:


1) https://www.postgresql.org/docs/9.6/static/libpq-pgpass.html

2) https://www.postgresql.org/docs/9.6/static/libpq-envars.html

PGPASSWORD behaves the same as the password connection parameter. Use of 
this environment variable is not recommended for security reasons, as 
some operating systems allow non-root users to see process environment 
variables via ps; instead consider using the ~/.pgpass file (see Section 
32.15).


PGPASSFILE specifies the name of the password file to use for lookups. 
If not set, it defaults to ~/.pgpass (see Section 32.15).


2) And coming in version 10:
http://paquier.xyz/postgresql-2/postgres-10-pgpassfile-connection/
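For completeness, the .pgpass format is one colon-separated entry per line (the values below are placeholders; the file must be mode 0600 or libpq ignores it):

```
# hostname:port:database:username:password   (* matches anything)
db.example.com:5432:mydb:appuser:s3cret
```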



--
Jacoupy Jean-Philippe


Sent from ProtonMail <https://protonmail.ch>, encrypted email based in
Switzerland.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Autovacuum stuck for hours, blocking queries

2017-02-16 Thread Adrian Klaver

On 02/16/2017 08:45 AM, Tim Bellis wrote:

Thank you all - that's really useful :-)

The other query that gets blocked behind the vacuum is the below (truncated).

This query is generated by jdbc in this method:
org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getIndexInfo(AbstractJdbc2DatabaseMetaData.java:4023)


Is JDBC doing anything else before issuing this?



Even though this is a read only query, is it also expected to be blocked behind 
the vacuum? Is there a way of getting indexes for a table which won't be 
blocked behind a vacuum?


Table 13.2 here:

https://www.postgresql.org/docs/9.5/static/explicit-locking.html

shows the conflicts with SHARE UPDATE EXCLUSIVE(vacuum).

pg_locks:

https://www.postgresql.org/docs/9.5/static/view-pg-locks.html

shows locks being held. So next time it happens I would take a look and 
see if you can work backwards from there.
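A minimal starting point against pg_locks, showing only lock requests that are still waiting (join pid back to pg_stat_activity to see the blocked queries):

```sql
SELECT l.pid, l.locktype, l.mode, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;   -- ungranted requests, i.e. the waiters
```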


You could directly access the index information using:

https://www.postgresql.org/docs/9.6/static/catalog-pg-index.html
https://www.postgresql.org/docs/9.6/static/catalog-pg-class.html
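Those two catalogs can be joined directly; a sketch with a hypothetical table name:

```sql
SELECT ci.relname AS index_name, i.indisunique, i.indisclustered
FROM pg_index i
JOIN pg_class ci ON ci.oid = i.indexrelid
WHERE i.indrelid = 'public.mytable'::regclass;   -- hypothetical table
```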



Thank you all again,

Tim

SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, ct.relname AS TABLE_NAME,
       NOT i.indisunique AS NON_UNIQUE, NULL AS INDEX_QUALIFIER,
       ci.relname AS INDEX_NAME,
       CASE i.indisclustered
            WHEN true THEN 1
            ELSE CASE am.amname WHEN 'hash' THEN 2 ELSE 3 END
       END AS TYPE,
       (i.keys).n AS ORDINAL_POSITION,
       pg_catalog.pg_get_indexdef(ci.oid, (i.keys).n, false) AS COLUMN_NAME,
       CASE am.amcanorder
            WHEN true THEN
                 CASE i.indoption[(i.keys).n - 1] & 1 WHEN 1 THEN 'D' ELSE 'A' END
            ELSE NULL
       END AS ASC_OR_DESC,
       ci.reltuples AS CARDINALITY, ci.relpages AS PAGES,
       pg_catalog.pg_get_expr(i.indpred, i.indrelid) AS FILTER_CONDITION
FROM pg_catalog.pg_class ct
  JOIN pg_catalog.pg_namespace n ON (ct.relnamespace = n.oid)
  JOIN (SELECT i.indexrelid, i.indrelid, i.indoption, i.indisunique,
               i.indisclustered, i.indpred, i.indexprs,
               information_schema._pg_expandarray(i.indkey) AS keys
        FROM pg_catalog.pg_index i) i ON (ct.oid = i.ind


This query is cut off so cannot say whether it is the issue or not.






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: [GENERAL] Using ctid in delete statement

2017-02-16 Thread Adrian Klaver

On 02/16/2017 07:42 AM, pinker wrote:

Adrian Klaver-4 wrote

Exactly, they do not have it whereas:

https://www.postgresql.org/docs/9.6/static/sql-select.html#SQL-FOR-UPDATE-SHARE


Still not much. The documentation could be more verbose on this topic. I can
only presume that, since there is an example with select:
SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss WHERE col1 = 5;
it's probably possible, but there is no information on when the lock is
released (on commit, like in Oracle?), especially if there is no explicit
BEGIN/END clause, as in this case.


From above section:

For more information on each row-level lock mode, refer to Section 13.3.2.

which takes you to:

https://www.postgresql.org/docs/9.6/static/explicit-locking.html#LOCKING-ROWS
"
FOR UPDATE

FOR UPDATE causes the rows retrieved by the SELECT statement to be 
locked as though for update. This prevents them from being locked, 
modified or deleted by other transactions until the current transaction 
ends. That is, other transactions that attempt UPDATE, DELETE, SELECT 
FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY 
SHARE of these rows will be blocked until the current transaction ends; 
conversely, SELECT FOR UPDATE will wait for a concurrent transaction 
that has run any of those commands on the same row, and will then lock 
and return the updated row (or no row, if the row was deleted). Within a 
REPEATABLE READ or SERIALIZABLE transaction, however, an error will be 
thrown if a row to be locked has changed since the transaction started. 
For further discussion see Section 13.4.


The FOR UPDATE lock mode is also acquired by any DELETE on a row, 
and also by an UPDATE that modifies the values on certain columns. 
Currently, the set of columns considered for the UPDATE case are those 
that have a unique index on them that can be used in a foreign key (so 
partial indexes and expressional indexes are not considered), but this 
may change in the future.

"

Which has:

"For further discussion see Section 13.4.":

https://www.postgresql.org/docs/9.6/static/applevel-consistency.html

And from there links to more information.
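Condensing the above into a session sketch (mytable and col1 taken from the quoted example):

```sql
BEGIN;
SELECT * FROM mytable WHERE col1 = 5 FOR UPDATE;
-- The matched rows are now locked; concurrent UPDATE, DELETE, or
-- SELECT ... FOR UPDATE on them blocks here.
COMMIT;   -- the row locks are released only when the transaction ends
-- Without an explicit BEGIN, the SELECT runs in its own single-statement
-- transaction, so the locks are released as soon as it completes.
```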





Oracle documentation is much more clear about it:
You can also use SELECT FOR UPDATE to lock rows that you do not want to
update, as in Example 9-6.
<http://docs.oracle.com/database/122/LNPLS/static-sql.htm#LNPLS00609>







--
View this message in context: 
http://postgresql.nabble.com/Using-ctid-in-delete-statement-tp5944434p5944733.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Using ctid in delete statement

2017-02-16 Thread Adrian Klaver
On 02/16/2017 06:52 AM, pinker wrote:
> Adrian Klaver-4 wrote
>> https://www.postgresql.org/docs/9.6/static/sql-truncate.html
>>
>> https://www.postgresql.org/docs/9.6/static/sql-delete.html
> 
> There is nothing about FOR UPDATE clause on those pages...

Exactly, they do not have it whereas:

https://www.postgresql.org/docs/9.6/static/sql-select.html#SQL-FOR-UPDATE-SHARE


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Using ctid in delete statement

2017-02-16 Thread Adrian Klaver

On 02/16/2017 02:04 AM, pinker wrote:

Thank you Tom for clarification.
Does it mean that FOR UPDATE clause works with other operations as well?
i.e. TRUNCATE, DELETE?


https://www.postgresql.org/docs/9.6/static/sql-truncate.html

https://www.postgresql.org/docs/9.6/static/sql-delete.html





--
View this message in context: 
http://postgresql.nabble.com/Using-ctid-in-delete-statement-tp5944434p5944658.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Alternate way of xpath

2017-02-16 Thread Adrian Klaver

On 02/16/2017 04:33 AM, dhaval jaiswal wrote:

I have the following situation.

PostgreSQL is not configured with the option --with-libxml.

I have a text column on which I am trying to run the following command,
which fails as expected since the server is not configured with libxml.
However, is there any alternate way through which I can achieve this?

select xpath('///Name/text()', column1_xml::xml) from test;


Pull the text out and use another language to process it:

https://en.wikipedia.org/wiki/XPath#Implementations

either externally or in a Postgres pl* function.
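If the markup is simple, regular, and unnamespaced, a plain regular expression can stand in for an XPath like //Name/text() without libxml at all. This is a sketch, not a real XML parser (it will misbehave on attributes, CDATA, or nesting):

```sql
SELECT substring(column1_xml from '<Name>([^<]*)</Name>') AS name
FROM test;   -- returns the first capture group of the first match
```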



ERROR:  unsupported XML feature
DETAIL:  This functionality requires the server to be built with libxml
support.
HINT:  You need to rebuild PostgreSQL using --with-libxml.



Sent from Outlook <http://aka.ms/weboutlook>




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Autovacuum stuck for hours, blocking queries

2017-02-15 Thread Adrian Klaver

On 02/15/2017 09:30 AM, Tim Bellis wrote:

I have a postgres 9.3.4 database table which (intermittently but reliably) gets 
into a state where queries get blocked indefinitely (at least for many hours) 
behind an automatic vacuum. I was under the impression that vacuum should never 
take any blocking locks for any significant period of time, and so would like 
help resolving the issue.

The process blocking the query is:
postgres 21985 11304 98 Feb13 ?1-14:20:52 postgres: autovacuum worker process 
  
which is running the query
autovacuum: VACUUM public.

The query being blocked is:
ALTER TABLE  ALTER COLUMN  DROP DEFAULT
(But I have seen this previously with other queries being blocked. I used the 
SQL in https://wiki.postgresql.org/wiki/Lock_Monitoring to determine which 
queries were blocked)



Other ALTER TABLE queries?

If so I believe this might apply:

https://www.postgresql.org/docs/9.5/static/explicit-locking.html

SHARE UPDATE EXCLUSIVE

Conflicts with the SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW 
EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode 
protects a table against concurrent schema changes and VACUUM runs.


Acquired by VACUUM (without FULL), ANALYZE, CREATE INDEX 
CONCURRENTLY, and ALTER TABLE VALIDATE and other ALTER TABLE variants 
(for full details see ALTER TABLE).



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Adrian Klaver

On 02/15/2017 09:45 AM, Shawn Thomas wrote:

Which would you recommend?  Leave the data directory in place and
re-install PG or copy it to somewhere else, delete it and then
re-install PG?


I would copy the data directory somewhere else for safe keeping leaving 
the original in place. Then reinstall Postgres, the install should leave 
the original directory alone and you will be ready to go. Should there 
be an oops you will have the copy as backup.




-Shawn


On Feb 15, 2017, at 9:36 AM, Magnus Hagander <mag...@hagander.net
<mailto:mag...@hagander.net>> wrote:

On Wed, Feb 15, 2017 at 6:28 PM, Shawn Thomas
<thoma...@u.washington.edu <mailto:thoma...@u.washington.edu>> wrote:

Well that would make more sense of things.  I had removed and
re-installed the postgresql-common package:

https://packages.debian.org/jessie/postgresql-common
<https://packages.debian.org/jessie/postgresql-common>

and thought that it would leave the main PG package in place.  But
perhaps I was wrong.  I’ll follow Tom’s advice and just re-install
everything (saving the old data directory) and hope the new
installation can use the old data directory.


If you removed it and then installed it, then the removal would remove
all dependent packages, and if you then only installed that one and not
the dependencies, that would explain it.

If you had run a reinstall on it, then it would've kept them around.



One question about this approach though:  the Debian package
installation automatically initializes the new data directory and
starts PG.  If I shut it down and copy the old data directory into
the newly installed one, will there be an xlog issue?


You have to copy the xlog along with the database.

Or if you leave it in place where it is, the packages won't initialize
a new data directory.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Adrian Klaver

On 02/15/2017 09:28 AM, Shawn Thomas wrote:

Well that would make more sense of things.  I had removed and
re-installed the postgresql-common package:

https://packages.debian.org/jessie/postgresql-common


Well that is the glue that holds the pgcluster scheme together. Also 
when I try it I get:


sudo apt-get remove postgresql-common

The following packages will be REMOVED:
  postgresql-9.4 postgresql-9.6 postgresql-common 
postgresql-contrib-9.4 postgresql-contrib-9.6 postgresql-server-dev-9.4 
postgresql-server-dev-9.6

Do you want to continue? [Y/n]

Which would explain a lot.



and thought that it would leave the main PG package in place.  But
perhaps I was wrong.  I’ll follow Tom’s advice and just re-install
everything (saving the old data directory) and hope the new installation
can use the old data directory.

One question about this approach though:  the Debian package
installation automatically initializes the new data directory and starts
PG.  If I shut it down and copy the old data directory into the newly
installed one, will there be an xlog issue?

-Shawn


On Feb 15, 2017, at 9:09 AM, Magnus Hagander <mag...@hagander.net
<mailto:mag...@hagander.net>> wrote:

On Wed, Feb 15, 2017 at 6:03 PM, Shawn Thomas
<thoma...@u.washington.edu <mailto:thoma...@u.washington.edu>> wrote:

/usr/lib/postgresql/9.4/bin/pg_ctl: No such file or directory

postgres@pangaea:/usr/lib/postgresql/9.4/bin$ ls -al
  total 4008
  drwxr-xr-x 2 root root4096 Feb  9 16:17 .
  drwxr-xr-x 3 root root4096 Feb  9 16:17 ..
  -rwxr-xr-x 1 root root   68128 Nov 16 06:53 clusterdb
  -rwxr-xr-x 1 root root   68192 Nov 16 06:53 createdb
  -rwxr-xr-x 1 root root   63920 Nov 16 06:53 createlang
  -rwxr-xr-x 1 root root   72672 Nov 16 06:53 createuser
  -rwxr-xr-x 1 root root   63936 Nov 16 06:53 dropdb
  -rwxr-xr-x 1 root root   63920 Nov 16 06:53 droplang
  -rwxr-xr-x 1 root root   63904 Nov 16 06:53 dropuser
  -rwxr-xr-x 1 root root   68416 Nov 16 06:53 pg_basebackup
  -rwxr-xr-x 1 root root  351904 Nov 16 06:53 pg_dump
  -rwxr-xr-x 1 root root 2186504 Nov 16 06:53 pg_dumpall
  -rwxr-xr-x 1 root root   30992 Nov 16 06:53 pg_isready
  -rwxr-xr-x 1 root root   47600 Nov 16 06:53 pg_receivexlog
  -rwxr-xr-x 1 root root   51928 Nov 16 06:53 pg_recvlogical
  -rwxr-xr-x 1 root root  154944 Nov 16 06:53 pg_restore
  -rwxr-xr-x 1 root root  515320 Nov 16 06:53 psql
  -rwxr-xr-x 1 root root   68160 Nov 16 06:53 reindexdb
  -rwxr-xr-x 1 root root   72384 Nov 16 06:53 vacuumdb

As I mentioned, this Debian package removes pg_ctl from the bin
directory and instead attempts to wrap the pg_ctl functionality in
a perl script so that the PG process is integrated with systemd.
I really wish they hadn’t, and it’s part of the reason I’m where
I’m at.


pg_ctl is normally present in /usr/lib/postgresql/<version>/bin on a
debian system. If that is gone, somebody removed it, or you didn't
install the "postgresql-9.4" package which provides it. On a 9.4 system:

$ dpkg -S /usr/lib/postgresql/9.4/bin/pg_ctl
postgresql-9.4: /usr/lib/postgresql/9.4/bin/pg_ctl

You could try reinstalling the postgresql-9.4 package and see if it
comes back. The rest of the binaries in that directory seems to be
from postgresql-client-9.4 though -- have you actually by mistake
uninstalled the server package completely?

As in, that directory is supposed to have the "postgres" binary which
is the database server and it's not there. So there is no wonder it's
not starting...

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Adrian Klaver

On 02/15/2017 09:28 AM, Joshua D. Drake wrote:

On 02/15/2017 09:17 AM, Adrian Klaver wrote:

On 02/15/2017 09:03 AM, Shawn Thomas wrote:

/usr/lib/postgresql/9.4/bin/pg_ctl: No such file or directory



That should have been:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 16.04.2 LTS
Release:16.04
Codename:   xenial



This is starting to sound like someone inadvertently executed an rm
somewhere they shouldn't have (outside of just the original ssl file).


Or a defective package(s) upgrade. Either way crucial parts are missing.



JD





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Adrian Klaver

On 02/15/2017 09:03 AM, Shawn Thomas wrote:

/usr/lib/postgresql/9.4/bin/pg_ctl: No such file or directory



That should have been:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 16.04.2 LTS
Release:16.04
Codename:   xenial



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Adrian Klaver
Command Prompt, Inc.  http://the.postgres.company/
   +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Adrian Klaver

On 02/15/2017 08:35 AM, Shawn Thomas wrote:

Yes, that’s the correct sequence of scripts.  And no there’s not anything 
really helpful in the system logs.

I’m thinking that at this point I need to approach this problem as more of a 
disaster recovery.  There was a full pg_dumpall  file that was deleted and 
cannot be recovered so I need to recover the data from the 
/var/lib/postgresql/9.4/main directory.  I believe this is called a file level 
recovery.  I assume I need to use a fully functional, same version PG (on 
another machine?) to create a full dump of the data directory.  Once I have 
this I can re-install Postgres on the initial server and read the databases 
back into it.


I have to believe that if you cannot get the server to start then the 
data directory is no shape to recover from. And if the data directory is 
good and it is the program files that are corrupted then it would be a 
matter of reinstalling Postgres. In either case the most important thing 
to do would be to make a copy of the data directory before you do 
anything else.
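That copy step can be sketched as follows (the data-directory path is an assumption from this thread; the server must be fully stopped first, or the copy will be inconsistent — the sketch below uses a self-contained demo directory so it can run anywhere):

```shell
# Cold file-level copy of a data directory into a tarball.
# DATADIR defaults to a demo directory so the sketch is self-contained;
# on the real system it would be e.g. /var/lib/postgresql/9.4/main,
# copied as the postgres user with the server stopped.
set -e
DATADIR=${DATADIR:-./demo-datadir}
mkdir -p "$DATADIR"
[ -f "$DATADIR/PG_VERSION" ] || echo 9.4 > "$DATADIR/PG_VERSION"
tar czf datadir-backup.tar.gz -C "$(dirname "$DATADIR")" "$(basename "$DATADIR")"
tar tzf datadir-backup.tar.gz   # verify the archive lists the files
```

The tarball preserves the directory layout, so it can later be unpacked next to a matching-version installation.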


What exactly happened that caused the ssl cert and the pg_dumpall file 
to deleted?


In other words what else got deleted?



Any advice on how to best go about this?  The official documentation seems a 
bit thin:

https://www.postgresql.org/docs/9.4/static/backup-file.html

I’ve only worked with normal (pg_dump, pg_dumpall) backups in the past.

-Shawn


On Feb 15, 2017, at 6:35 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 02/14/2017 08:47 PM, Shawn Thomas wrote:

No it doesn’t matter if run with sudo, postgres or even root.  Debian
actually wraps the command and executes some initial scripts with
different privileges but ends up making sure that Postgres ends up
running under the postgres user.  I get the same output if run with sudo:

sudo systemctl status postgresql@9.4-main.service
<mailto:postgresql@9.4-main.service> -l
  Error: could not exec   start -D /var/lib/postgresql/9.4/main -l
/var/log/postgresql/postgresql-9.4-main.log -s -o  -c
config_file="/etc/postgresql/9.4/main/postgresql.conf”




So you are talking about:

/etc/init.d/postgresql

which then calls:

/usr/share/postgresql-common/init.d-functions

Or is there another setup on your system?

Any relevant information in the system logs?


Thanks, though.

-Shawn



--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Missing feature - how to differentiate insert/update in plpgsql function?

2017-02-15 Thread Adrian Klaver

On 02/15/2017 06:53 AM, hubert depesz lubaczewski wrote:

On Wed, Feb 15, 2017 at 06:44:09AM -0800, Adrian Klaver wrote:

On 02/15/2017 06:27 AM, hubert depesz lubaczewski wrote:

On Wed, Feb 15, 2017 at 06:24:14AM -0800, Adrian Klaver wrote:

On 02/15/2017 06:05 AM, hubert depesz lubaczewski wrote:

Hi,
I have a function, in PostgreSQL 9.6, which does:

INSERT INTO table () values (...)
ON CONFLICT DO UPDATE ...;

The thing is that the function should return information whether the row
was modified, or created - and currently it seems that this is not
available. Or am I missing something?


All I can think of is to use:

RETURNING pk

and see if that changed or not.


Well, this wouldn't work for me as pkey will not change.


Alright you lost me. If the pkey does not change then how do you get new
rows (INSERT)?



For my particular case, I have this table
create table t (
   a_from text,
   a_to text,
   created timestamptz,
   updated timestamptz,
   primary key (a_from, a_to)
);


Well, if I do:

insert into t (a_from, a_to)

and will use some values that do not exist in table, then insert
happens, but not sure what do you mean about "primary key change" in
this case.

On the other hand, if the from/to already exists in the table, then
update happens (on "updated" column) - and then there is definitely no
pkey change.


Yeah I see(thanks to Karsten also). So:

CREATE TABLE upsert_test (fld_1 varchar,
fld_2 varchar,
PRIMARY KEY (fld_1,
fld_2));

INSERT INTO upsert_test (fld_1,
fld_2)
VALUES ('test1', 'test3')
ON CONFLICT (fld_1,
fld_2)
DO
UPDATE
SET fld_1 = EXCLUDED.fld_1,
fld_2 = EXCLUDED.fld_2
RETURNING
fld_1,
fld_2;

 fld_1 | fld_2
---+---
 test1 | test3
(1 row)

INSERT 0 1

INSERT INTO upsert_test (fld_1,
fld_2)
VALUES ('test4', 'test5')
ON CONFLICT (fld_1,
fld_2)
DO
UPDATE
SET fld_1 = EXCLUDED.fld_1,
fld_2 = EXCLUDED.fld_2
RETURNING
fld_1,
fld_2;

 fld_1 | fld_2
---+---
 test4 | test5


Can see the differentiation issue now. Can't see a solution right now 
other than the one you already have, a marker field that you can use to 
determine INSERT/UPDATE.
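One other workaround, not mentioned in this thread and relying on an implementation detail (the system column xmax is 0 on a freshly inserted row version and nonzero when ON CONFLICT took the UPDATE path), so treat it as an untested sketch against depesz's table t:

```sql
-- Hypothetical upsert; (xmax = 0) distinguishes the INSERT path (true)
-- from the UPDATE path (false) without needing any extra marker column.
INSERT INTO t (a_from, a_to, created, updated)
VALUES ('x', 'y', now(), now())
ON CONFLICT (a_from, a_to)
DO UPDATE SET updated = now()
RETURNING (xmax = 0) AS inserted;
```

Since it leans on MVCC internals rather than documented behavior, a trigger-maintained marker field remains the conservative choice.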





Best regards,

depesz





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Missing feature - how to differentiate insert/update in plpgsql function?

2017-02-15 Thread Adrian Klaver

On 02/15/2017 06:27 AM, hubert depesz lubaczewski wrote:

On Wed, Feb 15, 2017 at 06:24:14AM -0800, Adrian Klaver wrote:

On 02/15/2017 06:05 AM, hubert depesz lubaczewski wrote:

Hi,
I have a function, in PostgreSQL 9.6, which does:

INSERT INTO table () values (...)
ON CONFLICT DO UPDATE ...;

The thing is that the function should return information whether the row
was modified, or created - and currently it seems that this is not
available. Or am I missing something?


All I can think of is to use:

RETURNING pk

and see if that changed or not.


Well, this wouldn't work for me as pkey will not change.


Alright you lost me. If the pkey does not change then how do you get new 
rows (INSERT)?




For my particular case, I have this table
create table t (
a_from text,
a_to text,
created timestamptz,
updated timestamptz,
primary key (a_from, a_to)
);

where created and updated are set (and kept correct) with triggers. And
in my insert/update, if the row exists, I just set updated to now().


This I understand, though it does not square with the above.



So, for my particular case, I can, and do, compare if created is the
same as updated, and if no - it was update, otherwise - insert.

But it would be really good to get some proper support for
differentiating flow of such queries...

depesz





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Adrian Klaver

On 02/14/2017 08:47 PM, Shawn Thomas wrote:

No it doesn’t matter if run with sudo, postgres or even root.  Debian
actually wraps the command and executes some initial scripts with
different privileges but ends up making sure that Postgres ends up
running under the postgres user.  I get the same output if run with sudo:

sudo systemctl status postgresql@9.4-main.service
<mailto:postgresql@9.4-main.service> -l
   Error: could not exec   start -D /var/lib/postgresql/9.4/main -l
/var/log/postgresql/postgresql-9.4-main.log -s -o  -c
config_file="/etc/postgresql/9.4/main/postgresql.conf”




So you are talking about:

/etc/init.d/postgresql

which then calls:

/usr/share/postgresql-common/init.d-functions

Or is there another setup on your system?

Any relevant information in the system logs?


Thanks, though.

-Shawn



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Missing feature - how to differentiate insert/update in plpgsql function?

2017-02-15 Thread Adrian Klaver

On 02/15/2017 06:05 AM, hubert depesz lubaczewski wrote:

Hi,
I have a function, in PostgreSQL 9.6, which does:

INSERT INTO table () values (...)
ON CONFLICT DO UPDATE ...;

The thing is that the function should return information whether the row
was modified, or created - and currently it seems that this is not
available. Or am I missing something?


All I can think of is to use:

RETURNING pk

and see if that changed or not.



Best regards,

depesz






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Can't restart Postgres

2017-02-14 Thread Adrian Klaver

On 02/14/2017 05:00 PM, Adrian Klaver wrote:

On 02/14/2017 12:00 PM, Shawn Thomas wrote:

Yes that would be the standard approach.  But the Debian package removes
pg_ctl from its normal place and wraps it with a perl script in a way
that makes it difficult to work with (it doesn’t accept the same
arguments):

https://wiki.debian.org/PostgreSql#pg_ctl_replacement

@Magnus, can you give me an example of how I might use pg_lsclusters
and pg_ctlcluster?  I’ve tried:



I do not see a sudo below, nor is it apparent whether you are doing this
as the postgres user.


pg_ctlcluster 9.4 main start
Error: could not exec   start -D /var/lib/postgresql/9.4/main -l
/var/log/postgresql/postgresql-9.4-main.log -s -o  -c
config_file="/etc/postgresql/9.4/main/postgresql.conf”


Not sure how close Debian 8 is to Ubuntu 16.04(something I use), but
from your first post they look like they share the same startup scripts.
So something like:

sudo systemctl restart postgresql@9.4-main.service

 ^^^
  Should be  start





-Shawn


On Feb 14, 2017, at 11:52 AM, Magnus Hagander <mag...@hagander.net
<mailto:mag...@hagander.net>> wrote:

On Tue, Feb 14, 2017 at 8:47 PM, Joshua D. Drake <j...@commandprompt.com
<mailto:j...@commandprompt.com>> wrote:

On 02/14/2017 11:43 AM, Shawn Thomas wrote:

pangaea:/var/log# systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service;
enabled)
   Active: active (exited) since Tue 2017-02-14 10:48:18 PST;
50min ago
  Process: 28668 ExecStart=/bin/true (code=exited,
status=0/SUCCESS)
 Main PID: 28668 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/postgresql.service


What about if you use pg_ctl as the postgres user? That will give you
a better idea.


You don't want to be doing that on a systemd system, but try a
combination of pg_lsclusters and pg_ctlcluster. Might be you need to
shut it down once that way before it realizes it's down, and then start
it back up.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Can't restart Postgres

2017-02-14 Thread Adrian Klaver

On 02/14/2017 12:00 PM, Shawn Thomas wrote:

Yes that would be the standard approach.  But the Debian package removes
pg_ctl from its normal place and wraps it with a perl script in a way
that makes it difficult to work with (it doesn’t accept the same arguments):

https://wiki.debian.org/PostgreSql#pg_ctl_replacement

@Magnus, can you give me an example of how I might use pg_lsclusters
and pg_ctlcluster?  I’ve tried:



I do not see a sudo below, nor is it apparent whether you are doing this 
as the postgres user.



pg_ctlcluster 9.4 main start
Error: could not exec   start -D /var/lib/postgresql/9.4/main -l
/var/log/postgresql/postgresql-9.4-main.log -s -o  -c
config_file="/etc/postgresql/9.4/main/postgresql.conf”


Not sure how close Debian 8 is to Ubuntu 16.04(something I use), but 
from your first post they look like they share the same startup scripts. 
So something like:


sudo systemctl restart postgresql@9.4-main.service




-Shawn


On Feb 14, 2017, at 11:52 AM, Magnus Hagander <mag...@hagander.net
<mailto:mag...@hagander.net>> wrote:

On Tue, Feb 14, 2017 at 8:47 PM, Joshua D. Drake <j...@commandprompt.com
<mailto:j...@commandprompt.com>> wrote:

On 02/14/2017 11:43 AM, Shawn Thomas wrote:

pangaea:/var/log# systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service;
enabled)
   Active: active (exited) since Tue 2017-02-14 10:48:18 PST;
50min ago
  Process: 28668 ExecStart=/bin/true (code=exited,
status=0/SUCCESS)
 Main PID: 28668 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/postgresql.service


What about if you use pg_ctl as the postgres user? That will give you
a better idea.


You don't want to be doing that on a systemd system, but try a
combination of pg_lsclusters and pg_ctlcluster. Might be you need to
shut it down once that way before it realizes it's down, and then start
it back up.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] database folder name and tables filenames

2017-02-14 Thread Adrian Klaver
On 02/14/2017 09:47 AM, Mimiko wrote:
> On 14.02.2017 17:30, Adrian Klaver wrote:
>>> Is there a way to change postgres behavior to name database folders by
>>> the database name? And table files in them by table's name? And not
>>> using OIDs.
>>
>> No.
>>
>> Is there a particular problem you are trying to solve?
> 
> No, there is not a problem. It's a convenience to visually view databases
> and tables with their names and know what size they occupy, using
> queries of pg_catalog, like there is in mysql.

Take look at:

https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT

So as example:

test=# select pg_size_pretty(pg_database_size('test'));
 pg_size_pretty 
----------------
 8464 kB
(1 row)
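A per-table version of the same idea can be sketched as the query below (untested here; pg_total_relation_size() counts the table plus its indexes and TOAST data):

```sql
-- Ten largest ordinary tables in the current database, by total on-disk size.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```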


> 
> 
> On 14.02.2017 17:34, Tom Lane wrote:
>> It used to work like that, decades ago, and it caused enormous problems
>> during table/database renames.  We're not going back.
> 
> So this is the culprit. Isn't there any option to use names? Even when
> compiling?
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Use full text to rank results higher if they are "closer hit"

2017-02-14 Thread Adrian Klaver
On 02/14/2017 07:35 AM, Thomas Nyberg wrote:
> Hello,
> 
> I think it's easier to explain my question with example code:
> 
> 
> CREATE TABLE t ( s VARCHAR );
> CREATE TABLE
> 
> INSERT INTO t VALUES ('hello'), ('hello world');
> INSERT 0 2
> 
> SELECT * FROM t;
>   s
> -------------
>  hello
>  hello world
> (2 rows)
> 
> SELECT s, ts_rank(vector, query) AS rank
> FROM t, to_tsvector(s) vector, to_tsquery('hello') query
> WHERE query @@ vector;
>   s  |   rank
> -------------+-----------
>  hello   | 0.0607927
>  hello world | 0.0607927
> (2 rows)
> 
> 
> Here both 'hello' and 'hello world' are ranked equally highly when
> searching with 'hello'. What I'm wondering is, is there a way within
> postgres to have it match higher to just 'hello' than 'hello world'?
> I.e. something like it slightly down-weights extraneous terms? Of course
> in general I don't know the query or the field strings ahead of time.

Some digging around found this:

https://www.postgresql.org/docs/9.6/static/textsearch-controls.html#TEXTSEARCH-RANKING

Setting a normalization of 1:

test=# SELECT s, ts_rank(vector, query, 1) AS rank
FROM t, to_tsvector(s) vector, to_tsquery('hello') query
WHERE query @@ vector; 
  s  |   rank
-------------+-----------
 hello   | 0.0607927
 hello world | 0.0383559

> 
> Thanks for any help!
> 
> Cheers,
> Thomas
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] database folder name and tables filenames

2017-02-14 Thread Adrian Klaver

On 02/14/2017 07:19 AM, Mimiko wrote:

Hello.

Is there a way to change postgres behavior to name database folders by
the database name? And table files in them by table's name? And not
using OIDs.


No.

Is there a particular problem you are trying to solve?








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] database folder name and tables filenames

2017-02-14 Thread Adrian Klaver

On 02/14/2017 07:19 AM, Mimiko wrote:

Hello.

Is there a way to change postgres behavior to name database folders by
the database name? And table files in them by table's name? And not
using OIDs.


For more information see:

https://www.postgresql.org/docs/9.6/static/storage-file-layout.html
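Given that layout, the mapping can also be done in the other direction, from a table name to its OID-based file, with pg_relation_filepath() (a sketch, untested here):

```sql
-- Returns the main-fork path of a relation, relative to the data
-- directory, e.g. base/<database oid>/<relfilenode>.
SELECT pg_relation_filepath('pg_class');
```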







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Auto-Rollback option

2017-02-14 Thread Adrian Klaver

On 02/14/2017 05:12 AM, mpomykacz wrote:

Ok, thanks for the answers. But unfortunately they did not solve my problem.


Still not actually sure what the issue is?:

1) Problem with pgAdmin setup

or

2) Broader issue of having Postgres rollback automatically on a error.

or

3) Patch management.



I will move it to the pgadmin subforum.
Thanks:)



--
View this message in context: 
http://postgresql.nabble.com/Auto-Rollback-option-tp5943942p5944159.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Auto-Rollback option

2017-02-13 Thread Adrian Klaver

On 02/13/2017 02:10 PM, mpomykacz wrote:

Yes, I'm talking about pgAdmin III - sorry...

I think that auto-commit is on by default but auto-rollback is off. But I'll
check if you say so.


Did you look here:

https://www.pgadmin.org/docs/1.22/options-query_tool.html

It seems checking it here would make the choice persist between 
sessions. This assumes that everyone uses the same instance of pgAdmin3. 
Otherwise it would need to be checked on each instance.




And I know I can check the box next to Enable Auto ROLLBACK but I'm trying
to avoid it and enable auto rollback without a manual step.



--
View this message in context: 
http://postgresql.nabble.com/Auto-Rollback-option-tp5943942p5944047.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] xmlelement AND timestamps.

2017-02-13 Thread Adrian Klaver

On 02/13/2017 02:56 PM, Lynn Dobbs wrote:

I just migrated from 9.2.4 to 9.6.1 and had several user-created
functions fail.

Recreating the failure with "SELECT xmlelement(name foo,
'infinity'::timestamp)" gives:
ERROR: timestamp out of range
DETAIL: XML does not support infinite timestamp values.

I don't find anything in the documentation that explains this.  I
consider this a regression.


All I could find was this thread from 2009:

https://www.postgresql.org/message-id/41F26B729B014C3E8F20F5B7%40teje

which indicated it was fixed at that time.





I have many tables that have a "starting" and "ending" timestamp that
default to "-infinity"
and "infinity" respectively.   Any function I have that outputs xml
containing those columns
has to have those values cast to text.

Lynn Dobbs
--
Chief Technical Office
CreditLink Corporation
858-496-1000 x 103






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Potential bug with pg_notify

2017-02-13 Thread Adrian Klaver
On 02/13/2017 11:50 AM, François Beaulieu wrote:
> 
>> On Feb 13, 2017, at 1:56 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>>
>> On 02/13/2017 09:04 AM, François Beaulieu wrote:
>>>
>>>> On Feb 13, 2017, at 11:45 AM, Adrian Klaver <adrian.kla...@aklaver.com> 
>>>> wrote:
>>>>
>> |
>>>>>
>>>>>> 3) Are the first row and the second row in the same partition?
>>>>>
>>>>> Doubtful, the problem occurs several times a day and we only have one 
>>>>> partition a day. Let me check with the above example. What would be the 
>>>>> best way to determine which child a row is in, and the relative position 
>>>>> in the child table?
>>>>
>>>> As to position, maybe ctid though it has caveats:
>>>
>>> The three rows in my example return a ctid of (742,17), (742,18) and 
>>> (742,19) respectively, in their child table. So, probably not at a 
>>> partition boundary.
>>>
>>>>> Also; my worker is written in Perl and uses DBD::Pg. I haven’t been able 
>>>>> to 100% eliminate the module itself as the cause of the bug. Any 
>>>>> suggestions on how I might go about doing that efficiently?
>>>>
>>>> What does the worker do?
>>>
>>> Sorry, that's my employer’s classified IP. :-)
>>> Does it matter?
>>
>> Only that it makes it harder to give any suggestions on eliminating it as a 
>> source of error if it is a black box.  I don't think, at this point, it is 
>> necessary to see the actual source. If it is possible a high level synopsis 
>> of what it does might be sufficient.
> 
> Suffice it to say the worker uses a read-only connection to the database to 
> receive these notices and to query the table for the matching rows in certain 
> circumstances. It never modifies the database in any way; it only uses this 
> information to act upon a completely different subsystem. I loosely based it 
> on a snippet of code from this very mailing list:
> 
> https://www.postgresql.org/message-id/20050104031937.ga80...@winnie.fuhr.org
> 
>>>> Could it be the module is not dealing with time zones correctly? Though 
>>>> thinking about this that would seem to manifest a problem only around the 
>>>> 7th day boundary. So put this down to thinking aloud.
>>>
>>> No, the partitioning scheme seems to be respecting the timezone properly, 
>>> and my issue is happening every few hours in the middle of the day and 
>>> we’re in UTC+5, so not near the end of the day in UTC. Besides, I believe 
>>> timestamp without timezone assumes the local timezone of the server, which 
>>> is set to UTC anyway.
>>>
>>> Has the schema eliminated your original theory regarding the delaying of 
>>> the generation of the _id? I don’t think that would normally be an issue 
>>> that occurs sporadically and the _id seems to be part of the INSERT, which 
>>> would indicate that, as it should, it’s done generating before my trigger 
>>> is called.
>>
>> I don't see anything that would explain a delay. Still the fact remains that 
>> in most cases the notify captures the _id, but in some cases it does not. 
>> Going back to your OP I realized I missed that the NEW.userfield was also 
>> not coming through. So that seems to confirm that pg_notify() is firing 
>> before it gets access to NEW.*. Having said that I have no idea why?
>>
>> The only thing I can think to do is(untested):
>>
>> CREATE OR REPLACE FUNCTION notify_trigger() RETURNS trigger AS $$
>> DECLARE
>>  _newid integer;
>> BEGIN
>>  SELECT NEW._id INTO _newid;
>>  IF _newid IS NULL OR NOT FOUND THEN
>>  RAISE NOTICE 'NEW._id is NULL/NOT FOUND';
>>  PERFORM pg_sleep(0.1); --Or whatever interval you want.
>>  END IF;
>>  PERFORM pg_notify('watchers', TG_TABLE_NAME || ',' || NEW._id|| ',' || 
>> NEW.userfield);
>>  RETURN new;
>> END;
>> $$ LANGUAGE plpgsql;
>>
>> Not really a solution but it might help determine whether it is a timing 
>> issue. Also this is probably something that should be done on a test server 
>> to be safe.
> 
> Thanks for the tip, I’ll try that in my lab. I want to try to replicate the 
> issue more consistently first, so that my tests after the change will be more 
> conclusive. 
> 
> In the meantime, if anyone has any other suggestions, please don’t hesitate.

The only thing I can come up with is in your test lab once you 
replicate the issue crank up the logging level:

https://www.postgresql.org/docs/9.4/static/runtime-config-logging.html#RUNTIME-CONFIG-SEVERITY-LEVELS

log_min_messages

to see if more detail sheds any light.
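A postgresql.conf fragment for such a session might look like this (an assumption-laden sketch for a disposable test server, since these levels are verbose):

```
log_min_messages = debug1
log_min_error_statement = debug1
log_line_prefix = '%m [%p] '
```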

> 
> 
> Thanks,
> -=François Beaulieu
> SBK Telecom
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Potential bug with pg_notify

2017-02-13 Thread Adrian Klaver

On 02/13/2017 09:04 AM, François Beaulieu wrote:



On Feb 13, 2017, at 11:45 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:


 |



3) Are the first row and the second row in the same partition?


Doubtful, the problem occurs several times a day and we only have one partition 
a day. Let me check with the above example. What would be the best way to 
determine which child a row is in, and the relative position in the child table?


As to position, maybe ctid though it has caveats:


The three rows in my example return a ctid of (742,17), (742,18) and (742,19) 
respectively, in their child table. So, probably not at a partition boundary.


Also; my worker is written in Perl and uses DBD::Pg. I haven’t been able to 
100% eliminate the module itself as the cause of the bug. Any suggestions on 
how I might go about doing that efficiently?


What does the worker do?


Sorry, that's my employer’s classified IP. :-)
Does it matter?


Only that it makes it harder to give any suggestions on eliminating it 
as a source of error if it is a black box.  I don't think, at this 
point, it is necessary to see the actual source. If it is possible a 
high level synopsis of what it does might be sufficient.





Could it be the module is not dealing with time zones correctly? Though 
thinking about this that would seem to manifest a problem only around the 7th 
day boundary. So put this down to thinking aloud.


No, the partitioning scheme seems to be respecting the timezone properly, and 
my issue is happening every few hours in the middle of the day and we’re in 
UTC+5, so not near the end of the day in UTC. Besides, I believe timestamp 
without timezone assumes the local timezone of the server, which is set to UTC 
anyway.

Has the schema eliminated your original theory regarding the delaying of the 
generation of the _id? I don’t think that would normally be an issue that 
occurs sporadically and the _id seems to be part of the INSERT, which would 
indicate that, as it should, it’s done generating before my trigger is called.


I don't see anything that would explain a delay. Still the fact remains 
that in most cases the notify captures the _id, but in some cases it 
does not. Going back to your OP I realized I missed that the 
NEW.userfield was also not coming through. So that seems to confirm that 
pg_notify() is firing before it gets access to NEW.*. Having said that I 
have no idea why?


The only thing I can think to do is(untested):

CREATE OR REPLACE FUNCTION notify_trigger() RETURNS trigger AS $$
DECLARE
_newid integer;
BEGIN
  SELECT NEW._id INTO _newid;
  IF _newid IS NULL OR NOT FOUND THEN
RAISE NOTICE 'NEW._id is NULL/NOT FOUND';
PERFORM pg_sleep(0.1); --Or whatever interval you want.
  END IF;
  PERFORM pg_notify('watchers', TG_TABLE_NAME || ',' || NEW._id|| ',' 
|| NEW.userfield);

  RETURN new;
END;
$$ LANGUAGE plpgsql;

Not really a solution but it might help determine whether it is a timing 
issue. Also this is probably something that should be done on a test 
server to be safe.





Thanks,
-=François Beaulieu
SBK Telecom




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Auto-Rollback option

2017-02-13 Thread Adrian Klaver
On 02/13/2017 09:59 AM, John R Pierce wrote:
> On 2/13/2017 7:15 AM, mpomykacz wrote:
>> So my problem is like this:
>>
>> I start the transaction with BEGIN TRANSACTION;
>> Then I have for example some INSERTs to DB
>> and at the end COMMIT; and END TRANSACTION;
> 
> COMMIT ends the transaction.   In PostgreSQL, END TRANSACTION is
> redundant, equivalent to COMMIT, do one or the other, not both.
> 
>>
>> But if one of these INSERTs causes an error, the transaction will stop
>> (but it
>> is still open and the next patch is applied within the same transaction).
> 
> Patch ?
> 
>>
>> When I turn the Auto-Rollback on, everything is OK: in a situation like
>> this
>> my transaction is automatically rolled back and closed.
> 
> there is no autorollback option in standard PostgreSQL ?

Forgot to add, in pgAdmin3 there is:

https://www.pgadmin.org/docs/1.22/options-query_tool.html?highlight=rollback

Enable Auto ROLLBACK - Check the box next to Enable Auto ROLLBACK to 
instruct the query tool to execute a ROLLBACK if a query fails.


> 
>>
>> But I do not want to do the Auto-Rollback click manually in the Option
>> menu
>> or query editor window (because I can not be sure that the person who
>> will
>> run the patch would remember about this click).
> 
> option?   query editor window? what software are you talking about?
> 
>> I'm using 1.22.1 version.
> 
> 1.22.1 version? PostgreSQL versions currently supported are 9.2.x to
> 9.6.x
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Auto-Rollback option

2017-02-13 Thread Adrian Klaver

On 02/13/2017 09:59 AM, John R Pierce wrote:

On 2/13/2017 7:15 AM, mpomykacz wrote:

So my problem is like this:

I start the transaction with BEGIN TRANSACTION;
Then I have for example some INSERTs to DB
and at the end COMMIT; and END TRANSACTION;


COMMIT ends the transaction.   In PostgreSQL, END TRANSACTION is
redundant, equivalent to COMMIT, do one or the other, not both.



But if one of these INSERTs causes an error, the transaction will stop
(but it
is still open and the next patch is applied within the same transaction).


Patch ?


Pretty sure the OP is applying a series of SQL driven patches to 
add/change/remove database schema.






When I turn the Auto-Rollback on, everything is OK: in a situation like
this
my transaction is automatically rolled back and closed.


there is no autorollback option in standard PostgreSQL ?



But I do not want to do the Auto-Rollback click manually in the Option
menu
or query editor window (because I can not be sure that the person who
will
run the patch would remember about this click).


option?   query editor window? what software are you talking about?


I would say pgAdmin3:

https://www.pgadmin.org/download/source.php




I'm using 1.22.1 version.


1.22.1 version? PostgreSQL versions currently supported are 9.2.x to


See above.


9.6.x





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Potential bug with pg_notify

2017-02-13 Thread Adrian Klaver

On 02/13/2017 07:59 AM, François Beaulieu wrote:



On Feb 13, 2017, at 10:28 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 02/10/2017 02:54 PM, François Beaulieu wrote:


Hi all,

I’m trying to feed a worker process on another server using pg_notify in a 
trigger. I’m running pgsql 9.4 and hitting some behaviour that I’m hoping is 
just a bug that can be solved with an upgrade, but I’m not finding any 
references to it being a known bug and the uptime on my database server is 
critical so I can’t upgrade on a whim. Basically, the trigger runs on every row 
insert and notifies me with a few fields from the row. This works perfectly 
most of the time, but every few hundred rows, the notify will return null 
values. Here are the details:

My database uses time-based partitioning (and subpartitioning) with pg_partman. 
This is done with a before insert trigger on the main parent table. pg_partman 
creates subtables dynamically, so I can’t easily run my trigger on each child 
table. I also can’t run it after insert on my parent table because the row 
never makes it there. Thus, I have elected to run my trigger before insert on 
the parent table, and have named it so that it is alphabetically first and will 
run before the partitioning trigger. Works perfectly most of the time. Here are 
the trigger and associated plpgsql function:

—CODE---
CREATE OR REPLACE FUNCTION notify_trigger() RETURNS trigger AS $$
DECLARE
BEGIN
PERFORM pg_notify('watchers', TG_TABLE_NAME || ',' || NEW._id || ',' || 
NEW.userfield);
RETURN new;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER aaa_notify BEFORE INSERT ON aaa FOR EACH ROW execute procedure 
notify_trigger();
—/CODE—

On my listener, every once in a while, this returns rows such as this:

AAA,17947227,XXX
AAA, ,
AAA,17947229,ZZZ

Notice the second line, which appears to have even the autoincrement ‘_id' 
empty. It would seem to match _id = 17947228 and that row does exist in the 
table and has data populated for all fields.

Has anyone ever seen anything like this?


Can't say I have. I think this is going to need more information:

--
Adrian Klaver
adrian.kla...@aklaver.com



1) What is the schema for the parent table?


                                         Table "public.cdr"
   Column    |            Type             |                     Modifiers                      | Storage  | Stats target | Description
-------------+-----------------------------+----------------------------------------------------+----------+--------------+-------------
 _id         | bigint                      | not null default nextval('cdr__id_seq'::regclass)  | plain    |              |
 calldate    | timestamp without time zone | not null                                           | plain    |              |
 callanswer  | timestamp without time zone |                                                    | plain    |              |
 callend     | timestamp without time zone | not null                                           | plain    |              |
 clid        | character varying(80)       |                                                    | extended |              |
 dnid        | character varying(80)       |                                                    | extended |              |
 src         | character varying(80)       |                                                    | extended |              |
 dst         | character varying(80)       | not null                                           | extended |              |
 dcontext    | character varying(80)       | not null                                           | extended |              |
 channel     | character varying(80)       | not null                                           | extended |              |
 dstchannel  | character varying(80)       |                                                    | extended |              |
 lastapp     | character varying(80)       |                                                    | extended |              |
 lastdata    | character varying(80)       |                                                    | extended |              |
 duration    | integer                     | not null                                           | plain    |              |
 billsec     | integer                     | not null                                           | plain    |              |
 disposition | character varying(45)       | not null                                           | extended |              |
 amaflags    | integer                     | not null                                           | plain    |              |
 accountcode | character varying(20)       |                                                    | extended |              |
 uniqueid    | character varying(150)      | not null                                           | extended |              |
 userfield   | character varying(255)      |                                                    | extended |              |

Re: [GENERAL] intentional or oversight? pg_dump -c does not restore default priviliges on schema public

2017-02-13 Thread Adrian Klaver

On 02/13/2017 07:52 AM, Stephen Frost wrote:

Greetings,

* Adrian Klaver (adrian.kla...@aklaver.com) wrote:

On 02/13/2017 06:04 AM, Stephen Frost wrote:

* Adrian Klaver (adrian.kla...@aklaver.com) wrote:

I am following this up to the point of not understanding what
exactly changed between 9.5 and 9.6. Namely 9.5 does include the
default ACL's in the dump output and 9.6 does not.


Quite a bit in pg_dump changed, but the relevant bit here is that we now
try to include in the pg_dump output any ACLs which have been changed
from their initdb-time settings for initdb-time objects.  What that
means is that if you don't change the privileges for the public schema
from what they're set to at initdb-time, then we don't dump out any ACL
commands for the public schema.  That ends up being incorrect in '-c'
mode because we drop the public schema in that mode and recreate it, in
which case we need to re-implement the ACLs which existed for the public
schema at initdb-time.


Thanks for the explanation in this post and your previous one. If I
am following pg_init_privs is the initial state of objects ACLs and
if that changes then those entries are removed.


No, if the object is *dropped* then the entry is removed from
pg_init_privs.  Otherwise, the entries in pg_init_privs aren't changed.


So would not the
general case be, on recreating an object use the ACLs in
pg_init_privs if they exist otherwise use the ACLs as they exist
wherever they go to on change away from pg_init_privs?


pg_init_privs doesn't track the object's name, so this isn't actually
possible.  Even if we did track the name of the object, I don't think
we'd actually want to set the privileges to what they were set to at
initdb time.  If you drop the public schema and then recreate it, are
you really expecting it to get the initdb-time privileges it had..?
How would you reconcile that with default privileges (which we don't
have for schemas right now, but it's been proposed...).

This case is about a pg_dump, which is a very different case in that we
want to recreate the state of the system as it existed at the time of
the dump.


I gather that
is what you are proposing as a special case for the public schema.
Just wondering why it should not be the general case?


Not quite..  This is about what pg_dump does when a -c is used.
Unfortunately, it's *already* doing something special with the public
schema (if it wasn't, then this wouldn't really be an issue..).  This is
just about making it do the right thing in that already-existing
special-case.


Alright I see now, thanks.



Thanks!

Stephen




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Potential bug with pg_notify

2017-02-13 Thread Adrian Klaver

On 02/10/2017 02:54 PM, François Beaulieu wrote:


Hi all,

I’m trying to feed a worker process on another server using pg_notify in a 
trigger. I’m running pgsql 9.4 and hitting some behaviour that I’m hoping is 
just a bug that can be solved with an upgrade, but I’m not finding any 
references to it being a known bug and the uptime on my database server is 
critical so I can’t upgrade on a whim. Basically, the trigger runs on every row 
insert and notifies me with a few fields from the row. This works perfectly 
most of the time, but every few hundred rows, the notify will return null 
values. Here are the details:

My database uses time-based partitioning (and subpartitioning) with pg_partman. 
This is done with a before insert trigger on the main parent table. pg_partman 
creates subtables dynamically, so I can’t easily run my trigger on each child 
table. I also can’t run it after insert on my parent table because the row 
never makes it there. Thus, I have elected to run my trigger before insert on 
the parent table, and have named it so that it is alphabetically first and will 
run before the partitioning trigger. Works perfectly most of the time. Here are 
the trigger and associated plpgsql function:

—CODE---
CREATE OR REPLACE FUNCTION notify_trigger() RETURNS trigger AS $$
DECLARE
BEGIN
  PERFORM pg_notify('watchers', TG_TABLE_NAME || ',' || NEW._id || ',' || 
NEW.userfield);
  RETURN new;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER aaa_notify BEFORE INSERT ON aaa FOR EACH ROW execute procedure 
notify_trigger();
—/CODE—

On my listener, every once in a while, this returns rows such as this:

AAA,17947227,XXX
AAA, ,
AAA,17947229,ZZZ

Notice the second line, which appears to have even the autoincrement ‘_id' 
empty. It would seem to match _id = 17947228 and that row does exist in the 
table and has data populated for all fields.

Has anyone ever seen anything like this?


Can't say I have. I think this is going to need more information:

1) What is the schema for the parent table?

2) What is the pg_partman trigger function definition, as well the 
CREATE TRIGGER definition that calls the function?


3) Are the first row and the second row in the same partition?


Just wondering if the pg_partman partitioning is delaying the allocation 
of _id from the sequence in a way that your notify_trigger() does not 
get it in time. It is just that TG_TABLE_NAME and NEW.userfield are part 
of the INSERT, while NEW._id is actually a request for information from 
another object.
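On the listener side, the incomplete payloads can at least be detected and logged as they arrive. A minimal Python sketch (hypothetical, not from the thread; it assumes only the 'table,id,userfield' payload format built by notify_trigger() above):

```python
def parse_payload(payload: str) -> dict:
    """Split a 'table,id,userfield' notification payload and flag
    notifications where _id or userfield arrived empty."""
    # Split on at most 2 commas so a userfield containing commas
    # stays intact.
    parts = [p.strip() for p in payload.split(",", 2)]
    while len(parts) < 3:          # pad if trailing fields were missing
        parts.append("")
    table, _id, userfield = parts
    return {
        "table": table,
        "id": _id or None,
        "userfield": userfield or None,
        "complete": _id != "" and userfield != "",
    }

if __name__ == "__main__":
    for payload in ("AAA,17947227,XXX", "AAA, ,"):
        print(parse_payload(payload))
```

Logging a timestamp alongside each incomplete payload would make it easier to correlate the bad notifications with the matching rows in the partitions.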




Thanks,
-=François Beaulieu
SBK Telecom





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Auto-Rollback option

2017-02-13 Thread Adrian Klaver

On 02/13/2017 05:55 AM, Małgorzata Hubert wrote:

Hi,
is there any way to set Auto-Rollback: ON automatically during the
installation process or using a query (maybe something like set autocommit
= 'on')?
We need it to automatically close the transaction if an error occurs
while applying patches.


How are you applying the patches?

With what library/interface/etc?

For instance in psql you have:

https://www.postgresql.org/docs/9.6/static/app-psql.html
"AUTOCOMMIT

When on (the default), each SQL command is automatically committed 
upon successful completion. To postpone commit in this mode, you must 
enter a BEGIN or START TRANSACTION SQL command. When off or unset, SQL 
commands are not committed until you explicitly issue COMMIT or END. The 
autocommit-off mode works by issuing an implicit BEGIN for you, just 
before any command that is not already in a transaction block and is not 
itself a BEGIN or other transaction-control command, nor a command that 
cannot be executed inside a transaction block (such as VACUUM).


Note: In autocommit-off mode, you must explicitly abandon any 
failed transaction by entering ABORT or ROLLBACK. Also keep in mind that 
if you exit the session without committing, your work will be lost.


Note: The autocommit-on mode is PostgreSQL's traditional 
behavior, but autocommit-off is closer to the SQL spec. If you prefer 
autocommit-off, you might wish to set it in the system-wide psqlrc file 
or your ~/.psqlrc file.


"

NOTE that you have to explicitly ROLLBACK a failed transaction though.
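If the patches are applied through psql, its built-in options can approximate an auto-rollback. A sketch, assuming a psql-driven workflow (patch.sql and mydb are placeholder names):

```shell
# ON_ERROR_STOP aborts the script at the first failed statement;
# --single-transaction (-1) wraps the whole file in BEGIN/COMMIT,
# so an abort rolls every statement in the patch back automatically,
# with no manual ROLLBACK required.
psql -v ON_ERROR_STOP=1 --single-transaction -f patch.sql mydb
```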



Thanks in advance for the answer.
Best regards,
Malgorzata Pomykacz



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] intentional or oversight? pg_dump -c does not restore default priviliges on schema public

2017-02-13 Thread Adrian Klaver

On 02/13/2017 06:04 AM, Stephen Frost wrote:

Adrian,

* Adrian Klaver (adrian.kla...@aklaver.com) wrote:

I am following this up to the point of not understanding what
exactly changed between 9.5 and 9.6. Namely 9.5 does include the
default ACL's in the dump output and 9.6 does not.


Quite a bit in pg_dump changed, but the relevant bit here is that we now
try to include in the pg_dump output any ACLs which have been changed
from their initdb-time settings for initdb-time objects.  What that
means is that if you don't change the privileges for the public schema
from what they're set to at initdb-time, then we don't dump out any ACL
commands for the public schema.  That ends up being incorrect in '-c'
mode because we drop the public schema in that mode and recreate it, in
which case we need to re-implement the ACLs which existed for the public
schema at initdb-time.


Thanks for the explanation in this post and your previous one. If I am 
following pg_init_privs is the initial state of objects ACLs and if that 
changes then those entries are removed. So would not the general case 
be, on recreating an object use the ACLs in pg_init_privs if they exist 
otherwise use the ACLs as they exist wherever they go to on change away 
from pg_init_privs? I gather that is what you are proposing as a special 
case for the public schema. Just wondering why it should not be the 
general case?




Thanks!

Stephen




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] configure can't find libcrypto on MacOS Sierra for pg 9.6.2

2017-02-12 Thread Adrian Klaver

On 02/12/2017 12:39 PM, Jerry LeVan wrote:

Sigh,

I will try to build without OpenSSL…


Of course there is another option, prebuilt binaries:

https://www.postgresql.org/download/macosx/



My database is seventeen years old, I don’t even remember
which Postgresql I started out with...

Jerry


On Feb 12, 2017, at 10:53 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

Jerry LeVan <jerry.le...@gmail.com> writes:

Hello, I am trying to build pg 9.6.2 on my MacOS Sierra but
configure is being balky


Hmm, I'm not really sure why it's failing at that step --- it gets past
that for me.  Possibly looking into config.log to see if there's a more
detailed error report would be illuminating.  However, trying to build
against Apple's openssl libraries is a lost cause anyway, because they
removed the header files in Sierra.  So it will fail when it gets to
header file checks; where it stops for me is

checking for openssl/ssl.h... no
configure: error: header file <openssl/ssl.h> is required for OpenSSL

AFAIK the only way forward is to install a complete OpenSSL installation
from source, or using MacPorts or brew or other tool-of-choice.

Apple would really like people to start using their SSL infrastructure.
That handwriting has been on the wall for years, but nobody's gotten
around to writing the necessary interface logic for Postgres.

regards, tom lane







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] configure can't find libcrypto on MacOS Sierra for pg 9.6.2

2017-02-12 Thread Adrian Klaver

On 02/12/2017 12:39 PM, Jerry LeVan wrote:

Sigh,

I will try to build without OpenSSL…


I am not a Mac user, but from the below it does not seem that difficult 
to get around Apple's decision:


https://solitum.net/openssl-os-x-el-capitan-and-brew/
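Condensed from that page, a build sketch assuming Homebrew (untested here; the include/library paths are Homebrew's defaults and may differ on your machine):

```shell
# Install a full OpenSSL (headers included) outside /usr/lib, then
# point Postgres's configure at it instead of Apple's header-less copy.
brew install openssl
./configure --with-openssl \
    --with-includes=/usr/local/opt/openssl/include \
    --with-libraries=/usr/local/opt/openssl/lib
```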



My database is seventeen years old, I don’t even remember
which Postgresql I started out with...

Jerry


On Feb 12, 2017, at 10:53 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

Jerry LeVan <jerry.le...@gmail.com> writes:

Hello, I am trying to build pg 9.6.2 on my MacOS Sierra but
configure is being balky


Hmm, I'm not really sure why it's failing at that step --- it gets past
that for me.  Possibly looking into config.log to see if there's a more
detailed error report would be illuminating.  However, trying to build
against Apple's openssl libraries is a lost cause anyway, because they
removed the header files in Sierra.  So it will fail when it gets to
header file checks; where it stops for me is

checking for openssl/ssl.h... no
configure: error: header file <openssl/ssl.h> is required for OpenSSL

AFAIK the only way forward is to install a complete OpenSSL installation
from source, or using MacPorts or brew or other tool-of-choice.

Apple would really like people to start using their SSL infrastructure.
That handwriting has been on the wall for years, but nobody's gotten
around to writing the necessary interface logic for Postgres.

regards, tom lane







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] configure can't find libcrypto on MacOS Sierra for pg 9.6.2

2017-02-12 Thread Adrian Klaver

On 02/12/2017 06:56 AM, Jerry LeVan wrote:

Hello, I am trying to build pg 9.6.2 on my MacOS Sierra but
configure is being balky

configure:9494: checking for CRYPTO_new_ex_data in -lcrypto
configure:9519: gcc -o conftest -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 conftest.c -lcrypto -lz -ledit -lm >&5
ld: library not found for -lcrypto
clang: error: linker command failed with exit code 1 (use -v to see invocation)
configure:9519: $? = 1

but libcrypto is present:
eagle:Desktop postgres$ ls -al /usr/lib/libcrypto*
-rwxr-xr-x  1 root  wheel  2043536 Dec 10 04:53 /usr/lib/libcrypto.0.9.7.dylib
-rwxr-xr-x  1 root  wheel  2681408 Dec 10 04:53 /usr/lib/libcrypto.0.9.8.dylib
-rw-r--r--  1 root  wheel  4209728 Dec 10 04:53 /usr/lib/libcrypto.35.dylib
lrwxr-xr-x  1 root  wheel   21 Sep 24 13:20 /usr/lib/libcrypto.dylib -> 
libcrypto.0.9.8.dylib

The config.log file for 9.3.2 passes the test.

And for the 9.3.2 image:
eagle:Desktop postgres$ otool -L /usr/local/bin/postgres
/usr/local/bin/postgres:
/usr/lib/libssl.0.9.8.dylib (compatibility version 0.9.8, current 
version 50.0.0)
/usr/lib/libcrypto.0.9.8.dylib (compatibility version 0.9.8, current 
version 50.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
version 1197.1.1)
eagle:Desktop postgres$

Any suggestions?


Not a suggestion, but a question:

What are the full command line invocations to configure for 9.3 and 9.6?

Now a suggestion, do you have the openssl devel package installed?



Thanks

Jerry




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Off Topic] Visualizing grouping sets/cubes

2017-02-11 Thread Adrian Klaver

On 02/10/2017 05:44 AM, Leonardo M. Ramé wrote:

Hi, I'm reading about Grouping Sets/Rollup/Cube and I wonder which
js/html5 library allows displaying *easily* (without having to re-format
it) the returned data from those functions.


Just ran across the below, not sure if it meets your needs:

https://github.com/getredash/redash



Regards,



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] intentional or oversight? pg_dump -c does not restore default priviliges on schema public

2017-02-11 Thread Adrian Klaver

On 02/11/2017 02:06 PM, Stephen Frost wrote:

Greetings,

* Frank van Vugt (ftm.van.v...@foxi.nl) wrote:

I noticed the following and wondered whether this is intentional or an
oversight in pg_dump's '-c' option?

The clean option causes the public schema to be dropped and recreated, but
this is done with the default schema privileges, which are not the same as the
ones assigned during create database:


Interesting.  The reason this happens is that the privileges for the
public schema aren't dumped when they are the same as what you would get
from a default install in 9.6+, but using -c will end up dropping and
recreating it, which, as you note, will end up having different
privileges than the default install because they'll be the regular
default privileges of "nothing" for schemas.

This is happening just for the public schema due to how it's handled in
a special way in pg_backup_archiver.c:_printTocEntry().  This only impacts
ACLs because those are the only things which are different for the
public schema vs. it's initdb settings (there's no SECURITY LABEL, for
example, on the initdb'd public schema).

Due to how the public schema is (and always has been) handled in this
special way, this is a bug which needs to be fixed by having the default
ACLs for the public schema included in the dump output if -c is being
used.


I am following this up to the point of not understanding what exactly 
changed between 9.5 and 9.6. Namely 9.5 does include the default ACL's 
in the dump output and 9.6 does not.




I'm not seeing a very simple answer for this, unfortunately.  I'm
thinking we're going to need to pull the public schema's permissions
differently if we're in clean mode (by comparing to the default schema
privileges) vs. when we're not (in which case we should be comparing to
*public*'s initdb-time privileges, as we do now).  One option would be
to handle that by hacking up buildACLQueries() to take a flag which
basically means "we are dropping the public schema, do not consider its
pg_init_privs settings" but that strikes me as awful grotty.  Another
option would be to change getNamespaces() to run a special query
(perhaps as a UNION-ALL combination with the existing query) that is
just to get the info for the 'public' schema (and exclude the 'public'
schema from the first half of the query, of course).

Thanks for the report!

Stephen




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] intentional or oversight? pg_dump -c does not restore default priviliges on schema public

2017-02-11 Thread Adrian Klaver

On 02/11/2017 01:14 PM, Frank van Vugt wrote:

Hi Adrian,

Op zaterdag 11 februari 2017 13:02:29 schreef Adrian Klaver:

What version of Postgres?


Ah, sorry, missed copying that in:

postgres=# select version();
                                      version
------------------------------------------------------------------------------
 PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2, 64-bit
(1 row)




I see the same thing now. A diff reveals:

aklaver@tito:~> diff /tmp/publictest95  /tmp/publictest96
5,6c5,6
< -- Dumped from database version 9.5.5
< -- Dumped by pg_dump version 9.5.5
---
> -- Dumped from database version 9.6.1
> -- Dumped by pg_dump version 9.6.1
9a10
> SET idle_in_transaction_session_timeout = 0;
47,56d47
<
<
< --
< -- Name: public; Type: ACL; Schema: -; Owner: postgres
< --
<
< REVOKE ALL ON SCHEMA public FROM PUBLIC;
< REVOKE ALL ON SCHEMA public FROM postgres;
< GRANT ALL ON SCHEMA public TO postgres;
< GRANT ALL ON SCHEMA public TO PUBLIC;

That seems to cause a problem:

aklaver@tito:~> psql -d publictest -U guest
Null display is "NULL".
psql (9.5.5)
Type "help" for help.

publictest=> create table public.public_test(id int);
CREATE TABLE


aklaver@tito:~> /usr/local/pgsql96/bin/psql -d publictest -U guest -p 5442
Null display is "NULL".
psql (9.6.1)
Type "help" for help.

publictest=> create table public.public_test(id int);
ERROR:  permission denied for schema public
LINE 1: create table public.public_test(id int);
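Until pg_dump is fixed, a manual workaround sketch (assuming the stock initdb-time ACLs, as shown in the diff above) is to re-apply the grants on public by hand after the restore:

```sql
-- Re-create the initdb-time ACLs on the public schema that the
-- 9.6 pg_dump -c output omitted.
GRANT USAGE ON SCHEMA public TO PUBLIC;
GRANT CREATE ON SCHEMA public TO PUBLIC;
```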




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] intentional or oversight? pg_dump -c does not restore default priviliges on schema public

2017-02-11 Thread Adrian Klaver
On 02/11/2017 12:42 PM, Frank van Vugt wrote:
> L.S.
> 
> I noticed the following and wondered whether this is intentional or an 
> oversight in pg_dump's '-c' option?

What version of Postgres?

Because when I do it on 9.5.5 I get:

test=# create database publictest;
CREATE DATABASE
test=# \c publictest;
You are now connected to database "publictest" as user "postgres".
publictest=# \dn+
                          List of schemas
  Name  |  Owner   |  Access privileges   |      Description
--------+----------+----------------------+------------------------
 public | postgres | postgres=UC/postgres+| standard public schema
        |          | =UC/postgres         |
(1 row)

aklaver@tito:~> pg_dump -c -f /tmp/publictest -h localhost -Fp -U postgres publictest
aklaver@tito:~> psql -d publictest -U postgres
Null display is "NULL".
psql (9.5.5)
Type "help" for help.

publictest=# \i /tmp/publictest 
SET
SET
SET
SET
SET
SET
SET
SET
DROP EXTENSION
DROP SCHEMA
CREATE SCHEMA
ALTER SCHEMA
COMMENT
CREATE EXTENSION
COMMENT
REVOKE
REVOKE
GRANT
GRANT

publictest=# \dn+
                          List of schemas
  Name  |  Owner   |  Access privileges   |      Description
--------+----------+----------------------+------------------------
 public | postgres | postgres=UC/postgres+| standard public schema
        |          | =UC/postgres         |
(1 row)

> 
> The clean option causes the public schema to be dropped and recreated, but 
> this is done with the default schema privileges, which are not the same as 
> the 
> ones assigned during create database:
> 
> 
> *** USING PSQL
> 
> postgres=# create database publictest;
> 
> postgres=# \c publictest;
> 
> publictest=# \dn+
>                           List of schemas
>   Name  |  Owner   |  Access privileges   |      Description
> --------+----------+----------------------+------------------------
>  public | postgres | postgres=UC/postgres+| standard public schema
>         |          | =UC/postgres         |
> (1 row)
> 
> 
> 
> *** USING SHELL
> 
> host:~ # pg_dump -c -f /tmp/publictest -h localhost -Fp -U postgres publictest
> 
> 
> 
> *** USING PSQL
> 
> publictest=# \i /tmp/publictest
> 
> publictest=# \dn+
>                        List of schemas
>   Name  |  Owner   | Access privileges |      Description
> --------+----------+-------------------+------------------------
>  public | postgres |                   | standard public schema
> (1 row)
> 
> publictest=# grant usage on schema public to public;
> GRANT
> publictest=# grant create on schema public to public;
> GRANT
> 
> testje=# \dn+
>                           List of schemas
>   Name  |  Owner   |  Access privileges   |      Description
> --------+----------+----------------------+------------------------
>  public | postgres | postgres=UC/postgres+| standard public schema
>         |          | =UC/postgres         |
> (1 row)
> 
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Custom shuffle function stopped working in 9.6

2017-02-11 Thread Adrian Klaver

On 02/11/2017 10:51 AM, Alexander Farber wrote:

At the same time this advice from
http://stackoverflow.com/questions/42179012/how-to-shuffle-array-in-postgresql-9-6-and-also-lower-versions
works, don't know why though:

words=> select array_agg(u order by random())
words-> from unnest(array['a','b','c','d','e','f']) u;
   array_agg
---
 {d,a,f,c,b,e}



To future proof your code follow the advice shown in the doc snippet in 
the first answer to your SO question. This is what I showed in my 
answers to your questions.
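A sketch of the function rewritten per that advice, attaching the ORDER BY to the aggregate itself rather than relying on subquery output order (marked VOLATILE here, since random() makes the result change between calls; the original's STABLE marking is arguably wrong for a shuffle):

```sql
CREATE OR REPLACE FUNCTION words_shuffle(in_array varchar[])
RETURNS varchar[]
LANGUAGE sql VOLATILE AS $$
    -- 9.6 no longer promises that a subquery's output order
    -- survives into array_agg(), so order inside the aggregate.
    SELECT array_agg(x ORDER BY random()) FROM unnest(in_array) x;
$$;
```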


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Custom shuffle function stopped working in 9.6

2017-02-11 Thread Adrian Klaver

On 02/11/2017 09:17 AM, Alexander Farber wrote:

I think ORDER BY RANDOM() has stopped working in 9.6.2:

words=> select unnest(ARRAY['a','b','c','d','e','f']) order by random();
 unnest
--------
 a
 b
 c
 d
 e
 f
(6 rows)



So back to your original question:

CREATE OR REPLACE FUNCTION public.words_shuffle(in_array character 
varying[])

 RETURNS character varying[]
 LANGUAGE sql
 STABLE
AS $function$
SELECT array_agg(letters.x) FROM
(SELECT * FROM  UNNEST(in_array) x ORDER BY RANDOM()) letters;
$function$


postgres=>  select * from  words_shuffle(ARRAY['a','b','c','d','e','f']);
 words_shuffle
---
 {d,f,a,e,c,b}
(1 row)

postgres=>  select * from  words_shuffle(ARRAY['a','b','c','d','e','f']);
 words_shuffle
---
 {c,d,a,e,f,b}
(1 row)


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Custom shuffle function stopped working in 9.6

2017-02-11 Thread Adrian Klaver
On 02/11/2017 09:17 AM, Alexander Farber wrote:
> I think ORDER BY RANDOM() has stopped working in 9.6.2:
> 
> words=> select unnest(ARRAY['a','b','c','d','e','f']) order by random();
>  unnest 
> --------
>  a
>  b
>  c
>  d
>  e
>  f
> (6 rows)
> 

postgres=> select version();
                                       version
--------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 4.8.5, 64-bit
(1 row)

postgres=> select * from  unnest(ARRAY['a','b','c','d','e','f']) order by 
random();
 unnest 
--------
 d
 c
 a
 f
 e
 b
(6 rows)

postgres=> select * from  unnest(ARRAY['a','b','c','d','e','f']) order by 
random();
 unnest 
--------
 b
 d
 e
 c
 a
 f
(6 rows)



-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fwd: Query parameter types not recognized

2017-02-10 Thread Adrian Klaver

On 02/10/2017 02:14 PM, Roberto Balarezo wrote:

Hmmm... I didn't know PostgreSQL had a facility for query logging and
debugging of parameters to a logfile. Thought I had to execute a
describe or something like that. Thanks, I'll try it to see what's
happening!


Start here:

https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html

Set up logging and then set :

log_statement = 'all'

This will generate a lot of logs so you will probably not want to keep 
it that way.


A Python example:

In [6]: date.today()
Out[6]: datetime.date(2017, 2, 10)

In [7]: cur.execute('select %s', [date.today()])

From Postgres log;

aklaver-2017-02-10 14:35:42.842 PST-0LOG:  statement: BEGIN
aklaver-2017-02-10 14:35:42.842 PST-0LOG:  statement: select 
'2017-02-10'::date





2017-02-10 16:57 GMT-05:00 Adrian Klaver <adrian.kla...@aklaver.com>:

On 02/10/2017 01:51 PM, Roberto Balarezo wrote:

Hi,

The parameter defaultDueDate is a java.sql.Date object, an
actual Date.
When I run the query with the value in it, it works:
```sql
db=> select COALESCE(duedate, date '2017-02-01' + 1) from
invoices order
by duedate desc;
  coalesce
-
 2017-02-02 00:00:00
 2017-02-02 00:00:00
 2016-11-14 00:00:00
 2017-02-10 00:00:00
 2017-02-02 00:00:00
 2017-02-13 00:00:00
 2017-02-02 00:00:00
 2017-02-02 00:00:00
```

But when I send it as a parameter, it ignores it and seems to
think the
expression is of type integer.


Which would indicate to me that that is what is being passed in the
parameter. If I were to guess, from the information here:

https://docs.oracle.com/javase/7/docs/api/java/sql/Date.html

milliseconds since January 1, 1970 00:00:00.000 GMT.

Turn on/up logging in Postgres and run a query with that
java.sql.Date object. I am betting that what you will see in the
logs is an integer.



2017-02-10 16:32 GMT-05:00 Adrian Klaver <adrian.kla...@aklaver.com>:


On 02/10/2017 07:17 AM, Roberto Balarezo wrote:

Hi, I would like to know why this is happening and some
advice
if there
is a way to solve this problem:

I have a query like this:

|select COALESCE(duedate, ? + 1) from invoices order by
duedate desc
limit 10; |


What is the 1 in ? + 1 supposed to represent?


where ? is a query parameter. I’m using JDBC to connect
to the
database,
and sending parameters like this:

|query.setDate(1, defaultDueDate); |

Where defaultDueDate is a java.sql.Date object. However,
when I
try to
execute the query, I get this error:

|org.postgresql.util.PSQLException: ERROR: COALESCE
types timestamp
without time zone and integer cannot be matched |


So what is the actual value of defaultDueDate?

Looks like it is an integer from the ERROR message.

Might want to look in the Postgres logs to see if they show
anything
that might help.


Why is it inferring that the type is integer, when I
send it as
Date??


I don't use Java, but I did find the below, don't know if it
helps?:


https://jdbc.postgresql.org/documentation/94/escapes-datetime.html



When I force the type using a cast, like this:

|select COALESCE(duedate, CAST(? AS DATE) + 1) from invoices
order by
duedate desc limit 10; |

I get this error:

|org.postgresql.util.PSQLException: ERROR: could not
determine
data type
of parameter $1 |

If I’m telling PostgreSQL that the parameter is going to
be a
Date, and
send through the driver a Date, why it is having trouble
determining the
datatype of the parameter??
What can I do to make it work?

For reference, I’m using PostgreSQL 9.2.15 and JDBC driver
9.4.1207.jre6.

Thanks for your advice!

Re: [GENERAL] Fwd: Query parameter types not recognized

2017-02-10 Thread Adrian Klaver

On 02/10/2017 01:51 PM, Roberto Balarezo wrote:

Hi,

The parameter defaultDueDate is a java.sql.Date object, an actual Date.
When I run the query with the value in it, it works:
```sql
db=> select COALESCE(duedate, date '2017-02-01' + 1) from invoices order
by duedate desc;
  coalesce
-
 2017-02-02 00:00:00
 2017-02-02 00:00:00
 2016-11-14 00:00:00
 2017-02-10 00:00:00
 2017-02-02 00:00:00
 2017-02-13 00:00:00
 2017-02-02 00:00:00
 2017-02-02 00:00:00
```

But when I send it as a parameter, it ignores it and seems to think the
expression is of type integer.


Which would indicate to me that that is what is being passed in the
parameter. If I were to guess, from the information here:


https://docs.oracle.com/javase/7/docs/api/java/sql/Date.html

milliseconds since January 1, 1970 00:00:00.000 GMT.

Turn on/up logging in Postgres and run a query with that java.sql.Date 
object. I am betting that what you will see in the logs is an integer.
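As the Oracle doc quoted above says, a java.sql.Date wraps milliseconds since January 1, 1970 00:00:00 GMT; if a driver serialized that raw value rather than a date literal, the server would indeed see a bare integer. A sketch of the conversion (illustrative only, not the JDBC driver's actual code path):

```python
from datetime import datetime, timezone

def epoch_ms_to_date(ms):
    """Interpret a java.sql.Date-style value (milliseconds since
    1970-01-01 00:00:00 GMT) as a calendar date."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).date()

print(epoch_ms_to_date(0))              # 1970-01-01
print(epoch_ms_to_date(1486684800000))  # 2017-02-10
```

Seeing a large integer like 1486684800000 in the Postgres logs, rather than a quoted date literal, would confirm the guess above.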





2017-02-10 16:32 GMT-05:00 Adrian Klaver <adrian.kla...@aklaver.com>:

On 02/10/2017 07:17 AM, Roberto Balarezo wrote:

Hi, I would like to know why this is happening and some advice
if there
is a way to solve this problem:

I have a query like this:

|select COALESCE(duedate, ? + 1) from invoices order by duedate desc
limit 10; |


What is the 1 in ? + 1 supposed to represent?


where ? is a query parameter. I’m using JDBC to connect to the
database,
and sending parameters like this:

|query.setDate(1, defaultDueDate); |

Where defaultDueDate is a java.sql.Date object. However, when I
try to
execute the query, I get this error:

|org.postgresql.util.PSQLException: ERROR: COALESCE types timestamp
without time zone and integer cannot be matched |


So what is the actual value of defaultDueDate?

Looks like it is an integer from the ERROR message.

Might want to look in the Postgres logs to see if they show anything
that might help.


Why is it inferring that the type is integer, when I send it as
Date??


I don't use Java, but I did find the below, don't know if it helps?:

https://jdbc.postgresql.org/documentation/94/escapes-datetime.html



When I force the type using a cast, like this:

|select COALESCE(duedate, CAST(? AS DATE) + 1) from invoices
order by
duedate desc limit 10; |

I get this error:

|org.postgresql.util.PSQLException: ERROR: could not determine
data type
of parameter $1 |

If I’m telling PostgreSQL that the parameter is going to be a
Date, and
send through the driver a Date, why it is having trouble
determining the
datatype of the parameter??
What can I do to make it work?

For reference, I’m using PostgreSQL 9.2.15 and JDBC driver
9.4.1207.jre6.

Thanks for your advice!




    --
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Fwd: Query parameter types not recognized

2017-02-10 Thread Adrian Klaver

On 02/10/2017 01:33 PM, Arjen Nienhuis wrote:



On Feb 10, 2017 8:11 PM, "Roberto Balarezo" <rober...@gmail.com> wrote:

Hi, I would like to know why this is happening and some advice if
there is a way to solve this problem:

I have a query like this:

|select COALESCE(duedate, ? + 1) from invoices order by duedate desc
limit 10; |

where ? is a query parameter. I’m using JDBC to connect to the
database, and sending parameters like this:

|query.setDate(1, defaultDueDate); |

If you want to add to a date you cannot just add 1. You need an
interval: coalesce(duedate, ? + interval '1 day')

See:

https://www.postgresql.org/docs/9.6/static/functions-datetime.html


Actually that is not the case, from above docs:

"Also, the + and * operators come in commutative pairs (for example both 
date + integer and integer + date); we show only one of each such pair."


and:

test=# select current_date;
    date
------------
 2017-02-10
(1 row)

test=# select current_date + 1;
  ?column?
------------
 2017-02-11
(1 row)
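The date + integer pair the docs describe treats the integer as a number of days; the equivalent client-side arithmetic (a sketch, not from the thread) uses a day-valued timedelta:

```python
from datetime import date, timedelta

# Postgres: select date '2017-02-10' + 1  yields 2017-02-11
# (the bare integer is interpreted as days)
d = date(2017, 2, 10)
print(d + timedelta(days=1))  # 2017-02-11
```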




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Fwd: Query parameter types not recognized

2017-02-10 Thread Adrian Klaver

On 02/10/2017 07:17 AM, Roberto Balarezo wrote:

Hi, I would like to know why this is happening and some advice if there
is a way to solve this problem:

I have a query like this:

|select COALESCE(duedate, ? + 1) from invoices order by duedate desc
limit 10; |


What is the 1 in ? + 1 supposed to represent?



where ? is a query parameter. I’m using JDBC to connect to the database,
and sending parameters like this:

|query.setDate(1, defaultDueDate); |

Where defaultDueDate is a java.sql.Date object. However, when I try to
execute the query, I get this error:

|org.postgresql.util.PSQLException: ERROR: COALESCE types timestamp
without time zone and integer cannot be matched |


So what is the actual value of defaultDueDate?

Looks like it is an integer from the ERROR message.

Might want to look in the Postgres logs to see if they show anything 
that might help.




Why is it inferring that the type is integer, when I send it as Date??


I don't use Java, but I did find the below, don't know if it helps?:

https://jdbc.postgresql.org/documentation/94/escapes-datetime.html



When I force the type using a cast, like this:

|select COALESCE(duedate, CAST(? AS DATE) + 1) from invoices order by
duedate desc limit 10; |

I get this error:

|org.postgresql.util.PSQLException: ERROR: could not determine data type
of parameter $1 |

If I’m telling PostgreSQL that the parameter is going to be a Date, and
send through the driver a Date, why it is having trouble determining the
datatype of the parameter??
What can I do to make it work?

For reference, I’m using PostgreSQL 9.2.15 and JDBC driver 9.4.1207.jre6.

Thanks for your advice!





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Alter view with psql command line

2017-02-10 Thread Adrian Klaver

On 02/10/2017 09:09 AM, Leonardo M. Ramé wrote:

Hi, is there a way to alter a view using *psql*?, something like what
\ef does for functions.


In 9.6:

https://www.postgresql.org/docs/9.6/static/app-psql.html

\ev [ view_name [ line_number ] ]




Regards,



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Locks Postgres

2017-02-09 Thread Adrian Klaver

On 02/09/2017 09:00 PM, Patrick B wrote:

Hi guys

I just want to understand the locks in a DB server:
[inline image]

Access share = Does that mean queries were waiting because an
update/delete/insert was happening?


https://www.postgresql.org/docs/9.3/static/explicit-locking.html



I'm asking because I got a very big spike with > 30 seconds web response
time.
Running PG 9.3

Thanks!
Patrick



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] BST Time Zone Discrepancy

2017-02-06 Thread Adrian Klaver

On 02/06/2017 12:44 PM, Igal @ Lucee.org wrote:

Tom,

Thank you for your reply:

On 2/6/2017 12:18 PM, Tom Lane wrote:

This is controlled by the timezone_abbreviations file, which if
you haven't changed it lists:

# CONFLICT! BST is not unique
# Other timezones:
#  - BST: Bougainville Standard Time (Papua New Guinea)
BST  3600 D  # British Summer Time
 # (Europe/London)

I haven't changed any of the config files.  I can not find that file on
my system (maybe it's in the source code only).


timezone_abbreviations is actually a setting in postgresql.conf. The 
file Tom is referring to is the file that setting points to, by default 
that is Default. This is a file in the Postgres share/timezonesets 
directory. I do not use RH, so I am not sure where that directory lives.
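The conflict listed in the timezonesets file can be seen from the tz database itself; for instance (a sketch using Python's zoneinfo, not part of the thread), Europe/London reports the abbreviation BST only during summer time:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

london = ZoneInfo("Europe/London")
summer = datetime(2017, 7, 1, 12, 0, tzinfo=london)
winter = datetime(2017, 1, 1, 12, 0, tzinfo=london)
print(summer.tzname(), summer.utcoffset())  # BST 1:00:00
print(winter.tzname(), winter.utcoffset())  # GMT 0:00:00
```

This matches the "BST 3600 D" entry Tom quoted: a daylight-saving abbreviation one hour ahead of UTC.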




I am using the Red Hat distribution:  PostgreSQL 9.6.1 on
x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat
4.8.5-4), 64-bit


pg_timezone_names shows the *current* abbreviation for the zone in question

I'm not sure what you mean by "current".  If this is not an issue then
that's fine, you can ignore this message.  It just seemed weird to me
that pg_timezone_names and pg_timezone_abbrevs showed very different
results for the same code.

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Result of timestamp - timestamp

2017-02-05 Thread Adrian Klaver

On 02/05/2017 01:21 PM, Thomas Kellerer wrote:

Hello,

I just stumbled about a report that has been running for a long time now
and that relied on the fact that the interval "timestamp - timestamp"
always returns an interval with days, hours, minutes. But never a
"justified" interval with years, months, days and so on.


According to the docs:

https://www.postgresql.org/docs/9.6/static/functions-datetime.html

"Subtraction of date or timestamp values with the "-" operator returns 
the number of days (24-hours) and hours/minutes/seconds between the 
values, making the same adjustments."


It should always return days and hours.
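The same invariant holds for Python's datetime subtraction (an illustrative sketch, not the thread's SQL): the result type carries only days and seconds, with no month component, so a plain subtraction can never come back "justified" into months:

```python
from datetime import datetime

# timestamp '2017-02-05 18:19:20' - timestamp '2016-11-18 23:00:00'
delta = datetime(2017, 2, 5, 18, 19, 20) - datetime(2016, 11, 18, 23, 0, 0)
# timedelta stores only days/seconds/microseconds -- there is no months
# field in the type, analogous to Postgres's days-and-time result here.
print(delta.days)  # 78
```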



The query uses "extract(day from timestamp - timestamp)" which is
working fine, but would apparently fail if a justified interval was
returned


Did that happen?



But I wonder if I'm relying on undocumented behaviour or if there is any
situation where timestamp - timestamp would return a "justified" interval.

So, my question is: will timestamp '2017-02-05 18:19:20' - timestamp
'2016-11-18 23:00:00' always return "78 days 21:00:00"?

Or is there any situation where the returned interval would be "2 mons
18 days 21:00:00" without using justify_interval() on it.

I couldn't find a clear statement on that in the manual.

Thomas







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Synchronous Commit, WAL archiving and statement_timeout

2017-02-02 Thread Adrian Klaver

On 02/02/2017 09:15 AM, JP Jacoupy wrote:


Hello,

I noticed something strange and I would like to understand what's
happening.

I have the following environment:
  - 2 PostgreSQL instance running in hot-standby with synchronous commit
activated.


There have been many changes in replication over the years/versions, so 
it would be helpful to know what Postgres version you are using?



(further called Master & Slave)
  - The archiving of the WAL files is activated on the master running
every 5 minutes
  - Slave is down

I set the statement_timeout inside my ~/.psqlrc:
$ cat ~/.psqlrc
set statement_timeout = 1;
commit;

When running an UPDATE statement (via psql) on the master, it hangs
(psql seems to wait for a response from the slave), ignoring any value
I set in my .psqlrc, and the update is
and the update is
done and written on the Master. Furthermore if I try (doing the same
things in a small
script with a timeout on the call to PQexec) to make a call to PQcancel
it does nothing
on the Master.

I expected the statement to timeout because the synchronous_commit
wouldn't work


https://www.postgresql.org/docs/9.4/static/warm-standby.html#SYNCHRONOUS-REPLICATION

"25.2.8.3. Planning for High Availability

Commits made when synchronous_commit is set to on or remote_write will 
wait until the synchronous standby responds. The response may never 
occur if the last, or only, standby should crash."



since the Slave is down while rollbacking on the Master.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pgbouncer increase pool_size, reload does not work

2017-02-02 Thread Adrian Klaver

On 02/02/2017 03:27 AM, alexanderfelipewo wrote:

hello!

i have a question for pgbouncer in case someone has faced this and there is
a solution available.

i have a db with pgbouncer where the pool_size is set to 50 (i've noticed
this using both session/transaction modes). Sometimes there are waiting
sessions because more than 50 want to execute something at once and usually
each query is not so fast. So let's say i want to increase the pool_size from 50 to 70.
I change the config file and then login to pgbouncer where i run a
'reload;'. Now when i check the 'show config' i see the new value in the
'default_pool_size' parameter.

BUT

the problem is that the number of servers is still 50 and does not go up to
70. At the same time there are clients in a 'waiting' state. If i do a
restart then it will work and the servers will be able to go up to 70.

I've tried these in different databases and it doesn't happen all the time
but it does happen pretty often.
Has anyone else seen this? if so, is there a solution (except restarting) or
explanation?


Would this work?:

https://pgbouncer.github.io/faq.html#how-to-upgrade-pgbouncer-without-dropping-connections




thank you in advance



--
View this message in context: 
http://postgresql.nabble.com/pgbouncer-increase-pool-size-reload-does-not-work-tp5942273.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Avoiding repeating simple field definitions

2017-02-02 Thread Adrian Klaver

On 02/02/2017 07:20 AM, Guyren Howe wrote:

I saw a thing somewhere about avoiding repeating the same field
definitions. So an app I’m working on uses an exactly 6-character string
as an identifier, which appears in many places.


The thing would be?

Can you show an example of the 6 character string and how it is used in 
multiple places?


What is your concern?



IIRC, the thing I read proposed defining a type AS IMPLICIT, but I’m not
sure. Mainly because the docs urge caution with using AS IMPLICIT.

Thoughts?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Making changes to PostgreSQL's configure logic so as to have contrib modules installed in a specific directory and make them use PGXS?

2017-01-31 Thread Adrian Klaver

On 01/31/2017 03:04 PM, postgres user wrote:

Hi,

I want to configure my PostgreSQL installation so that the contrib
modules of the PostgreSQL distribution are stored in a specific
directory and use the PGXS extension management system supported by
Postgres. As currently packaged they follow a specific directory
structure and are installed along with Postgres, but I don't want that.
I want all those contrib module extensions to use a separate pg_config
and hence want their makefiles to use PGXS. How do I go about doing that?


Not sure, though some more information would help:

1) Am I right in assuming you are building everything from source?

2) If not what is your install procedure?

3) When you say all the contrib modules do you really mean all, or all in
some list?


4) Where is the directory you want them to be installed in located?

5) What OS are you using?

6) Have you looked at an OS packaging systems to do this?



Thanks



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Recovery Assistance

2017-01-30 Thread Adrian Klaver

On 01/29/2017 03:12 PM, Brian Mills wrote:

OK. I think I'm on to something here, I first reset back to my file
level backup.
I created a recovery.conf file in the root of the data directory like this:
-
restore_command = 'cp /mnt/archive/%f %p'
recovery_target_time = '2017-01-24 02:08:00.023064+11'
recovery_target_inclusive = 'true'
pause_at_recovery_target = 'false'
-
Note, the archive directory had no files in it, I left the WAL files in
the pg_xlog directory.
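In restore_command, %f is expanded to the bare WAL file name and %p to the path where the server wants the copy. A sketch of that expansion (the directories here are temporary stand-ins, not the real /mnt/archive):

```python
import pathlib, shutil, tempfile

# Simulate how the server expands restore_command = 'cp /mnt/archive/%f %p'.
archive = pathlib.Path(tempfile.mkdtemp())   # stand-in for /mnt/archive
dest = pathlib.Path(tempfile.mkdtemp())      # stand-in for pg_xlog

f = "000000010000000500000001"               # %f: bare WAL segment name
p = dest / f                                 # %p: destination path
(archive / f).write_text("wal-data")         # pretend this segment was archived

cmd = "cp /mnt/archive/%f %p".replace("%f", f).replace("%p", str(p))
shutil.copy(archive / f, p)                  # what the expanded cp does
print(cmd)
```

With an empty archive directory, as in this thread, the expanded cp fails with "cannot stat", which is exactly the (expected, harmless at end of recovery) message in the log below.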

Then I started up the database again:
postgres@atlassian:~/9.3/main$ /usr/lib/postgresql/9.3/bin/pg_ctl -D
/etc/postgresql/9.3/main start
server starting
postgres@atlassian:~/9.3/main$ 2017-01-30 10:07:28 AEDT LOG:  database
system was interrupted while in recovery at 2017-01-27 20:13:26 AEDT
2017-01-30 10:07:28 AEDT HINT:  This probably means that some data is
corrupted and you will have to use the last backup for recovery.
2017-01-30 10:07:28 AEDT LOG:  starting point-in-time recovery to
2017-01-24 02:08:00.023064+11
2017-01-30 10:07:28 AEDT LOG:  database system was not properly shut
down; automatic recovery in progress
2017-01-30 10:07:28 AEDT WARNING:  WAL was generated with
wal_level=minimal, data may be missing


This would be a problem:

https://www.postgresql.org/docs/9.3/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS

"But minimal WAL does not contain enough information to reconstruct the 
data from a base backup and the WAL logs, so either archive or 
hot_standby level must be used to enable WAL archiving (archive_mode) 
and streaming replication. "



2017-01-30 10:07:28 AEDT HINT:  This happens if you temporarily set
wal_level=minimal without taking a new base backup.
2017-01-30 10:07:28 AEDT LOG:  redo starts at 5/528B4558
2017-01-30 10:07:40 AEDT LOG:  consistent recovery state reached at
5/A3FFFA30
cp: cannot stat ‘/mnt/archive/0001000500A3’: No such file or
directory
cp: cannot stat ‘/mnt/archive/0001000500A4’: No such file or
directory
2017-01-30 10:07:40 AEDT LOG:  redo done at 5/A3FFF9E8
2017-01-30 10:07:40 AEDT LOG:  last completed transaction was at log
time 2017-01-24 02:08:00.023064+11
cp: cannot stat ‘/mnt/archive/0001000500A3’: No such file or
directory
cp: cannot stat ‘/mnt/archive/0002.history’: No such file or directory
2017-01-30 10:07:40 AEDT LOG:  selected new timeline ID: 2
cp: cannot stat ‘/mnt/archive/0001.history’: No such file or directory
2017-01-30 10:07:40 AEDT LOG:  archive recovery complete
2017-01-30 10:08:55 AEDT FATAL:  the database system is starting up
2017-01-30 10:08:57 AEDT FATAL:  the database system is starting up

This time it looks like it has actually finished the startup and
recovery. However I think I might have something wrong about the process.
Any thoughts on the above log?


See above.




*Brian Mills*
CTO


*Mob:* 0410660003
*Melbourne* 03 9012 3460 or 03 8376 6327 *|* *Sydney* 02 8064 3600 *|* *Brisbane* 07 3173 1570
Level 1 *|* 600 Chapel Street *|* South Yarra *|* VIC *|* 3141 *|* Australia

On 30 January 2017 at 04:49, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 01/28/2017 11:23 PM, Brian Mills wrote:

I presume this is a binary log file for the database.

Am I able to recover to a point in time using this log file?

What I would do in SQL Server would be recover to a point in
time, say a
bit before the last completed transaction time the log mentions,
then
take a backup. Is that possible in postgres?


Had another thought. If I remember correctly you are using this as
an exercise in Postgres recovery. If that is indeed the case you
might try:

1) Stop the Postgres instance you have running now.

2) Move the WAL file that Postgres is currently stalled on,
0001000500A3, out of pg_xlog.

3) Restart the Postgres instance.

My guess is it will not bring it back to the exact point you want,
but close. You can get a general idea by running (before and after
removing the WAL), as the postgres user:

pg_controldata -D /etc/postgresql/9.3/main


The log mentions this:
2017-01-27 20:36:18 AEDT LOG:  last completed transaction was at log
time 2017-01-24 02:08:00.023064+11

(which is moments before, or possibly as the disk filled up
doing a db
    backup dump)





--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: [GENERAL] using hstore to store documents

2017-01-29 Thread Adrian Klaver

On 01/28/2017 05:57 PM, Rita wrote:

sorry for the late reply.

My table schema is very simple

DROP TABLE xmltest;

create table xmltest(
  id serial,  -- dont really need the serial

Maybe not a serial id, but a Primary Key of some sort would help with
what you say you want to do below, I think.

  data xml NOT null
);

INSERT INTO xmltest (data, id) VALUES ('
 John Doe
 1986
 php
 python
 java
', 1);

I really don't need the serial but every 30 seconds or so I plan to
overwrite the data portion. So, I suppose I wanted a simple key/value;
that's where my original question stemmed from.


Why do you want to overwrite the data if you plan to refer to it below?



After xmltest has been populated, I can run xpath and unnest to get my
data into a row but I would like to store that result in another table,
I am guessing I should look into triggers for something like that?
Eventually, I plan to have 5-6 downstream tables which will have xmltest


Why 5-6 tables?

Are they each holding some subset of data?

A schematic representation of what you are thinking of doing would help 
with developing an answer to your question.
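The xpath extraction being discussed can be sketched client-side as well. Note the original XML element names were stripped by the list archive, so `person`, `name`, and `skill` below are hypothetical stand-ins:

```python
import xml.etree.ElementTree as ET

# Hypothetical reconstruction of the sample document; the real element
# names are not recoverable from the archived message.
doc = """<person>
  <name>John Doe</name>
  <born>1986</born>
  <skills><skill>php</skill><skill>python</skill><skill>java</skill></skills>
</person>"""

root = ET.fromstring(doc)
skills = [s.text for s in root.findall("./skills/skill")]
print(root.findtext("name"), skills)  # John Doe ['php', 'python', 'java']
```

Flattening each skill into its own row, as the xpath/unnest query does, is what a trigger on xmltest could then maintain in the downstream tables.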



as my head. The application will be accessing the downstream tables and
rarely be touching xmltest (head table).




On Thu, Jan 26, 2017 at 10:38 AM, David G. Johnston <david.g.johns...@gmail.com> wrote:

On Thu, Jan 26, 2017 at 5:37 AM, Rita <rmorgan...@gmail.com> wrote:

of course, sorry for being vague.

I have an external process generating a XML file (every 30 secs)
which is about 10MB. I would like to store the file as XML type
for me to query using xpath. I plan to query it every few
seconds by few hundred clients. so, it maybe easier for me
create a separate table of my xpath results and have clients
query that table (xpath can be expensive).


​If the XML being generated has a fixed structure/schema I
personally would treat the XML as a serialization format and
de-serialize and store it in a database as one or more relationally
linked tables.​  If you have to deal with the possibility of dynamic
structure I would still try to put the fixed items into individual
columns and then and then any dynamic items could be stuffed into an
hstore typed table.

My answer to your stated question is: what happened when you tried
doing that?  Documentation and a bit of experimentation goes a long
ways in learning.

David J.




--
--- Get your facts first, then you can distort them as you please.--



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Recovery Assistance

2017-01-29 Thread Adrian Klaver

On 01/28/2017 11:23 PM, Brian Mills wrote:

I presume this is a binary log file for the database.

Am I able to recover to a point in time using this log file?

What I would do in SQL Server would be recover to a point in time, say a
bit before the last completed transaction time the log mentions, then
take a backup. Is that possible in postgres?


Had another thought. If I remember correctly you are using this as an 
exercise in Postgres recovery. If that is indeed the case you might try:


1) Stop the Postgres instance you have running now.

2) Move the WAL file that Postgres is currently stalled on, 
0001000500A3, out of pg_xlog.


3) Restart the Postgres instance.

My guess is it will not bring it back to the exact point you want, but
close. You can get a general idea by running (before and after removing
the WAL), as the postgres user:


pg_controldata -D /etc/postgresql/9.3/main



The log mentions this:
2017-01-27 20:36:18 AEDT LOG:  last completed transaction was at log
time 2017-01-24 02:08:00.023064+11

(which is moments before, or possibly as the disk filled up doing a db
backup dump)






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Recovery Assistance

2017-01-29 Thread Adrian Klaver

On 01/28/2017 11:23 PM, Brian Mills wrote:

I presume this is a binary log file for the database.

Am I able to recover to a point in time using this log file?

What I would do in SQL Server would be recover to a point in time, say a
bit before the last completed transaction time the log mentions, then
take a backup. Is that possible in postgres?


Yes, though I am not sure you have the setup to do it. I would suggest 
reading the below to see how much of it applies:


https://www.postgresql.org/docs/9.3/static/continuous-archiving.html

In particular:
24.3.4. Recovering Using a Continuous Archive Backup

https://www.postgresql.org/docs/9.3/static/recovery-target-settings.html



The log mentions this:
2017-01-27 20:36:18 AEDT LOG:  last completed transaction was at log
time 2017-01-24 02:08:00.023064+11

(which is moments before, or possibly as the disk filled up doing a db
backup dump)


On 29 January 2017 at 12:58, Brian Mills <br...@trybooking.com> wrote:

I have a consistent sql dump from 24 hour previous.

The file level backup was done with rsync -a of full data directory
after the issue occurred so could reset as I learned.

    Brian


On Sun, 29 Jan 2017 at 9:18 am, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 01/28/2017 01:55 PM, Brian Mills wrote:
> Yes, its the last one in the directory, pg_xlog directory
>
> ...more files...
> -rw---  1 postgres postgres 16777216 Jan 21 10:05
> 0001000500A1
> -rw---  1 postgres postgres 16777216 Jan 22 21:29
> 0001000500A2
> -rw---  1 postgres postgres 16777216 Jan 24 02:08
> 0001000500A3

Best guess is the last WAL is not complete.

 From your original post:
"Attempt 2 -  startup manually and let it try recovery

I restored my file level backup and started again. "

How was the file level backup done?

>
>
>
> On 29 January 2017 at 08:18, rob stone <floripa...@gmail.com> wrote:
>
> Hello Brian,
> On Sun, 2017-01-29 at 07:16 +1100, Brian Mills wrote:
> > Hi,
> >
> > No, it hasn't changed since the first time I looked at it.
> >
> > root@atlassian:/home/tbadmin# ps ax | grep post
> >  1364 ?Ss 0:00 /usr/lib/postfix/master
> >  5198 pts/3S  0:00 su postgres
> >  5221 pts/3S  0:00
/usr/lib/postgresql/9.3/bin/postgres -D
> > /etc/postgresql/9.3/main
> >  5222 ?Ss 0:10 postgres: startup process
 recovering
> > 0001000500A3
> > 11161 pts/4S+ 0:00 grep --color=auto post
> >
>
>
> Does this WAL file exist "0001000500A3"?
>
> Cheers,
> Rob
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: [GENERAL] Recovery Assistance

2017-01-28 Thread Adrian Klaver

On 01/28/2017 01:55 PM, Brian Mills wrote:

Yes, it's the last one in the directory, the pg_xlog directory

...more files...
-rw---  1 postgres postgres 16777216 Jan 21 10:05
0001000500A1
-rw---  1 postgres postgres 16777216 Jan 22 21:29
0001000500A2
-rw---  1 postgres postgres 16777216 Jan 24 02:08
0001000500A3


Best guess is the last WAL is not complete.

From your original post:
"Attempt 2 -  startup manually and let it try recovery

I restored my file level backup and started again. "

How was the file level backup done?





On 29 January 2017 at 08:18, rob stone <floripa...@gmail.com> wrote:

Hello Brian,
On Sun, 2017-01-29 at 07:16 +1100, Brian Mills wrote:
> Hi,
>
> No, it hasn't changed since the first time I looked at it.
>
> root@atlassian:/home/tbadmin# ps ax | grep post
>  1364 ?Ss 0:00 /usr/lib/postfix/master
>  5198 pts/3S  0:00 su postgres
>  5221 pts/3S  0:00 /usr/lib/postgresql/9.3/bin/postgres -D
> /etc/postgresql/9.3/main
>  5222 ?Ss 0:10 postgres: startup process   recovering
> 0001000500A3
> 11161 pts/4S+ 0:00 grep --color=auto post
    >


Does this WAL file exist "0001000500A3"?

Cheers,
Rob





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recovery Assistance

2017-01-27 Thread Adrian Klaver

On 01/27/2017 05:40 PM, Brian Mills wrote:

First of all, Thank you for your time to assist me learning. I really
appreciate it.

root# ps ax | grep post
 1364 ?Ss 0:00 /usr/lib/postfix/master
 5198 pts/3S  0:00 su postgres
 5221 pts/3S  0:00 /usr/lib/postgresql/9.3/bin/postgres -D
/etc/postgresql/9.3/main
 5222 ?Ss 0:10 postgres: startup process   recovering
0001000500A3
 7930 pts/4S+ 0:00 grep --color=auto post


So if you check back does the recovering  part change?

If so Postgres is walking through the WAL files as it should.



It's a single-machine Postgres database server, so I'm assuming there is
no cluster log. If there is one, where would I look for it?
The only log in /var/log/postgres is postgresql-9.3-main.log


That would be it. A single Postgres instance has multiple databases in 
it; by default it starts with the template0, template1 and postgres 
databases. Then add whatever databases you created and you have a 
cluster of databases.
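For example, once the server is accepting connections again, the databases making up the cluster can be listed like this (a minimal sketch, assuming the same Debian-style setup as above):

```shell
# List every database in this cluster; template0, template1 and postgres
# are present by default, plus anything created since initdb.
sudo -u postgres psql -l
```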



which shows (tail):

2017-01-27 20:27:01 AEDT LOG:  database system was shut down at
2017-01-27 20:26:29 AEDT
2017-01-27 20:27:01 AEDT LOG:  MultiXact member wraparound protections
are now enabled
2017-01-27 20:27:01 AEDT LOG:  autovacuum launcher started
2017-01-27 20:27:01 AEDT LOG:  database system is ready to accept
connections
2017-01-27 20:27:02 AEDT LOG:  incomplete startup packet
2017-01-27 20:28:54 AEDT ERROR:  unexpected chunk size 104 (expected
1996) in chunk 3 of 22 for toast value 48862 in pg_toast_20028
2017-01-27 20:28:54 AEDT STATEMENT:  COPY public.bodycontent
(bodycontentid, body, contentid, bodytypeid) TO stdout;
2017-01-27 20:30:13 AEDT LOG:  received fast shutdown request
2017-01-27 20:30:13 AEDT LOG:  aborting any active transactions
2017-01-27 20:30:13 AEDT LOG:  autovacuum launcher shutting down
2017-01-27 20:30:13 AEDT LOG:  shutting down
2017-01-27 20:30:13 AEDT LOG:  database system is shut down


That would be your Attempt 1 log.


I ran the screen utility so I could leave the db started using
the pg_ctl command. The later logs in that session have not progressed,
its last entry is still
2017-01-27 23:00:01 AEDT FATAL:  the database system is starting up
which is still a later datetime than the /var/log/postgres... log.


So it is just logging to stdout and not to the log file.



Connection attempt shows the same log
postgres@atlassian:/home/myuser$ psql
psql: FATAL:  the database system is starting up

Nothing in the syslog that seems connected.


On 28 January 2017 at 12:05, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 01/27/2017 01:31 PM, Brian Mills wrote:

Hi,

I have an Atlassian Confluence wiki that depends on Postgres, but I
haven't much experience with postgres other than for this purpose.

A few days ago, the hard disk filled, so all services stopped
working.
When the admin realised this, he increased the disk size (it's in
a cloud, so that was easy to do); however, I think the way it shut down left
Postgres in an inconsistent state for some reason.
Now when I start it up I get this message in the log over and
over again:
"FATAL:  the database system is starting up"

I have a backup, which I have successfully recovered, but it is
24 hours
old, the next backup was the cause of the disk filling. So I'm using
this as an exercise in learning a bit more about Postgres.

I did some research and found a number of options. I took a file
level
backup with the service not running then tried 2 things. I
haven't found
much else on what to do though.

*Attempt 1 - Reset Log *

It sounded like this shouldn't be my first option (it wasn't)
but it did
sound like what I needed to do.
I ran this command
./pg_resetxlog /var/lib/postgresql/9.3/main -f
It worked a treat; the database did start up OK.
However when I tried to dump the DB:
root@atlassian:/home/myuser# sudo -u postgres pg_dump confluencedb >
$now-confluencedb.sql
pg_dump: Dumping the contents of table "bodycontent" failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  unexpected chunk
size 104
(expected 1996) in chunk 3 of 22 for toast value 48862 in
   

Re: [GENERAL] Recovery Assistance

2017-01-27 Thread Adrian Klaver

On 01/27/2017 01:31 PM, Brian Mills wrote:

Hi,

I have an Atlassian Confluence wiki that depends on Postgres, but I
haven't much experience with postgres other than for this purpose.

A few days ago, the hard disk filled, so all services stopped working.
When the admin realised this, he increased the disk size (it's in a cloud,
so that was easy to do); however, I think the way it shut down left
Postgres in an inconsistent state for some reason.
Now when I start it up I get this message in the log over and over again:
"FATAL:  the database system is starting up"

I have a backup, which I have successfully recovered, but it is 24 hours
old, the next backup was the cause of the disk filling. So I'm using
this as an exercise in learning a bit more about Postgres.

I did some research and found a number of options. I took a file level
backup with the service not running then tried 2 things. I haven't found
much else on what to do though.

*Attempt 1 - Reset Log *

It sounded like this shouldn't be my first option (it wasn't) but it did
sound like what I needed to do.
I ran this command
./pg_resetxlog /var/lib/postgresql/9.3/main -f
It worked a treat; the database did start up OK.
However when I tried to dump the DB:
root@atlassian:/home/myuser# sudo -u postgres pg_dump confluencedb >
$now-confluencedb.sql
pg_dump: Dumping the contents of table "bodycontent" failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  unexpected chunk size 104
(expected 1996) in chunk 3 of 22 for toast value 48862 in pg_toast_20028
pg_dump: The command was: COPY public.bodycontent (bodycontentid, body,
contentid, bodytypeid) TO stdout;
The dump failed, so I assume this did leave my database in an
inconsistent state.
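A common way to narrow down which rows trip that kind of TOAST error is to force each value to be detoasted one row at a time and trap the failure, so the damaged row ids are reported instead of aborting the whole scan. A hedged sketch; the table and column names come from the pg_dump error above, but the loop itself is illustrative:

```sql
-- Scan bodycontent row by row, forcing each body value to be read back
-- in full, and report (rather than abort on) rows whose TOAST data is
-- damaged. Assumes bodycontentid uniquely identifies rows.
DO $$
DECLARE
  r RECORD;
BEGIN
  FOR r IN SELECT bodycontentid FROM bodycontent LOOP
    BEGIN
      -- length() forces the TOASTed value to be detoasted
      PERFORM length(body) FROM bodycontent
        WHERE bodycontentid = r.bodycontentid;
    EXCEPTION WHEN OTHERS THEN
      RAISE NOTICE 'damaged row: bodycontentid = %', r.bodycontentid;
    END;
  END LOOP;
END $$;
```

The reported rows could then be deleted or excluded so the rest of the table dumps cleanly, at the cost of losing those rows' contents.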


*Attempt 2 -  startup manually and let it try recovery*

I restored my file level backup and started again.
This time I tried to startup manually on the command line to see the
output (I'd done it as a service startup a number of times to nearly the
same effect too)

postgres@atlassian:/usr/lib/postgresql/9.3/bin$ ./pg_ctl -D
/etc/postgresql/9.3/main start
server starting
postgres@atlassian:/usr/lib/postgresql/9.3/bin$ 2017-01-27 20:36:08 AEDT
LOG:  database system was interrupted while in recovery at 2017-01-27
20:13:26 AEDT
2017-01-27 20:36:08 AEDT HINT:  This probably means that some data is
corrupted and you will have to use the last backup for recovery.
2017-01-27 20:36:08 AEDT LOG:  database system was not properly shut
down; automatic recovery in progress
2017-01-27 20:36:08 AEDT LOG:  redo starts at 5/528B4558
2017-01-27 20:36:18 AEDT LOG:  redo done at 5/A3FFF9E8
2017-01-27 20:36:18 AEDT LOG:  last completed transaction was at log
time 2017-01-24 02:08:00.023064+11
2017-01-27 23:00:01 AEDT FATAL:  the database system is starting up
2017-01-27 23:00:01 AEDT FATAL:  the database system is starting up


What does ps ax | grep post show?

Is the cluster set up to log to a file? If so, what does it show?

Does the system log show anything relevant?



I've left it that way for 12 hours, and it's still not allowing connections.
I assume it's doing some kind of consistency check?


What does it say when you attempt a connection?



Does anyone have any suggestions on what I should be doing to try and
restore this database?

- The amount of change in the DB is minimal; after 6pm there should be
basically no change overnight.
- The log above seems to suggest it has completed a redo ok, is that right?
- The last completed transaction time 2017-01-24 02:08:00.023064+11 the
log mentions would be fine to use, so losing even a few hours before
that would be more than adequate.

I'm just not clear what the database is doing now, or how I should be
trying to recover it.
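One way to see what state the cluster thinks it is in, without needing it to accept connections, is pg_controldata. A sketch assuming the Debian/Ubuntu layout shown in the ps output earlier; note it takes the real data directory under /var/lib, not the /etc config directory passed to pg_ctl:

```shell
# pg_controldata reads the control file directly, so it works even while
# connections are refused. It reports the cluster state ("in crash
# recovery", "shut down", ...) and the latest checkpoint location.
sudo -u postgres /usr/lib/postgresql/9.3/bin/pg_controldata \
    /var/lib/postgresql/9.3/main
```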

Any help anyone can suggest would be great! I've given myself this
weekend to attempt to recover more than the last backup; after that I
need to restore the service for my team to use and suck up the lost last
day of updates.

Thanks,
Brian



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

