Re: [GENERAL] Query plan not updated after dropped index

2016-02-18 Thread Victor Blomqvist
On Thu, Feb 18, 2016 at 11:05 PM, Tom Lane  wrote:

> Victor Blomqvist  writes:
> > We just had a major issue on our databases: after an index was replaced, a
> > user-defined function didn't change its query plan to use the new index.
>
> I'm suspicious that this is some variant of the problem discussed a couple
> days ago:
>
>
> http://www.postgresql.org/message-id/flat/CAKt_ZfsvdE3WiHUjg81MT1KzOkebqz_bg=dbhf4l6rkfjte...@mail.gmail.com
>
> However, that theory requires that the index not have been immediately
> usable, which implies that it initially had some broken HOT chains,
> which really should not have happened if you were simply replacing one
> index with an identical one.  (The pre-existing index should've been
> enough to ensure HOT chain consistency for its columns.)
>
> Perhaps you were doing something "cute" like replacing a single-column
> index with a multi-column one?
>

No, the new index looked exactly like the old one. The index was created with
CONCURRENTLY, and we waited until it returned.

I could use the index just fine when running a query in a separate
connection, so I am not sure it would help to drop the old index in a
transaction and run a query to verify that the new index is usable. How can
I know that the new index is usable from already-open connections?

Another thing to note about this index is that it sometimes has an
unexpected zero page inside, resulting in this error: ERROR:  index
"user_pictures_picture_dhash_idx" contains unexpected zero page at block
123780. But it always happens sporadically on our read slaves, and
rerunning the query that caused it doesn't reproduce the error. I asked about
this problem before, here:
http://www.postgresql.org/message-id/flat/cal870dvxr9fhkyej5smydk4pjupl5kwabuhspbtqek03gdd...@mail.gmail.com#cal870dvxr9fhkyej5smydk4pjupl5kwabuhspbtqek03gdd...@mail.gmail.com



>
> regards, tom lane
>


Re: [GENERAL] Exporting a PDF from a bytea column

2016-02-18 Thread David G. Johnston
On Thu, Feb 18, 2016 at 7:42 PM, CS DBA  wrote:

>
> http://www.postgresql.org/message-id/AANLkTi=2darhqprgtxmcx0aus9mahq2sfxxbvihtu...@mail.gmail.com
>
> Short answer, to avoid the binary blob problem, is to encode the binary
> data, export it, then decode it.
>
> This can be done in psql.  If your client can handle binary directly (e.g.,
> JDBC/Java) you can use that language's facilities to perform the
> binary transfer directly thus bypassing the need to transcode.
>
> Can it be done from a Linux shell script?  Any examples? There seems to be
> little info on this in my googling.
>

Others may provide actual examples, but at the moment I don't have the time
to explore to that depth.

To answer your "shell script" question...anything can be done in a shell
script - as long as you have the right programs on your system to do the
actual work.  I already mentioned "psql" which, if you want the least
amount of pure "coding", is going to be the helper program you will want to
use.  And its usage is well documented.  I would suggest base64 encoding
and then using whatever standard base64 decoder program exists on your
Linux box to perform the decoding.
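
For illustration, a minimal sketch of that psql-plus-base64 round trip, assuming a
hypothetical table docs with a bytea column named data and an integer id (the names
are placeholders, adjust to your schema):

# dump the bytea as base64 text, then decode it back to binary on the shell side
psql -At -d mydb -c "SELECT encode(data, 'base64') FROM docs WHERE id = 1" \
  | base64 -d > out.pdf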

David J.


Re: [GENERAL] Appending key-value to JSONB tree

2016-02-18 Thread Deven Phillips
Answering my own question here... The gist is that if you need to add a new
key-value pair, you use *jsonb_set* on the non-existent key and then
provide the value as the final parameter. The new stored procedure looks
like:

CREATE OR REPLACE FUNCTION set_user_token(email TEXT, expiration INTERVAL)
RETURNS TEXT AS $$
WITH newtoken AS (
    SELECT random_string(32) token, (now() + INTERVAL '6 months') expiry
),
updated AS (
    SELECT jsonb_set(data::jsonb,
                     (SELECT ARRAY['tokens', token] FROM newtoken),
                     (SELECT to_jsonb(expiry) FROM newtoken)) newdata
    FROM users
    WHERE email = $1
),
updatecomplete AS (
    UPDATE users
    SET data = (SELECT newdata FROM updated)
    WHERE email = $1
)
SELECT jsonb_pretty(token) FROM newtoken $$
LANGUAGE SQL;

The difficult part for me was figuring out how to build the array which
makes up the *path* parameter for *jsonb_set*...
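
For anyone following along, a minimal, self-contained illustration of how the path
array addresses a key that does not exist yet (the literal values are made up for
the example):

-- adds a new key "abc123" under the existing "tokens" object
SELECT jsonb_set('{"tokens": {}}'::jsonb,
                 ARRAY['tokens', 'abc123'],
                 to_jsonb(now() + interval '6 months'));
-- the optional fourth argument, create_missing, defaults to true,
-- which is what lets the non-existent key be created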

Hope this helps others!!!

Deven

On Wed, Feb 17, 2016 at 10:47 AM, Deven Phillips 
wrote:

> I have a "user" document with a key "tokens" and I would like to write a
> stored procedure for adding new token key-value pairs to the "tokens" part
> of the tree without removing the old values. I have figured out how to
> replace the existing value in the "tokens", but I cannot seem to wrap my
> head around appending a new key-value pair. Could someone suggest an
> approach (using PostgreSQL 9.5 BTW)...
>
> Here's my existing stored proc:
>
> CREATE OR REPLACE FUNCTION set_user_token(email TEXT, expiration INTERVAL)
> RETURNS TEXT AS $$
>
> WITH newtoken AS (
>     SELECT jsonb_build_object(random_string(32), (now() + $2)) token
> ),
> updated AS (
>     SELECT jsonb_set(data::jsonb, '{"tokens"}', (SELECT token FROM newtoken)) newdata
>     FROM users
>     WHERE email = $1
> ),
> updatecomplete AS (
>     UPDATE cc_users
>     SET data = (SELECT newdata FROM updated)
>     WHERE email = $1
> )
> SELECT jsonb_pretty(token) FROM newtoken $$
>
> LANGUAGE SQL;
>
> Thanks in advance!!!
>
> Deven Phillips
>


Re: [GENERAL] Exporting a PDF from a bytea column

2016-02-18 Thread CS DBA



On 02/18/2016 07:29 PM, David G. Johnston wrote:
On Thursday, February 18, 2016, John R Pierce wrote:


On 2/18/2016 4:44 PM, CS DBA wrote:

The system stores PDF's as large objects
in bytea columns.


Large Objects aka LO's and bytea columns are two completely
different things.


I'll assume the "column" is the most relevant term here because the 
above is true.


Can anyone send me an example of
exporting from a bytea column to a PDF file?


I don't think you can get from a bytea field to a file without
some coding, as SQL scripting doesn't handle binary blobs very well.


http://www.postgresql.org/message-id/AANLkTi=2darhqprgtxmcx0aus9mahq2sfxxbvihtu...@mail.gmail.com

Short answer, to avoid the binary blob problem, is to encode the 
binary data, export it, then decode it.


This can be done in psql.  If your client can handle binary directly
(e.g., JDBC/Java) you can use that language's facilities to perform the
binary transfer directly, thus bypassing the need to transcode.
Can it be done from a Linux shell script?  Any examples? There seems to be
little info on this in my googling.





David J.




Re: [GENERAL] Exporting a PDF from a bytea column

2016-02-18 Thread David G. Johnston
On Thursday, February 18, 2016, John R Pierce  wrote:

> On 2/18/2016 4:44 PM, CS DBA wrote:
>
>> The system stores PDF's as large objects
>> in bytea columns.
>>
>
> Large Objects aka LO's and bytea columns are two completely different
> things.
>
>
I'll assume the "column" is the most relevant term here because the above
is true.


> Can anyone send me an example of
>> exporting from a bytea column to a PDF file?
>>
>
> I don't think you can get from a bytea field to a file without some
> coding, as SQL scripting doesn't handle binary blobs very well.
>
>
http://www.postgresql.org/message-id/AANLkTi=2darhqprgtxmcx0aus9mahq2sfxxbvihtu...@mail.gmail.com

Short answer, to avoid the binary blob problem, is to encode the binary
data, export it, then decode it.

This can be done in psql.  If your client can handle binary directly (e.g.,
JDBC/Java) you can use that language's facilities to perform the
binary transfer directly thus bypassing the need to transcode.

David J.


Re: [GENERAL] Exporting a PDF from a bytea column

2016-02-18 Thread John R Pierce

On 2/18/2016 4:44 PM, CS DBA wrote:

The system stores PDF's as large objects
in bytea columns.


Large Objects aka LO's and bytea columns are two completely different 
things.


bytea columns are regular SQL database columns that contain a binary
byte array. Large objects (LO) are a whole separate facility, where
each object is referenced by a unique OID, and the object is read and
written with the lo_XXX() functions in libpq, or equivalents in other
language-specific bindings.

http://www.postgresql.org/docs/current/static/largeobjects.html

in a C program, an LO can be exported to a file via the API lo_export()
http://www.postgresql.org/docs/current/static/lo-interfaces.html#LO-EXPORT
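
And if the PDFs really are large objects rather than bytea, no C is needed either;
a small sketch, assuming the object's OID is 12345 (a placeholder value):

-- server-side: writes to the *server's* filesystem, superuser only
SELECT lo_export(12345, '/tmp/out.pdf');

-- client-side from psql: writes to the client machine
\lo_export 12345 /tmp/out.pdf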



Can anyone send me an example of
exporting from a bytea column to a PDF file? 


I don't think you can get from a bytea field to a file without some 
coding, as SQL scripting doesn't handle binary blobs very well.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Live streaming replication setup issue!

2016-02-18 Thread Andreas Kretschmer

> Currently we have master -> slave -> DR hot-standby streaming replication in our
> current prod environment. Between the master and slave servers replication is running
> fine. Between the slave and DR servers replication is broken and I am trying to fix
> it. For the DR server, the slave server is its master.
> 
> Issue: A few days back, DR was lagging behind the slave server and replication
> stopped. I tried to set up replication from slave to DR (there is currently
> replication running from master to slave) with the pg_basebackup command. I am
> able to restart Postgres without any error on the DR server, but when I try to
> run any psql on DR, it throws the error below.
> 
> psql: FATAL:  the database system is starting up



Please show us your recovery.conf. It should include this line:

standby_mode  = 'on'
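
For reference, a minimal recovery.conf for a hot-standby node might look roughly
like this (hostname, port, role and paths are placeholders, not taken from your setup):

standby_mode      = 'on'
primary_conninfo  = 'host=slave-server port=5432 user=replicator password=secret'
# restore_command = 'cp /path/to/wal_archive/%f %p'   # only if a WAL archive is available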


-- 
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[GENERAL] Exporting a PDF from a bytea column

2016-02-18 Thread CS DBA

Hi all;

we have inherited a legacy database.

The system stores PDF's as large objects
in bytea columns.

Can anyone send me an example of
exporting from a bytea column to a PDF file?


Thanks in advance




Re: [GENERAL] Live streaming replication setup issue!

2016-02-18 Thread Venkata Balaji N
>
> How do I set up replication between the DR server and the slave server while the
> slave and master servers are running? I cannot stop the master server. Can
> someone please guide me through the steps?
>
>
>

The steps are pretty much the same. You can set up replication between the slave and
DR by using a backup of the master database + WAL archives (if available) and
setting primary_conninfo to point to the slave database in recovery.conf on the DR.
Can you please let us know which version of PostgreSQL you are using?
That's important for us to make any suggestions.
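
Roughly, the commands involved look like this (hostnames, paths and the replication
role are placeholders; the exact pg_basebackup flags depend on your PostgreSQL version):

# on the DR server, with PostgreSQL stopped and the old data directory moved aside
pg_basebackup -h slave-server -U replicator -D /var/lib/pgsql/data -P --xlog

# then create recovery.conf in the new data directory
standby_mode     = 'on'
primary_conninfo = 'host=slave-server port=5432 user=replicator'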

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread Adrian Klaver

On 02/18/2016 02:15 PM, Rakesh Kumar wrote:

aha ok it is clear now.

The splitting of buffers for each db is not for the reasons you stated
below, but for better management of RAM.
In our current RDBMS we allocate BPs for each database based on its
usage/size.  With that being said, in case
of PG. having no control on BP is not a big deal.

Also, just curious , is it possible to shut down only one database in an
instance,
in case a rogue session connected to it is causing havoc. I know there
are other ways
of achieving it (like killing all sessions and revoking grant priv), but
if there is an easier
way to knock out a db temporarily, it will be great.


In 9.5:

http://www.postgresql.org/docs/9.5/interactive/sql-alterdatabase.html

ALTER DATABASE name [ [ WITH ] option [ ... ] ]

where option can be:

ALLOW_CONNECTIONS allowconn

So:

postgres=# alter database test allow_connections false;

postgres=# \c test
FATAL:  database "test" is not currently accepting connections
FATAL:  database "test" is not currently accepting connections
Previous connection kept


It will not shut down existing sessions, but you can use the functions
below to do that:


http://www.postgresql.org/docs/9.5/interactive/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL
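
For example, something along these lines disconnects every session currently attached
to that database (run as a superuser; the database name is a placeholder):

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'test'
  AND pid <> pg_backend_pid();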



-Original Message-
From: Melvin Davidson 
To: Rakesh Kumar ; pgsql-general

Sent: Thu, Feb 18, 2016 4:12 pm
Subject: Re: [GENERAL] Multiple databases and shared_buffers

 >What do you mean that the data is actually not stored in the shared
buffer.

From the link you provided :
"The “shared_buffers” configuration parameter determines how much memory
is dedicated to PostgreSQL to use for caching data."


Again, you misunderstand. Cached data (and queries) is for sharing only
to the same database.
So if user A in Database abc does a "SELECT some_column FROM table1
WHERE col2 = 4" Then the results/data from that query are available to
any other user (who has permission) in database abc and does the same
exact query. However, users from database xyz CANNOT see data and/or
results from database abc unless they specifically connect to it.
Further to the point, Beginning with 9.4, PostgreSQL also makes better
use of O/S memory for shared_buffers. But the bottom line is, you do not
need to split shared_buffers up among different databases. PostgreSQL
just uses it to make queries more efficient. There is no security
problem because users in one database cannot request buffer information
about another.

--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Live streaming replication setup issue!

2016-02-18 Thread Ashish Chauhan
Hi,

Currently we have master -> slave -> DR hot-standby streaming replication in our
current prod environment. Between the master and slave servers replication is running
fine. Between the slave and DR servers replication is broken and I am trying to fix
it. For the DR server, the slave server is its master.

Issue: A few days back, DR was lagging behind the slave server and replication
stopped. I tried to set up replication from slave to DR (there is currently
replication running from master to slave) with the pg_basebackup command. I am
able to restart Postgres without any error on the DR server, but when I try to run
any psql on DR, it throws the error below.

psql: FATAL:  the database system is starting up

Slave configuration:
hot_standby = on
listen_addresses = '*'
wal_level = hot_standby
wal_keep_segments = 3000
max_wal_senders = 5

default_statistics_target = 100
maintenance_work_mem = 1792MB
checkpoint_completion_target = 0.7
effective_cache_size = 22GB
work_mem = 144MB
wal_buffers = 16MB
checkpoint_segments = 32
shared_buffers = 7GB
max_connections = 300

DR Server configuration:
listen_addresses = '*'
hot_standby = on
wal_level = hot_standby

default_statistics_target = 100
maintenance_work_mem = 896MB
checkpoint_completion_target = 0.7
effective_cache_size = 10GB
work_mem = 352MB
wal_buffers = 16MB
checkpoint_segments = 32
shared_buffers = 3584MB
max_connections = 300

How do I set up replication between the DR server and the slave server while the
slave and master servers are running? I cannot stop the master server. Can someone
please guide me through the steps?

Thanks for your help in advance.

Thanks
-Ashish



Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread Melvin Davidson
On Thu, Feb 18, 2016 at 5:15 PM, Rakesh Kumar  wrote:

> aha ok it is clear now.
>
> The splitting of buffers for each db is not for the reasons you stated
> below, but for better management of RAM.
> In our current RDBMS we allocate BPs for each database based on its
> usage/size.  With that being said, in case
> of PG. having no control on BP is not a big deal.
>
> Also, just curious , is it possible to shut down only one database in an
> instance,
> in case a rogue session connected to it is causing havoc. I know there
> are other ways
> of achieving it (like killing all sessions and revoking grant priv), but
> if there is an easier
> way to knock out a db temporarily, it will be great.
>
> -Original Message-
> From: Melvin Davidson 
> To: Rakesh Kumar ; pgsql-general <
> pgsql-general@postgresql.org>
> Sent: Thu, Feb 18, 2016 4:12 pm
> Subject: Re: [GENERAL] Multiple databases and shared_buffers
>
> >What do you mean that the data is actually not stored in the shared
> buffer.
> >From the link you provided :
> >"The “shared_buffers” configuration parameter determines how much memory
> >is dedicated to PostgreSQL to use for caching data."
>
> Again, you misunderstand. Cached data (and queries) is for sharing only to
> the same database.
> So if user A in Database abc does a "SELECT some_column FROM table1 WHERE
> col2 = 4" Then the results/data from that query are available to any other
> user (who has permission) in database abc and does the same
> exact query. However, users from database xyz CANNOT see data and/or
> results from database abc unless they specifically connect to it. Further
> to the point, Beginning with 9.4, PostgreSQL also makes better use of O/S
> memory for shared_buffers. But the bottom line is, you do not need to split
> shared_buffers up among different databases. PostgreSQL just uses it to
> make queries more efficient. There is no security problem because users in
> one database cannot request buffer information about another.
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


> also, just curious, is it possible to shut down only one database in an instance,
> in case a rogue session connected to it is causing havoc. I know there are other ways
> of achieving it (like killing all sessions and revoking grant priv)

You are getting off topic, but no, you cannot "shut down" just one database.
However, you can use the attached flip_database_connect.sh to temporarily
prevent connections to a specific database and then re-allow them.
You can also use cancel_all_queries.sh to cancel ALL current queries, but
that applies to all users except the superuser running it.
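
The attached scripts are not reproduced here, but the general idea can be sketched
in plain SQL on 9.5 (this is an assumption about their behaviour, not their actual
contents; the database name is a placeholder):

-- block new connections to one database
ALTER DATABASE clientdb ALLOW_CONNECTIONS false;

-- terminate the sessions already connected to it
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'clientdb' AND pid <> pg_backend_pid();

-- later, re-allow connections
ALTER DATABASE clientdb ALLOW_CONNECTIONS true;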

FYI, revoking a grant will not cancel current queries, only prevent future
access.

From your questions, it appears you are a little weak on PostgreSQL
Database Administration. I respectfully suggest you obtain a copy of the
following book to get a clearer understanding of how things work.

https://www.packtpub.com/big-data-and-business-intelligence/postgresql-9-administration-cookbook-second-edition/?utm_source=PoD&utm_medium=referral&utm_campaign=1849519064

You will also find many other useful books below

http://www.postgresql.org/docs/books/
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


flip_database_connect.sh
Description: Bourne shell script


cancel_all_queries.sh
Description: Bourne shell script



Re: [GENERAL] sslcompression / PGSSLCOMPRESSION not behaving as documented?

2016-02-18 Thread Jeff Janes
On Fri, Jan 16, 2015 at 10:34 AM, Maciek Sakrejda  wrote:
> Thanks, everyone. That seems to be it. I still haven't gotten it to work (I
> tried setting OPENSSL_DEFAULT_ZLIB=true in
> /etc/postgresql/9.2/main/environment and restarting the server, then adding
> the same env var when connecting with the client), but now that I know where
> the problem is, I think I can work through it.

Did you ever get it to work on Ubuntu?  If so, what did you have to do?

OPENSSL_DEFAULT_ZLIB doesn't seem to do anything on Ubuntu 14.04.  It
is suggested it should work on earlier versions
(http://www.ubuntu.com/usn/USN-1898-1/) but there is no mention of it
on newer versions.

Cheers,

Jeff




Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread Rakesh Kumar
Aha, OK, it is clear now.

The splitting of buffers for each db is not for the reasons you stated below,
but for better management of RAM.
In our current RDBMS we allocate BPs for each database based on its usage/size.
With that being said, in the case of PG, having no control over the BP is not a
big deal.

Also, just curious, is it possible to shut down only one database in an
instance, in case a rogue session connected to it is causing havoc? I know
there are other ways of achieving it (like killing all sessions and revoking
grant privileges), but if there is an easier way to knock out a db temporarily,
that would be great.


-Original Message-
From: Melvin Davidson 
To: Rakesh Kumar ; pgsql-general 

Sent: Thu, Feb 18, 2016 4:12 pm
Subject: Re: [GENERAL] Multiple databases and shared_buffers



>What do you mean that the data is actually not stored in the shared buffer.  
>From the link you provided :
>"The “shared_buffers” configuration parameter determines how much memory 
>is dedicated to PostgreSQL to use for caching data."


Again, you misunderstand. Cached data (and queries) is for sharing only to the 
same database.

So if user A in Database abc does a "SELECT some_column FROM table1 WHERE col2 
= 4" Then the results/data from that query are available to any other user (who 
has permission) in database abc and does the same 

exact query. However, users from database xyz CANNOT see data and/or results 
from database abc unless they specifically connect to it. Further to the point, 
Beginning with 9.4, PostgreSQL also makes better use of O/S memory for 
shared_buffers. But the bottom line is, you do not need to split shared_buffers 
up among different databases. PostgreSQL just uses it to make queries more 
efficient. There is no security problem because users in one database cannot 
request buffer information about another.


-- 

Melvin Davidson
I reserve the right to fantasize.  Whether or not you 
 wish to share my fantasy is entirely up to you. 






Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-18 Thread Bruce Momjian
On Wed, Feb 10, 2016 at 10:34:53AM +0200, Achilleas Mantzios wrote:
> PostgreSQL *is* a reliable DB.
> 
> About checksums in our office master DB that's a fine idea, too bad that
> pg_upgrade doesn't cope with them
> (and upgrading without pg_upgrade is out of the question)

Just to clarify, pg_upgrade handles cases where the old/new clusters
either both have checksums or neither does; you can't change the
checksum setting during pg_upgrade.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +




Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread Melvin Davidson
>What do you mean that the data is actually not stored in the shared
buffer.
>From the link you provided :
>"The “shared_buffers” configuration parameter determines how much memory
>is dedicated to PostgreSQL to use for caching data."

Again, you misunderstand. Cached data (and query results) are shared only within
the same database.
So if user A in database abc does a "SELECT some_column FROM table1 WHERE
col2 = 4", then the results/data from that query are available to any other
user (who has permission) in database abc who runs the same
exact query. However, users from database xyz CANNOT see data and/or
results from database abc unless they specifically connect to it. Further
to the point, beginning with 9.4, PostgreSQL also makes better use of O/S
memory for shared_buffers. But the bottom line is, you do not need to split
shared_buffers up among different databases. PostgreSQL just uses it to
make queries more efficient. There is no security problem because users in
one database cannot request buffer information about another.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread John R Pierce

On 2/18/2016 12:04 PM, Rakesh Kumar wrote:
NOTE: It is very important you provide the version of PostgreSQL and 
O/S when addressing this mail list.


Since it is a new project, we are going with:

PG: 9.5.1

OS: RHEL 6.4


Off topic for this list, but RHEL 6.4 was a 'snapshot' 3 years ago and
is missing hundreds of security and bug updates since then.  As
soon as you run `yum update`, you should be at the current 6.7 snapshot
plus any incremental patches since 6.7 was released last July.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread Rakesh Kumar




NOTE: It is very important you provide the version of PostgreSQL and O/S when 
addressing this mail list. 

Since it is a new project, we are going with:

PG: 9.5.1


OS: RHEL 6.4





Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread Rakesh Kumar
What do you mean that the data is actually not stored in the shared buffer?
From the link you provided:
"The “shared_buffers” configuration parameter determines how much memory
is dedicated to PostgreSQL to use for caching data."

This seems to be the same as in other RDBMSs. Typically the terminology used is
Buffer Pool.
The BP is used to cache the changes done to a page, and the page-cleaning
process periodically flushes the dirty pages to disk.
A checkpoint typically guarantees that at a given time a page in the cache is the
same as on disk, and is usually used as the start point for roll-forward recovery.


Does PG follow a different architecture?




-Original Message-
From: Melvin Davidson 





NOTE: It is very important you provide the version of PostgreSQL and O/S when
addressing this mail list. That is so others searching the archives in the
future can determine its pertinence.


That being said, it looks to me like you have a misunderstanding about how
PostgreSQL uses shared_buffers. Data is not actually stored in the
shared_buffers.
Please refer to the following URL, which explains its usage.

http://leopard.in.ua/2013/09/05/postgresql-sessting-shared-memory




-- 

Melvin Davidson
I reserve the right to fantasize.  Whether or not you 
 wish to share my fantasy is entirely up to you. 







Re: [GENERAL] BRIN Usage

2016-02-18 Thread Tom Smith
It is for reducing index size as the table becomes huge.
Sorry for the confusion; by timestamp I meant a time-series number, not the
SQL timestamp type.
I need the column to be unique to ensure no duplicates, but the btree
index is getting huge, so BRIN seems to solve that problem but cannot
enforce uniqueness.

On Thu, Feb 18, 2016 at 2:14 AM, David Rowley 
wrote:

>
> On 18/02/2016 9:34 am, "Tom Smith"  wrote:
> >
> > Hi:
> >
> > I feel it is a stupid question.
> >
> > Can  BRIN index enforce uniqueness?
> > My issue is
> > the column I'd like to apply BRIN index  also needs to be unique
> > (think of timestamp as primary key).
>
> Only btree supports unique.
> Is there a special reason not to use btree? I'm also finding it hard to
> imagine a case where a timestamp primary key is a good idea.
>


[GENERAL] Windows default directory for client certificates

2016-02-18 Thread Lupi Loop
PostgreSQL documentation at
http://www.postgresql.org/docs/9.5/static/libpq-ssl.html
says that when a client certificate is requested by a server, a Windows
psql client will by default use the credentials located at
%APPDATA%\postgresql\postgresql.crt and %APPDATA%\postgresql\postgresql.key

However, my psql client application (v 9.5.0) on Windows Server 2012 R2
cannot find the certificates in this location and only works when the
location is explicitly set using the sslcert and sslkey attributes when
connecting. Is this a bug, or am I using the wrong path?

This an example of execution:

---

C:\Program Files\PostgreSQL\9.5\data> ..\bin\psql
"postgresql://postgres@localhost/postgres"

psql: FATAL:  connection requires a valid client certificate

FATAL:  no pg_hba.conf entry for host "::1", user "postgres",
database"postgres", SSL off


C:\Program Files\PostgreSQL\9.5\data> ..\bin\psql
"postgresql://postgres@localhost/postgres?sslkey=postgresql\postgresql.key&sslcert=postgresql\postgresql.crt"

psql (9.5.0)

SSL connection (protocol: TLSv1.2, cipher:
ECDHE-RSA-AES256-GCM-SHA384,bits: 256, compression: off)

Type "help" for help.

postgres=#



cheers


Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread Melvin Davidson
On Thu, Feb 18, 2016 at 4:23 AM, Rakesh Kumar  wrote:

> It is a business requirement that we store our clients data in separate
> databases. Our sales folks insist it is non negotiable. Our current
> technology does
> support that and also maintain buffer pools for each db independently.
> That's why I brought this up. Personally I don't think this is a serious
> limitation at all.
>
>
>
>
> -Original Message-
> From: Melvin Davidson 
> To: pgsql-general 
> Sent: Wed, Feb 17, 2016 9:53 pm
> Subject: Re: [GENERAL] Multiple databases and shared_buffers
>
> I think this begs the question "Why do you think you need to separate the
> shared_buffers"?
> What version of PostgreSQL are you using?
> What is your O/S?
> How many CPU's on your server?
> How much memory?
>
> On Wed, Feb 17, 2016 at 5:21 PM, John R Pierce 
> wrote:
>
>> On 2/17/2016 6:54 AM, Data Cruncher wrote:
>>
>> We will be creating multiple databases in a cluster (instance). Is there
>> any way to separate shared_buffers for each database? Looks like not since
>> PG does not allow user created shared buffers.
>>
>>
>>
>> you would need to run multiple instances if you feel you need that level
>> of control over shared_buffers.
>>
>>
>> --
>> john r pierce, recycling bits in santa cruz
>>
>>
>
>It is a business requirement that we store our clients data in separate
databases. Our sales folks insist it is non negotiable. Our current
technology does
>support that and also maintain buffer pools for each db independently.

NOTE: It is very important you provide the version of PostgreSQL and O/S
when addressing this mail list. That is so others searching the archives in
the future
can determine its pertinence.

That being said, it looks to me like you have a misunderstanding about how
PostgreSQL uses shared_buffers. Data is not actually stored in the
shared_buffers.
Please refer to the following URL, which explains its usage.

http://leopard.in.ua/2013/09/05/postgresql-sessting-shared-memory

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Query plan not updated after dropped index

2016-02-18 Thread Tom Lane
Victor Blomqvist  writes:
> We just had a major issue on our databases: after an index was replaced, a
> user-defined function didn't change its query plan to use the new index.

I'm suspicious that this is some variant of the problem discussed a couple
days ago:

http://www.postgresql.org/message-id/flat/CAKt_ZfsvdE3WiHUjg81MT1KzOkebqz_bg=dbhf4l6rkfjte...@mail.gmail.com

However, that theory requires that the index not have been immediately
usable, which implies that it initially had some broken HOT chains,
which really should not have happened if you were simply replacing one
index with an identical one.  (The pre-existing index should've been
enough to ensure HOT chain consistency for its columns.)

Perhaps you were doing something "cute" like replacing a single-column
index with a multi-column one?

regards, tom lane




Re: [GENERAL] Query plan not updated after dropped index

2016-02-18 Thread Vitaly Burovoy
On 2/18/16, Victor Blomqvist  wrote:
> Hello!
>
> We just had a major issue on our databases: after an index was replaced, a
> user-defined function didn't change its query plan to use the new index. At
> least this is our theory, since the function in question became much slower
> and as a result brought our system to a halt.
>
> Basically it went:
> 1. create new index (a simple btree on a bigint column index)
> 2. drop old index
> 3. rename new index to old index name
> 4. analyze table
>
> After these steps normally our functions will update their plans and use
> the new index just fine. However this time the function (only one function
> use this particular index) seemed to take forever to complete. This is a
> 40GB table so querying for something not indexed would take a long time.
> Therefore my suspicion is that the function didnt start to use the new
> index.

My guess is that backends somehow cached a generic plan[1] and didn't
recalculate it.

> Adding to the strangeness is that if I ran the function manually it was
> fast,

That is because _backends_ (processes) cache plans, not the DBMS (i.e. the plans
are not shared).
So you connected to the DB (making a new backend process) and ran the
function; the backend could not find a cached plan and created one for itself.
The plan for _your_ connection includes the new index.

> only when called from our application through pg_bouncer it was slow.
> I should also say that the function is only used on our 3 read slaves setup
> to our database.
>
> Things we tried to fix this:
> 1. Analyze table
> 2. Restart our application
> 3. Recreate the function
> 4. Kill the slow running queries with pg_cancel_backend()
>
> These things did not help.

Since pgbouncer reuses connections, the backend processes still have the
cached plan. If you had tried pg_terminate_backend(), it could have helped.

> Instead what helped in the end was to replace the function with an extra
> useless where clause (in the hope that it would force it to create a new
> plan)
>
> So, the function only have a single SELECT inside:
> RETURN QUERY
> SELECT * FROM table
>   WHERE bigint_column = X
>   LIMIT 100 OFFSET 0;
>
> And this is my modification that made it work again:
> RETURN QUERY
> SELECT * FROM table
>   WHERE bigint_column = X AND 1=1
>   LIMIT 100 OFFSET 0;

Yes, it is a new query for PG, and therefore it requires a new plan
because it is not in the cache.

> Obviously we are now worried why this happened

Also, my guess is you did CREATE INDEX CONCURRENTLY, and there are several
cases[2] in which the new index cannot be used (an "invalid" state, or waiting
for unfinished transactions).
If the old index is dropped while the new index is not yet accessible,
a plan generated while a query/function is running does not include
that index. And if such a plan is cached, it is used later
without the index too.

> and how we can avoid it in
> the future? We run Postgres 9.3 on CentOS 6.

Firstly, you can drop the old index not immediately but a little later,
depending on your queries' run time.
Also, after creating the new index (and possibly waiting a little) you
can drop the old index in a _transaction_ and check whether the new index is
used in an EXPLAIN of a query that should use it. If it is,
do COMMIT; if it is not, just do ROLLBACK and keep using the old
index.
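
A concrete sketch of that transactional check (index, table and column names are
placeholders for your own; note that DROP INDEX takes an exclusive lock on the table
until the transaction ends):

BEGIN;
DROP INDEX old_index_name;
EXPLAIN SELECT * FROM the_table WHERE bigint_column = 42 LIMIT 100;
-- if the plan shows the new index being used: COMMIT;
-- if it does not:                              ROLLBACK;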

> Thanks!
> Victor

[1]http://www.postgresql.org/docs/devel/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING
[2]http://www.postgresql.org/docs/devel/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY
-- 
Best regards,
Vitaly Burovoy




Re: [GENERAL] Query plan not updated after dropped index

2016-02-18 Thread Victor Blomqvist
The end goal is to get rid of index bloat. If there is a better way to
handle this, I'm all ears!

/Victor

On Thu, Feb 18, 2016 at 5:21 PM, Oleg Bartunov  wrote:

>
>
> On Thu, Feb 18, 2016 at 11:17 AM, Victor Blomqvist  wrote:
>
>> Hello!
>>
>> We just had a major issue on our databases: after an index was replaced, a
>> user-defined function didn't change its query plan to use the new index. At
>> least this is our theory, since the function in question became much slower
>> and as a result brought our system to a halt.
>>
>> Basically it went:
>> 1. create new index (a simple btree on a bigint column index)
>> 2. drop old index
>> 3. rename new index to old index name
>>
>
> why did you do this!?
>
>
>> 3. analyze table
>>
>> After these steps normally our functions will update their plans and use
>> the new index just fine. However this time the function (only one function
>> use this particular index) seemed to take forever to complete. This is a
>> 40GB table so querying for something not indexed would take a long time.
>> Therefor my suspicion is that the function didnt start to use the new index.
>>
>> Adding to the strangeness is that if I ran the function manually it was
>> fast, only when called from our application through pg_bouncer it was slow.
>> I should also say that the function is only used on our 3 read slaves setup
>> to our database.
>>
>> Things we tried to fix this:
>> 1. Analyze table
>> 2. Restart our application
>> 3. Recreate the function
>> 4. Kill the slow running queries with pg_cancel_backend()
>>
>> These things did not help.
>>
>> Instead what helped in the end was to replace the function with an extra
>> useless where clause (in the hope that it would force it to create a new
>> plan)
>>
>> So, the function only have a single SELECT inside:
>> RETURN QUERY
>> SELECT * FROM table
>>   WHERE bigint_column = X
>>   LIMIT 100 OFFSET 0;
>>
>> And this is my modification that made it work again:
>> RETURN QUERY
>> SELECT * FROM table
>>   WHERE bigint_column = X AND 1=1
>>   LIMIT 100 OFFSET 0;
>>
>>
>> Obviously we are now worried why this happened and how we can avoid it in
>> the future? We run Postgres 9.3 on CentOS 6.
>>
>> Thanks!
>> Victor
>>
>
>


Re: [GENERAL] [JDBC] JDBC behaviour

2016-02-18 Thread John R Pierce

On 2/18/2016 12:26 AM, Sridhar N Bamandlapally wrote:

If we want transactions in "begin-end" then its fine,

but in this case all these transactions are independent with 
autocommit off,


With autocommit OFF, when you issue the first query, JDBC generates a
PostgreSQL BEGIN; this starts a PostgreSQL transaction. To end the
transaction, you have to explicitly .commit() or .rollback().


With autocommit ON, JDBC lets each query execute standalone; this
is PostgreSQL's default behavior if you don't use BEGIN...
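
In terms of what actually reaches the server, the two modes look roughly like this
(the table and statements are placeholders):

-- autocommit OFF: the driver wraps your statements in one transaction
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);   -- if this fails, the whole transaction is aborted
COMMIT;                     -- sent only when you call .commit()

-- autocommit ON: each statement runs in its own implicit transaction
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);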




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Multiple databases and shared_buffers

2016-02-18 Thread Rakesh Kumar
It is a business requirement that we store our clients' data in separate
databases. Our sales folks insist it is non-negotiable. Our current technology
supports that and also maintains buffer pools for each db independently. That's
why I brought this up. Personally I don't think this is a serious limitation at
all.







-Original Message-
From: Melvin Davidson 
To: pgsql-general 
Sent: Wed, Feb 17, 2016 9:53 pm
Subject: Re: [GENERAL] Multiple databases and shared_buffers





I think this begs the question "Why do you think you need to separate the 
shared_buffers"? 

What version of PostgreSQL are you using?

What is your O/S?

How many CPU's on your server?

How much memory?



On Wed, Feb 17, 2016 at 5:21 PM, John R Pierce  wrote:

  

On 2/17/2016 6:54 AM, Data Cruncher  wrote:


We will be creating multiple databases in a cluster (instance). Is there
any way to separate shared_buffers for each database? Looks like
not, since PG does not allow user-created shared buffers.


you would need to run multiple instances if you feel you need that level
of control over shared_buffers.



-- 
john r pierce, recycling bits in santa cruz
  




-- 

Melvin Davidson
I reserve the right to fantasize.  Whether or not you 
 wish to share my fantasy is entirely up to you. 






Re: [GENERAL] Query plan not updated after dropped index

2016-02-18 Thread Oleg Bartunov
On Thu, Feb 18, 2016 at 11:17 AM, Victor Blomqvist  wrote:

> Hello!
>
> We just had a major issue on our databases: after an index was replaced, a
> user-defined function didn't change its query plan to use the new index. At
> least this is our theory, since the function in question became much slower
> and as a result brought our system to a halt.
>
> Basically it went:
> 1. create new index (a simple btree on a bigint column index)
> 2. drop old index
> 3. rename new index to old index name
>

why did you do this!?


> 3. analyze table
>
> After these steps normally our functions will update their plans and use
> the new index just fine. However this time the function (only one function
> use this particular index) seemed to take forever to complete. This is a
> 40GB table so querying for something not indexed would take a long time.
> Therefor my suspicion is that the function didnt start to use the new index.
>
> Adding to the strangeness is that if I ran the function manually it was
> fast, only when called from our application through pg_bouncer it was slow.
> I should also say that the function is only used on our 3 read slaves setup
> to our database.
>
> Things we tried to fix this:
> 1. Analyze table
> 2. Restart our application
> 3. Recreate the function
> 4. Kill the slow running queries with pg_cancel_backend()
>
> These things did not help.
>
> Instead what helped in the end was to replace the function with an extra
> useless where clause (in the hope that it would force it to create a new
> plan)
>
> So, the function only have a single SELECT inside:
> RETURN QUERY
> SELECT * FROM table
>   WHERE bigint_column = X
>   LIMIT 100 OFFSET 0;
>
> And this is my modification that made it work again:
> RETURN QUERY
> SELECT * FROM table
>   WHERE bigint_column = X AND 1=1
>   LIMIT 100 OFFSET 0;
>
>
> Obviously we are now worried why this happened and how we can avoid it in
> the future? We run Postgres 9.3 on CentOS 6.
>
> Thanks!
> Victor
>


Re: [GENERAL] [JDBC] JDBC behaviour

2016-02-18 Thread Sridhar N Bamandlapally
OK, let me put it this way:

in JDBC we have *setAutoCommit( false )*, and all DMLs are independent
transactions,

and when any transaction fails, the session does not allow the next
transactions.

In Java, when we do setAutoCommit( false ) it behaves as if all the
transactions were in a BEGIN-END block; this is not the expected behavior.

I guess this is a bug.





On Thu, Feb 18, 2016 at 2:00 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:

> > but in this case all these transactions are independent with autocommit
> off,
>
> At database level, there is no "autocommit=off".
> There's just "begin-end".
>
> It is database who forbids .commit, not the JDBC driver.
> Vladimir
>


Re: [GENERAL] [JDBC] JDBC behaviour

2016-02-18 Thread Vladimir Sitnikov
> but in this case all these transactions are independent with autocommit off,

At the database level, there is no "autocommit=off".
There's just "begin-end".

It is the database that forbids the commit, not the JDBC driver.
Vladimir




Re: [GENERAL] [JDBC] JDBC behaviour

2016-02-18 Thread Sridhar N Bamandlapally
If we want transactions in "begin-end" then it's fine,

but in this case all these transactions are independent with autocommit off,

and it is the user's choice to continue with commit or rollback.

Thanks
Sridhar



On Thu, Feb 18, 2016 at 1:43 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:

> >Is it a bug or do we have other any alternate way to handle this ?
>
> PostgreSQL is strongly against "partial commits to the database". If
> you think a bit about it, it is not that bad.
> You got an error, what is the business case to commit the partial
> transaction then?
>
> Exceptions should not be used for a "control flow", should they?
>
>
> If you want to shoot yourself in a foot for fun and profit, you can
> try https://github.com/pgjdbc/pgjdbc/pull/477.
> What it does, it creates savepoints before each statement, then it
> rollbacks to that savepoint in case of failure.
>
> Vladimir
>


[GENERAL] Query plan not updated after dropped index

2016-02-18 Thread Victor Blomqvist
Hello!

We just had a major issue on our databases: after an index was replaced, a
user-defined function didn't change its query plan to use the new index. At
least this is our theory, since the function in question became much slower
and as a result brought our system to a halt.

Basically it went:
1. create new index (a simple btree on a bigint column index)
2. drop old index
3. rename new index to old index name
4. analyze table

After these steps normally our functions will update their plans and use
the new index just fine. However this time the function (only one function
use this particular index) seemed to take forever to complete. This is a
40GB table so querying for something not indexed would take a long time.
Therefore my suspicion is that the function didn't start to use the new index.

Adding to the strangeness is that if I ran the function manually it was
fast; it was only slow when called from our application through pg_bouncer.
I should also say that the function is only used on our 3 read slaves setup
to our database.

Things we tried to fix this:
1. Analyze table
2. Restart our application
3. Recreate the function
4. Kill the slow running queries with pg_cancel_backend()

These things did not help.

Instead what helped in the end was to replace the function with an extra
useless where clause (in the hope that it would force it to create a new
plan)

So, the function only have a single SELECT inside:
RETURN QUERY
SELECT * FROM table
  WHERE bigint_column = X
  LIMIT 100 OFFSET 0;

And this is my modification that made it work again:
RETURN QUERY
SELECT * FROM table
  WHERE bigint_column = X AND 1=1
  LIMIT 100 OFFSET 0;


Obviously we are now worried why this happened and how we can avoid it in
the future? We run Postgres 9.3 on CentOS 6.

Thanks!
Victor


Re: [GENERAL] [JDBC] JDBC behaviour

2016-02-18 Thread Vladimir Sitnikov
>Is it a bug or do we have other any alternate way to handle this ?

PostgreSQL is strongly against "partial commits to the database". If
you think a bit about it, it is not that bad.
You got an error; what is the business case for committing the partial
transaction then?

Exceptions should not be used for "control flow", should they?


If you want to shoot yourself in the foot for fun and profit, you can
try https://github.com/pgjdbc/pgjdbc/pull/477.
What it does is create savepoints before each statement, then roll
back to that savepoint in case of failure.
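
In plain SQL terms, the per-statement savepoint approach amounts to roughly this
(the table and values are placeholders):

BEGIN;
SAVEPOINT sp;
INSERT INTO t VALUES (1);        -- succeeds
RELEASE SAVEPOINT sp;
SAVEPOINT sp;
INSERT INTO t VALUES ('oops');   -- fails
ROLLBACK TO SAVEPOINT sp;        -- the transaction remains usable
INSERT INTO t VALUES (2);        -- later statements can still run
COMMIT;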

Vladimir




Re: [GENERAL] JDBC behaviour

2016-02-18 Thread Sridhar N Bamandlapally
Hi

We are facing an issue with PostgreSQL JDBC behaviour

in Java, under autocommit-false mode:

1. If any transaction in between fails, then the next transaction throws an
exception saying "current transaction is aborted, commands ignored until
end of transaction block"

2. Even if the exception is suppressed with try-catch, the next
transaction still throws an exception saying "current transaction is aborted,
commands ignored until end of transaction block"

3. The same does not happen with Oracle or SQL Server; there it works
without any exception handling

Is it a bug, or do we have any alternate way to handle this?

Please I need some help in this

Thanks
Sridhar