Re: [GENERAL] Left join help

2017-06-23 Thread David G. Johnston
On Fri, Jun 23, 2017 at 3:17 PM, Arup Rakshit 
wrote:

> Hi David,
>
> Can you tell me how I can add ordering between the LEFT and INNER JOIN? I
> think that is also where I am wrong, but I am not sure how to correct the
> ordering.
>

Generally...

FROM tbl1 LEFT JOIN (tbl2 JOIN tbl3 ON ...) tbl2_3 ON ...

In short, put parentheses around the join and give it an alias.

There are ordering rules that can be considered too but frankly I've never
learned them and haven't noticed their absence.  The above is the explicit
way to do things and explicit is generally better.
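
Applied to the query in your original message, a rough (untested) sketch of that
shape would be:

SELECT missions.*,
       CASE WHEN sub_mem.mission_id IS NULL THEN 'incompleted'
            ELSE 'completed' END AS mission_status
FROM missions
-- the inner join runs first; its result is then left-joined to missions
LEFT JOIN (submissions
           JOIN members ON members.id = submissions.member_id) AS sub_mem
       ON sub_mem.mission_id = missions.id
WHERE missions.track_id = 7;

Note that any filter on a specific member belongs inside the parentheses (e.g.
adding AND members.id = 1 to the inner ON clause); putting it in the outer WHERE
clause would discard the NULL-extended rows again.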

David J.


Re: [GENERAL] Left join help

2017-06-23 Thread Paul Jungwirth

I tried a
query, but it is not giving me any result. Can anyone help me pls?

SELECT missions.*,
  CASE WHEN submissions.id IS NULL THEN 'incompleted'
  ELSE 'completed' END AS mission_status
FROM "missions" LEFT JOIN submissions ON submissions.mission_id =
missions.id
INNER JOIN members ON members.id = submissions.member_id
WHERE (members.id = 1 AND missions.track_id = 7)


I always think about JOINs as being implemented from top to bottom, and 
you can track the current result rows in your head. So in your case you 
start with one row for each mission. Then you add zero or more rows for 
each submission. Because it's an outer join you keep rows even if they 
don't match.


Then you join to members, but if there is no match, you drop the row 
from the result. But since there are no members with a NULL id (I hope),
any row where submissions.member_id is NULL has no match, so all the
unmatched rows you kept from the outer join fall out here.


Since you know that a submission never has more than one member, it 
would be safe to use a LEFT OUTER JOIN in both places, and that will let 
you preserve incomplete missions all the way through to the final result.


Furthermore, your WHERE drops everything where members.id is not 1. So 
again you are throwing away incomplete missions. I guess you need to 
permit anything where members.id is 1 or NULL.


Finally, your intent seems to be to get one row per mission, but if a 
mission has several submissions you will get duplicates. So maybe for 
complete/incomplete you should just use EXISTS with a correlated subquery.
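
Something along these lines (untested, keeping the member id 1 and track id 7
from your query):

SELECT m.*,
       -- a mission is 'completed' if member 1 has at least one submission for it
       CASE WHEN EXISTS (SELECT 1
                         FROM submissions s
                         WHERE s.mission_id = m.id
                           AND s.member_id = 1)
            THEN 'completed'
            ELSE 'incompleted'
       END AS mission_status
FROM missions m
WHERE m.track_id = 7;

That gives exactly one row per mission, no matter how many submissions exist.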


I hope that helps. Good luck!

Paul




Re: [GENERAL] Left join help

2017-06-23 Thread Arup Rakshit
Hi David,

Can you tell me how I can add ordering between the LEFT and INNER JOIN? I think
that is also where I am wrong, but I am not sure how to correct the ordering.

Thanks,
A


On Jun 24, 2017, at 3:18 AM, David G. Johnston 
> wrote:

On Friday, June 23, 2017, Arup Rakshit 
> wrote:
FROM "missions" LEFT JOIN submissions ON submissions.mission_id = 
missions.id
INNER JOIN members ON members.id = submissions.member_id

Quick look here but having submissions as part of the inner join with members 
defeats the attempt at left joining it with missions.  You might need to add 
parentheses to put the joins in the correct order, or something.  Why it gives 
zero results we cannot say without a self-contained example.

David J.



Re: [GENERAL] Left join help

2017-06-23 Thread David G. Johnston
On Friday, June 23, 2017, Arup Rakshit  wrote:
>
> FROM "missions" LEFT JOIN submissions ON submissions.mission_id =
> missions.id
> INNER JOIN members ON members.id = submissions.member_id
>

Quick look here but having submissions as part of the inner join with
members defeats the attempt at left joining it with missions.  You might
need to add parentheses to put the joins in the correct order, or
something.  Why it gives zero results we cannot say without a
self-contained example.

David J.


Re: [GENERAL] Chained slaves smaller?

2017-06-23 Thread Jon Erdman
Nevermind. Turns out it was on the wrong timeline and replication was broken. 
It was smaller because it was 77 days behind. (facepalm)

> On Jun 23, 2017, at 2:40 PM, Jon Erdman  wrote:
> 
> Hi,
> 
> I have SR set up in a couple of datacenters, where there’s a master in DC_A 
> with 2 slaves, and a 3rd slave off that master in DC_B. Also, in DC_B I have 
> 2 slaves chained off the “local master”. Our main database is ~551GB in DC_A 
> and on the replica in B that is subscribed to the real master. However, on 
> one of the chained slaves in DC_B that database is only 484GB. The only thing 
> different about this smaller slave is that it was created by taking a 
> basebackup from the “local master” in DC_B rather than sucking it over the 
> WAN from the true master in DC_A. 
> 
> This makes no sense to me since I thought SR replicas are bit for bit copies, 
> so I’m somewhat concerned. Any ideas how this could be?
> —
> Jon Erdman
> Postgres Zealot





[GENERAL] Left join help

2017-06-23 Thread Arup Rakshit
Hi,

I have this relationship: Track has many missions, a mission has many 
submissions, and each submission has one member and one mission. Say I have track 
id 7. I want to find out which missions under track 7 have been submitted by John (a 
user with id, say, 3) and which have not yet been submitted. I tried a query, but it is 
not giving me any result. Can anyone help me pls?

SELECT missions.*,
  CASE WHEN submissions.id IS NULL THEN 'incompleted'
  ELSE 'completed' END AS mission_status
FROM "missions" LEFT JOIN submissions ON submissions.mission_id = missions.id
INNER JOIN members ON members.id = submissions.member_id
WHERE (members.id = 1 AND missions.track_id = 7)

The above query is really not showing any output. Any help to solve this?

Tables are posted here: http://dpaste.com/1EXF4KQ


Thanks,
A


Re: [GENERAL] Chained slaves smaller?

2017-06-23 Thread Jerry Sievers
Jon Erdman  writes:

> Hi,
>
> I have SR set up in a couple of datacenters, where there’s a master in
> DC_A with 2 slaves, and a 3rd slave off that master in DC_B. Also, in
> DC_B I have 2 slaves chained off the “local master”. Our main database
> is ~551GB in DC_A and on the replica in B that is subscribed to the
> real master. However, on one of the chained slaves in DC_B that
> database is only 484GB. The only thing different about this smaller
> slave is that it was created by taking a basebackup from the “local
> master” in DC_B rather than sucking it over the WAN from the true
> master in DC_A.

Hey Jon.

They should be pretty near identical unless there are unlogged tables on the
master, but in such a case all your streamers should be undersized to
account for this, not just a single one.

I'd troubleshoot by...

* Question the size measurement method, df, du etc.  

* Re-do the streamer and/or build an additional one using the same method to
  see if it reproduces.

* Determine where the discrepancy is on a per-directory level (a rough
  in-database check is sketched below).
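
For the in-database side of that, something like the following (run against the
master and each streamer; just a rough sketch) shows whether the gap lives inside
a particular database or relation. If these numbers agree everywhere but du does
not, orphaned files become the prime suspect:

-- per-database sizes
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- largest relations (tables, indexes, toast) in the suspect database
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'i', 't')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;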

I had a system once that had 500G of orphaned heap/index data files
after a hard crash that went away finally after a pg_upgrade.

If your master and all streamers except that newest one are likewise
bloated, this is one possible explanation.

Not sure how plausible though.

In other words, that extra data was valid, visible and legit at time of
creation of all other streamers but now somehow stayed behind after
something $interesting happened... and pg_basebackup ignores it.

Pls let us know what you find.

HTH


> This makes no sense to me since I thought SR replicas are bit for bit copies, 
> so I’m somewhat concerned. Any ideas how this could be?
> —
> Jon Erdman
> Postgres Zealot

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800




[GENERAL] Chained slaves smaller?

2017-06-23 Thread Jon Erdman
Hi,

I have SR set up in a couple of datacenters, where there’s a master in DC_A 
with 2 slaves, and a 3rd slave off that master in DC_B. Also, in DC_B I have 2 
slaves chained off the “local master”. Our main database is ~551GB in DC_A and 
on the replica in DC_B that is subscribed to the real master. However, on one of 
the chained slaves in DC_B that database is only 484GB. The only thing 
different about this smaller slave is that it was created by taking a 
basebackup from the “local master” in DC_B rather than sucking it over the WAN 
from the true master in DC_A. 

This makes no sense to me since I thought SR replicas are bit for bit copies, 
so I’m somewhat concerned. Any ideas how this could be?
—
Jon Erdman
Postgres Zealot



Re: [GENERAL] Download 9.6.3 Binaries

2017-06-23 Thread Igal @ Lucee.org

Hi,

On 6/23/2017 11:47 AM, Joshua D. Drake wrote:

Looks like EnterpriseDB is behind. You can try BigSQL:
https://www.bigsql.org/postgresql/installers/
Thanks, but I prefer the archive version rather than the installer. 
Contrary to "rumors" on the internet, it is very easy to install 
Postgres on Windows even without the installer. All you need to add is 
a couple of DLLs to the bin directory.


On 6/23/2017 11:58 AM, Andreas Kretschmer wrote:
Yeah, use the community version from postgresql.org ;-) 
The URL I posted is from https://www.postgresql.org/download/windows/ -- 
is there another "community version" somewhere?  Can you please post a link?


On 6/23/2017 12:12 PM, Adrian Klaver wrote:
The tags are wrong, but not the binaries. If you click on one of the 
buttons you will be taken to 9.6.3 binaries for the OS/Arch.
Thank you, you are correct.  I guess I should have tried to click on the 
link first, but I thought that it would start a download.


Thank you all for your help,


Igal





Re: [GENERAL] Download 9.6.3 Binaries

2017-06-23 Thread Jerry Sievers
"Igal @ Lucee.org"  writes:

> Hello,
>
> I expected to find binaries for 9.6.3 at https://www.enterprisedb.com
> /download-postgresql-binaries but I only see 9.6.2.
>
> Am I looking at the wrong place?

I don't know but you should ask the fine people at EDB for best results.

HTH

> Thanks,
>
>
> Igal Sapir
> Lucee Core Developer
> Lucee.org
>
>
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800




Re: [GENERAL] Download 9.6.3 Binaries

2017-06-23 Thread Adrian Klaver

On 06/23/2017 11:04 AM, Igal @ Lucee.org wrote:

Hello,

I expected to find binaries for 9.6.3 at 
https://www.enterprisedb.com/download-postgresql-binaries but I only see 
9.6.2.


Am I looking at the wrong place?


Yes and no. The tags are wrong, but not the binaries. If you click on one of 
the buttons you will be taken to 9.6.3 binaries for the OS/Arch.




Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Download 9.6.3 Binaries

2017-06-23 Thread Andreas Kretschmer
On 23 June 2017 at 20:04:39 MESZ, "Igal @ Lucee.org" wrote:
>Hello,
>
>I expected to find binaries for 9.6.3 at 
>https://www.enterprisedb.com/download-postgresql-binaries but I only
>see 
>9.6.2.
>
>Am I looking at the wrong place?
>
>Thanks,
>
>Igal Sapir
>Lucee Core Developer
>Lucee.org 

Yeah, use the community version from postgresql.org ;-)

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: [GENERAL] Download 9.6.3 Binaries

2017-06-23 Thread Joshua D. Drake

On 06/23/2017 11:04 AM, Igal @ Lucee.org wrote:

Hello,

I expected to find binaries for 9.6.3 at 
https://www.enterprisedb.com/download-postgresql-binaries but I only see 
9.6.2.


Am I looking at the wrong place?


Looks like EnterpriseDB is behind. You can try BigSQL:

https://www.bigsql.org/postgresql/installers/

JD




Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 




--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own.   *




[GENERAL] Download 9.6.3 Binaries

2017-06-23 Thread Igal @ Lucee.org

Hello,

I expected to find binaries for 9.6.3 at 
https://www.enterprisedb.com/download-postgresql-binaries but I only see 
9.6.2.


Am I looking at the wrong place?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Insertion of large xml files into PostgreSQL 10beta1

2017-06-23 Thread David G. Johnston
On Fri, Jun 23, 2017 at 8:19 AM, Alain Toussaint 
wrote:

> Hello,
>
> I am building up a PostgreSQL server which I intend to load the
> entirety of the pubmed database data (23GB bzip2 compressed, 216GB
> unpacked) which is available in xml form of which, here is an example:
>
> https://www.ncbi.nlm.nih.gov/pubmed/21833294?report=xml&format=text
>
> I looked at the documentation as well as several examples code for
> loading the data and the one example who nearly succeeded is this
> procedure:
>
> /usr/bin/psql medline
>
> \set :largexmlfile: 'cat /srv/pgsql/pubmed/medline17n0001.xml'
> INSERT INTO samples (xmldata) VALUES :largexmlfile:
>

I'll assume you've just mis-keyed this from memory since the syntax of the
above doesn't look right.

>
> (from reading the list post here:
> https://www.postgresql.org/message-id/20160624083757.GA5459%40msg.df7cb.de
> )
>
> In which, about 334MB of data from medline17n0001.xml will flood the
> monitor.


If the above general command sequence is done right, and echoing of
commands is turned off, you should not see any of the XML file content
echoed to the output.


>
> it is possible to turn off validation of the content between the xml
> tags of the files.
>
>
You can either turn off validation for the entire file or leave it on -
PostgreSQL isn't recognizing tags here (you haven't defined the samples
table for us...).

Narrowing down the entire file to a small problem region and posting a
self-contained example, or at least providing the error messages and
content, might help elicit good responses.  Even if you could load the
data without incident, using it may end up proving problematic.  That said,
character encodings and sets are not my strong suit.
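
For reference, the usual psql pattern for this kind of load looks roughly like
the following - an untested sketch, assuming a table along the lines of
CREATE TABLE samples (xmldata xml):

\set ECHO none
-- backticks run the shell command and capture its output into the variable
\set largexml `cat /srv/pgsql/pubmed/medline17n0001.xml`
-- :'largexml' interpolates the variable as a properly quoted literal
INSERT INTO samples (xmldata) VALUES (:'largexml');

If the files carry a DOCTYPE you may also need SET xmloption = document; before
the insert, and a file of several hundred MB may well hit other limits along the
way, so treat this only as the general shape of the command.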

David J.


[GENERAL] Insertion of large xml files into PostgreSQL 10beta1

2017-06-23 Thread Alain Toussaint
Hello,

I am building up a PostgreSQL server which I intend to load the
entirety of the pubmed database data (23GB bzip2 compressed, 216GB
unpacked) which is available in xml form of which, here is an example:

https://www.ncbi.nlm.nih.gov/pubmed/21833294?report=xml&format=text

I looked at the documentation as well as several code examples for
loading the data, and the one example that nearly succeeded is this
procedure:

/usr/bin/psql medline

\set :largexmlfile: 'cat /srv/pgsql/pubmed/medline17n0001.xml'

INSERT INTO samples (xmldata) VALUES :largexmlfile:

(from reading the list post here:
https://www.postgresql.org/message-id/20160624083757.GA5459%40msg.df7cb.de)

When I do this, about 334MB of data from medline17n0001.xml floods the
monitor. I do notice some error code values flooding at some point
during the load, and then it ends up throwing an error because it
interprets some accented comment in the pubmed files (abstract data in
a non-US language most likely).

I will work out a way to script these commands into a bash script[1]
when I get back home (I'm at work at the moment, returning home at 5pm
EST) with two log files (stdout & stderr) but I would like to know if
it is possible to turn off validation of the content between the xml
tags of the files.

[1] == there are close to 1200 medline files averaging 350MB each.

The platform is a Linux From Scratch subversion-tracked release running out
of a 1.5GB ramdrive, plus dhcpcd, Python 2.7.13, libxml2-2.9.4,
libxslt-1.1.29 and PostgreSQL 10 beta 1, with the data files in
/srv/pgsql/data on a single-partition 931.5GB Western Digital drive
dedicated to PostgreSQL for the moment.

The goal is to build a server, but PostgreSQL is running in the
ramdrive at the moment for partition sizing and calculation purposes.

The server is an old Core 2-based machine with 4GB of RAM and the
aforementioned 931.5GB hard disk. It also has an Nvidia card intended
for use in text mining applications.

Thank you very much.

Alain




Re: [GENERAL] PostgreSQL Cookbook Testing

2017-06-23 Thread Adrian Klaver

On 06/23/2017 05:27 AM, PAWAN SHARMA wrote:

Hi All,

Please help to configure kitchen.yml to test the PostgreSQL cookbook 
testing.


#-> kitchen converge
-> Starting Kitchen (v1.16.0)
 >> --Exception---
 >> Class: Kitchen::UserError
 >> Message: Error parsing /tmp/postgresql-master/.kitchen.yml as YAML.


The above seems to indicate that you are not supplying .kitchen.yml as 
proper YAML.


You did not say, but I am assuming you are working with Chef. In that 
case you might want to look at link below for proper form of .kitchen.yml:


https://docs.chef.io/config_yml_kitchen.html

Might also run the file through a linter. A quick search found these:

http://www.yamllint.com/

https://github.com/adrienverge/yamllint


Please run `kitchen diagnose --no-instances --loader' to help debug your 
issue.

 >> --
 >> Please see .kitchen/logs/kitchen.log for more details
 >> Also try running `kitchen diagnose --all` for configuration

-Server type: VMware
OS: Redhat 6 or Redhat 7
PostgreSQL Version: 9.5.7


-Pawan






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Which process is actually doing the WAL writes/calls XLogFlush?

2017-06-23 Thread Adrian Klaver

On 06/23/2017 05:50 AM, Daniel Westermann wrote:

Hi all,

as I did not find the answer in the documentation: Which background 
process is actually doing the writes/flushes to the WAL? In the docs 
(https://www.postgresql.org/docs/10/static/wal-configuration.html) it is 
explained which internal functions are responsible for this: 
XLogInsertRecord and XLogFlush but who does call them, especially the 
flush? Is it the process for the user connection itself then which calls 
this after commit?


For all the details see here:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/transam/README;h=4ae4715339e707ab5ed879a628aa96b73002ef43;hb=6a18e4bc2d13d077c52cf90a4c6ec68343808ba7



Initially I thought it is the job of the wal_writer but according to 
this blog post 
(http://www.cybertec.at/postgresql-writer-and-wal-writer-processes-explained/) 
this seems not to be true, right? Why do I need the wal_writer at all 
then when synchronous_commit is set to something else than off?


See here:

https://www.postgresql.org/docs/9.6/static/wal-async-commit.html



Thanks for your help
Daniel



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] PostgreSQL Cookbook Testing

2017-06-23 Thread Karsten Hilbert
On Fri, Jun 23, 2017 at 06:33:56PM +0530, PAWAN SHARMA wrote:

> > On Fri, Jun 23, 2017 at 05:57:44PM +0530, PAWAN SHARMA wrote:
> >
> > > Please help to configure kitchen.yml to test the PostgreSQL cookbook
> > > testing.
> > >
> > > #-> kitchen converge
> > > -> Starting Kitchen (v1.16.0)
> > > >> --Exception---
> > > >> Class: Kitchen::UserError
> > > >> Message: Error parsing /tmp/postgresql-master/.kitchen.yml as
> > YAML.
> > > Please run `kitchen diagnose --no-instances --loader' to help debug your
> > > issue.
> > > >> --
> > > >> Please see .kitchen/logs/kitchen.log for more details
> > > >> Also try running `kitchen diagnose --all` for configuration
> >
> > So?  The output gives you THREE hints already.  Care to do as they suggest?
> 
> nothing, it will show me the configuration of kitchen.yml

"Nothing" is quite obviously not the same as "configuration
of kitchen.yml". And, it is one of _three_. Also, whatever
"it" _is_.

You'll have to put in more effort, I suppose.

But that's not the track record AFAIR.

Bye,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] PostgreSQL Cookbook Testing

2017-06-23 Thread PAWAN SHARMA
On Fri, Jun 23, 2017 at 6:10 PM, Karsten Hilbert 
wrote:

> On Fri, Jun 23, 2017 at 05:57:44PM +0530, PAWAN SHARMA wrote:
>
> > Please help to configure kitchen.yml to test the PostgreSQL cookbook
> > testing.
> >
> > #-> kitchen converge
> > -> Starting Kitchen (v1.16.0)
> > >> --Exception---
> > >> Class: Kitchen::UserError
> > >> Message: Error parsing /tmp/postgresql-master/.kitchen.yml as
> YAML.
> > Please run `kitchen diagnose --no-instances --loader' to help debug your
> > issue.
> > >> --
> > >> Please see .kitchen/logs/kitchen.log for more details
> > >> Also try running `kitchen diagnose --all` for configuration
>
> So?  The output gives you THREE hints already.  Care to do as they suggest?
>
>
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>




nothing, it will show me the configuration of kitchen.yml


[GENERAL] Which process is actually doing the WAL writes/calls XLogFlush?

2017-06-23 Thread Daniel Westermann
Hi all, 

as I did not find the answer in the documentation: Which background process is 
actually doing the writes/flushes to the WAL? In the docs 
(https://www.postgresql.org/docs/10/static/wal-configuration.html) it is 
explained which internal functions are responsible for this: XLogInsertRecord 
and XLogFlush but who does call them, especially the flush? Is it the process 
for the user connection itself then which calls this after commit? 

Initially I thought it is the job of the wal_writer but according to this blog 
post 
(http://www.cybertec.at/postgresql-writer-and-wal-writer-processes-explained/) 
this seems not to be true, right? Why do I need the wal_writer at all then when 
synchronous_commit is set to something else than off? 

Thanks for your help 
Daniel 


Re: [GENERAL] PostgreSQL Cookbook Testing

2017-06-23 Thread Karsten Hilbert
On Fri, Jun 23, 2017 at 05:57:44PM +0530, PAWAN SHARMA wrote:

> Please help to configure kitchen.yml to test the PostgreSQL cookbook
> testing.
> 
> #-> kitchen converge
> -> Starting Kitchen (v1.16.0)
> >> --Exception---
> >> Class: Kitchen::UserError
> >> Message: Error parsing /tmp/postgresql-master/.kitchen.yml as YAML.
> Please run `kitchen diagnose --no-instances --loader' to help debug your
> issue.
> >> --
> >> Please see .kitchen/logs/kitchen.log for more details
> >> Also try running `kitchen diagnose --all` for configuration

So?  The output gives you THREE hints already.  Care to do as they suggest?


Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




[GENERAL] PostgreSQL Cookbook Testing

2017-06-23 Thread PAWAN SHARMA
Hi All,

Please help to configure kitchen.yml to test the PostgreSQL cookbook
testing.

#-> kitchen converge
-> Starting Kitchen (v1.16.0)
>> --Exception---
>> Class: Kitchen::UserError
>> Message: Error parsing /tmp/postgresql-master/.kitchen.yml as YAML.
Please run `kitchen diagnose --no-instances --loader' to help debug your
issue.
>> --
>> Please see .kitchen/logs/kitchen.log for more details
>> Also try running `kitchen diagnose --all` for configuration

-Server type: VMware
OS: Redhat 6 or Redhat 7
PostgreSQL Version: 9.5.7


-Pawan


Re: [GENERAL] Suddenly - LOG: could not open file "postmaster.pid": No such file or directory

2017-06-23 Thread Daniel Verite
Muhammad Hanif Abdul Hamid wrote:

> Maybe an irrelevant question, but I would like to know if there is anything
> from the postgres process (e.g. any parameter that is not set right) that might
> remove the postmaster.pid? 

If using Linux, you may put it under watch with 

# auditctl -w /path/to/postmaster.pid  -k pgpid

If that records too many events, it can be filtered further, for
example this should catch only deletions:

# auditctl -a exit,always -F path=/path/to/postmaster.pid \
-F arch=b64 -S unlink -S unlinkat -k pgpid

Once in effect, records showing which process/user/command deleted the
file, and when, can be obtained with:

# ausearch -k pgpid


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-23 Thread Harry Ambrose
Hi Everyone,

Still trying to fathom this one. I have added quite a few log lines to a
copy of 9.4.12 and compiled it hoping to find the fault.

Below is from the log (at DEBUG5). Apologies for my name in the log lines;
it was the easiest way to grep for them specifically. I also apologise that
it's a bit messy - I'm not a C dev.

This excerpt is without failure:

127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  HARRYAMBROSE - UPDATE - Old tuple is 0, new tuple is 0,
table is: 2345873096
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  HARRYAMBROSE - UPDATE - Old tuple is 0, new tuple is 0,
table is: 2345873096
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  HARRYAMBROSE - UPDATE - Old tuple is 0, new tuple is 0,
table is: 2345873096
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  HARRYAMBROSE - index_beginscan_internal scan: 57267920
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  HARRYAMBROSE - index_beginscan_internal scan: 57267920
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  HARRYAMBROSE - index_beginscan_internal scan: 57267920
127.0.0.1 2017-06-23 09:45:26.083 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.084 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  CommitTransaction
127.0.0.1 2017-06-23 09:45:26.084 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.084 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  name: unnamed; blockState:   STARTED; state: INPROGR,
xid/subid/cid: 73603293/1/0 (used), nestlvl: 1, children:
127.0.0.1 2017-06-23 09:45:26.084 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.084 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  StartTransaction
127.0.0.1 2017-06-23 09:45:26.084 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
127.0.0.1 2017-06-23 09:45:26.084 BST  4061 594ccaa6.fdd postgres [unknown]
 0 DEBUG:  name: unnamed; blockState:   DEFAULT; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children:
127.0.0.1 2017-06-23 09:45:26.084 BST  4061 594ccaa6.fdd postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab

This is with the failure. It is interesting that there are multiple flushes
nearby, though that could be total coincidence:

127.0.0.1 2017-06-23 10:28:25.862 BST  1219 594cd5ba.4c3 postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
**.*.**.*** 2017-06-23 10:28:25.996 BST  24765 594bdf98.60bd postgres
walreceiver  0 DEBUG:  write 2F81/E7DA8000 flush 2F81/E7D9 apply
2F81/E7D8FBA0
**.*.**.*** 2017-06-23 10:28:25.996 BST  24765 594bdf98.60bd postgres
walreceiver  0 DEBUG:  write 2F81/E7DA8000 flush 2F81/E7DA8000 apply
2F81/E7D8FBA0

sent 16492 bytes  received 24607 bytes  82198.00 bytes/sec
total size is 16777216  speedup is 408.21
 2017-06-23 10:28:26.014 BST  24752 594bdf95.60b00 DEBUG:  archived
transaction log file "00012F8100E5"
 2017-06-23 10:28:26.018 BST  24752 594bdf95.60b00 DEBUG:
 executing archive command "rsync -e ssh -arv
 /wal/pg_xlog/00012F8100E6 postgres@:/wal/pg_xlog"
127.0.0.1 2017-06-23 10:28:26.101 BST  1219 594cd5ba.4c3 postgres [unknown]
 0 DEBUG:  HARRYAMBROSE - index_beginscan_internal scan: 630163208
127.0.0.1 2017-06-23 10:28:26.101 BST  1219 594cd5ba.4c3 postgres [unknown]
 0 STATEMENT:  VACUUM (FULL, ANALYZE) ctab
sending incremental file list

sent 69 bytes  received 12 bytes  162.00 bytes/sec
total size is 16777216  speedup is 207126.12
 2017-06-23 10:28:26.200 BST  24752 594bdf95.60b00 DEBUG:  archived
transaction log file "00012F8100E6"
 2017-06-23 10:28:26.201 BST  24752 594bdf95.60b00 DEBUG:
 executing archive command "rsync -e ssh -arv
 /wal/pg_xlog/00012F8100E7 postgres@:/wal/pg_xlog"
**.*.**.*** 2017-06-23 10:28:26.203 BST  24765 594bdf98.60bd postgres
walreceiver  0 DEBUG:  write 2F81/E7DC8000 flush 2F81/E7DA8000 apply
2F81/E7DA7FC8
**.*.**.*** 2017-06-23 10:28:26.203 BST  

Re: [GENERAL] pglogical vs. built-in logical replication in pg-10

2017-06-23 Thread Achilleas Mantzios

On 22/06/2017 20:30, Andres Freund wrote:

On 2017-06-22 18:10:40 +0300, Achilleas Mantzios wrote:

Once again having pg_largeobject as a system-catalog prevents LOs
from working smoothly. Neither replication nor having LOs on a
different tablespace (by moving pg_largeobject) works.

I think logical decoding was designed for supporting DML SQL commands
(i.e. a finite set of commands) and not specific functions (lo_*)
which by nature can be arbitrary, infinite and version specific.

That's not really the reason. The first reason it's currently unsupported
is that LOs are stored in a system catalog, and currently all system
catalogs are excluded from the change stream.  The second problem is how
exactly to represent the changes - we can't represent it as the whole LO
being changed, as that'd increase the volume of WAL and replicated
writes dramatically.  Thus we need to invent an API that can represent
creation, deletion, and writes to arbitrary offsets, for output plugins.


Thanx for the insight.




I wish PG in some future version would address these quirks so one can operate 
on LOs more smoothly.

You're welcome to help...


Greetings,

Andres Freund




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: [GENERAL] Suddenly - LOG: could not open file "postmaster.pid": No such file or directory

2017-06-23 Thread Muhammad Hanif Abdul Hamid
Hi,

Thanks for your reply, Tom and Michael.

Maybe an irrelevant question, but I would like to know if there is anything from 
the postgres process (e.g. any parameter that is not set right) that might remove 
the postmaster.pid?

FYI, we have 20 other sites with the same database usage, but they have not 
experienced this problem.

Thank you.

-Original Message-
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
Sent: Friday, 23 Jun 2017 1:26 PM
To: Tom Lane 
Cc: Muhammad Hanif Abdul Hamid ; 
pgsql-general@postgresql.org
Subject: Re: [GENERAL] Suddenly - LOG: could not open file "postmaster.pid": No 
such file or directory

On Fri, Jun 23, 2017 at 1:43 PM, Tom Lane  wrote:
> Muhammad Hanif Abdul Hamid  writes:
>> Within this week, I have faced two times "postmaster.pid" went missing in a 
>> sudden. Nobody deleted it or shutdown the server.
>
> I'd lay very long odds that your second statement is false.
> *Something* removed that file, and it wasn't the postmaster.

Yup, and the postmaster is designed to stop if it finds out that postmaster.pid 
is removed. See that:
commit: 7e2a18a9161fee7e67642863f72b51d77d3e996f
author: Tom Lane 
date: Tue, 6 Oct 2015 17:15:52 -0400
Perform an immediate shutdown if the postmaster.pid file is removed
--
Michael


DISCLAIMER:


This e-mail (including any attachments) is for the addressee(s) only and may be 
confidential, especially as regards personal data. If you are not the intended 
recipient, please note that any dealing, review, distribution, printing, 
copying or use of this e-mail is strictly prohibited. If you have received this 
email in error, please notify the sender immediately and delete the original 
message (including any attachments).

MIMOS Berhad is a research and development institution under the purview of the 
Malaysian Ministry of Science, Technology and Innovation. Opinions, conclusions 
and other information in this e-mail that do not relate to the official 
business of MIMOS Berhad and/or its subsidiaries shall be understood as neither 
given nor endorsed by MIMOS Berhad and/or its subsidiaries and neither MIMOS 
Berhad nor its subsidiaries accepts responsibility for the same. All liability 
arising from or in connection with computer viruses and/or corrupted e-mails is 
excluded to the fullest extent permitted by law.
