Re: [GENERAL] Application crashing due to idle connection

2014-06-18 Thread David G Johnston
itishree sukla wrote:
> Our application is crashing...

It really does help to be more specific with statements of this nature.  In
particular:

1) exactly how many idle connections (and are any of them idle in
transaction)? You ran the queries - provide the numbers reported to you.

2) how many connections do you typically have idle when your application is
running versus how many are idle while it is crashing (whatever that
actually means in your situation)?

3) does the database continue to work fine and only the application hangs, or
does a choked server take down the application with it?  Particularly if the
latter, providing server specs and the relevant PostgreSQL configuration
(especially max_connections and memory settings) allows people to make better
observations.


> Can anyone please give me some clue what could be the possible reason

Poorly written application software and/or insufficient connection pooling.


> , and how to get rid of this problem.

(the following are not mutually exclusive)
1. Disconnect from the database when you are not using it.
2. Install a connection pooler (pgbouncer is a good starting point)


> After killing the connection this query is also vanishing

What did you expect to happen?

Given that the sessions in question are indeed idle, the query you are
seeing just happens to be the last one executed.

http://www.postgresql.org/docs/9.3/static/monitoring-stats.html (note the
comments for state and query)
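
For instance, a couple of ad hoc queries along these lines (a sketch against
the pg_stat_activity columns as of 9.2/9.3) will produce the numbers asked
for above:

    -- Sessions per state; "idle in transaction" is the troublesome one.
    SELECT state, count(*) FROM pg_stat_activity GROUP BY state;

    -- For idle sessions, "query" is simply the last statement executed.
    SELECT pid, usename, state, state_change, query
    FROM pg_stat_activity
    WHERE state LIKE 'idle%'
    ORDER BY state_change;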

It does seem an unusual query to be leaving for last...though I personally
have no idea what it is doing or if it is PostgreSQL-initiated (as opposed
to client-initiated).  From your other threads this might be something
PostGIS-related - though that would likely fall under client-sent.


Scanning your other recent posts it does seem like you tend to be silent if
the provided advice helps you solve the problem on your own; so for me at
least I'll take silence to mean you figured out how to fix your code and
make use of pgbouncer (or something similar) to solve your scaling need.

Also, as a side comment, the list preference for responses is inline-context
(like mine above) or, second best, bottom-posting.  Top-posting makes it
difficult for others to follow along and catch up on longer threads.

Good Luck!

David J.







[GENERAL] GIST optimization to limit calls to operator on sub nodes

2014-06-18 Thread Pujol Mathieu

Hello,
My question is about GiST indexes.
I made my own index to handle specific data and operators. It works
pretty well, but I wonder whether it is possible to optimize it.
When I run my operator on a GiST node (in the method
gist_range_consistent) it returns NotConsistent / MaybeConsistent /
FullyConsistent.
NotConsistent - means that all subnodes can be ignored;
gist_range_consistent returns false
MaybeConsistent - means that at least one subnode/leaf will be
consistent; gist_range_consistent returns true
FullyConsistent - means that all subnodes/leaves will be consistent;
gist_range_consistent returns true


So, as with the recheck flag, I would like to know if there is a way
to notify Postgres that it is not necessary to rerun my operator on
subnodes, to speed up the search.


For example, consider the following GiST tree:

          R
         / \
       Na   Nb
      /  \   / \
    La1 La2 Lb1 Lb2

If all nodes return FullyConsistent, Postgres will run tests in this
order: R, Na, Nb, La1, La2, Lb1, Lb2; thanks to the recheck flag it will
not test the rows associated with the leaves Lxx.
My goal is that Postgres runs the test on R and then skips the tests on
the other nodes. So is there a way to do that in the GiST API? Or could I
share data from R to Nx, and then from Na to Lax and Nb to Lbx?

Thanks,
Mathieu




Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-18 Thread Khangelani Gama
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, June 18, 2014 4:32 AM
To: Khangelani Gama; Andres Freund; Tom Lane
Cc: Alban Hertroys; Ziggy Skalski; pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore: custom archiver unexpected end of file ,
ERROR: missing data for column

On 06/17/2014 10:37 AM, Khangelani Gama wrote:
> -----Original Message-----
> From: Andres Freund [mailto:and...@2ndquadrant.com]
> Sent: Tuesday, June 17, 2014 4:38 PM
> To: Tom Lane
> Cc: Alban Hertroys; Khangelani Gama; Ziggy Skalski;
> pgsql-general@postgresql.org
> Subject: Re: [GENERAL] pg_restore: custom archiver unexpected end of
> file , ERROR: missing data for column
>
> On 2014-06-17 10:25:45 -0400, Tom Lane wrote:
>> Alban Hertroys <haram...@gmail.com> writes:
>>> Or just connect your 9.2.4 pg_dump remotely to your 8.4 server.
>>
>> FWIW, the described behavior sounded like the dump file had gotten
>> truncated somewhere along the line.  Maybe it's just necessary to
>> re-do the transfer of the existing dump file.
>
> Hi Tom, I tried the dump several times, and dumped it more than once
> but no luck.


My suspicion is the XML field two places before the fpp_cde field.
Interested in seeing if the 9.4 dump will handle it better.


Adrian, you might be right. I have just tried pg_restore on the same
server (8.4.2) and I get the same errors related to XML. Can someone advise
whether the following XML sample is in a correct state or not:



<?xml version="1.0" encoding="UTF-8"?>\r
 +
<nonFinancialEventVer01 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="accountEjb.xsd"><actH01><uiD01><userId/><deviceId
/></uiD01><transData01><actionType>5143</actionType><isFinancial>false</isFinancial>
<docDate>1385903062657</docDate><docNo>4585427</docNo><buildNumber>2013-11-08
:11:14:16</buildNumber><isCredit>false</isCredit><isCashType>false</isCashType>
</transData01><seqD01><audit>4585427</audit><finProcPer>201311</finProcPer>
<branchCode>0023</branchCode><grpCode>21~21~21</grpCode><hpDoc>1807125</hpDoc>
</seqD01></actH01><accountData01><branch>0023</branch><account>103558</account>
<contract>1</contract><valueData01><value>0.00</value></valueData01>
<statusData01><finProc>201311</finProc><status>INACTIVE</status>
<balance>0.00</balance><arrears>0.00</arrears><due>0.00</due><fupStat/>
<inst>101.21</inst><mthNotPay>0</mthNotPay><provArrMth>0</provArrMth>
<provInstPay>0</provInstPay><provAmt>0.00</provAmt><provPcnt>0.0</provPcnt>
<debMtd>0.00</debMtd><crdMtd>0.00</crdMtd><payMtd>0.00</payMtd>
<intMtd>0.00</intMtd><rebFin>0.00</rebFin><rebIns>0.00</rebIns>
<finProv>0.00</finProv><insProv>0.00</insProv><provNPTyp/>
<depMtd>0.00</depMtd><openArrears>0.00</openArrears><rebClub>0.00</rebClub>
<nxtMthOpnRecency>0</nxtMthOpnRecency><finRebMtd>0.00</finRebMtd>
<insRebMtd>0.00</insRebMtd><empericaScore>0</empericaScore><avgDelq>0</avgDelq>
<worstAge>8224</worstAge><monthsDelq>0</monthsDelq><openNowDue>0.0</openNowDue>
<queuePriority/><triadQueue>bdc03</triadQueue>
<hpfstatNcaIntMtd>0.00</hpfstatNcaIntMtd><hpfstatNcaIntLtd>0.00</hpfstatNcaIntLtd>
<hpfstatAddIntMtd>0.00</hpfstatAddIntMtd><hpfstatAddIntLtd>0.00</hpfstatAddIntLtd>
<hpfstatCrLifeInsMtd>0.00</hpfstatCrLifeInsMtd>
<hpfstatCrLifeInsLtd>0.00</hpfstatCrLifeInsLtd>
<hpfstatAddCrLifeInsMtd>0.00</hpfstatAddCrLifeInsMtd>
<hpfstatAddCrLifeInsLtd>0.00</hpfstatAddCrLifeInsLtd>
<hpfstatClubMtd>0.00</hpfstatClubMtd><hpfstatClubLtd>0.00</hpfstatClubLtd>
<hpfstatServiceFeeMtd>0.00</hpfstatServiceFeeMtd>
<hpfstatServiceFeeLtd>0.00</hpfstatServiceFeeLtd>
<hpfstatOptInsMtd>0.00</hpfstatOptInsMtd><hpfstatOptInsLtd>0.00</hpfstatOptInsLtd>
<hpfstatWoff>0.00</hpfstatWoff><hpfstatdebltd>1437.06</hpfstatdebltd>
<hpfstatcrdltd>-25.01</hpfstatcrdltd><hpfstatpayltd>-1412.05</hpfstatpayltd>
<hpfstatintltd>0.00</hpfstatintltd><hpfstatdepltd>400.00</hpfstatdepltd>
<contDelq>0</contDelq><hpfstatLastPayDte>94840560</hpfstatLastPayDte>
<hpfstatIndupBal>0.00</hpfstatIndupBal><hpfstatIndupLtd>0.00</hpfstatIndupLtd>
</statusData01></accountData01><docValueData01><branch>0023</branch>
<account>103558</account><value>0.00</value></docValueData01>
</nonFinancialEventVer01>\r+

--
Adrian Klaver
adrian.kla...@aklaver.com







Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-18 Thread Khangelani Gama
-Original Message-
From: Khangelani Gama [mailto:kg...@argility.com]
Sent: Wednesday, June 18, 2014 1:06 PM
To: 'Adrian Klaver'; 'Andres Freund'; 'Tom Lane'
Cc: 'Alban Hertroys'; 'Ziggy Skalski'; 'pgsql-general@postgresql.org'
Subject: RE: [GENERAL] pg_restore: custom archiver unexpected end of file ,
ERROR: missing data for column

[previous message quoted in full - snipped; the XML sample appears above]
Are the \r + and \r+ sequences valid in this XML? I need to tell the
application developers whether this is right or not.








[GENERAL] Troubles with Postgresql performance

2014-06-18 Thread Denis Mironchuk
Hello. I had a strange issue with a PostgreSQL 9.3 server. Load on the
server rapidly increased, queries which took about 4ms were taking 100ms,
and creating a foreign key on an empty table blocked all queries to the
server. After a server restart everything became fine again, the same
foreign key was created in less than a second, and queries were fast again.
What may be the reason?

I understand that this information is not enough to give a reasonable
answer, but I will still be grateful for any advice.


Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-18 Thread Adrian Klaver

On 06/18/2014 04:05 AM, Khangelani Gama wrote:

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, June 18, 2014 4:32 AM
To: Khangelani Gama; Andres Freund; Tom Lane
Cc: Alban Hertroys; Ziggy Skalski; pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore: custom archiver unexpected end of file ,
ERROR: missing data for column

On 06/17/2014 10:37 AM, Khangelani Gama wrote:

-Original Message-
From: Andres Freund [mailto:and...@2ndquadrant.com]
Sent: Tuesday, June 17, 2014 4:38 PM
To: Tom Lane
Cc: Alban Hertroys; Khangelani Gama; Ziggy Skalski;
pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore: custom archiver unexpected end of
file , ERROR: missing data for column

On 2014-06-17 10:25:45 -0400, Tom Lane wrote:

Alban Hertroys <haram...@gmail.com> writes:

Or just connect your 9.2.4 pg_dump remotely to your 8.4 server.


FWIW, the described behavior sounded like the dump file had gotten
truncated somewhere along the line.  Maybe it's just necessary to
re-do the transfer of the existing dump file.


Hi Tom, I tried the dump several times, and dumped it more than once
but no luck.



My suspicion is the XML field two places before the fpp_cde field.
Interested in seeing if the 9.4 dump will handle it better.


Adrian, you might be right. I have just tried pg_restore on the same
server (8.4.2) and I get the same errors related to XML. Can someone advise
whether the following XML sample is in a correct state or not:


The other strange thing is:

ERROR: missing data for column "fpp_cde"
CONTEXT: COPY audit, line 2565159: 4585427 21~21~21 5143 0 2013-12-01
15:03:18.336718 <?xml version="1.0" encoding="UTF-8"?>\r\n<nonF...
STATEMENT: COPY audit (aud_id, grp_cde, act_typ, aud_ver, usr_id, 
aud_device_id, aud_ts, aud_xml,


aud_id        4585427
grp_code      21~21~21
act_type      5143
aud_ver       0
usr_id        2013-12-01 15:03:18.336718  <-- Is this not aud_ts?
aud_device_id <?xml version="1.0" encoding="UTF-8"?>\r\n<nonF... <--
Is this not aud_xml?


Which begs the question where did usr_id and aud_device_id go?
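
(If the source 8.4 database is still reachable, a quick sanity check on the
row named in the CONTEXT line - a sketch using only the column names from
the COPY statement above - would show whether the columns are shifted in the
data itself or only in the dump:)

    SELECT aud_id, usr_id, aud_device_id, aud_ts
    FROM audit
    WHERE aud_id = 4585427;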

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] (Relatively) Oversized Checkpoint

2014-06-18 Thread Nathaniel Talbott
> What is the sustained volume of disk output you experience, for
> example from vmstat snapshots?

Unfortunately our metrics are... less than comprehensive at this point, so
I don't have data for that at the moment. Going to work to rectify, but
will take some time.


> 10% of 8GB (which it sounds like is the amount of RAM you have) is
> nearly a gig, which is a lot of dirty data to keep around.  I'd use
> dirty_background_bytes to set it, as it gives finer control than
> dirty_background_ratio does, and set it to maybe 100MB (but it depends on
> your IO system)

After realizing that often (but not always) these events seem to happen a
few minutes after an autovacuum run, I'm inclined to agree. I'm trying
100MB for now; we'll see overnight whether that helps.


> Series 3 kernels before 3.8 seem to have a poor (but vague) reputation
> for running database-type workloads.

Super helpful to know. Getting the OS upgraded on these boxes is on the
short list anyhow, and I'll make sure we end up on at least 3.8 by the time
we're done.


> How many drives of what RPM in what RAID configuration, and does the
> RAID controller have BBU.  I personally don't know from drivers, but
> other people on the list do, so that info could be useful as well.

/proc/driver/cciss$ cat cciss0
cciss0: HP Smart Array P400i Controller
Board ID: X
Firmware Version: 7.22
IRQ: 64
Logical drives: 1
Current Q depth: 0
Current # commands on controller: 0
Max Q depth since init: 191
Max # commands on controller since init: 310
Max SG entries since init: 128
Sequential access devices: 0

cciss/c0d0:  127.99GB RAID 1(1+0)

More:

  ~$ sudo hpacucli controller slot=0 show

  Smart Array P400i in Slot 0 (Embedded)
 Bus Interface: PCI
 Slot: 0
 Serial Number: X
 Cache Serial Number: X
 RAID 6 (ADG) Status: Disabled
 Controller Status: OK
 Hardware Revision: D
 Firmware Version: 7.22
 Rebuild Priority: Medium
 Expand Priority: Medium
 Surface Scan Delay: 15 secs
 Surface Scan Mode: Idle
 Wait for Cache Room: Disabled
 Surface Analysis Inconsistency Notification: Disabled
 Post Prompt Timeout: 0 secs
 Cache Board Present: True
 Cache Status: OK
 Cache Ratio: 100% Read / 0% Write
 Drive Write Cache: Disabled
 Total Cache Size: 256 MB
 Total Cache Memory Available: 208 MB
 No-Battery Write Cache: Disabled
 Battery/Capacitor Count: 0
 SATA NCQ Supported: True

Physical drives:

   ~$ sudo hpacucli controller slot=0 pd all show detail

   Smart Array P400i in Slot 0 (Embedded)

  array A

 physicaldrive 1I:1:1
Port: 1I
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: SATA
Size: 128.0 GB
Firmware Revision: DXM04B0Q
Serial Number: X
Model: ATA Samsung SSD 840
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 23
Maximum Temperature (C): 70
PHY Count: 1
PHY Transfer Rate: 1.5Gbps

 physicaldrive 1I:1:2
Port: 1I
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: SATA
Size: 128.0 GB
Firmware Revision: DXM04B0Q
Serial Number: X
Model: ATA Samsung SSD 840
SATA NCQ Capable: True
SATA NCQ Enabled: True
Current Temperature (C): 21
Maximum Temperature (C): 70
PHY Count: 1
PHY Transfer Rate: 1.5Gbps

--
Nathaniel


On Mon, Jun 16, 2014 at 5:01 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Mon, Jun 16, 2014 at 12:40 PM, Nathaniel Talbott
> <nathan...@spreedly.com> wrote:
>>> So the fsync (by the kernel) of a single file took 64 seconds.  This
>>> single event explains almost all of the overrun and the performance
>>> problems.
>>
>> So the 10x number of buffers being written is a red herring and/or
>> correlates with a slow fsync?
>
> Two theories there:
>
> Either the uniformly high level of activity is not actually uniform
> and it was extra high during that checkpoint.
>
> Or, much of the writing that happens is normally being done by the
> background writer rather than the checkpoint writer. Severe IO
> problems caused the background writer to freeze up or slow down,
> leaving the work to be done by the checkpoint writer instead.  Once
> the checkpoint writer hits the fsync phase of its work, it then gets
> blocked by the same IO problems that were affecting the background
> writer.  So in this theory, the shift of the work load from background
> writer to checkpoint writer is just a symptom of the IO problems.
>
> Unfortunately there is little good way to assess those theories,
> unless you have a lot

Re: [GENERAL] pg_restore: custom archiver unexpected end of file , ERROR: missing data for column

2014-06-18 Thread Khangelani Gama
> Adrian, you might be right. I have just tried pg_restore on the same
> server (8.4.2) and I get the same errors related to XML. Can someone
> advise whether the following XML sample is in a correct state or not:

The other strange thing is:

ERROR: missing data for column "fpp_cde"
CONTEXT: COPY audit, line 2565159: 4585427 21~21~21 5143 0 2013-12-01
15:03:18.336718 <?xml version="1.0" encoding="UTF-8"?>\r\n<nonF...
STATEMENT: COPY audit (aud_id, grp_cde, act_typ, aud_ver, usr_id,
aud_device_id, aud_ts, aud_xml,

aud_id        4585427
grp_code      21~21~21
act_type      5143
aud_ver       0
usr_id        2013-12-01 15:03:18.336718  <-- Is this not aud_ts?
aud_device_id <?xml version="1.0" encoding="UTF-8"?>\r\n<nonF... <--
Is this not aud_xml?

Which begs the question where did usr_id and aud_device_id go?

I have tried to restore the dump, which I created from the postgres 8
database using the postgres 9 binaries, and I got more data issues. The
problem is that the data inside the database is broken. The application
developers are looking at the problem. Thanks very much for your help,
much appreciated.


--
Adrian Klaver
adrian.kla...@aklaver.com







Re: [GENERAL] Troubles with Postgresql performance

2014-06-18 Thread Kevin Grittner
Denis Mironchuk <denis.mironc...@ardas.dp.ua> wrote:

> Hello. I had a strange issue with a PostgreSQL 9.3 server. Load on
> the server rapidly increased, queries which took about 4ms were
> taking 100ms, and creating a foreign key on an empty table blocked
> all queries to the server. After a server restart everything became
> fine again, the same foreign key was created in less than a second,
> and queries were fast again. What may be the reason?
>
> I understand that this information is not enough to give a
> reasonable answer, but I will still be grateful for any advice.

There are too many possibilities to list, really.  My advice would
be to be ready to capture as much information as possible if it
happens again.  At the top of my list would be the output from
selecting everything in the pg_stat_activity and pg_locks views, as
well as the output from `vmstat 1 30` (assuming a platform which
supports that).  If you post that with the information suggested on
this page, people should be able to offer helpful advice:

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

Of particular interest would be the OS, exact PostgreSQL version
(from SELECT version();), a hardware description, and the
PostgreSQL configuration settings.
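
Concretely, something along these lines, run while the slowdown is
happening and saved off to a file (plain selects against the standard
statistics views):

    SELECT version();
    SELECT now(), * FROM pg_stat_activity;
    SELECT now(), * FROM pg_locks;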

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[GENERAL] Replacing a production db

2014-06-18 Thread Nicolás Lichtmaier
Is there a way to replace a production database with another as part of a
new release of our website?

Where I work we have a scheme of pushing code+db releases to testing and
then to production. Most of our databases use MySQL and I was told they
can just rename the db and it works. We are adopting PostgreSQL for some
new developments and we'd like to do something similar. I've tried loading
the dump in a single transaction, but that has many problems as the
database is fairly big for that (some GBs). Is there a trick I'm missing
here?

Thanks a lot!

Nicolás.-


Re: [GENERAL] Replacing a production db

2014-06-18 Thread Andrew Sullivan
On Wed, Jun 18, 2014 at 05:05:22PM -0300, Nicolás Lichtmaier wrote:

> Where I work we have a scheme of pushing code+db releases to testing and
> then to production. Most of our databases use MySQL and I was told they
> can just rename the db and it works. We are adopting PostgreSQL for some
> new developments and we'd like to do something similar. I've tried loading
> the dump in a single transaction, but that has many problems as the
> database is fairly big for that (some GBs). Is there a trick I'm missing
> here?

I guess I don't understand why you'd need to rename the database.
What is the problem you're trying to solve?

Is the idea that you have changes in the schema in the new database
that are not reflected in the old database?  If so, what do you do
with all the data?  Doesn't that have to be in the new schema somehow?

One thing you can do with PostgreSQL that you can't do with MySQL is
change the schema in a transaction.  So you could make the schema
changes that way.

If the idea is instead to run two schemas in parallel (so that you can
have old and new versions of the application running at the same
time), what you really want to do is use the schema (or namespace)
support in Postgres.  Be careful with this, however, as it is easy to
make a system so convoluted that nobody can understand it.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca




Re: [GENERAL] Replacing a production db

2014-06-18 Thread Bill Moran
On Wed, 18 Jun 2014 17:05:22 -0300 Nicolás Lichtmaier
<nico.lichtma...@gmail.com> wrote:

> Is there a way to replace a production database with another as part of a
> new release of our website?
>
> Where I work we have a scheme of pushing code+db releases to testing and
> then to production. Most of our databases use MySQL and I was told they
> can just rename the db and it works. We are adopting PostgreSQL for some
> new developments and we'd like to do something similar. I've tried loading
> the dump in a single transaction, but that has many problems as the
> database is fairly big for that (some GBs). Is there a trick I'm missing
> here?

As someone who's fought with release engineering for a few years now,
let me start off by saying that you're probably doing it wrong.

That being said, you can rename databases in Postgres just like you
can in MySQL (in fact, it's more reliable in Postgres) so I don't
understand why you can't continue to do it that way.
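
(For reference, the rename itself is one statement per database -
hypothetical database names below - though ALTER DATABASE ... RENAME TO
will fail if anyone else is still connected to the database being renamed:)

    ALTER DATABASE myapp RENAME TO myapp_old;
    ALTER DATABASE myapp_staging RENAME TO myapp;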

-- 
Bill Moran <wmo...@potentialtech.com>




Re: [GENERAL] Replacing a production db

2014-06-18 Thread Nicolás Lichtmaier
I'm probably doing this wrong, but I couldn't find any resources to learn
how to get this right.

Can the renaming be done without disruption to current connections? From
what I've read you have to disconnect everyone before renaming a database.


2014-06-18 17:47 GMT-03:00 Bill Moran <wmo...@potentialtech.com>:

> On Wed, 18 Jun 2014 17:05:22 -0300 Nicolás Lichtmaier
> <nico.lichtma...@gmail.com> wrote:
>
>> Is there a way to replace a production database with another as part of a
>> new release of our website?
>>
>> Where I work we have a scheme of pushing code+db releases to testing and
>> then to production. Most of our databases use MySQL and I was told they
>> can just rename the db and it works. We are adopting PostgreSQL for some
>> new developments and we'd like to do something similar. I've tried
>> loading the dump in a single transaction, but that has many problems as
>> the database is fairly big for that (some GBs). Is there a trick I'm
>> missing here?
>
> As someone who's fought with release engineering for a few years now,
> let me start off by saying that you're probably doing it wrong.
>
> That being said, you can rename databases in Postgres just like you
> can in MySQL (in fact, it's more reliable in Postgres) so I don't
> understand why you can't continue to do it that way.
>
> --
> Bill Moran <wmo...@potentialtech.com>



Re: [GENERAL] Replacing a production db

2014-06-18 Thread Tim Clarke
On 18/06/14 21:50, Nicolás Lichtmaier wrote:
> I'm probably doing this wrong, but I couldn't find any resources to
> learn how to get this right.
>
> Can the renaming be done without disruption to current connections?
> From what I've read you have to disconnect everyone before renaming a
> database.

I'd change the pointer instead; in your client or front-end application
at your release point, change the reference to which database to work
with to the new one.

-- 
Tim Clarke





Re: [GENERAL] Replacing a production db

2014-06-18 Thread John R Pierce

On 6/18/2014 1:05 PM, Nicolás Lichtmaier wrote:
Is there a way to replace a production database with another as part 
of a new release of our website?


Where I work we have a scheme of pushing code+db releases to testing 
and then to production. Most of our databases use MySQL and I was 
told they can just rename the db and it works. We are adopting 
PostgreSQL for some new developments and we'd like to do something 
similar. I've tried loading the dump in a single transaction, but that 
has many problems as the database is fairly big for that (some GBs). 
Is there a trick I'm missing here?


we do updates of our production systems with .sql files that are version
deltas, e.g., they alter existing tables and add new ones, possibly
including new 'static data'.   If the update requires code changes, then
we shut the app(s) down, run the database upgrade, install the new code,
and restart the app(s).   Usually we try to do our schema updates so the
old code will continue to work with the new schema, so the new code can
be cut in incrementally with virtually zero downtime.
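
(A sketch of what one of those delta files might look like - the table and
column names here are made up, and the schema_version bookkeeping table is
hypothetical - the whole script applies atomically because DDL is
transactional in PostgreSQL:)

    BEGIN;
    ALTER TABLE customer ADD COLUMN loyalty_tier integer;
    CREATE TABLE promo_code (
        code     text PRIMARY KEY,
        discount numeric NOT NULL
    );
    INSERT INTO promo_code VALUES ('LAUNCH10', 10);  -- new 'static data'
    UPDATE schema_version SET version = version + 1;
    COMMIT;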




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Replacing a production db

2014-06-18 Thread Andy Colson

On 6/18/2014 3:05 PM, Nicolás Lichtmaier wrote:

Is there a way to replace a production database with another as part of
a new release of our website?

Where I work we have a scheme of pushing code+db releases to testing
and then to production. Most of our databases use MySQL and I was told
they can just rename the db and it works. We are adopting PostgreSQL for
some new developments and we'd like to do something similar. I've tried
loading the dump in a single transaction, but that has many problems as
the database is fairly big for that (some GBs). Is there a trick I'm
missing here?

Thanks a lot!

Nicolás.-


In the past I enjoyed using mysql on our website.  (hopefully the
Secret Service sarcasm detector went off).  I loved it when mysql "has
gone away", and when mysqldump created a dump file that couldn't be
restored.  I loved how DDL was not transaction-safe.  There were many
times we had to switch off a website and fix the production database.


We use PG now, our website is 90% read-only, and we get two types of 
updates.  Full and partial.


I create an update schema, and copy all the data into it.  Once it's
ready, depending on the type I do one of the following:


Full Update:
  begin;
  drop table public.general;
  alter table update.general set schema public;
  .. drop next table
  .. move it from update to public .. etc
  commit;
  drop schema update cascade;

Partial:
  begin;
  delete from public.general where magickey in
     (select magickey from update.general);
  insert into public.general
     select * from update.general;
  ...  copy data for other tables ...
  commit;
  drop schema update cascade;

The updates are done in a single transaction so website visitors see
either the old data or the new.  Using this method, and PG, I have
never once had to show the "This website is being updated and will be
back in a moment" page.  (In fact, I don't even have one of those pages
anymore.)



-Andy




[GENERAL] Question about detecting database changes

2014-06-18 Thread Dave Peticolas
Hi, I have a question about getting some information about database changes.

Let's suppose I have a particular transaction ID that I happen to know has
been rolled back.

Is it possible to ask the database if there have been any committed
transactions subsequent to that rolled-back transaction? Basically, I would
like to know whether the transaction I am in right now will see the same
state of the data that I could see at the beginning of that rolled-back
transaction. I am using Postgres 9.2, but if it's only possible in a later
version I would be interested in knowing that too.
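
(For what it's worth, a sketch using the txid_* functions that do exist in
9.2; they can show whether any transaction has started, and finished, since
a remembered point, though before 9.6's txid_status() there is no built-in
way to ask whether a specific xid committed rather than aborted:)

    -- Around the transaction of interest, remember the current xid:
    SELECT txid_current();   -- say it returns 12345 (hypothetical value)

    -- Later, from any session:
    SELECT txid_snapshot_xmin(txid_current_snapshot()),  -- oldest still running
           txid_snapshot_xmax(txid_current_snapshot());  -- next xid to be assigned
    -- If xmax is still 12346, no transaction has even started since then;
    -- any xid between 12346 and xmax not returned by txid_snapshot_xip()
    -- has finished (committed or rolled back) in the meantime.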

Thanks very much!

-- 
--Dave Peticolas


[GENERAL] Overlapping ranges

2014-06-18 Thread Jason Long
I have a large table of access logs to an application.  

I want to find all rows that overlap startdate and enddate with any
other rows.

The query below seems to work, but does not finish unless I specify a
single id.  

select distinct a1.id
from t_access a1,
     t_access a2
where tstzrange(a1.startdate, a1.enddate) &&
      tstzrange(a2.startdate, a2.enddate)






Re: [GENERAL] Overlapping ranges

2014-06-18 Thread Rob Sargent

On 06/18/2014 05:47 PM, Jason Long wrote:

I have a large table of access logs to an application.

I want to find all rows that overlap startdate and enddate with any
other rows.

The query below seems to work, but does not finish unless I specify a
single id.

select distinct a1.id
from t_access a1,
 t_access a2
where tstzrange(a1.startdate, a1.enddate) &&
      tstzrange(a2.startdate, a2.enddate)




I'm sure your best bet is a windowing function, but your description
suggests there is no index on the start/end date columns.  Probably want
those in any event.
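
(A sketch of an index that can actually serve the overlap test itself - an
expression GiST index over the constructed range, using the built-in range
opclasses available since 9.2:)

    CREATE INDEX t_access_period_idx
        ON t_access USING gist (tstzrange(startdate, enddate));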


Re: [GENERAL] Overlapping ranges

2014-06-18 Thread Jason Long
On Wed, 2014-06-18 at 18:08 -0600, Rob Sargent wrote:
> On 06/18/2014 05:47 PM, Jason Long wrote:
>
>> I have a large table of access logs to an application.
>>
>> I want to find all rows that overlap startdate and enddate with any
>> other rows.
>>
>> The query below seems to work, but does not finish unless I specify a
>> single id.
>>
>> select distinct a1.id
>> from t_access a1,
>>      t_access a2
>> where tstzrange(a1.startdate, a1.enddate) &&
>>       tstzrange(a2.startdate, a2.enddate)
>
> I'm sure your best bet is a windowing function, but your description
> suggests there is no index on the start/end date columns.  Probably want
> those in any event.


There are indexes on startdate and enddate.
If I specify a known a1.id = 1234, the query returns all records that
overlap it, but this takes 1.7 seconds.

There are about 2 million records in the table.

I will see what I come up with on the window function.

If anyone else has some suggestions let me know.
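
(One window-function sketch: order the rows by startdate and compare each
row with the greatest enddate over all earlier rows; this finds every row
that overlaps some earlier-starting row, and the earlier partners of each
overlap would need a symmetric second pass:)

    SELECT id
    FROM (SELECT id, startdate,
                 max(enddate) OVER (ORDER BY startdate
                                    ROWS BETWEEN UNBOUNDED PRECEDING
                                             AND 1 PRECEDING) AS prev_max_end
          FROM t_access) t
    WHERE startdate < prev_max_end;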

Here is the EXPLAIN ANALYZE output with the id specified.

Nested Loop  (cost=0.43..107950.50 rows=8825 width=84) (actual
time=2803.932..2804.558 rows=11 loops=1)
   Join Filter: (tstzrange(a1.startdate, a1.enddate) &&
tstzrange(a2.startdate, a2.enddate))
   Rows Removed by Join Filter: 1767741
   ->  Index Scan using t_access_pkey on t_access a1  (cost=0.43..8.45
rows=1 width=24) (actual time=0.016..0.019 rows=1 loops=1)
         Index Cond: (id = 1928761)
   ->  Seq Scan on t_access a2  (cost=0.00..77056.22 rows=1764905
width=60) (actual time=0.006..1200.657 rows=1767752 loops=1)
         Filter: (enddate IS NOT NULL)
         Rows Removed by Filter: 159270
Total runtime: 2804.599 ms


And here is the EXPLAIN output without the id specified (EXPLAIN ANALYZE
will not complete without it):

Nested Loop  (cost=0.00..87949681448.20 rows=17005053815 width=84)
   Join Filter: (tstzrange(a1.startdate, a1.enddate) &&
tstzrange(a2.startdate, a2.enddate))
   ->  Seq Scan on t_access a2  (cost=0.00..77056.22 rows=1764905
width=60)
         Filter: (enddate IS NOT NULL)
   ->  Materialize  (cost=0.00..97983.33 rows=1927022 width=24)
         ->  Seq Scan on t_access a1  (cost=0.00..77056.22 rows=1927022
width=24)



[GENERAL] Global value/global variable?

2014-06-18 Thread Edson Richter
Is it possible to define a global value/variable in PostgreSQL in a way
that I can use it in any query/view/function?

For example, I have a connection string I use for dblink connections in
several places (especially inside views). If I want to change the
connection string, I have to change every view manually. If I could set a
kind of global variable, then I would just use it inside every view - and
at my application startup (or even at PostgreSQL startup, if I can set
this string at the postgresql.conf level) I would set this global variable
to the current string.

Your enlightenment will be really welcome.

Regards,
Edson

Re: [GENERAL] Global value/global variable?

2014-06-18 Thread Ian Barwick

On 19/06/14 11:50, Edson Richter wrote:

Is it possible to define a global value/variable in PostgreSQL in a way
that I can use it in any query/view/function?
For example, I have a connection string I use for dblink connections in
several places (especially inside views). If I want to change the
connection string, I have to change every view manually. If I could set a
kind of global variable, then I would just use it inside every view - and
at my application startup (or even at PostgreSQL startup, if I can set
this string at the postgresql.conf level) I would set this global variable
to the current string.

Your enlightenment will be really welcome.


There's no such thing as a global variable, but why not use a table
to store any global configuration values? You can always do something like this:

  SELECT dblink_connect('myconn', (select connstr from dblink_conf))
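
(Here dblink_conf would just be an ordinary one-row configuration table,
e.g.:)

    CREATE TABLE dblink_conf (connstr text NOT NULL);
    INSERT INTO dblink_conf VALUES ('host=10.0.0.1 dbname=remote user=app');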


Regards

Ian Barwick

--
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [GENERAL] Global value/global variable?

2014-06-18 Thread Tom Lane
Ian Barwick <i...@2ndquadrant.com> writes:
> On 19/06/14 11:50, Edson Richter wrote:
>> Is it possible to define a global value/variable in PostgreSQL in a way
>> that I can use it in any query/view/function?

> There's no such thing as a global variable, but why not use a table
> to store any global configuration values?

Actually, people do regularly (ab)use custom GUC variables for this
purpose.  But a table is not a bad solution.  The GUC solution does
not scale to more than order-of-a-hundred values.
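
A sketch of that (ab)use, with a hypothetical setting name (9.2 and later
accept any two-part custom name; older releases need it declared in
custom_variable_classes):

    ALTER DATABASE mydb SET myapp.dblink_connstr = 'host=10.0.0.1 dbname=remote';
    -- then, inside any view or function:
    SELECT dblink_connect('myconn', current_setting('myapp.dblink_connstr'));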

regards, tom lane

