Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perl related ?

2015-02-13 Thread Guy Helmer

 On Feb 12, 2015, at 3:21 PM, Day, David d...@redcom.com wrote:
 
 Update/Information sharing on my pursuit of  segmentation faults
  
 FreeBSD 10.0-RELEASE-p12 amd64
 Postgres version 9.3.5
  
 Below are three postgres core files generated from two different machines 
 (Georgia and Alabama) on Feb 11. 
 These cores would not have been caused by the environment update issue that I 
 last suspected might be causing the segfaults, 
 so I am back to square one in terms of thinking about what is occurring.
  
 I am not sure that I understand the associated timing of events in the 
 postgres log file output.  Is this whatever happened to be running in the 
 other forked postgres processes when the cored process was detected? 
 If that is the case, then I have probably been reading too much into the 
 content of the postgres log file at the time of the core; 
 it probably just represents collateral damage to routine transactions that 
 were in other forked processes at the time one of the processes cored.
  
 Therefore I would now just assert that postgres has a sporadic segmentation 
 fault problem with no known way to reliably reproduce it, 
 and I am uncertain how to proceed to resolve it.

. . .

  Georgia-Core 8:38 -  Feb 11
 [New process 101032]
 [New Thread 802c06400 (LWP 101032)]
 Core was generated by `postgres'.
 Program terminated with signal SIGSEGV, Segmentation fault.
 #0  0x00080c4b6d51 in Perl_hfree_next_entry () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 (gdb) bt
 #0  0x00080c4b6d51 in Perl_hfree_next_entry () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 #1  0x00080c4cab49 in Perl_sv_clear () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 #2  0x00080c4cb13a in Perl_sv_free2 () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 #3  0x00080c4e5102 in Perl_free_tmps () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 #4  0x00080bcfedea in plperl_destroy_interp () from /usr/local/lib/postgresql/plperl.so
 #5  0x00080bcfec05 in plperl_fini () from /usr/local/lib/postgresql/plperl.so
 #6  0x006292c6 in ?? ()
 #7  0x0062918d in proc_exit ()
 #8  0x006443f3 in PostgresMain ()
 #9  0x005ff267 in PostmasterMain ()
 #10 0x005a31ba in main ()
 (gdb) info threads
   Id   Target Id                      Frame
 * 2    Thread 802c06400 (LWP 101032)  0x00080c4b6d51 in Perl_hfree_next_entry () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 * 1    Thread 802c06400 (LWP 101032)  0x00080c4b6d51 in Perl_hfree_next_entry () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
  

Given that two of the core dumps are down in libperl and this is FreeBSD 10.0 
amd64, have you seen this?

https://rt.perl.org/Public/Bug/Display.html?id=122199

Michael Moll suggested setting vm.pmap.pcid_enabled to 0, but I don’t recall 
seeing whether that helped.

Guy




Re: [GENERAL] encrypting data stored in PostgreSQL

2014-04-10 Thread Guy Helmer
On Apr 9, 2014, at 3:40 PM, CS_DBA cs_...@consistentstate.com wrote:

 Hi All;
 
 We have a client with this requirement:
 
 At rest data must be encrypted with a unique client key
 
 Any thoughts on how to pull this off for PostgreSQL stored data?

Does there happen to be a Postgresql proxy, such as a modified pgbouncer, that 
implements column-wise and/or row-wise encryption and decryption using keys 
specific to the authenticated user? It seems like a reasonable way to implement 
an encryption layer that would provide protection against a number of threats 
without requiring modification to the higher layers of the application or to 
the Postgresql server.
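
Absent such a proxy, a minimal sketch of column-wise encryption done inside the 
database itself could use the pgcrypto extension. The table, column, and 
per-client key names below are hypothetical, and note that the key still has to 
reach the server, which is exactly what a client-side proxy would avoid:

CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Hypothetical table with one encrypted column; ciphertext is stored as bytea.
CREATE TABLE client_data (
    id        serial  PRIMARY KEY,
    client_id integer NOT NULL,
    payload   bytea   NOT NULL
);

-- Encrypt on insert using a per-client passphrase supplied by the application.
INSERT INTO client_data (client_id, payload)
VALUES (42, pgp_sym_encrypt('sensitive text', 'per-client-key-42'));

-- Decrypt on read; only callers that present the matching key see plaintext.
SELECT pgp_sym_decrypt(payload, 'per-client-key-42')
  FROM client_data
 WHERE client_id = 42;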

Guy





Re: [GENERAL] SSL connection has been closed unexpectedly

2013-08-16 Thread Guy Helmer
On Aug 15, 2013, at 5:41 AM, Stuart Ford stuart.f...@glide.uk.com wrote:

 Dear community
 
 We have a problem on our development database server, which supports a PHP
 application that connects to it from a different server. Sometimes,
 around 1 in 4 page loads, it fails and reports the following error message:
 
 FATAL: terminating connection due to administrator command
 SSL connection has been closed unexpectedly
 
 Reloading the page usually works; sometimes it doesn't, and sometimes it
 requires several more refresh attempts before it magically works again. The
 odd thing is that we also have a live platform that is set up in the same way,
 and this does not occur there, thankfully, but I expect it could.
 
 I've tried turning off all SSL features on the development platform, but
 oddly, the same problem persists. I've also tried whacking the logging
 level up to debug5, but still nothing appears in the PG logs when the
 problem occurs.
 
 Does anybody have any idea what could be happening here?
 
 Many thanks in advance
 
 Stuart Ford

Any chance you are using HP ProCurve switches? I believe we have seen these 
switches corrupt SSL connections when systems use flow control signaling. 
Utterly bizarre behavior, but we've seen it at multiple customer sites.

Guy





Re: [GENERAL] vacuum, vacuum full and problems releasing disk space

2012-05-10 Thread Guy Helmer
BTW, it's not a problem to query data across multiple days as long as you query 
from the parent table -- Postgresql will use the child table constraints to 
search all the child tables that could contain data.
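
For instance, a range query run against the parent table (using the hypothetical 
names from the example further down this thread) only scans the child tables 
whose CHECK constraints overlap the requested range, provided constraint_exclusion 
has not been disabled:

SELECT *
  FROM data_tbl
 WHERE ts >= '2012-05-08 12:00:00-05'
   AND ts <  '2012-05-09 12:00:00-05';

-- EXPLAIN should show only the child tables covering May 8 and May 9 being scanned.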

Guy

On May 10, 2012, at 11:01 AM, Horaci Macias wrote:

 Thanks Guy. I had thought about using per-day tables (although I didn't know 
 about child tables), but my problem is that some entries are related and can 
 span several minutes, so my worry is that I would end up not finding all the 
 right entries when I search for entries that happen close to the end-of-day / 
 start-of-day boundary.
 Anyway, worth a thought for sure, so thanks.
 
 H
 
 On 10/05/12 16:42, Guy Helmer wrote:
 On May 10, 2012, at 4:31 AM, Horaci Macias wrote:
 
 Hi everybody,
 
 I'm running postgres 9.1 and having disk space problems.
 My application captures information 24x7 and stores it into the database. 
 This includes several bytea and can be ~5M entries a day, so the size can 
 be an issue after several days.
 My application also cleans up entries older than 10 days; it does this 
 every night and the delete operations are happening successfully. I cannot 
 truncate the tables as they contain both stale and active data.
 The database is able to store all the entries for ~15 days without 
 problems, but for some reason the deletion of old entries is not freeing up 
 the space (or the insertion of new entries is not reusing the space used by 
 old entries) because after running the application for ~20days I run out of 
 space on disk.
 I've been reading on this forum and the postgres documentation; vacuum full 
 is not recommended and apparently vacuum should be all I need. I'm using 
 autovacuum but this doesn't seem to be solving the problem (perhaps because 
 while vacuum is running the application keeps inserting entries 24x7?)
 
 Just to clarify, I don't really care if the disk space is returned to the 
 OS; what I need though is to be sure that I can keep a window of 10 days of 
 records (assuming of course my HD is big enough for those 10 days, which 
 seems to be the case).
 
 Some questions:
 * Although not being generally recommended, I've read that vacuum full is 
 sometimes the only choice when large deletions are in place in order to 
 maintain the database. Is this the case here?
 * Should I try to have a maintenance window and stop all inserts/writes 
 while vacuum is running? If so, is there any way to configure at what time 
 vacuum will be executed by autovacuum or should I rely on cron-type jobs 
 for this? and is there any way to prevent external connections at certain 
 times of day to make sure inserts/writes don't happen while vacuum is 
 going, or again I should use cron-type jobs for this?
 * Any other suggestions/ideas to troubleshoot this or any pointers to 
 further documentation?
 
 I would expect a plain VACUUM to make unused space available for re-use -- 
 not sure why it would not be helping.
 
 Since Postgresql can have tables that are children of tables, a neat trick 
 in this situation is to create per-day child tables and insert the new data 
 directly into the appropriate per-day table; with this approach, deleting 
 old data is accomplished by simply dropping outdated tables and thereby 
 avoiding VACUUM completely. With constraints on the child tables, Postgresql 
 can optimize a query on the parent table by knowing what child table has 
 data from what day and will only check child tables that would have data for 
 a given query.
 
 For example, I have a table called data_tbl, and child per-day tables like 
 data_tbl_ts_20120509. The child table data_tbl_ts_20120509 has constraint:
 data_tbl_20120509_ts_check CHECK (ts >= '2012-05-08 
 19:00:00-05'::timestamp with time zone AND ts < '2012-05-09 
 19:00:00-05'::timestamp with time zone)
 (each child table has data from 00:00:00 GMT to 23:59:59 GMT for that day)
 
 Each day on my systems, a cron job creates the child table, constraints on 
 the child table, and index(es) for the child table to hold the next day's 
 data, and another cron job drops any outdated child tables. I believe the 
 command to create the example child table above would have been:
 
 CREATE TABLE data_tbl_ts_20120509 (CHECK (ts >= '2012-05-08 
 19:00:00-05'::timestamp with time zone AND ts < '2012-05-09 
 19:00:00-05'::timestamp with time zone)) INHERITS (data_tbl)
 
 (followed by any necessary GRANT commands to provide access to the new child 
 table)
 
 Hope this helps,
 Guy
 
 






Re: [GENERAL] vacuum, vacuum full and problems releasing disk space

2012-05-10 Thread Guy Helmer
On May 10, 2012, at 4:31 AM, Horaci Macias wrote:

 Hi everybody,
 
 I'm running postgres 9.1 and having disk space problems.
 My application captures information 24x7 and stores it into the database. 
 This includes several bytea and can be ~5M entries a day, so the size can be 
 an issue after several days.
 My application also cleans up entries older than 10 days; it does this every 
 night and the delete operations are happening successfully. I cannot truncate 
 the tables as they contain both stale and active data.
 The database is able to store all the entries for ~15 days without problems, 
 but for some reason the deletion of old entries is not freeing up the space 
 (or the insertion of new entries is not reusing the space used by old 
 entries) because after running the application for ~20days I run out of space 
 on disk.
 I've been reading on this forum and the postgres documentation; vacuum full 
 is not recommended and apparently vacuum should be all I need. I'm using 
 autovacuum but this doesn't seem to be solving the problem (perhaps because 
 while vacuum is running the application keeps inserting entries 24x7?)
 
 Just to clarify, I don't really care if the disk space is returned to the OS; 
 what I need though is to be sure that I can keep a window of 10 days of 
 records (assuming of course my HD is big enough for those 10 days, which 
 seems to be the case).
 
 Some questions:
 * Although not being generally recommended, I've read that vacuum full is 
 sometimes the only choice when large deletions are in place in order to 
 maintain the database. Is this the case here?
 * Should I try to have a maintenance window and stop all inserts/writes 
 while vacuum is running? If so, is there any way to configure at what time 
 vacuum will be executed by autovacuum or should I rely on cron-type jobs for 
 this? and is there any way to prevent external connections at certain times 
 of day to make sure inserts/writes don't happen while vacuum is going, or 
 again I should use cron-type jobs for this?
 * Any other suggestions/ideas to troubleshoot this or any pointers to further 
 documentation?

I would expect a plain VACUUM to make unused space available for re-use -- not 
sure why it would not be helping.

Since Postgresql can have tables that are children of tables, a neat trick in 
this situation is to create per-day child tables and insert the new data 
directly into the appropriate per-day table; with this approach, deleting old 
data is accomplished by simply dropping outdated tables and thereby avoiding 
VACUUM completely. With constraints on the child tables, Postgresql can 
optimize a query on the parent table by knowing what child table has data from 
what day and will only check child tables that would have data for a given 
query.

For example, I have a table called data_tbl, and child per-day tables like 
data_tbl_ts_20120509. The child table data_tbl_ts_20120509 has constraint:
data_tbl_20120509_ts_check CHECK (ts >= '2012-05-08 
19:00:00-05'::timestamp with time zone AND ts < '2012-05-09 
19:00:00-05'::timestamp with time zone)
(each child table has data from 00:00:00 GMT to 23:59:59 GMT for that day)

Each day on my systems, a cron job creates the child table, constraints on the 
child table, and index(es) for the child table to hold the next day's data, and 
another cron job drops any outdated child tables. I believe the command to 
create the example child table above would have been:

CREATE TABLE data_tbl_ts_20120509 (CHECK (ts >= '2012-05-08 
19:00:00-05'::timestamp with time zone AND ts < '2012-05-09 
19:00:00-05'::timestamp with time zone)) INHERITS (data_tbl)

(followed by any necessary GRANT commands to provide access to the new child 
table)
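
For completeness, a hedged sketch of what those two daily cron jobs might 
execute (the table name, index name, and 10-day retention window below are 
hypothetical, following the naming in the example above):

-- Evening job: create the child table and index for the next day's data.
CREATE TABLE data_tbl_ts_20120511 (CHECK (ts >= '2012-05-10 
19:00:00-05'::timestamp with time zone AND ts < '2012-05-11 
19:00:00-05'::timestamp with time zone)) INHERITS (data_tbl);
CREATE INDEX data_tbl_ts_20120511_ts_idx ON data_tbl_ts_20120511 (ts);

-- Cleanup job: drop the child table that has aged out of the 10-day window.
DROP TABLE IF EXISTS data_tbl_ts_20120501;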

Hope this helps,
Guy







[GENERAL] Invalid memory alloc request

2009-08-25 Thread Guy Helmer
On systems running Postgresql 8.3.6, I have a nightly backup using 
pg_dump that failed on two machines overnight with this error:


pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  invalid memory alloc request size 
1376006425
pg_dump: The command was: COPY public.tablename (ts, fn, data, textdata) TO 
stdout;

The two machines where this occurred have similar, but not identical, 
data in this table.  Both machines are running FreeBSD 7.1 amd64.  I ran 
a query SELECT SUM(LENGTH(data)), SUM(LENGTH(textdata)) FROM tablename 
to sum the length of the data and textdata columns (both of type bytea) 
in this table, and the results were around 140MB for the data column and 
500MB for the textdata.


I have seen reports of "invalid memory alloc request" errors in the mailing 
list archives, but those were considered likely to be file corruption or 
memory errors.  Since this happened simultaneously on two separate 
machines, I would not expect those issues to be likely causes.


Any suggestions?

Thanks,
Guy Helmer





Re: [GENERAL] Invalid memory alloc request

2009-08-25 Thread Guy Helmer

Tom Lane wrote:

 Guy Helmer ghel...@palisadesys.com writes:

  On systems running Postgresql 8.3.6, I have a nightly backup using
  pg_dump that failed on two machines overnight with this error:

  pg_dump: SQL command failed
  pg_dump: Error message from server: ERROR:  invalid memory alloc request size
  1376006425
  pg_dump: The command was: COPY public.tablename (ts, fn, data, textdata) TO
  stdout;

 Normally I'd say data corruption, but it is odd if you got the
 identical message from two different machines.  Can you reproduce
 it with a debugger attached?  If so, a backtrace from the call of
 errfinish might be useful.

 regards, tom lane

Yes, here is the backtrace.

Thanks for any help!

#0  errfinish (dummy=0) at elog.c:322
#1  0x0070e605 in elog_finish (elevel=20, fmt=0x80f120 "invalid memory alloc request size %lu") at elog.c:971
#2  0x0072e9a6 in MemoryContextAlloc (context=0x802d2e250, size=1376006425) at mcxt.c:510
#3  0x006bbbf4 in byteaout (fcinfo=0x7fffdaf0) at varlena.c:197
#4  0x00714692 in FunctionCall1 (flinfo=0x802ccff18, arg1=34395645151) at fmgr.c:1254
#5  0x00715c45 in OutputFunctionCall (flinfo=0x802ccff18, val=34395645151) at fmgr.c:1887
#6  0x0051301c in CopyOneRowTo (cstate=0x802c4bc80, tupleOid=0, values=0x802c4bfb0, nulls=0x802cbf880) at copy.c:1488
#7  0x00512deb in CopyTo (cstate=0x802c4bc80) at copy.c:1402
#8  0x00512959 in DoCopyTo (cstate=0x802c4bc80) at copy.c:1252
#9  0x00512489 in DoCopy (stmt=0x801bfec28, queryString=0x801bfe030 "COPY public.tablename (ts, caid, casn, fn, data, textdata) TO stdout;") at copy.c:1148
#10 0x00633fa3 in ProcessUtility (parsetree=0x801bfec28, queryString=0x801bfe030 "COPY public.tablename (ts, caid, casn, fn, data, textdata) TO stdout;", params=0x0, isTopLevel=1 '\001', dest=0x801bfe850, completionTag=0x7fffe540) at utility.c:712
#11 0x0063267e in PortalRunUtility (portal=0x802c61030, utilityStmt=0x801bfec28, isTopLevel=1 '\001', dest=0x801bfe850, completionTag=0x7fffe540) at pquery.c:1173
#12 0x006327e3 in PortalRunMulti (portal=0x802c61030, isTopLevel=1 '\001', dest=0x801bfe850, altdest=0x801bfe850, completionTag=0x7fffe540) at pquery.c:1266
#13 0x00631e70 in PortalRun (portal=0x802c61030, count=9223372036854775807, isTopLevel=1 '\001', dest=0x801bfe850, altdest=0x801bfe850, completionTag=0x7fffe540) at pquery.c:813
#14 0x0062c2ff in exec_simple_query (query_string=0x801bfe030 "COPY public.tablename (ts, caid, casn, fn, data, textdata) TO stdout;") at postgres.c:1004
#15 0x0063025a in PostgresMain (argc=4, argv=0x801b3a780, username=0x801b3a750 "pgsql") at postgres.c:3631
#16 0x005f739f in BackendRun (port=0x801b0a000) at postmaster.c:3207
#17 0x005f691a in BackendStartup (port=0x801b0a000) at postmaster.c:2830
#18 0x005f4526 in ServerLoop () at postmaster.c:1274
#19 0x005f3f5c in PostmasterMain (argc=4, argv=0x7fffeb58) at postmaster.c:1029
#20 0x0058f34a in main (argc=4, argv=0x7fffeb58) at main.c:188





Re: [GENERAL] Invalid memory alloc request

2009-08-25 Thread Guy Helmer

Tom Lane wrote:

 Guy Helmer ghel...@palisadesys.com writes:

  Tom Lane wrote:

   Normally I'd say data corruption, but it is odd if you got the
   identical message from two different machines.  Can you reproduce
   it with a debugger attached?  If so, a backtrace from the call of
   errfinish might be useful.

  Yes, here is the backtrace.

 Well, that looks just about like you'd expect for a bytea column.

 Hmm ... you mentioned 500MB total in the textdata column.  Is it
 possible that that's nearly all in one entry?  It's conceivable
 that the text representation of the entry is simply too large.
 (The next question of course would be how you got the entry in there,
 but maybe it was submitted in binary protocol, or built by
 concatenation.)

 regards, tom lane
On the system where I captured the backtrace, there are several 
400MB-long entries in the textdata column.  I inserted these entries by 
preparing an "INSERT (..., textdata) VALUES (..., $1)" statement, mmap'ing 
the data from a file into memory, and executing the command with 
PQexecParams().
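
As a point of reference, a quick way to confirm how large the biggest stored 
entries actually are (assuming the column and table names used earlier in this 
thread) is something like:

SELECT max(octet_length(data))     AS max_data_bytes,
       max(octet_length(textdata)) AS max_textdata_bytes
  FROM tablename;

If the largest textdata value really is on the order of 400MB, the escaped text 
representation produced by byteaout() during COPY ... TO stdout could plausibly 
exceed the server's roughly 1GB per-allocation limit, which would be consistent 
with the 1376006425-byte request in the backtrace above.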


Is there a quantifiable limit to the size of values I insert into a 
bytea column?  I haven't found a limit documented anywhere...


Thanks,
Guy

--