Re: Fwd: Re: [GENERAL] SSDD reliability

2011-05-05 Thread Florian Weimer
* Greg Smith:

 Intel claims their Annual Failure Rate (AFR) on their SSDs in IT
 deployments (not OEM ones) is 0.6%.  Typical measured AFR rates for
 mechanical drives are around 2% during their first year, spiking to 5%
 afterwards.  I suspect that Intel's numbers are actually much better
 than the other manufacturers here, so a SSD from anyone else can
 easily be less reliable than a regular hard drive still.

I'm a bit concerned with usage-dependent failures.  Presumably, two SSDs
in a RAID-1 configuration wear down in the same way, and it would
be rather inconvenient if they failed at the same point.  With hard
disks, this doesn't seem to happen; even bad batches fail pretty much
randomly.

-- 
Florian Weimer            fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 64-bit PostgreSQL AMI for CentOS 5.5?

2011-05-05 Thread Kunal Ashar

I'm attempting to get PostgreSQL-PostGIS in the Amazon EC2 cloud.
Does anyone have any information on where I can find a 64-bit AMI for 
Linux CentOS 5.5 + PostgreSQL 9.0.4 + PostGIS 1.5.2?

Thanks in advance,
Kunal





Re: [GENERAL] Bidirectional replication

2011-05-05 Thread tushar nehete
Hi,
I went through the steps from the Bucardo site as follows:

[root@billingtest1 Bucardo-4.4.3]# perl Makefile.PL
WARNING: LICENSE is not a known parameter.
Warning: prerequisite DBD::Pg 2.0 not found. We have 1.49.
Warning: prerequisite ExtUtils::MakeMaker 6.32 not found. We have 6.30.
'LICENSE' is not a known MakeMaker parameter name.
Writing Makefile for Bucardo
[root@billingtest1 Bucardo-4.4.3]# make
cp bucardo_ctl blib/script/bucardo_ctl
/usr/bin/perl -MExtUtils::MY -e 'MY->fixin(shift)' blib/script/bucardo_ctl

Manifying blib/man1/bucardo_ctl.1pm
Manifying blib/man3/Bucardo.3pm
[root@billingtest1 Bucardo-4.4.3]# make install
Installing /usr/bin/bucardo_ctl
Writing
/usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/auto/Bucardo/.packlist

Appending installation info to
/usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/perllocal.pod


I went through the script and found that this is what is happening behind the scenes:

bucardo=#
bucardo=#
bucardo=# CREATE OR REPLACE FUNCTION bucardo.plperlu_test()
bucardo-# RETURNS TEXT
bucardo-# LANGUAGE plperlu
bucardo-# AS $bc$
bucardo$# return 'Pl/PerlU was successfully installed';
bucardo$# $bc$;
ERROR: didn't get a return item from mksafefunc
bucardo=#

So there must be something wrong in mksafefunc or in that Perl file.

Any solution?

On Wed, May 4, 2011 at 1:14 PM, tushar nehete tpneh...@gmail.com wrote:

 Hi, thanks to all.
 John, I tried the Perl built into RHEL 5.5 but I got some errors, so I
 downloaded ActivePerl 5.12 and installed it.
 After that, when I started the installation, I got stuck with this error:

 FAILED! (psql:/usr/local/share/bucardo/bucardo.schema:40: ERROR:  didn't
 get a return item from mksafefunc )

 Can anyone help me deal with this error?

 Thanks,
 Tushar




 On Wed, May 4, 2011 at 12:59 PM, Greg Smith g...@2ndquadrant.com wrote:

 Merlin Moncure wrote:

 I know some people do some cool, usable things with that stuff,
 but the whole concept seems awfully awkward to me.  I suppose I'm a
 crotchety, cane shaking fundamentalist...


 It's possible--do you sometimes find yourself yelling at young developers,
 telling them to stop replicating in your yard?

 --
 Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD









Re: [GENERAL] Bidirectional replication

2011-05-05 Thread Andrew Sullivan
On Thu, May 05, 2011 at 03:07:20PM +0530, tushar nehete wrote:
 Hi,
 I went through the steps from the Bucardo site as follows:
 
 [root@billingtest1 Bucardo-4.4.3]# perl Makefile.PL
 WARNING: LICENSE is not a known parameter.
 Warning: prerequisite DBD::Pg 2.0 not found. We have 1.49.
 Warning: prerequisite ExtUtils::MakeMaker 6.32 not found. We have 6.30.

I don't know anything about Bucardo, but it sure looks to me like you
need to do some upgrading before continuing past this point.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



[GENERAL] Queries Regarding Postgresql Replication

2011-05-05 Thread Nithya Rajendran
Hi,
I have a couple of queries about Postgres replication:

1. How do I find out whether the current Postgres instance is running as a master or a slave?

2. What is the procedure to turn a failed master into a working slave?

Could you please clarify the above queries?
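
For what it's worth, on 9.0 the first question can be answered from SQL: pg_is_in_recovery() returns true on a streaming-replication standby and false on a master. A minimal sketch:

```sql
-- Returns 't' on a standby (slave), 'f' on a master (PostgreSQL 9.0+)
SELECT pg_is_in_recovery();
```

For the second question, on 9.0 the usual approach is to take a fresh base backup from the new master (e.g. with pg_start_backup()/pg_stop_backup() plus rsync) and restart the failed machine with a recovery.conf pointing at the new master; this release has no built-in "rewind" shortcut.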

Thanks and Regards
Nithya R
Lead Engineer| Tel-Pw-LTE| AMB-5
HCL Technologies | Chennai- 600 058

Re: [GENERAL] permission denied for schema even as superuser.

2011-05-05 Thread Chris Young
Apologies, I have already solved this.

I forgot to grant usage on schema nms to foo; where foo is the owner of
mainview_teststatusevent.
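
In SQL, the missing grant looks like this (a sketch; "foo" here stands for whatever role owns the referenced table):

```sql
-- Allow role foo to resolve objects inside schema nms
GRANT USAGE ON SCHEMA nms TO foo;
```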

Cheers,
Chris

On Thu, May 5, 2011 at 7:03 PM, Chris Young ch...@chris.net.au wrote:

 Greetings,

 I'm trying to perform the following query, but receive a perplexing error,
 even as superuser (postgres):

 umdb_db=# insert into mainview_teststatusevent
 (timestamp,host_id,test_id,old_status_id,new_status_id) values(now(),
 726,4,6,1);
 ERROR:  permission denied for schema nms
 LINE 1: SELECT 1 FROM ONLY nms.mainview_status x WHERE id OPER...
^
 QUERY:  SELECT 1 FROM ONLY nms.mainview_status x WHERE id
 OPERATOR(pg_catalog.=) $1 FOR SHARE OF x

 Could you please let me know how I might troubleshoot and/or fix this
 'permission denied' error? I'm running 9.0.4.

 In case it's relevant, the events leading up to this error are that I tried
 migrating all my tables and sequences from the public schema to my newly
 created nms schema. I simply did this:

 create schema nms;
 alter table foo set schema nms;
 alter sequence foo_id_seq set schema nms;
 For all my tables and sequences.

 The query worked when the tables were in the public schema, but not after I
 tried to change them to the new schema.

 I already searched the mailing list for this issue, and I could only find
 old posts (circa 2007) that suggested it might be a bug, and I expect it's
 more likely I've just done something stupid or missed something out.

 Cheers,
 Chris Young



Re: Fwd: Re: [GENERAL] SSDD reliability

2011-05-05 Thread David Boreham

On 5/5/2011 2:36 AM, Florian Weimer wrote:


I'm a bit concerned with usage-dependent failures.  Presumably, two SSDs
in a RAID-1 configuration wear down in the same way, and it would
be rather inconvenient if they failed at the same point.  With hard
disks, this doesn't seem to happen; even bad batches fail pretty much
randomly.

FWIW this _can_ happen with traditional drives: we had a bunch of WD
300G VelociRaptor drives that had a firmware bug related to a 32-bit
counter roll-over. This happened at exactly the same time for all drives
in a machine (because the counter counted since power-up time). Needless
to say this was quite frustrating!





Re: [GENERAL] SSDD reliability

2011-05-05 Thread David Boreham

On 5/4/2011 11:50 PM, Toby Corkindale wrote:


In what way has the SMART read failed?
(I get the relevant values out successfully myself, and have Munin 
graph them.)

Mis-parse :) It was my _attempts_ to read SMART that failed.
Specifically, I was able to read a table of numbers from the drive, but
none of the numbers looked particularly useful or likely to be a "time
to live" number. Similar to traditional drives, where you get this table
of numbers that are either zero or random, that you look at saying
"Huh?", all of which are flagged as failing. Perhaps I'm using the
wrong SMART-grokking tools?





I do have to wonder if this Portman Wills guy was somehow "Doing It 
Wrong" to get a 100% failure rate over eight disks...



There are people out there who are especially highly charged.
So if he didn't wear out the drives, the next most likely cause I'd 
suspect is that he ESD zapped them.






Re: [GENERAL] postgres segfaulting on pg_restore

2011-05-05 Thread Chris Curvey


 Hmm, that's pretty interesting.  Possibly it's just another
 manifestation of something scribbling past the end of its allocated
 chunk, but my credence for that theory is way down if there's no add-on
 code involved.  Does the postmaster log show any messages like
WARNING: detected write past chunk end in ...
 before the crash?

regards, tom lane


Nope.  Just a bunch of messages saying "checkpoints occurring too
frequently", then "server process (PID 123456) was terminated by signal 11:
Segmentation fault".


Re: [GENERAL] Rearranging simple where clauses

2011-05-05 Thread Igor Neyman


-Original Message-
From: Michael Graham [mailto:mgra...@bloxx.com] 
Sent: Wednesday, May 04, 2011 11:59 AM
To: pgsql-general@postgresql.org
Subject: Re: Rearranging simple where clauses

On Wed, 2011-05-04 at 11:49 -0400, Tom Lane wrote:
 Well, you failed to show us any concrete examples of the cases you
 were looking at, but no I don't think the planner necessarily likes
 all the constants on one side.  Most likely the win cases are where
 one side of a WHERE-condition operator exactly matches an index, so
 you'd need to be looking for places where rearrangement could make
 that happen. 

The reason I never showed you any was because I don't have any I was
just curious.  But yeah making one side match an index exactly is
probably the biggest win.

I.N.

I think it'll probably be the only win, not the biggest: sometimes big,
sometimes small.

But what if there is more than one index based on the column in question?
Which one is the optimizer supposed to satisfy by rearranging the where clause?
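
As a toy illustration of the kind of rearrangement under discussion, assuming a hypothetical table t with a plain b-tree index on col:

```sql
-- This form cannot use the index on col, because the indexed
-- expression is col, not col + 1:
SELECT * FROM t WHERE col + 1 = 10;

-- Rearranged so the bare column sits alone on one side, the
-- planner can use the index:
SELECT * FROM t WHERE col = 9;
```

(The planner does not perform this algebraic rewrite itself, which is what the thread is about.)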

Regards,
Igor Neyman




Re: [GENERAL] SSDD reliability

2011-05-05 Thread Scott Ribe
On May 4, 2011, at 9:34 PM, David Boreham wrote:

 So ok, yeah...I said that chips don't just keel over and die mid-life
 and you came up with the one counterexample in the history of
 the industry

Actually, any of us who really tried could probably come up with a dozen 
examples--more if we've been around for a while. Original design cutting 
corners on power regulation; final manufacturers cutting corners on specs; 
component manufacturers cutting corners on specs or selling outright 
counterfeit parts...

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







[GENERAL] Generating fields in views with search/replace?

2011-05-05 Thread Asfand Qazi (Sanger Institute)
Hi,

Say I have a table with fields 'template' and 'original_letter', and
'new_selected_letter'.  'template' could be 'abcdefg0abcdefg',
original_letter could be 'A' and new_selected_letter could be 'B'.

I want a view where I see 2 fields: 'original' as 'abcdefgAabcdefg'
and 'new_selected' as 'abcdefgBabcdefg', where the view has replaced
the '0' with original_letter or new_selected_letter respectively.

Sorry for the silly example, but is there a way for me to do this in
PostgreSQL?  I'm using PostgreSQL 9.

Thanks.  Regards,
   Asfand Qazi

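For what it's worth, a view along these lines can be built with the replace() string function; this sketch assumes a hypothetical table name "letters" with the columns described above:

```sql
-- Substitute the '0' placeholder with each letter column
CREATE VIEW letters_view AS
SELECT replace(template, '0', original_letter)     AS original,
       replace(template, '0', new_selected_letter) AS new_selected
FROM letters;
```

With template = 'abcdefg0abcdefg', original_letter = 'A' and new_selected_letter = 'B', the view yields 'abcdefgAabcdefg' and 'abcdefgBabcdefg'.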


Re: [GENERAL] Generating fields in views with search/replace?

2011-05-05 Thread Merlin Moncure
On Thu, May 5, 2011 at 9:09 AM, Asfand Qazi (Sanger Institute)
aq2.san...@gmail.com wrote:
 Hi,

 Say I have a table with fields 'template' and 'original_letter', and
 'new_selected_letter'.  'template' could be 'abcdefg0abcdefg',
 original_letter could be 'A' and new_selected_letter could be 'B'.

 I want a view where I see 2 fields: 'original' as 'abcdefgAabcdefg'
 and 'new_selected' as 'abcdefgBabcdefg', where the view has replaced
 the '0' with original_letter or new_selected_letter respectively.

 Sorry for the silly example, but is there a way for me to do this in
 PostgreSQL?  I'm using PostgreSQL 9.

The mechanics of making a view do something like that are trivial, but
your example doesn't make clear why the character '0' is special and
is the one translated.

merlin



Re: [GENERAL] SSDD reliability

2011-05-05 Thread David Boreham

On 5/5/2011 8:04 AM, Scott Ribe wrote:


Actually, any of us who really tried could probably come up with a dozen 
examples--more if we've been around for a while. Original design cutting 
corners on power regulation; final manufacturers cutting corners on specs; 
component manufacturers cutting corners on specs or selling outright 
counterfeit parts...


These are excellent examples of failure causes for electronics, but they are
not counter-examples. They're unrelated to the discussion about SSD
early lifetime hard failures.





Re: [GENERAL] undead index

2011-05-05 Thread Cédric Villemain
2011/5/4 Jens Wilke jens.wi...@affinitas.de:

 This index was deleted several weeks ago.

[...]

 after pg_dumpall|psql from 8.4 to 9.0 the undead index revived on the target
 DB:

I understood that you dropped an index, and when you dump/restore you
get your index again.
Did I miss something?

-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



[GENERAL] Cannot reproduce why a query is slow

2011-05-05 Thread John Cheng
Hi,
We have certain types of query that seems to take about 900ms to run
according to postgres logs. When I try to run the same query via
command line with EXPLAIN ANALYZE, the query finishes very quickly.
What should I do to try to learn more about why it is running slowly?

The query is a bit complex, as it is generated by code, but I tried to
format it for easier reading. I've also replaced actual data with fake
data to protect personal information.

EXPLAIN ANALYZE
SELECT lm.lead_id as leadId,
lm.reporting_date as createdDay,
lrd.processing_state as processingState,
lrd.dealer_code as dealerCode
FROM lead_matching lm inner join lead_reporting_data lrd
on lrd.lead_id = lm.lead_id
WHERE ((lm.reporting_date >= '2011-04-05') AND (lm.reporting_date <=
'2011-05-05')
AND (lrd.dealer_region = 'SO') AND (lrd.dealer_area = '02')
AND (lm.lead_id <> 2645059)
AND (lrd.processing_state <> 'REJECTED') AND
((lrd.processing_state = 'NEW') OR (lrd.processing_state =
'PROCESSING') OR (lrd.processing_state = 'DELIVER') OR
(lrd.processing_state = 'DELIVERED') OR (lrd.processing_state =
'DONE') OR (lrd.processing_state = 'ERROR'))
AND ((lm.customer_phone && '{55}') OR
 (lm.customer_email && '{lisasm...@example.com}') OR
 ((lm.customer_lname = 'smith') AND (lm.customer_postal_code
 && '{,}') AND
((lm.customer_fname = 'l') OR (lm.customer_address &&
'{,}')))))
ORDER BY lm.reporting_date asc, lrd.lead_id asc

Sort  (cost=17382.02..17382.06 rows=16 width=31) (actual
time=54.762..54.762 rows=0 loops=1)
   Sort Key: lm.reporting_date, lrd.lead_id
   Sort Method:  quicksort  Memory: 25kB
   ->  Nested Loop  (cost=3070.50..17381.70 rows=16 width=31) (actual
time=54.749..54.749 rows=0 loops=1)
 ->  Bitmap Heap Scan on lead_matching lm
(cost=3070.50..7256.87 rows=1157 width=12) (actual time=54.746..54.746
rows=0 loops=1)
   Recheck Cond: (((customer_phone &&
'{55}'::text[]) OR (customer_email &&
'{lisasm...@example.com}'::text[]) OR ((customer_lname)::text =
'smith'::text)) AND (reporting_date >= '2011-04-05'::date) AND
(reporting_date <= '2011-05-05'::date))
   Filter: ((lead_id <> 2645059) AND ((customer_phone &&
'{55}'::text[]) OR (customer_email &&
'{lisasm...@example.com}'::text[]) OR (((customer_lname)::text =
'smith'::text) AND (customer_postal_code && '{,}'::text[]) AND
((customer_fname = 'l'::bpchar) OR (customer_address &&
'{,}'::text[])))))
   ->  BitmapAnd  (cost=3070.50..3070.50 rows=1164
width=0) (actual time=54.358..54.358 rows=0 loops=1)
 ->  BitmapOr  (cost=649.49..649.49 rows=26456
width=0) (actual time=0.814..0.814 rows=0 loops=1)
   ->  Bitmap Index Scan on
lead_matching_phone_idx  (cost=0.00..315.54 rows=13182 width=0)
(actual time=0.055..0.055 rows=4 loops=1)
 Index Cond: (customer_phone &&
'{55}'::text[])
   ->  Bitmap Index Scan on
lead_matching_email_idx  (cost=0.00..327.57 rows=13182 width=0)
(actual time=0.029..0.029 rows=1 loops=1)
 Index Cond: (customer_email &&
'{lisasm...@example.com}'::text[])
   ->  Bitmap Index Scan on
lead_matching_name_idx  (cost=0.00..5.51 rows=92 width=0) (actual
time=0.726..0.726 rows=1225 loops=1)
 Index Cond: ((customer_lname)::text =
'smith'::text)
 ->  Bitmap Index Scan on
lead_matching_reporting_date_idx  (cost=0.00..2420.48 rows=116019
width=0) (actual time=52.931..52.931 rows=114892 loops=1)
   Index Cond: ((reporting_date >=
'2011-04-05'::date) AND (reporting_date <= '2011-05-05'::date))
 ->  Index Scan using lead_reporting_data_pkey on
lead_reporting_data lrd  (cost=0.00..8.74 rows=1 width=19) (never
executed)
   Index Cond: (lrd.lead_id = lm.lead_id)
   Filter: (((lrd.processing_state)::text <>
'REJECTED'::text) AND ((lrd.dealer_region)::text = 'SO'::text) AND
((lrd.dealer_area)::text = '02'::text) AND
(((lrd.processing_state)::text = 'NEW'::text) OR
((lrd.processing_state)::text = 'PROCESSING'::text) OR
((lrd.processing_state)::text = 'DELIVER'::text) OR
((lrd.processing_state)::text = 'DELIVERED'::text) OR
((lrd.processing_state)::text = 'DONE'::text) OR
((lrd.processing_state)::text = 'ERROR'::text)))


The lead_matching and lead_reporting_data table both contain 2637054
records. They look like:

       Table "public.lead_matching"
        Column        |          Type          | Modifiers
----------------------+------------------------+-----------
 lead_id              | bigint                 | not null
 reporting_date       | date                   |
 customer_fname       | character(1)           |
 customer_lname       | character varying(128) |
 customer_postal_code | text[]                 |
 customer_email       | text[]                 |
 customer_phone       | text[]                 |

[GENERAL] permission denied for schema even as superuser.

2011-05-05 Thread Chris Young
Greetings,

I'm trying to perform the following query, but receive a perplexing error,
even as superuser (postgres):

umdb_db=# insert into mainview_teststatusevent
(timestamp,host_id,test_id,old_status_id,new_status_id) values(now(),
726,4,6,1);
ERROR:  permission denied for schema nms
LINE 1: SELECT 1 FROM ONLY nms.mainview_status x WHERE id OPER...
   ^
QUERY:  SELECT 1 FROM ONLY nms.mainview_status x WHERE id
OPERATOR(pg_catalog.=) $1 FOR SHARE OF x

Could you please let me know how I might troubleshoot and/or fix this
'permission denied' error? I'm running 9.0.4.

In case it's relevant, the events leading up to this error are that I tried
migrating all my tables and sequences from the public schema to my newly
created nms schema. I simply did this:

create schema nms;
alter table foo set schema nms;
alter sequence foo_id_seq set schema nms;
For all my tables and sequences.

The query worked when the tables were in the public schema, but not after I
tried to change them to the new schema.

I already searched the mailing list for this issue, and I could only find
old posts (circa 2007) that suggested it might be a bug, and I expect it's
more likely I've just done something stupid or missed something out.

Cheers,
Chris Young




Re: [GENERAL] permission denied for schema even as superuser.

2011-05-05 Thread John Cheng
Could it be triggering a function that is defined with SECURITY
DEFINER and the definer of the function does not have the right
permissions?

On Thu, May 5, 2011 at 4:03 AM, Chris Young ch...@chris.net.au wrote:
 Greetings,
 I'm trying to perform the following query, but receive a perplexing error,
 even as superuser (postgres):
 umdb_db=# insert into mainview_teststatusevent
 (timestamp,host_id,test_id,old_status_id,new_status_id) values(now(),
 726,4,6,1);
 ERROR:  permission denied for schema nms
 LINE 1: SELECT 1 FROM ONLY nms.mainview_status x WHERE id OPER...
                            ^
 QUERY:  SELECT 1 FROM ONLY nms.mainview_status x WHERE id
 OPERATOR(pg_catalog.=) $1 FOR SHARE OF x
 Could you please let me know how I might troubleshoot and/or fix this
 'permission denied' error? I'm running 9.0.4.
 In case it's relevant, the events leading up to this error are that I tried
 migrating all my tables and sequences from the public schema to my newly
 created nms schema. I simply did this:
 create schema nms;
 alter table foo set schema nms;
 alter sequence foo_id_seq set schema nms;
 For all my tables and sequences.
 The query worked when the tables were in the public schema, but not after I
 tried to change them to the new schema.
 I already searched the mailing list for this issue, and I could only find
 old posts (circa 2007) that suggested it might be a bug, and I expect it's
 more likely I've just done something stupid or missed something out.
 Cheers,
 Chris Young



-- 
---
John L Cheng



Re: [GENERAL] postgres segfaulting on pg_restore

2011-05-05 Thread Tom Lane
Chris Curvey ch...@chriscurvey.com writes:
 Hmm, that's pretty interesting.  Possibly it's just another
 manifestation of something scribbling past the end of its allocated
 chunk, but my credence for that theory is way down if there's no add-on
 code involved.  Does the postmaster log show any messages like
 WARNING: detected write past chunk end in ...
 before the crash?

 Nope.  Just a bunch of messages saying "checkpoints occurring too
 frequently", then "server process (PID 123456) was terminated by signal 11:
 Segmentation fault".

Hmph.  Well, I'm out of ideas here.  I think you're going to need to put
some effort into producing a self-contained test case so that others can
debug this ...

regards, tom lane



Re: [GENERAL] permission denied for schema even as superuser.

2011-05-05 Thread Tom Lane
Chris Young ch...@chris.net.au writes:
 Greetings,
 I'm trying to perform the following query, but receive a perplexing error,
 even as superuser (postgres):

 umdb_db=# insert into mainview_teststatusevent
 (timestamp,host_id,test_id,old_status_id,new_status_id) values(now(),
 726,4,6,1);
 ERROR:  permission denied for schema nms
 LINE 1: SELECT 1 FROM ONLY nms.mainview_status x WHERE id OPER...
^
 QUERY:  SELECT 1 FROM ONLY nms.mainview_status x WHERE id
 OPERATOR(pg_catalog.=) $1 FOR SHARE OF x

 Could you please let me know how I might troubleshoot and/or fix this
 'permission denied' error? I'm running 9.0.4.

That's a foreign key checking query.  FK checks are done as the owner of
the target table, not as the user who did the original query.  So your
problem is that the owner of mainview_teststatusevent lacks permissions
to access the other table (or more specifically, the schema it's in).
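
A hedged sketch of how one might act on this: find the owner of the referencing table, then grant that role usage on the schema (the role name below is a placeholder for whatever the first query reports):

```sql
-- Who owns mainview_teststatusevent?
SELECT tableowner FROM pg_tables
WHERE tablename = 'mainview_teststatusevent';

-- Then, substituting the owner role reported above:
GRANT USAGE ON SCHEMA nms TO some_owner_role;
```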

regards, tom lane



Re: [GENERAL] Cannot reproduce why a query is slow

2011-05-05 Thread John Cheng
On Thu, May 5, 2011 at 8:54 AM, Andrew Sullivan a...@crankycanuck.ca wrote:
 On Thu, May 05, 2011 at 08:02:46AM -0700, John Cheng wrote:

 We have certain types of query that seems to take about 900ms to run
 according to postgres logs. When I try to run the same query via
 command line with EXPLAIN ANALYZE, the query finishes very quickly.

 Just a couple ideas.

 First, when you do this via command line, presumably the conditions
 that set up the query aren't present.  Is it possible that there's
 been a lot of activity on the table leading to dead rows that have
 been cleaned up by autovacuum by the time you come along?  (Or that
 the table has otherwise changed so that you are getting the benefit of
 indexes that the query wasn't using?)

 I especially note that

 WHERE ((lm.reporting_date >= '2011-04-05') AND (lm.reporting_date <=
 '2011-05-05')
     AND (lrd.dealer_region = 'SO') AND (lrd.dealer_area = '02')
     AND (lm.lead_id <> 2645059)
     AND (lrd.processing_state <> 'REJECTED') AND
 ((lrd.processing_state = 'NEW') OR (lrd.processing_state =
 'PROCESSING') OR (lrd.processing_state = 'DELIVER') OR
 (lrd.processing_state = 'DELIVERED') OR (lrd.processing_state =
 'DONE') OR (lrd.processing_state = 'ERROR'))

 these all look like the sort of status values that might change as the
 result of batch operations.

 Similarly, you might be running into I/O limits.  If this is a large
 report that is running at the same time as batch loads and so on of
 updates, you can find the query is very slow just because the machine
 is busy.

 Finally, you're not standing in line behind any locks, are you?

 Anyway, those are the things I'd start with.

 A

 --
 Andrew Sullivan
 a...@crankycanuck.ca

I have a couple of queries that allow me to see the active locks in
the database. It might help me see if these queries are blocked by
other locking queries.

In terms of IO limits, there are no other reports that are running.
What is the appropriate way to see if IO is the issue? I think the
900ms time is due to the database fetching data from disk. Can I force
the command line version to not use the memory cache and see if it
takes around 900ms in that case?

-- 
---
John L Cheng



Re: [GENERAL] Cannot reproduce why a query is slow

2011-05-05 Thread Andrew Sullivan
On Thu, May 05, 2011 at 09:27:47AM -0700, John Cheng wrote:

 I have a couple of queries that allow me to see the active locks in
 the database. It might help me see if these queries are blocked by
 other locking queries.

Yes.  The pg_locks view is your friend here.
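
One common way to use it (a sketch for 9.0-era catalogs; pg_stat_activity's column names changed in later releases) is to join pg_locks against pg_stat_activity and look for ungranted locks:

```sql
-- Sessions currently waiting on a lock, with what they are running
SELECT a.procpid, a.current_query, l.locktype, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;
```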


 In terms of IO limits, there are no other reports that are running.
 What is the appropriate way to see if IO is the issue? I think the
 900ms time is due to the database fetching data from disk. Can I force
 the command line version to not use the memory cache and see if it
 takes around 900ms in that case?

No, but you could run iostat during the period that this is happening
and see whether it shows you a big spike at that time.  Also, the
pg_statio_user_tables data might help you.  I think to make useful
inferences, you'd probably have to clear the statistics before and
after this runs, but you could do that and then compare what you get
when you look at it by hand.
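
For example, resetting the counters and then inspecting the hit/read split for the tables in question might look like:

```sql
SELECT pg_stat_reset();  -- clears this database's statistics counters

-- After the slow query has run:
SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
WHERE relname IN ('lead_matching', 'lead_reporting_data');
```

A large heap_blks_read relative to heap_blks_hit would point at disk I/O rather than planning as the cause.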

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



[GENERAL] dblink() from GridSQL

2011-05-05 Thread Sam Nelson
Hi List,

We have a customer who is trying to migrate a few PostgresPlus instances to
GridSQL clusters.  They have a process that pulls data from another server
using dblink every night, and we're trying to replicate that on the GridSQL
instance, but grid is being a bit of a pain.

Grid doesn't seem to allow functions in from statements, and, of course, it
spits out errors about functions returning records being called in the wrong
context if we just try select dblink(foo, bar); (we had to try it).

Has anyone else run into this specific issue?  Is there a known workaround?
 Any ideas on what else we should try?
---
===
Samuel Nelson
Consistent State
www.consistentstate.com
303-955-0509
===


Re: [GENERAL] dblink() from GridSQL

2011-05-05 Thread Merlin Moncure
On Thu, May 5, 2011 at 12:13 PM, Sam Nelson s...@consistentstate.com wrote:
 Hi List,
 We have a customer who is trying to migrate a few PostgresPlus instances to
 GridSQL clusters.  They have a process that pulls data from another server
 using dblink every night, and we're trying to replicate that on the GridSQL
 instance, but grid is being a bit of a pain.
 Grid doesn't seem to allow functions in from statements, and, of course, it
 spits out errors about functions returning records being called in the wrong
 context if we just try select dblink(foo, bar); (we had to try it).
 Has anyone else run into this specific issue?  Is there a known workaround?
  Any ideas on what else we should try?

have you considered wrapping the output of the dblink query in a view?
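
Such a wrapper might look like the following sketch (connection string and column list are placeholders; whether GridSQL will accept the view is exactly the open question):

```sql
-- Hide the dblink() call behind an ordinary view
CREATE VIEW remote_foo AS
SELECT *
FROM dblink('host=remotehost dbname=remotedb',
            'SELECT id, name FROM foo')
     AS t(id integer, name text);
```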

merlin



Re: [GENERAL] dblink() from GridSQL

2011-05-05 Thread Scott Mead
On Thu, May 5, 2011 at 1:40 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Thu, May 5, 2011 at 12:13 PM, Sam Nelson s...@consistentstate.com
 wrote:
  Hi List,
  We have a customer who is trying to migrate a few PostgresPlus instances
 to
  GridSQL clusters.  They have a process that pulls data from another
 server
  using dblink every night, and we're trying to replicate that on the
 GridSQL
  instance, but grid is being a bit of a pain.
  Grid doesn't seem to allow functions in from statements, and, of course,
 it
  spits out errors about functions returning records being called in the
 wrong
  context if we just try select dblink(foo, bar); (we had to try it).
  Has anyone else run into this specific issue?


GridSQL itself doesn't support functions.


  Is there a known workaround?
   Any ideas on what else we should try?


You'd have to present the data to be partitioned to the gsql controller for
partitioning to happen properly, or use the high-speed import that it comes
with.  Could you dump the data to an intermediary csv and then push it at
the import utility?

--Scott


 have you considered wrapping the output of the dblink query in a view?

 merlin

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] psql tunneling thru a middle server

2011-05-05 Thread jtkells
Im trying to connect my WS to a postgresql database (destination) via
a middle server 
I.e.
WS  --->  Middle server  --->  Database server
          172.x.2.4            172.x.4.12


I can create a SSH tunnel from my WS to the middle server but not sure
how or if I can make the middle server pass connections to and from my
ws and database.  BTW,  they run the DB on a non standard postrgres
port.  Any help would be appreciated
Thanks 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Multiple table relationship constraints

2011-05-05 Thread Jack Christensen
What is the best way to handle multiple table relationships where 
attributes of the tables at the ends of the chain must match?


Example:

CREATE TABLE achievements(
achievement_id serial PRIMARY KEY,
...
);

CREATE TABLE achievement_versions(
achievement_version_id serial PRIMARY KEY,
achievement_id integer NOT NULL REFERENCES achievements,
...
);

CREATE TABLE achievement_attempts(
achievement_attempt_id serial PRIMARY KEY,
achievement_version_id integer NOT NULL REFERENCES achievement_versions,
...
);

CREATE TABLE actions(
action_id serial PRIMARY KEY,
...
)

CREATE TABLE achievement_attempt_actions(
achievement_attempt_id integer NOT NULL REFERENCES achievement_attempts,
action_id integer NOT NULL REFERENCES actions,
PRIMARY KEY( achievement_attempt_id, action_id)
);


The achievement_attempt_actions table links actions to 
achievement_attempts. For a link to be valid a number of attributes of 
actions must match attributes of achievements and achievement_attempts. 
This means an update to any of these 5 tables could invalidate the 
chain. How can I eliminate the possibility for this type of erroneous data?


I have come up with 4 possibilities.

1. Composite keys -- I could include all the attributes that must match 
on all the tables through the chain and let foreign key constraints 
handle it. This could work but it feels wrong to be duplicating 
attributes. It also is inconvenient (but possible) with my ORM.


2. Triggers -- I can use triggers to check every change on all 5 tables 
that could possibly cause an invalid chain. I have done this before and 
it does work -- but it can be error prone.


3. Check a materialized view -- Add triggers to all 5 tables to keep a 
materialized view up to date. Check constraints could validate the 
materialized view.


4. Validate application side -- this can work well, but it leaves the 
hole of a bug in the application or a direct SQL statement going bad.
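
A sketch of option 1 on the first two tables of the chain (inventing a single matching attribute, category, purely for illustration):

```sql
-- Carry the attribute that must match through the chain and let a
-- composite foreign key enforce it.  The extra UNIQUE constraint
-- gives the composite FK a target to reference.
CREATE TABLE achievements(
    achievement_id serial PRIMARY KEY,
    category varchar NOT NULL,
    UNIQUE (achievement_id, category)
);

CREATE TABLE achievement_versions(
    achievement_version_id serial PRIMARY KEY,
    achievement_id integer NOT NULL,
    category varchar NOT NULL,
    FOREIGN KEY (achievement_id, category)
        REFERENCES achievements (achievement_id, category)
        ON UPDATE CASCADE,
    UNIQUE (achievement_version_id, category)
);
```

The same pattern repeats down to achievement_attempt_actions, which is exactly the duplication that makes this option feel heavy.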



Anyone have any advice on the best way to handle this?

--
Jack Christensen
ja...@hylesanderson.edu


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread Rick Genter
On Thu, May 5, 2011 at 3:20 PM, Jack Christensen ja...@hylesanderson.eduwrote:

 What is the best way to handle multiple table relationships where
 attributes of the tables at the ends of the chain must match?

 Example:

 CREATE TABLE achievements(
 achievement_id serial PRIMARY KEY,
 ...
 );

 CREATE TABLE achievement_versions(
 achievement_version_id serial PRIMARY KEY,
 achievement_id integer NOT NULL REFERENCES achievements,
 ...
 );

 CREATE TABLE achievement_attempts(
 achievement_attempt_id serial PRIMARY KEY,
 achievement_version_id integer NOT NULL REFERENCES achievement_versions,
 ...
 );

 CREATE TABLE actions(
 action_id serial PRIMARY KEY,
 ...
 )

 CREATE TABLE achievement_attempt_actions(
 achievement_attempt_id integer NOT NULL REFERENCES achievement_attempts,
 action_id integer NOT NULL REFERENCES actions,
 PRIMARY KEY( achievement_attempt_id, action_id)
 );


 The achievement_attempt_actions table links actions to
 achievement_attempts. For a link to be valid a number of attributes of
 actions must match attributes of achievements and achievement_attempts. This
 means an update to any of these 5 tables could invalidate the chain. How can
 I eliminate the possibility for this type of erroneous data?


I might not be understanding your question, but isn't that what your foreign
key references do? For example, you can't update achievement_attempt_id in
the achievement_attempt table if there is an achievement_attempt_actions
record that refers to it since that would break the reference. (Not that you
want to be updating primary key values in the first place...)
-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] psql tunneling thru a middle server

2011-05-05 Thread Rick Chu
On Thu, 2011-05-05 at 15:05 -0400, jtke...@verizon.net wrote:
 Im trying to connect my WS to a postgresql database (destination) via
 a middle server 
 I.e.
 WS  --->  Middle server  --->  Database server
           172.x.2.4            172.x.4.12
 
 
 I can create a SSH tunnel from my WS to the middle server but not sure
 how or if I can make the middle server pass connections to and from my
 ws and database.  BTW,  they run the DB on a non standard postrgres
 port.  Any help would be appreciated
 Thanks 
 

1. From your WS open two terminal(ssh) windows.

2. From Terminal#1 startup the ssh tunnel using: 
ssh -L 5432:database.com:8432 yo...@middle.com -p 20

3. From Terminal#2 connect to the database via the tunnel:
psql -p 5432 mydatabasename -U you12 -h localhost


RickC



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Generating fields in views with search/replace?

2011-05-05 Thread Bosco Rama
Asfand Qazi (Sanger Institute) wrote:
 
 Say I have a table with fields 'template' and 'original_letter', and
 'new_selected_letter'.  'template' could be 'abcdefg0abcdefg',
 original_letter could be 'A' and new_selected_letter could be 'B'.
 
 I want a view where I see 2 fields: 'original' as 'abcdefgAabcdefg'
 and 'new_selected' as 'abcdefgBabcdefg', where the view has replaced
 the '0' with original_letter or new_selected_letter respectively.

Well, in 8.4.7 you'd use something like:

   create view xyz as
  select regexp_replace(template, '0', original_letter) as original,
 regexp_replace(template, '0', new_selected_letter) as new_selected
 from template_table;

Should be the same in 9.x.

(See docs for more info on the regexp_replace() function)

HTH.

Later,
Bosco.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread David Johnston
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Jack Christensen
 Sent: Thursday, May 05, 2011 3:20 PM
 To: pgsql
 Subject: [GENERAL] Multiple table relationship constraints
 
 4. Validate application side -- this can work well, but it leaves the hole
of a
 bug in the application or a direct SQL statement going bad.
 
 
 Anyone have any advice on the best way to handle this?
 

Not totally following the usage though I have come across similar
requirements before.  A variant of #4 would be to remove
INSERT/UPDATE/DELETE permissions on the relevant tables and write SECURITY
DEFINER functions to perform those actions instead. You can additionally
leave the constraints loose and have the function query the tables
post-modification to make sure they are still valid (kind of like the
materialized view option but without a permanent table).

David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread Jack Christensen

On 5/5/2011 2:28 PM, Rick Genter wrote:
On Thu, May 5, 2011 at 3:20 PM, Jack Christensen 
ja...@hylesanderson.edu mailto:ja...@hylesanderson.edu wrote:


What is the best way to handle multiple table relationships where
attributes of the tables at the ends of the chain must match?

Example:

CREATE TABLE achievements(
achievement_id serial PRIMARY KEY,
...
);

CREATE TABLE achievement_versions(
achievement_version_id serial PRIMARY KEY,
achievement_id integer NOT NULL REFERENCES achievements,
...
);

CREATE TABLE achievement_attempts(
achievement_attempt_id serial PRIMARY KEY,
achievement_version_id integer NOT NULL REFERENCES
achievement_versions,
...
);

CREATE TABLE actions(
action_id serial PRIMARY KEY,
...
)

CREATE TABLE achievement_attempt_actions(
achievement_attempt_id integer NOT NULL REFERENCES
achievement_attempts,
action_id integer NOT NULL REFERENCES actions,
PRIMARY KEY( achievement_attempt_id, action_id)
);


The achievement_attempt_actions table links actions to
achievement_attempts. For a link to be valid a number of
attributes of actions must match attributes of achievements and
achievement_attempts. This means an update to any of these 5
tables could invalidate the chain. How can I eliminate the
possibility for this type of erroneous data?


I might not be understanding your question, but isn't that what your 
foreign key references do? For example, you can't update 
achievement_attempt_id in the achievement_attempt table if there is an 
achievement_attempt_actions record that refers to it since that would 
break the reference. (Not that you want to be updating primary key 
values in the first place...)
The trick is there are additional attributes of actions and achievements 
such as a category that must match for the link to be valid. These 
attributes are not part of the primary key of either record and can and 
do change.

--
Rick Genter
rick.gen...@gmail.com mailto:rick.gen...@gmail.com




--
Jack Christensen
ja...@hylesanderson.edu



Re: [GENERAL] SSDD reliability

2011-05-05 Thread Greg Smith

On 05/05/2011 10:35 AM, David Boreham wrote:

On 5/5/2011 8:04 AM, Scott Ribe wrote:


Actually, any of us who really tried could probably come up with a 
dozen examples--more if we've been around for a while. Original 
design cutting corners on power regulation; final manufacturers 
cutting corners on specs; component manufacturers cutting corners on 
specs or selling outright counterfeit parts...


These are excellent examples of failure causes for electronics, but 
they are

not counter-examples. They're unrelated to the discussion about SSD
early lifetime hard failures.


That's really optimistic.  For all we know, these problems are the 
latest incarnation of something like the bulging capacitor plague circa 
5 years ago.  Some part that is unique to the SSDs other than the flash 
cells that there's a giant bad batch of.


I think your faith in PC component manufacturing is out of touch with 
the actual field failure rates for this stuff, which is produced with 
enormous cost cutting pressure driving tolerances to the bleeding edge 
in many cases.  The equipment of the 80's and 90's you were referring to 
ran slower, and was more expensive so better quality components could be 
justified.  The quality trend at the board and component level has been 
trending for a long time toward cheap over good in almost every case 
nowadays.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread Rick Genter
On Thu, May 5, 2011 at 3:50 PM, Jack Christensen ja...@hylesanderson.eduwrote:

 The trick is there are additional attributes of actions and achievements
 such as a category that must match for the link to be valid. These
 attributes are not part of the primary key of either record and can and do
 change.


So your data is denormalized? (The category appears in 2 tables?) Don't do
that. Create a view that joins your two tables together instead if you need
a single entity that contains data from multiple sources. Then you won't
have any of the data integrity issues you're worried about.

-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] dblink() from GridSQL

2011-05-05 Thread Sam Nelson
Grid passes functions off to underlying databases.  Unfortunately, it
doesn't do so when the functions are in the from clause.  If it did, that
would work.

But I digress.  We're attempting to try either the csv import (which would
require a new script, but no biggie) or a data pull on the underlying
database to a table that only exists on one node.
---
===
Samuel Nelson
Consistent State
www.consistentstate.com
303-955-0509
===


On Thu, May 5, 2011 at 12:36 PM, Scott Mead sco...@openscg.com wrote:


 On Thu, May 5, 2011 at 1:40 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Thu, May 5, 2011 at 12:13 PM, Sam Nelson s...@consistentstate.com
 wrote:
  Hi List,
  We have a customer who is trying to migrate a few PostgresPlus instances
 to
  GridSQL clusters.  They have a process that pulls data from another
 server
  using dblink every night, and we're trying to replicate that on the
 GridSQL
  instance, but grid is being a bit of a pain.
  Grid doesn't seem to allow functions in from statements, and, of course,
 it
  spits out errors about functions returning records being called in the
 wrong
  context if we just try select dblink(foo, bar); (we had to try it).
  Has anyone else run into this specific issue?


 GridSQL itself doesn't support functions.


  Is there a known workaround?
   Any ideas on what else we should try?


 You'd have to present the data to be partitioned to the gsql controller for
 partitioning to happen properly, or use the high-speed import that it comes
 with.  Could you dump the data to an intermediary csv and then push it at
 the import utility?

 --Scott


 have you considered wrapping the output of the dblink query in a view?

 merlin

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general





Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread Jack Christensen

On 5/5/2011 2:53 PM, Rick Genter wrote:
On Thu, May 5, 2011 at 3:50 PM, Jack Christensen 
ja...@hylesanderson.edu mailto:ja...@hylesanderson.edu wrote:


The trick is there are additional attributes of actions and
achievements such as a category that must match for the link to be
valid. These attributes are not part of the primary key of either
record and can and do change.


So your data is denormalized? (The category appears in 2 tables?) 
Don't do that. Create a view that joins your two tables together 
instead if you need a single entity that contains data from multiple 
sources. Then you won't have any of the data integrity issues you're 
worried about.
It's not denormalized. It is an attribute that both tables have that
has to match for it to be a valid link.


Here's a contrived example:

CREATE TABLE dorms(
  dorm_id serial PRIMARY KEY,
  gender varchar NOT NULL,
  ...
);

CREATE TABLE people(
  person_id serial PRIMARY KEY,
  gender varchar NOT NULL,
  ...
);

CREATE TABLE room_assignments(
  person_id integer NOT NULL REFERENCES people,
  dorm_id integer NOT NULL REFERENCES dorms,
  ...
);

Men should only be assignable to men's dorms and women should only be 
assignable to women's dorms. On occasion a person's or dorm's gender 
needs to be updated. I want to make sure that doesn't cause a room 
assignment to become invalid. In this example, adding gender to 
room_assignments and using composite foreign keys is fairly straight 
forward -- but in my actual domain I have 5+ tables and 2+ attributes 
involved in the relationship.


--
Rick Genter
rick.gen...@gmail.com mailto:rick.gen...@gmail.com




--
Jack Christensen
ja...@hylesanderson.edu



Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread Rick Genter
On Thu, May 5, 2011 at 4:14 PM, Jack Christensen ja...@hylesanderson.eduwrote:

 It's not denormalized. It is an attribute that both tables have that has
  to match for it to be a valid link.

 Here's a contrived example:

 CREATE TABLE dorms(
   dorm_id serial PRIMARY KEY,
   gender varchar NOT NULL,
   ...
 );

 CREATE TABLE people(
   person_id serial PRIMARY KEY,
   gender varchar NOT NULL,
   ...
 );

 CREATE TABLE room_assignments(
   person_id integer NOT NULL REFERENCES people,
   dorm_id integer NOT NULL REFERENCES dorms,
   ...
 );

 Men should only be assignable to men's dorms and women should only be
 assignable to women's dorms. On occasion a person's or dorm's gender needs
 to be updated. I want to make sure that doesn't cause a room assignment to
 become invalid. In this example, adding gender to room_assignments and using
 composite foreign keys is fairly straight forward -- but in my actual domain
 I have 5+ tables and 2+ attributes involved in the relationship.


Hm. I think the way I would handle this is to put the business logic for
inserting/updating into the room_assignments table into one or more
functions and have a special user that owns the tables and owns the
functions and declare the functions to be SECURITY DEFINER. Revoke
INSERT/UPDATE/DELETE access to the tables from all other users. Then you
grant your regular users EXECUTE access to the functions. The functions run
as the user that created them, so they will have direct INSERT/UPDATE/DELETE
access to the tables while your regular users won't.
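
A rough sketch of that setup, using the contrived dorms example from earlier in the thread (the role and function names here are made up):

```sql
-- Only the table owner may write; everyone else goes through the function.
REVOKE INSERT, UPDATE, DELETE ON room_assignments FROM PUBLIC;

CREATE FUNCTION assign_room(p_person integer, p_dorm integer)
RETURNS void LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    -- Re-check the cross-table rule before inserting.
    IF (SELECT gender FROM people WHERE person_id = p_person)
       IS DISTINCT FROM
       (SELECT gender FROM dorms WHERE dorm_id = p_dorm) THEN
        RAISE EXCEPTION 'gender mismatch between person % and dorm %',
                        p_person, p_dorm;
    END IF;
    INSERT INTO room_assignments(person_id, dorm_id)
    VALUES (p_person, p_dorm);
END;
$$;

GRANT EXECUTE ON FUNCTION assign_room(integer, integer) TO app_user;
```

Updates to people.gender and dorms.gender would need the same treatment (functions that re-check existing room_assignments rows), otherwise the chain can still be invalidated from the other end.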

-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] SSDD reliability

2011-05-05 Thread Scott Marlowe
On Thu, May 5, 2011 at 1:54 PM, Greg Smith g...@2ndquadrant.com wrote:

 I think your faith in PC component manufacturing is out of touch with the
 actual field failure rates for this stuff, which is produced with enormous
 cost cutting pressure driving tolerances to the bleeding edge in many cases.
  The equipment of the 80's and 90's you were referring to ran slower, and
 was more expensive so better quality components could be justified.  The
 quality trend at the board and component level has been trending for a long
 time toward cheap over good in almost every case nowadays.

Modern CASE tools make this more and more of an issue.  You can be in
a circuit design program, right click on a component and pick from a
dozen other components with lower tolerances and get a SPICE
simulation that says initial production line failure rates will go
from 0.01% to 0.02%.  Multiply that times 100 components and it seems
like a small change.  But all it takes is one misstep and you've got a
board with a theoretical production line failure rate of 0.05 that's
really 0.08, and the first year failure rate goes from 0.5% to 2 or 3%
and the $2.00 you saved on all components on the board times 1M units
goes right out the window.

BTW, the common term we used for things that fail due to weird and
unforeseen circumstances was P.O.M.-dependent (phase of the moon),
because they'd often cluster around certain operating conditions that
were unobvious until you collected and collated a large enough data
set.  Like hard drives that have
abnormally high failure rates at altitudes above 4500ft etc.  Seem
fine til you order 1,000 for your Denver data center and they all
start failing.  It could be anything like that.  SSDs that operate
fine until they're in an environment with constant % humidity below
15% and boom they start failing like mad.  It's impossible to test for
all conditions in the field, and it's quite possible that
environmental factors affect some of these SSDs we've heard about.
More research is necessary to say why someone would see such
clustering though.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL]

2011-05-05 Thread Ovidiu Farauanu
unsubscribe


[GENERAL] multiple sequence number for one column

2011-05-05 Thread mirthcyy
hi group,

we need help on one postgresql locking issue:

Originally we have a table like below;

id bigint not null default nextval('xxx'),
customer_id int not null,
insert_record_date timestamp not null
...

so this id column is using a sequence number that applies to all
customers. And it's the primary key of the table

recently we made the change to use id and customer_id as the composite
primary key. And id will have a sequence number within each
customer_id. So now we can't use sequence number any more. To insert
the data into the new table, we have to calculate the id for that
particular customer_id first like

SELECT INTO v_ID COALESCE(MAX(ID),0)+1
WHERE Customer_ID=P_A_Customer_ID

And then insert into this table with the id getting from the above
query. We also used:

PERFORM pg_advisory_lock('Schema.TABLE'::regclass::integer,
P_A_Customer_ID);


Then we found out this lock doesn't work. If two transactions for the
same customer comes in very closely, the second one will try to get
the max(id) from the table while the first one is still working on the
insert and then it will be a problem.

We thought of using SHARE UPDATE EXCLUSIVE but it will interfere with
regular maintenance like Vacuum and also it will block other updates
etc.

Is there a good way to solve this issue?

Thanks a lot






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multiple sequence number for one column

2011-05-05 Thread Merlin Moncure
On Thu, May 5, 2011 at 1:54 PM, mirthcyy mirth...@gmail.com wrote:
 hi group,

 we need help on one postgresql locking issue:

 Originally we have a table like below;

 id bigint not null default nextval('xxx'),
 customer_id int not null,
 insert_record_date timestamp not null
 ...

 so this id column is using a sequence number that applies to all
 customers. And it's the primary key of the table

 recently we made the change to use id and customer_id as the composite
 primary key. And id will have a sequence number within each
 customer_id. So now we can't use sequence number any more. To insert
 the data into the new table, we have to calculate the id for that
 particular customer_id first like

 SELECT INTO v_ID COALESCE(MAX(ID),0)+1
 WHERE Customer_ID=P_A_Customer_ID

 And then insert into this table with the id getting from the above
 query. We also used:

 PERFORM pg_advisory_lock('Schema.TABLE'::regclass::integer,
 P_A_Customer_ID);


 Then we found out this lock doesn't work. If two transactions for the
 same customer comes in very closely, the second one will try to get
 the max(id) from the table while the first one is still working on the
 insert and then it will be a problem.

 We thought of using SHARE UPDATE EXCLUSIVE but it will interfere with
 regular maintenance like Vacuum and also it will block other updates
 etc.

 Is there a good way to solve this issue?

see here: http://www.varlena.com/GeneralBits/130.php for a lot of good
ideas. max(id) is going to be problematic at best.  Fundamentally, any
non sequence approach is going to require some form of locking :(.
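
One pattern often suggested for this (a sketch, not something from the thread; the table and column names are made up) is a counter table keyed by customer, where the row lock taken by UPDATE serializes inserts for one customer without blocking the others:

```sql
-- One counter row per customer; insert it when the customer is created.
CREATE TABLE customer_counters(
    customer_id integer PRIMARY KEY,
    last_id     bigint  NOT NULL DEFAULT 0
);

-- Inside the inserting transaction: bump and fetch the next id.
-- A concurrent transaction for the same customer blocks on this row
-- until the first one commits, so per-customer ids stay unique.
UPDATE customer_counters
   SET last_id = last_id + 1
 WHERE customer_id = 42
RETURNING last_id;
```

A rolled-back transaction leaves no gap here (the counter update rolls back too), at the cost of that brief per-customer serialization.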

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: Fwd: Re: [GENERAL] SSDD reliability

2011-05-05 Thread Greg Smith

On 05/04/2011 08:31 PM, David Boreham wrote:
Here's my best theory at present : the failures ARE caused by cell 
wear-out, but the SSD firmware is buggy in so far as it fails to boot 
up and respond to host commands due to the wear-out state. So rather 
than the expected outcome (SSD responds but has read-only behavior), 
it appears to be (and is) dead. At least to my mind, this is a more 
plausible explanation for the reported failures vs. the alternative 
(SSD vendors are uniquely clueless at making basic electronics 
subassemblies), especially considering the difficulty in testing the 
firmware under all possible wear-out conditions.


One question worth asking is : in the cases you were involved in, was 
manufacturer failure analysis performed (and if so what was the 
failure cause reported?).


Unfortunately not.  Many of the people I deal with, particularly the 
ones with budgets to be early SSD adopters, are not the sort to return 
things that have failed to the vendor.  In some of these shops, if the 
data can't be securely erased first, it doesn't leave the place.  The 
idea that some trivial fix at the hardware level might bring the drive 
back to life, data intact, is terrifying to many businesses when drives 
fail hard.


Your bigger point, that this could just easily be software failures due 
to unexpected corner cases rather than hardware issues, is both a fair 
one to raise and even more scary.


Intel claims their Annual Failure Rate (AFR) on their SSDs in IT 
deployments (not OEM ones) is 0.6%.  Typical measured AFR rates for 
mechanical drives is around 2% during their first year, spiking to 5% 
afterwards.  I suspect that Intel's numbers are actually much better 
than the other manufacturers here, so a SSD from anyone else can 
easily be less reliable than a regular hard drive still.


Hmm, this is speculation I don't support (non-intel vendors have a 10x 
worse early failure rate). The entire industry uses very similar 
processes (often the same factories). One rogue vendor with a bad 
process...sure, but all of them ??




I was postulating that you only have to be 4X as bad as Intel to reach 
2.4%, and then be worse than a mechanical drive for early failures.  If 
you look at http://labs.google.com/papers/disk_failures.pdf you can see 
there's a 5:1 ratio in first-year AFR just between light and heavy usage 
on the drive.  So a 4:1 ratio between best and worst manufacturer for 
SSD seemed possible.  Plenty of us have seen particular drive models 
that were much more than 4X as bad as average ones among regular hard 
drives.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multiple sequence number for one column

2011-05-05 Thread David Johnston

 
 Is there a good way to solve this issue?
 
 Thanks a lot
 

Yes, multiple actually.

...

one db centric solution is to create a sequence for each customer id and then 
nextval that specific sequence when you need a new id.

Would need to know more about the number of customers and how often you expect 
to assign new IDs - to each - to recommend other possibilities.
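
A minimal sketch of that per-customer-sequence approach (names are illustrative; error handling omitted):

```sql
-- When a new customer is created, create its sequence:
CREATE SEQUENCE customer_42_id_seq;

-- At insert time, pull the next id from that customer's sequence:
INSERT INTO my_table (id, customer_id, insert_record_date)
VALUES (nextval('customer_42_id_seq'), 42, now());

-- In PL/pgSQL the sequence name can be built dynamically:
-- EXECUTE 'SELECT nextval(' ||
--         quote_literal('customer_' || p_a_customer_id || '_id_seq') ||
--         ')' INTO v_id;
```

This keeps sequence semantics (no table-level locking), though per-customer ids can have gaps after rollbacks, and you end up managing one sequence per customer.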


David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] SMART attributes for SSD (was: SSDD reliability)

2011-05-05 Thread Toby Corkindale

On 05/05/11 22:50, David Boreham wrote:

On 5/4/2011 11:50 PM, Toby Corkindale wrote:


In what way has the SMART read failed?
(I get the relevant values out successfully myself, and have Munin
graph them.)



Mis-parse :) It was my _attempts_ to read SMART that failed.
Specifically, I was able to read a table of numbers from the drive, but
none of the numbers looked particularly useful or likely to be a time
to live number. Similar to traditional drives, where you get this table
of numbers that are either zero or random, that you look at saying
Huh?, all of which are flagged as failing. Perhaps I'm using the
wrong SMART grokking tools?


I run:
sudo smartctl -a /dev/sda

And amongst the usual values, I also get:
232 Available_Reservd_Space 0x0002   100   048   000   Old_age   Always   -   9011683733561
233 Media_Wearout_Indicator 0x0002   100   000   000   Old_age   Always   -   0


The media wearout indicator is the useful one.

Plus some unknown attributes:
229 Unknown_Attribute       0x0002   100   000   000   Old_age   Always   -   21941823264152
234 Unknown_Attribute       0x0002   100   000   000   Old_age   Always   -   953583437830
235 Unknown_Attribute       0x0002   100   000   000   Old_age   Always   -   1476591679



I found some suggested definitions for those attributes, but they didn't 
seem to match up with my values once I decoded them, so mine must be 
proprietary.


-Toby

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: Fwd: Re: [GENERAL] SSDD reliability

2011-05-05 Thread Toby Corkindale

On 05/05/11 18:36, Florian Weimer wrote:

* Greg Smith:


Intel claims their Annual Failure Rate (AFR) on their SSDs in IT
deployments (not OEM ones) is 0.6%.  Typical measured AFR rates for
mechanical drives is around 2% during their first year, spiking to 5%
afterwards.  I suspect that Intel's numbers are actually much better
than the other manufacturers here, so a SSD from anyone else can
easily be less reliable than a regular hard drive still.


I'm a bit concerned with usage-dependent failures.  Presumably, two SSDs
in a RAID-1 configuration are worn down in the same way, and it would
be rather inconvenient if they failed at the same point.  With hard
disks, this doesn't seem to happen; even bad batches fail pretty much
randomly.


Actually I think it'll be the same as with hard disks.
ie. A batch of drives with sequential serial numbers will have a fairly 
similar average lifetime, but they won't pop their clogs all on the same 
day. (Unless there is an outside influence - see note 1)


The wearing-out of SSDs is not as exact as people seem to think. If the 
drive is rated for 10,000 erase cycles, then that is meant to be a 
MINIMUM amount. So most blocks will get more than that amount, and maybe 
a small number will die before that amount. I guess it's a probability 
curve, engineered such that 95% or some other high percentage will 
outlast that count. (and the SSDs have reserved blocks which are 
introduced to take over from failing blocks, invisibly to the end-user 
-since it can always read from the failing-to-erase block)


Note 1:
I have seen an array that was powered on continuously for about six 
years, which killed half the disks when it was finally powered down, 
left to cool for a few hours, then started up again.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Queries Regarding Postgresql Replication

2011-05-05 Thread Josh Kupershmidt
On Thu, May 5, 2011 at 7:33 AM, Nithya Rajendran r-nit...@hcl.com wrote:

[Disclaimer: I have minimal experience with hot standby, the below is
just from reading the docs]

 === How to find whether current postgres is running as master or slave?

SELECT pg_is_in_recovery();
will tell you (should be false for the master, true for the slave).
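As a hedged sketch (the helper name is mine, not from the thread), a monitoring script that runs that query through `psql -At` gets back the literal `t` or `f`, which it can map to a role label:

```python
def role_from_recovery_flag(flag: str) -> str:
    """Map the output of: psql -At -c "SELECT pg_is_in_recovery();"
    ('t' or 'f') to a human-readable role. Rejects anything else so a
    connection error isn't silently misread as a role."""
    flag = flag.strip()
    if flag == "f":
        return "master"
    if flag == "t":
        return "slave"
    raise ValueError(f"unexpected pg_is_in_recovery() output: {flag!r}")

print(role_from_recovery_flag("f"))  # what you'd see on the master
print(role_from_recovery_flag("t"))  # what you'd see on the slave
```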

  What is the procedure to make the failed master as working slave?

You mean, after you've failed over from A to B, how to bring A back up
as a slave of B? You should be able to just follow the steps of
setting up a master-slave pair, with B as the new master, and A the
new slave.

Josh



Re: [GENERAL] Cannot reproduce why a query is slow

2011-05-05 Thread John Cheng
On Thu, May 5, 2011 at 10:01 AM, Andrew Sullivan a...@crankycanuck.ca wrote:
 On Thu, May 05, 2011 at 09:27:47AM -0700, John Cheng wrote:

 I have a couple of queries that allow me to see the active locks in
 the database. It might help me see if these queries are blocked by
 other locking queries.

 Yes.  The pg_locks view is your friend here.

I'm not sure locks are an issue here. I typically see:

              now              |              relname               | transactionid | locktype |  pid  |  client_addr   |        activity_start         |       mode       | granted
-------------------------------+------------------------------------+---------------+----------+-------+----------------+-------------------------------+------------------+---------
 2011-05-05 21:36:02.739645-05 | inbound_smtp_lead_id_idx           |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | mb_sale_sale_date_idx              |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | lead                               |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | RowShareLock     | t
 2011-05-05 21:36:02.739645-05 | lead                               |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | RowExclusiveLock | t
 2011-05-05 21:36:02.739645-05 | sale_matching_zipname_idx          |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | lead_source_idx                    |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | lead_source_idx                    |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | RowExclusiveLock | t
 2011-05-05 21:36:02.739645-05 | inbound_smtp_pkey                  |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | mb_sale_dealer_region_idx          |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | lead_modified_idx                  |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | lead_modified_idx                  |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | RowExclusiveLock | t
 2011-05-05 21:36:02.739645-05 | metadata                           |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | dealer_metadata_pkey               |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | dealer_dealer_code_key             |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | mb_sale_dealer_code_idx            |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | sale_matching_email_idx            |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | locality_data_postal_code_norm_idx |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | sale_matching_alt_phone_idx        |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | config_xml_current                 |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | lead_processing_state_step_idx     |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | lead_processing_state_step_idx     |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | RowExclusiveLock | t
 2011-05-05 21:36:02.739645-05 | mb_sale_model_idx                  |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | AccessShareLock  | t
 2011-05-05 21:36:02.739645-05 | lead_matching                      |               | relation | 21049 | 172.16.172.233 | 2011-05-05 21:36:02.724541-05 | RowExclusiveLock | t
 2011-05-05 21:36:02.739645-05 | metadata_key_value_idx             |               | relation | 21049 | 
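For what it's worth, when scanning a dump like the one above for real contention, the interesting rows are the ones where granted is false; every lock shown here is granted, which supports the "locks aren't the issue" reading. A small hypothetical sketch (row dicts and names are mine) that filters a parsed pg_locks listing down to just the waiters:

```python
def ungranted_locks(rows):
    """Given pg_locks-style rows as dicts, return (pid, relname, mode)
    for every lock that was requested but not granted -- those are the
    sessions actually blocked and waiting on something."""
    return [(r["pid"], r["relname"], r["mode"])
            for r in rows if not r["granted"]]

# Invented sample: one granted lock, one session stuck behind it.
rows = [
    {"pid": 21049, "relname": "lead", "mode": "RowExclusiveLock",
     "granted": True},
    {"pid": 21312, "relname": "lead", "mode": "AccessExclusiveLock",
     "granted": False},
]
print(ungranted_locks(rows))  # only the waiting session shows up
```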

Re: [GENERAL] Bidirectional replication

2011-05-05 Thread Joshua Tolley
On Mon, May 02, 2011 at 11:31:28PM -0700, John R Pierce wrote:
 AFAIK, the only postgres replication systems that even pretend to  
 support master-master are things like Bucardo that do the replication at  
 the SQL layer, by sending all update/insert/delete commands to both  
 servers, and under certain sequences of concurrent queries, you could  
 end up with different results on the two servers.

Actually, Bucardo doesn't do statement replication. It, like Slony for
instance, replicates data, not SQL statements. And as you pointed out, it does
do bidirectional replication in a way that's sufficient for some use cases. 

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Bidirectional replication

2011-05-05 Thread John R Pierce

On 05/05/11 8:05 PM, Joshua Tolley wrote:

Actually, Bucardo doesn't do statement replication. It, like Slony for
instance, replicates data, not SQL statements. And as you pointed out, it does
do bidirectional replication in a way that's sufficient for some use cases.



does it use triggers for replication, similar to Slony, then?  
obviously, it can't be doing WAL level replication or it wouldn't be 
able to do any sort of master-master.






Re: [GENERAL] Bidirectional replication

2011-05-05 Thread Joshua Tolley
On Thu, May 05, 2011 at 03:07:20PM +0530, tushar nehete wrote:
 Warning: prerequisite DBD::Pg 2.0 not found. We have 1.49.
 Warning: prerequisite ExtUtils::MakeMaker 6.32 not found. We have 6.30.

You need to install DBD::Pg, version 2.0 or greater. You also need to install
ExtUtils::MakeMaker version 6.32 or greater. These are both Perl packages,
available in several different ways. Sometimes your operating system will
provide sufficiently recent versions through its own packaging system (e.g.
yum install perl-DBD-Pg); the more difficult way is to get them through CPAN,
per instructions here: http://www.cpan.org/modules/INSTALL.html

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Bidirectional replication

2011-05-05 Thread Joshua Tolley
On Thu, May 05, 2011 at 08:13:55PM -0700, John R Pierce wrote:
 On 05/05/11 8:05 PM, Joshua Tolley wrote:
 Actually, Bucardo doesn't do statement replication. It, like Slony for
 instance, replicates data, not SQL statements. And as you pointed out, it 
 does
 do bidirectional replication in a way that's sufficient for some use cases.

 does it use triggers for replication, similar to Slony, then?   
 obviously, it can't be doing WAL level replication or it wouldn't be  
 able to do any sort of master-master.

Exactly. It doesn't work quite the way Slony does under the hood, of
course, but it is trigger based. One notable difference between Bucardo and
Slony is that whereas Slony's triggers store the entire row data in a separate
log table when something changes, Bucardo stores only the primary key. As a
result, Bucardo doesn't apply each transaction to the replica databases, but
rather a set of all transactions that took place on the source since the last
time it synchronized things. For whatever that's worth.
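A toy illustration of that difference (table contents and function names invented, not Bucardo's actual internals): a full-row log replays every logged version of a row, while a PK-only log makes the replicator copy just the current state of each changed key at sync time, so intermediate versions are never shipped.

```python
def apply_full_row_log(replica, log):
    """Slony-style sketch: every logged (pk, row) version is applied
    to the replica in order, shipping intermediate states too."""
    for pk, row in log:
        replica[pk] = row
    return replica

def apply_pk_delta(replica, changed_pks, source):
    """Bucardo-style sketch: only changed primary keys are logged; at
    sync time the replicator fetches the *current* row from the source,
    applying one net change per key regardless of how many updates
    happened since the last sync."""
    for pk in changed_pks:
        replica[pk] = source[pk]
    return replica

source = {1: "v3"}                           # row 1 was updated v1 -> v2 -> v3
full_log = [(1, "v1"), (1, "v2"), (1, "v3")] # full-row log: all three versions
print(apply_full_row_log({}, full_log))      # replays every version
print(apply_pk_delta({}, {1}, source))       # ships only the final state
```

Both replicas end in the same state; the PK-only approach just did less shipping, at the cost of applying a batched set of changes rather than replaying individual transactions.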

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Bidirectional replication

2011-05-05 Thread John R Pierce

On 05/05/11 8:14 PM, Joshua Tolley wrote:

On Thu, May 05, 2011 at 03:07:20PM +0530, tushar nehete wrote:

Warning: prerequisite DBD::Pg 2.0 not found. We have 1.49.
Warning: prerequisite ExtUtils::MakeMaker 6.32 not found. We have 6.30.

You need to install DBD::Pg, version 2.0 or greater. You also need to install
ExtUtils::MakeMaker version 6.32 or greater. These are both Perl packages,
available several different ways. Sometimes your operating system will
provide sufficiently recent versions through its own packaging system (e.g.
yum install perl-DBD-Pg); the more difficult way is to get it through CPAN,
per instructions here: http://www.cpan.org/modules/INSTALL.html


if you do get it into your mind that you need a newer version of perl 
than was supplied with RHEL 5 or whatever, do NOT replace the system 
perl in /usr/lib/perl ... instead, build your own perl to run in 
/usr/local/perl5 or /opt/perl5 or something.


I only see perl-DBD-Pg 1.49 in the RHEL repos, and I don't see 
perl-ExtUtils-MakeMaker in there at all (or in EPEL or in RpmForge), so 
you might be stuck with going the CPAN route.  This will likely require 
you to install the development tools (gcc etc.) as well as perl-devel.


If you want to do it cleanly, there exist scripts to turn CPAN modules 
into RPMs, so your system files remain under RPM management... otherwise 
a future yum upgrade could step on what you've manually installed.






[GENERAL] Debug Contrib/cube code

2011-05-05 Thread Nick Raj
Hi,
I am using postgresql-8.4.6 and want to debug the contrib/cube code. Is it
possible to debug it? There is no ./configure file there to enable debugging.
Is there any way to change the makefile to enable it?

Thanks
Nick