[GENERAL] Poor performance of btrfs with Postgresql

2011-04-21 Thread Toby Corkindale
I've done some testing of PostgreSQL on different filesystems, and with 
different filesystem mount options.


I found that xfs and ext4 both performed similarly, with ext4 just a few 
percent faster; and I found that adjusting the mount options only gave 
small improvements, except for the barrier options. (Which come with a 
hefty warning)


I also tested btrfs, and was disappointed to see it performed 
*dreadfully* - even with the recommended options for database loads.


Best TPS I could get out of ext4 on the test machine was 2392 TPS, but 
btrfs gave me just 69! This is appalling performance. (And that was with 
nodatacow and noatime set)


I'm curious to know if anyone can spot anything wrong with my testing?
I note that the speed improvement from datacow to nodatacow was only 
small - can I be sure it was taking effect? (Although cat /proc/mounts 
reported it had)


The details of how I was running the test, and all the results, are here:
http://blog.dryft.net/2011/04/effects-of-filesystems-and-mount.html

I wouldn't run btrfs in production systems at the moment anyway, but I 
am curious about the current performance.

(Tested on Ubuntu Server - Maverick - Kernel 2.6.35-28)

Cheers,
Toby

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how to force an insert before Raise Exception?

2011-04-21 Thread Craig Ringer
On 19/04/11 21:39, giova wrote:
 Hi.
 
 I made a function that Raise exception with some conditions.
 No problem with that, it is the goal.
 
 My problem is that I want to do an INSERT into a log table before raising
 the exception. But RAISE EXCEPTION cancels my INSERT.
 
 How can I force the insert not to be cancelled, please?

You can't have part of a transaction cancelled without cancelling all of
it. The INSERT is part of the transaction.

If you really need to insert a record in a log table before rolling back
a transaction, you need to use dblink to do the insert in another
transaction.

Personally, I'd enable PostgreSQL's CSV logging and then process the log
files, so you can include your logging info in the Pg logs. Using
logging tables will be way more hassle than it's worth.
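Craig's dblink approach can be sketched roughly like this (a sketch only: the connection string, function, table and column names are assumptions, and the dblink module must be installed):

```sql
-- The INSERT goes through a separate dblink connection, i.e. its own
-- transaction, so it survives the rollback caused by RAISE EXCEPTION.
CREATE OR REPLACE FUNCTION fail_with_log(msg text) RETURNS void AS $body$
BEGIN
    PERFORM dblink_exec(
        'dbname=mydb user=logger',
        'INSERT INTO app_log(message) VALUES (' || quote_literal(msg) || ')'
    );
    RAISE EXCEPTION '%', msg;
END;
$body$ LANGUAGE plpgsql;
```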

--
Craig Ringer



Re: [GENERAL] Questions about Partitioning

2011-04-21 Thread Craig Ringer
On 19/04/11 23:56, Phoenix Kiula wrote:
 While I fix some bigger DB woes, I have learned a lesson. Huge indexes
 and tables are a pain.
 
 Which makes me doubly keen on looking at partitioning.
 
 Most examples I see online are partitioned by date. As in months, or
 quarter, and so on. This doesn't work for me as I don't have too much
 logic required based on time.
 
 The biggest, highest volume SELECT in my database happens through an
 alias column. This is an alphanumeric column. The second-biggest
 SELECT happens through the userid column -- because many users check
 their account every day.

If user id -> alias and/or alias -> user id lookups are really hot,
consider moving them to a subtable, so you don't have to worry about
whether to partition by user id or alias, and so that the table is
really small, easily cached, and fast to scan. For example:

CREATE TABLE user_alias (
   alias VARCHAR(42) PRIMARY KEY,
   user_id integer REFERENCES maintable(id)
);

If you like you can retain the alias column in maintable, making
that a REFERENCE to user_alias(alias) so you force a 1:1 relationship
and don't have to JOIN on user_alias to get alias data for a user. The
downside of that is that the circular/bidirectional reference requires
you to use 'DEFERRABLE INITIALLY DEFERRED' on one or both references to
be able to insert, and that can cause memory use issues if you do really
big batch inserts and deletes on those tables.
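A rough sketch of that circular arrangement (table and column names are illustrative, not from the original post):

```sql
CREATE TABLE maintable (
    id    integer PRIMARY KEY,
    alias varchar(42) NOT NULL
);

CREATE TABLE user_alias (
    alias   varchar(42) PRIMARY KEY,
    user_id integer NOT NULL REFERENCES maintable(id)
);

-- The back-reference closes the loop, so it must be deferred to commit
-- time; otherwise neither row could ever be inserted first.
ALTER TABLE maintable
    ADD CONSTRAINT maintable_alias_fkey
    FOREIGN KEY (alias) REFERENCES user_alias(alias)
    DEFERRABLE INITIALLY DEFERRED;
```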

 1. Which column should I partition by -- the alias because it's the
 largest contributor of queries? This should be OK, but my concern is
 that when user_id queries are happening, then the data for the same
 user will come through many subtables that are partitioned by alias

See above: consider splitting the user-id-to-alias mapping out into
another table.

 3. If I partition using a%, b% etc up to z% as the partition
 condition, is this an issue

It might be worth examining the distribution of your data and
partitioning on constraints that distribute the data better. There'll be
a lot more 'c's than 'z's.

That said, it might not be worth the complexity and you'd have to check
if the constraint exclusion code was smart enough to figure out the
conditions. I don't have much experience with partitioning and have
never tried or tested partitioning on a LIKE pattern.
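One possible shape for such partitions, sketched with hypothetical table names and simple range checks (range-style CHECK constraints are generally easier for constraint exclusion to prove than LIKE patterns):

```sql
CREATE TABLE links (alias varchar(42) NOT NULL, user_id integer);

CREATE TABLE links_a_c (CHECK (alias >= 'a' AND alias < 'd'))
    INHERITS (links);
CREATE TABLE links_d_f (CHECK (alias >= 'd' AND alias < 'g'))
    INHERITS (links);

-- With constraint_exclusion enabled, a query such as
--   SELECT * FROM links WHERE alias = 'cat';
-- should only scan links_a_c.
```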

 6. Triggers - how do they affect speed?

A constraint is not a trigger, they're different. SELECTs on partitioned
tables are not affected by triggers.

For INSERT, UPDATE and DELETE, where you're redirecting INSERTs into the
parent table into the appropriate partition, then speed might be a
concern. It probably doesn't matter. If you find it to be an issue, then
rather than rewriting the trigger in C, you're probably better off just
INSERTing directly into the appropriate subtable and thus bypassing the
trigger.
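For example (hypothetical table names):

```sql
-- goes through the redirect trigger on the parent table:
INSERT INTO links (alias, user_id) VALUES ('cat', 42);

-- bypasses the trigger by targeting the partition directly:
INSERT INTO links_a_c (alias, user_id) VALUES ('cat', 42);
```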

--
Craig Ringer



Re: [GENERAL] Needs Suggestion

2011-04-21 Thread Alban Hertroys
On 20 Apr 2011, at 19:11, SUBHAM ROY wrote:

 By doing \timing in psql, we enable the timing and then when we type the 
 query we are able to see its execution time. 
 Similarly, is there any way to view the number of I/Os and memory usage by a 
 particular query.

You seem to be unfamiliar with the EXPLAIN ANALYSE command.

 And also the timing result that gets displayed, in which log file does it get 
 recorded?

None, as you're doing your measurements client-side. EXPLAIN ANALYSE is 
server-side.
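For example (table name illustrative; the BUFFERS option needs PostgreSQL 9.0 or later):

```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM mytable;
-- Each plan node then reports server-side timing plus block-level I/O,
-- along the lines of (numbers illustrative):
--   Seq Scan on mytable (actual time=0.010..45.2 rows=100000 loops=1)
--     Buffers: shared hit=500 read=120
```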

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] Poor performance of btrfs with Postgresql

2011-04-21 Thread Merlin Moncure
On Thu, Apr 21, 2011 at 2:22 AM, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
 I've done some testing of PostgreSQL on different filesystems, and with
 different filesystem mount options.

 I found that xfs and ext4 both performed similarly, with ext4 just a few
 percent faster; and I found that adjusting the mount options only gave small
 improvements, except for the barrier options. (Which come with a hefty
 warning)

 I also tested btrfs, and was disappointed to see it performed *dreadfully* -
 even with the recommended options for database loads.

 Best TPS I could get out of ext4 on the test machine was 2392 TPS, but btrfs
 gave me just 69! This is appalling performance. (And that was with nodatacow
 and noatime set)

 I'm curious to know if anyone can spot anything wrong with my testing?
 I note that the speed improvement from datacow to nodatacow was only small -
 can I be sure it was taking effect? (Although cat /proc/mounts reported it
 had)

 The details of how I was running the test, and all the results, are here:
 http://blog.dryft.net/2011/04/effects-of-filesystems-and-mount.html

 I wouldn't run btrfs in production systems at the moment anyway, but I am
 curious about the current performance.
 (Tested on Ubuntu Server - Maverick - Kernel 2.6.35-28)

your nobarrier options are not interesting -- hardware sync is not
being flushed.  the real numbers are in the 230 range.   not sure why
btrfs is doing so badly -- maybe try comparing on single disk volume
vs raid 0?

merlin



Re: [GENERAL] Poor performance of btrfs with Postgresql

2011-04-21 Thread Toby Corkindale

On 21/04/11 17:28, Merlin Moncure wrote:

On Thu, Apr 21, 2011 at 2:22 AM, Toby Corkindale
toby.corkind...@strategicdata.com.au  wrote:

I've done some testing of PostgreSQL on different filesystems, and with
different filesystem mount options.

I found that xfs and ext4 both performed similarly, with ext4 just a few
percent faster; and I found that adjusting the mount options only gave small
improvements, except for the barrier options. (Which come with a hefty
warning)

I also tested btrfs, and was disappointed to see it performed *dreadfully* -
even with the recommended options for database loads.

Best TPS I could get out of ext4 on the test machine was 2392 TPS, but btrfs
gave me just 69! This is appalling performance. (And that was with nodatacow
and noatime set)

I'm curious to know if anyone can spot anything wrong with my testing?
I note that the speed improvement from datacow to nodatacow was only small -
can I be sure it was taking effect? (Although cat /proc/mounts reported it
had)

The details of how I was running the test, and all the results, are here:
http://blog.dryft.net/2011/04/effects-of-filesystems-and-mount.html

I wouldn't run btrfs in production systems at the moment anyway, but I am
curious about the current performance.
(Tested on Ubuntu Server - Maverick - Kernel 2.6.35-28)


your nobarrier options are not interesting -- hardware sync is not
being flushed.  the real numbers are in the 230 range.   not sure why
btrfs is doing so badly -- maybe try comparing on single disk volume
vs raid 0?


Note that some documentation recommends disabling barriers IFF you have 
battery-backed write-cache hardware, which is often true on higher-end 
hardware; thus the measured performance is interesting to know.


Quoted from the mount man page:
"Write barriers enforce proper on-disk ordering of journal commits,
making volatile disk write caches safe to use, at some performance
penalty. If your disks are battery-backed in one way or another,
disabling barriers may safely improve performance."
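As a sketch, such a setup might look like this in /etc/fstab (device and mount point are placeholders; only do this if the write cache really is battery-backed):

```
/dev/sdb1  /var/lib/postgresql  ext4  noatime,nobarrier  0  2
```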


Cheers,
Toby



Re: [GENERAL] Poor performance of btrfs with Postgresql

2011-04-21 Thread Henry C.
 I've done some testing of PostgreSQL on different filesystems, and with
 different filesystem mount options.

Since Pg is already journalling, why bother duplicating (and pay the
performance penalty, whatever that penalty may be) the effort for no real
gain (except maybe a redundant sense of safety)?  ie, use a
non-journalling battle-tested fs like ext2.

Regards
Henry



[GENERAL] Which version of postgresql supports replication on RHEL6?

2011-04-21 Thread Tiruvenkatasamy Baskaran
Hi,
Which version of postgresql supports replication on RHEL6?
RHEL version : 2.6.32-71.el6.x86_64
Regards,
Tiru


::DISCLAIMER::
---

The contents of this e-mail and any attachment(s) are confidential and intended 
for the named recipient(s) only.
It shall not attach any liability on the originator or HCL or its affiliates. 
Any views or opinions presented in
this email are solely those of the author and may not necessarily reflect the 
opinions of HCL or its affiliates.
Any form of reproduction, dissemination, copying, disclosure, modification, 
distribution and / or publication of
this message without the prior written consent of the author of this e-mail is 
strictly prohibited. If you have
received this email in error please delete it and notify the sender 
immediately. Before opening any mail and
attachments please check them for viruses and defect.

---


Re: [GENERAL] Which version of postgresql supports replication on RHEL6?

2011-04-21 Thread Vibhor Kumar

On Apr 21, 2011, at 4:42 PM, Tiruvenkatasamy Baskaran wrote:

 Which version of postgresql supports replication on RHEL6?
 RHEL version : 2.6.32-71.el6.x86_64

If you are talking about the built-in replication, that is available from PG 9.0 
onwards. Otherwise the Slony-I replication tool is available for replicating a 
PG database.
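A minimal sketch of the 9.0 built-in streaming replication settings (hostnames and values are illustrative, not a complete setup):

```
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3

# postgresql.conf on the standby
hot_standby = on

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```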

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com




Re: [GENERAL] problem with parent/child table and FKs

2011-04-21 Thread Karsten Hilbert
Does anyone have any suggestions regarding the below ?

Thanks,
Karsten

On Mon, Apr 18, 2011 at 03:53:16PM +0200, Karsten Hilbert wrote:

 Hello all,
 
 since (according to the docs) PostgreSQL does not propagate
 INSERTs from child tables unto parent tables the below does
 not work, unfortunately.
 
 What is the suggested approach for this situation ? (there
 will be more tables like icd10 holding other coding
 systems of fairly diverse nature but all of them sharing
 .code and .term: LOINC, ATC, ICPC-2, ICD-9, ...).
 
 Thanks,
 Karsten
 (www.gnumed.de)
 
 
 begin;
 
 create table code_root (
   pk_code_root serial primary key,
   code text not null,
   term text not null
 );
 
 create table icd10 (
   pk serial primary key,
   version text not null
 ) inherits (code_root);
 
 create table disease (
   pk serial primary key,
   disease_name text not null
 );
 
 create table lnk_codes2epi (
   pk serial primary key,
   fk_disease integer not null
   references disease(pk)
   on update cascade
   on delete cascade,
   fk_code integer not null
   references code_root(pk_code_root)
   on update restrict
   on delete restrict
 );
 
 insert into icd10 (code, term, version) values ('J99.9', 'Grippe', 
 'ICD-10-GM');
 insert into disease (disease_name) values ('URTI/flu');
 select * from code_root;

  pk_code_root | code  |  term  
 --------------+-------+--------
             1 | J99.9 | Grippe
 (1 Zeile)
 
 select * from icd10;

  pk_code_root | code  |  term  | pk |  version  
 --------------+-------+--------+----+-----------
             1 | J99.9 | Grippe |  1 | ICD-10-GM
 (1 Zeile)
 
 select * from disease;

  pk | disease_name 
 ----+--------------
   1 | URTI/flu
 (1 Zeile)
 
 insert into lnk_codes2epi (fk_disease, fk_code) values (
   (select pk from disease where disease_name = 'URTI/flu'),
   (select pk_code_root from code_root where code = 'J99.9')
 );
 psql:x-check-delete.sql:47: ERROR:  insert or update on table lnk_codes2epi 
 violates foreign key constraint lnk_codes2epi_fk_code_fkey
 DETAIL:  Key (fk_code)=(1) is not present in table code_root.
 
 rollback;

-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



[GENERAL] Defining input function for new datatype

2011-04-21 Thread Nick Raj
Hi,
I am defining a new data type called mpoint
i.e.
typedef struct mpoint
{
Point p;
Timestamp t;
} mpoint;

For defining input/output function

1 Datum mpoint_in(PG_FUNCTION_ARGS)
2 {
3
4mpoint *result;
5char *pnt=(char *)malloc (sizeof (20));
6char *ts=(char *)malloc (sizeof (20));
7result= (mpoint *) palloc(sizeof(mpoint));
8char *st = PG_GETARG_CSTRING(0);
9mpoint_decode(st,pnt,ts);
// st breaks down into pnt that corresponds to Point and ts corresponds to
Timestamp
10
11  result->p = point_in(PointerGetDatum(pnt));//
point_in (input function for point that assigns x, y into point)
12  result->t = timestamp_in(PointerGetDatum(ts)); // similar
for timestamp
13
14  PG_RETURN_MPOINT_P(result);
15   }

line no 11 warning: passing argument 1 of ‘point_in’ makes pointer from
integer without a cast
 ../../../include/utils/geo_decls.h:191: note: expected
‘FunctionCallInfo’ but argument is of type ‘unsigned int’
line no 11 error: incompatible types when assigning to type ‘Point’ from
type ‘Datum’
line no 12 warning: passing argument 1 of ‘timestamp_in’ makes pointer from
integer without a cast
 ../../../include/utils/timestamp.h:205: note: expected
‘FunctionCallInfo’ but argument is of type ‘unsigned int’

Can anybody figure out what kind of mistake i am doing?
Also, why it got related to 'FunctionCallInfo' ?

Thanks
Nick


Re: [GENERAL] Installing PGDG on a fresh CentOS 5.6

2011-04-21 Thread Alexander Farber
Thank you Tom, yum install postgresql84 has worked for CentOS 5.6/64 bit



Re: [GENERAL] Different views of remote server

2011-04-21 Thread Bob Pawley

Hi Adrian

I looked at the table in JEdit. The binary strings for the missing fields 
are considerably longer than the others.


What limiting factor have I probably exceeded??

Bob

-Original Message- 
From: Adrian Klaver

Sent: Wednesday, April 20, 2011 4:14 PM
To: pgsql-general@postgresql.org
Cc: Bob Pawley
Subject: Re: [GENERAL] Different views of remote server

On Wednesday, April 20, 2011 11:09:59 am Bob Pawley wrote:

Hi

This is probably going to turn out to be me doing something stupid, but-

I have two computers, one of which I use as a remote server for my
database.

When I connect to the remote database through my interface there are 
errors

that suggest a problem with the data in one of the tables.

Using PgAdmin to view the remote table there is indeed some information
missing (5 out of 16 geoms).

When I use the PGAdmin on the remote computer, using a local connection,
this information is not missing it is intact.

I don’t think it is a PgAdmin problem, because when I do a ‘select the_geom’
as a remote query the information is missing, whereas it is not missing in
PgAdmin on the other computer as a local query.

Also connecting to the remote DB using Quantum GIS shows the “missing”
information as being present and normal.

I have also checked with a server display app and the appropriate number 
of

servers are present (only one present on the remote computer).

I’ve run out of ideas - Would anyone have any thoughts of what might be
going on???


What is the schema for the table? What exactly is the data and do the 5
'missing' data differ markedly from the other data?



Bob


--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] problem with parent/child table and FKs

2011-04-21 Thread Karsten Hilbert
On Thu, Apr 21, 2011 at 01:36:51PM +0200, Karsten Hilbert wrote:

 Does anyone have any suggestions regarding the below ?

If you guys happen to think this could be a
please-do-my-class-assignment-for-me question -- I'd be
glad to read up on things if someone clues me in on the
relevant keywords to look up !

 Thanks,
 Karsten
 
 On Mon, Apr 18, 2011 at 03:53:16PM +0200, Karsten Hilbert wrote:
 
  Hello all,
  
  since (according to the docs) PostgreSQL does not propagate
  INSERTs from child tables unto parent tables the below does
  not work, unfortunately.
  
  What is the suggested approach for this situation ? (there
  will be more tables like icd10 holding other coding
  systems of fairly diverse nature but all of them sharing
  .code and .term: LOINC, ATC, ICPC-2, ICD-9, ...).
  
  Thanks,
  Karsten
  (www.gnumed.de)
  
  
  begin;
  
  create table code_root (
  pk_code_root serial primary key,
  code text not null,
  term text not null
  );
  
  create table icd10 (
  pk serial primary key,
  version text not null
  ) inherits (code_root);
  
  create table disease (
  pk serial primary key,
  disease_name text not null
  );
  
  create table lnk_codes2epi (
  pk serial primary key,
  fk_disease integer not null
  references disease(pk)
  on update cascade
  on delete cascade,
  fk_code integer not null
  references code_root(pk_code_root)
  on update restrict
  on delete restrict
  );
  
  insert into icd10 (code, term, version) values ('J99.9', 'Grippe', 
  'ICD-10-GM');
  insert into disease (disease_name) values ('URTI/flu');
  select * from code_root;
 
   pk_code_root | code  |  term  
  --------------+-------+--------
              1 | J99.9 | Grippe
  (1 Zeile)
  
  select * from icd10;
 
   pk_code_root | code  |  term  | pk |  version  
  --------------+-------+--------+----+-----------
              1 | J99.9 | Grippe |  1 | ICD-10-GM
  (1 Zeile)
  
  select * from disease;
 
   pk | disease_name 
  ----+--------------
1 | URTI/flu
  (1 Zeile)
  
  insert into lnk_codes2epi (fk_disease, fk_code) values (
  (select pk from disease where disease_name = 'URTI/flu'),
  (select pk_code_root from code_root where code = 'J99.9')
  );
  psql:x-check-delete.sql:47: ERROR:  insert or update on table 
  lnk_codes2epi violates foreign key constraint lnk_codes2epi_fk_code_fkey
  DETAIL:  Key (fk_code)=(1) is not present in table code_root.
  
  rollback;
 
 -- 
 GPG key ID E4071346 @ gpg-keyserver.de
 E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
 

-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: [GENERAL] [HACKERS] Defining input function for new datatype

2011-04-21 Thread Pavel Stehule
Hello

2011/4/21 Nick Raj nickrajj...@gmail.com:
 Hi,
 I am defining a new data type called mpoint
 i.e.
 typedef struct mpoint
 {
     Point p;
     Timestamp t;
 } mpoint;

 For defining input/output function

 1 Datum mpoint_in(PG_FUNCTION_ARGS)
 2 {
 3
 4    mpoint *result;
 5    char *pnt=(char *)malloc (sizeof (20));
 6    char *ts=(char *)malloc (sizeof (20));
 7    result= (mpoint *) palloc(sizeof(mpoint));
 8    char *st = PG_GETARG_CSTRING(0);
 9    mpoint_decode(st,pnt,ts);
 // st breaks down into pnt that corresponds to Point and ts corresponds to
 Timestamp
 10
 11  result->p = point_in(PointerGetDatum(pnt));    //
 point_in (input function for point that assigns x, y into point)
 12  result->t = timestamp_in(PointerGetDatum(ts)); // similar
 for timestamp
 13
 14  PG_RETURN_MPOINT_P(result);
 15   }

 line no 11 warning: passing argument 1 of ‘point_in’ makes pointer from
 integer without a cast
  ../../../include/utils/geo_decls.h:191: note: expected
 ‘FunctionCallInfo’ but argument is of type ‘unsigned int’
 line no 11 error: incompatible types when assigning to type ‘Point’ from
 type ‘Datum’
 line no 12 warning: passing argument 1 of ‘timestamp_in’ makes pointer from
 integer without a cast
  ../../../include/utils/timestamp.h:205: note: expected
 ‘FunctionCallInfo’ but argument is of type ‘unsigned int’


You are missing some important header files.

 Can anybody figure out what kind of mistake i am doing?
 Also, why it got related to 'FunctionCallInfo' ?

See the definition of the PG_FUNCTION_ARGS macro.
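As an uncompiled sketch of what the compiler is pointing at: point_in and timestamp_in follow the PG_FUNCTION_ARGS convention, so they must be invoked through DirectFunctionCallN and their Datum results converted back with DatumGet* macros (mpoint_decode and the buffer handling are the poster's own, and this fragment assumes the usual PostgreSQL extension headers):

```c
#include "postgres.h"
#include "fmgr.h"
#include "utils/geo_decls.h"
#include "utils/timestamp.h"

Datum
mpoint_in(PG_FUNCTION_ARGS)
{
    char   *st = PG_GETARG_CSTRING(0);
    char    pnt[32];                      /* palloc'd buffers would also work */
    char    ts[32];
    mpoint *result = (mpoint *) palloc(sizeof(mpoint));

    mpoint_decode(st, pnt, ts);           /* poster's own helper */

    /* point_in takes a single cstring argument */
    result->p = *DatumGetPointP(
        DirectFunctionCall1(point_in, CStringGetDatum(pnt)));

    /* timestamp_in expects (cstring, typioparam oid, typmod) */
    result->t = DatumGetTimestamp(
        DirectFunctionCall3(timestamp_in, CStringGetDatum(ts),
                            ObjectIdGetDatum(InvalidOid),
                            Int32GetDatum(-1)));

    PG_RETURN_POINTER(result);
}
```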

Regards

Pavel Stehule


 Thanks
 Nick




Re: [GENERAL] Poor performance of btrfs with Postgresql

2011-04-21 Thread Andres Freund
On Thursday, April 21, 2011 12:16:04 PM Henry C. wrote:
  I've done some testing of PostgreSQL on different filesystems, and with
  different filesystem mount options.
 
 Since Pg is already journalling, why bother duplicating (and pay the
 performance penalty, whatever that penalty may be) the effort for no real
 gain (except maybe a redundant sense of safety)?  ie, use a
 non-journalling battle-tested fs like ext2.
Don't. The fsck on reboot will eat way too much time.

Using metadata-only journaling is ok though. In my opinion the problem with 
btrfs is more the overhead of COW, but that's an impression from several kernel 
versions ago, so...

Andres



Re: [GENERAL] Which version of postgresql supports replication on RHEL6?

2011-04-21 Thread Tiruvenkatasamy Baskaran
Hi Vibhor Kumar,

 We downloaded postgresql-9.0.4.tar source from postgresql.org.

We installed postgresql db from source on machine1(master)  and machine2(slave).

RHEL version : 2.6.32-71.el6.x86_64

In order to enable replication between master and slave, we did necessary 
configuration
changes on master DB then started the master DB.

Also did necessary configuration changes on slave DB then started the slave DB.

Got the following messages in log file on the master DB.
LOG:  database system was shut down at 2011-04-01 14:27:37 IST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  replication connection authorized: user=postgres host=10.128.16.52 
port=52324
cp:   cannot stat `/usr/local/pgsql/data/pg_xlogarch/00010001': 
No such file or directory
LOG:  archive command failed with exit code 1
Got the following messages in log file on the slave DB.
LOG:  database system was interrupted; last known up at 2011-03-24 12:29:15 IST
LOG:  entering standby mode
cp: cannot stat `/usr/local/pgsql/data/pg_xlogarch/00010002': 
No such file or directory
LOG:  redo starts at 0/220
LOG:  record with zero length at 0/2B0
cp: cannot stat `/usr/local/pgsql/data/pg_xlogarch/00010002': 
No such file or directory
LOG:  streaming replication successfully connected to primary
FATAL:  the database system is starting up
FATAL:  the database system is starting up
Master DB is running and able to query table. But slave DB is not running and 
not able to query table.

Could you tell me why slave DB is not running?
If you need more details on how we configured master and slave DB in 
replication mode , we will provide.

Regards,
Tiru


-Original Message-
From: Vibhor Kumar [mailto:vibhor.ku...@enterprisedb.com]
Sent: Thursday, April 21, 2011 5:17 PM
To: Tiruvenkatasamy Baskaran
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Which version of postgresql supports replication on 
RHEL6?


On Apr 21, 2011, at 4:42 PM, Tiruvenkatasamy Baskaran wrote:

 Which version of postgresql supports replication on RHEL6?
 RHEL version : 2.6.32-71.el6.x86_64

If you are talking about the built-in replication, that is available from PG 9.0 
onwards. Otherwise the Slony-I replication tool is available for replicating a 
PG database.

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com



  


Re: [GENERAL] Which version of postgresql supports replication on RHEL6?

2011-04-21 Thread Vibhor Kumar

On Apr 21, 2011, at 6:35 PM, Tiruvenkatasamy Baskaran wrote:
 
 Got the following messages in log file on the master DB.
 LOG:  database system was shut down at 2011-04-01 14:27:37 IST
 LOG:  database system is ready to accept connections
 LOG:  autovacuum launcher started
 LOG: replication connection authorized: user=postgres host=10.128.16.52 
 port=52324
 cp:   cannot stat 
 `/usr/local/pgsql/data/pg_xlogarch/00010001': No such file or 
 directory
 LOG:  archive command failed with exit code 1
 Got the following messages in log file on the slave DB.
 LOG:  database system was interrupted; last known up at 2011-03-24 12:29:15 
 IST
 LOG:  entering standby mode
 cp: cannot stat `/usr/local/pgsql/data/pg_xlogarch/00010002': 
 No such file or directory
 LOG:  redo starts at 0/220
 LOG:  record with zero length at 0/2B0
 cp: cannot stat `/usr/local/pgsql/data/pg_xlogarch/00010002': 
 No such file or directory
 LOG:  streaming replication successfully connected to primary
 FATAL:  the database system is starting up
 FATAL:  the database system is starting up
 Master DB is running and able to query table. But slave DB is not running and 
 not able to query table.

Please confirm whether you made the slave DB from a PITR/hot backup of the 
master. Also, please let me know the steps you followed.
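For reference, a 9.0-era hot (base) backup of the master is usually taken along these lines (paths and hostnames are placeholders; this is a sketch, not the poster's actual procedure):

```
psql -c "SELECT pg_start_backup('base_backup')"
rsync -a --exclude pg_xlog /usr/local/pgsql/data/ slave:/usr/local/pgsql/data/
psql -c "SELECT pg_stop_backup()"
# then create recovery.conf on the slave and start it
```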

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com




[GENERAL] Which version of postgresql supports replication on RHEL6?

2011-04-21 Thread Tiruvenkatasamy Baskaran
Hi,
Which version of postgresql supports replication on RHEL6?
RHEL version : 2.6.32-71.el6.x86_64
Regards,
Tiru




Re: [GENERAL] Which version of postgresql supports replication on RHEL6?

2011-04-21 Thread Vibhor Kumar

On Apr 21, 2011, at 4:23 PM, Tiruvenkatasamy Baskaran wrote:

  Which version of postgresql supports replication on RHEL6?
 RHEL version : 2.6.32-71.el6.x86_64

Why are you re-posting your question, if it has been answered?

Thanks & Regards,
Vibhor Kumar
Blog:http://vibhork.blogspot.com




Re: [GENERAL] Different views of remote server

2011-04-21 Thread Adrian Klaver
On Thursday, April 21, 2011 5:20:13 am Bob Pawley wrote:
 Hi Adrian
 
 I looked at the table in JEdit. The binary strings for the missing fields
 are considerably longer than the others.
 
 What limiting factor have I probably exceeded??

Per Scott's post, are you sure you are only looking at one database? You mention 
verifying with a server display app(?), but you might want to do as Scott 
suggested and look at the connection strings. Also, from my previous post, what 
is the data? And what query are you running?

 
 Bob
 

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Trouble loading Perl modules from postgresql.conf

2011-04-21 Thread Chris Greenhill
Just an update for those interested. I found an insecure workaround for
pre-loading any modules I may need by editing the sitecustomize.pl file,
essentially adding any use's and requires I need. As I said, probably not
secure, since I hear there have been issues with sitecustomize.pl.

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Chris Greenhill
Sent: Wednesday, April 20, 2011 12:09 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Trouble loading Perl modules from postgresql.conf

 

Hello,

 

I'm having difficulty pre-loading Perl modules and my own libraries. When I
edit something like this into the postgresql.conf:

 

custom_variable_classes = 'plperl'

plperl.on_init = 'use MyModule;'

 

and restart the server it doesn't seem to load the modules (they don't
appear in %INC and my functions error out). Is there something else I need
to do? I'm using server 9.0.4 on a WinXP system with Perl 5.10. 

 

Thanks for any help

-Chris

 

 



Re: [GENERAL] Defining input function for new datatype

2011-04-21 Thread Tom Lane
Nick Raj nickrajj...@gmail.com writes:
 1 Datum mpoint_in(PG_FUNCTION_ARGS)
 2 {
 3
 4mpoint *result;
 5char *pnt=(char *)malloc (sizeof (20));
 6char *ts=(char *)malloc (sizeof (20));

(1) You should *not* use malloc here.  There is seldom any reason to use
malloc directly at all in functions coded for Postgres.  Use palloc,
or expect memory leaks.

(2) sizeof(20) almost certainly doesn't mean what you want.  It's most
likely 4 ...

 11  result->p = point_in(PointerGetDatum(pnt));//
 point_in (input function for point that assigns x, y into point)

You need to use DirectFunctionCallN when trying to call a function that
obeys the PG_FUNCTION_ARGS convention, as point_in does.  And the result
is a Datum, which means you're going to need to apply a DatumGetWhatever
macro to get a bare Point or Timestamp from these functions.

Look around in the PG sources for examples.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-21 Thread Greg Smith

On 04/20/2011 01:50 AM, Toby Corkindale wrote:
Also, the number of erase cycles you can get, over the whole disk, is 
quite large on modern disks!


So large that you'll probably go decades before you wear the disk out, 
even with continual writes.


Don't buy into the SSD FUD myths..


There is no FUD being spread here.  Particularly given the PostgreSQL 
WAL write pattern, it's not impossible to wear out a SSD placed there in 
a small number of years.  A system with a trivial but not completely 
idle workload will generate one 16MB WAL segment every 5 minutes, which 
works out to 4.5GB/day of writes.  That's the baseline--the reality is 
much, much higher than that on most systems.  The fact that every row 
update can temporarily use more than 8K means that actual write 
throughput on the WAL can be shockingly large.  The smallest customer I 
work with regularly has a 50GB database, yet they write 20GB of WAL 
every day.  You can imagine how much WAL is generated daily on systems 
with terabyte databases.
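
The arithmetic behind that baseline is easy to check (a quick sketch, assuming the default 16MB WAL segment size and one forced segment switch every 5 minutes, as stated above):

```python
SEGMENT_MB = 16                    # default WAL segment size
SWITCHES_PER_DAY = 24 * 60 // 5    # one segment switch every 5 minutes

daily_gb = SEGMENT_MB * SWITCHES_PER_DAY / 1024
print(daily_gb)  # GB/day of WAL on a nearly idle system
```

That works out to 4.5GB/day before the workload writes anything substantial.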


As for what this translates into in the real world, go read 
http://archives.postgresql.org/message-id/BANLkTi=GsyBfq+ApWPR_qCA7AN+NqT=z...@mail.gmail.com 
as one worked out sample.  Anyone deploying PostgreSQL onto MLC can't 
necessarily ignore this issue.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem with parent/child table and FKs

2011-04-21 Thread Adrian Klaver
On Thursday, April 21, 2011 4:36:51 am Karsten Hilbert wrote:
 Does anyone have any suggestions regarding the below ?

The only thing I can come up with is to eliminate the FK:
 fk_code integer not null
   references code_root(pk_code_root)
   on update restrict
   on delete restrict
on lnk_codes2epi and replace it with a trigger that essentially does the same 
thing; check for the presence of pk_code_root.

 
 Thanks,
 Karsten
 


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: [ADMIN] How to uninstall PostgreSQL 8.4 on both Windows XP and Windows 7

2011-04-21 Thread Kevin Grittner
Mlondolozi Ncapayi mlon...@gmail.com wrote:
 
 I installed PostgreSql 8.4 and now I  want to delete/ uninstall it
 completely to start a new fresh installation.
 Can you please give me clear instructions on how to do that or
 maybe a script that I can run.
 
That's going to depend entirely on how you installed it, which is
something you didn't tell us.  Did you build from source, use a
one-click installer, or something else?  Details like exact version
number, the URL from which you got the software, etc., would help.
 
-Kevin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-21 Thread Florian Weimer
* Greg Smith:

 The fact that every row update can temporarily use more than 8K means
 that actual write throughput on the WAL can be shockingly large.  The
 smallest customer I work with regularly has a 50GB database, yet they
 write 20GB of WAL every day.  You can imagine how much WAL is
 generated daily on systems with terabyte databases.

Interesting.  Is there an easy way to monitor WAL traffic in a way?  It
does not have to be fine-grained, but it might be helpful to know if
we're doing 10 GB, 100 GB or 1 TB of WAL traffic on a particular
database, should the question of SSDs ever come up.

-- 
Florian Weimerfwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-21 Thread Adrian Klaver
On Thursday, April 21, 2011 8:33:45 am Florian Weimer wrote:
 * Greg Smith:
  The fact that every row update can temporarily use more than 8K means
  that actual write throughput on the WAL can be shockingly large.  The
  smallest customer I work with regularly has a 50GB database, yet they
  write 20GB of WAL every day.  You can imagine how much WAL is
  generated daily on systems with terabyte databases.
 
 Interesting.  Is there an easy way to monitor WAL traffic in a way?  It
 does not have to be fine-grained, but it might be helpful to know if
 we're doing 10 GB, 100 GB or 1 TB of WAL traffic on a particular
 database, should the question of SSDs ever come up.

They are found in $DATA/pg_xlog, so checking the size of that directory 
regularly would get you the information.


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-21 Thread Florian Weimer
* Adrian Klaver:

  Interesting.  Is there an easy way to monitor WAL traffic in a way?  It
  does not have to be fine-grained, but it might be helpful to know if
 we're doing 10 GB, 100 GB or 1 TB of WAL traffic on a particular
 database, should the question of SSDs ever come up.

 They are found in $DATA/pg_xlog so checking the size of that
 directory regularly would get you the information.

But log files are recycled, so looking at the directory alone does not
seem particularly helpful.

-- 
Florian Weimerfwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to realize ROW_NUMBER() in 8.3?

2011-04-21 Thread David Fetter
On Wed, Apr 20, 2011 at 11:51:25AM -0400, Emi Lu wrote:
 Hello,
 
 ROW_NUMBER() is only ready in 8.4. For 8.3, is there a simple way to
 get row_number
 
 select row_number(), col1, col2...
 FROM   tableName
 
 Thanks a lot!
 丁叶

Your best bet is to upgrade to a modern version of PostgreSQL.  While
you will of course need to do tests with your applications, 9.0 has no
significant backward-incompatibility with 8.3.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-21 Thread Scott Ribe
On Apr 21, 2011, at 9:44 AM, Florian Weimer wrote:

 But log files are recycled, so looking at the directory alone does not
 seem particularly helpful.

You have to look at the file timestamps. From that you can get an idea of 
traffic.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-21 Thread Greg Smith

On 04/21/2011 11:33 AM, Florian Weimer wrote:

Is there an easy way to monitor WAL traffic in a way? It
does not have to be fine-grained, but it might be helpful to know if
we're doing 10 GB, 100 GB or 1 TB of WAL traffic on a particular
database, should the question of SSDs ever come up.


You can use functions like pg_current_xlog_location() : 
http://www.postgresql.org/docs/9.0/interactive/functions-admin.html


Save a copy of this periodically:

select now(),pg_current_xlog_location();

And you can see WAL volume over time given any two points from that set 
of samples.


To convert the internal numbers returned by that into bytes, you'll need 
to do some math on them.  Examples showing how that works and code in a 
few languages:


http://archives.postgresql.org/pgsql-general/2010-10/msg00077.php (by hand)
http://munin-monitoring.org/browser/trunk/plugins/node.d/postgres_streaming_.in?rev=3905 
(in Perl)

http://archives.postgresql.org/pgsql-general/2010-10/msg00079.php (in C)
http://postgresql.1045698.n5.nabble.com/How-can-we-tell-how-far-behind-the-standby-is-td3252297.html 
(in bash with bc(!), other links)


What I keep meaning to write is something that does that as part of the 
SQL itself, so it gets pulled out of the database already in bytes.
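
For reference, the conversion those links work through can be sketched in a few lines. This assumes the pre-9.3 on-disk layout (each 32-bit xlogid covers 0xFF000000 bytes); the constant differs on later versions:

```python
XLOG_BYTES_PER_ID = 0xFF000000  # bytes covered by one xlogid (pre-9.3 layout)

def lsn_to_bytes(lsn: str) -> int:
    """Convert a pg_current_xlog_location() value such as '2/13000140'
    into an absolute byte position in the WAL stream."""
    xlogid, offset = (int(part, 16) for part in lsn.split("/"))
    return xlogid * XLOG_BYTES_PER_ID + offset

def wal_bytes_written(older: str, newer: str) -> int:
    """WAL volume between two sampled locations."""
    return lsn_to_bytes(newer) - lsn_to_bytes(older)
```

Sampling `select now(), pg_current_xlog_location();` on a schedule and feeding consecutive pairs through `wal_bytes_written` gives WAL volume over time without looking at pg_xlog at all.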


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-21 Thread Tom Lane
Florian Weimer fwei...@bfk.de writes:
 * Adrian Klaver:
 Interesting.  Is there an easy way to monitor WAL traffic in a way?

 They are found in $DATA/pg_xlog so checking the size of that
 directory regularly would get you the information.

 But log files are recycled, so looking at the directory alone does not
 seem particularly helpful.

du would be useless, but you could check the name of the newest WAL
segment file from time to time, and do a bit of math to see how much
WAL had been written since the previous time.
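
That "bit of math" can be sketched as follows, assuming the pre-9.3 naming scheme (24 hex digits: timeline, log, segment; 16 MB segments with 255 usable segments per 4 GB log file):

```python
SEG_BYTES = 16 * 1024 ** 2   # 16 MB per WAL segment (the default)
SEGS_PER_LOG = 0xFF          # pre-9.3: 255 segments per log file

def segment_ordinal(filename: str) -> int:
    """Linear position of a WAL segment file name such as
    '000000010000000200000003' (the timeline prefix is ignored)."""
    log = int(filename[8:16], 16)
    seg = int(filename[16:24], 16)
    return log * SEGS_PER_LOG + seg

def wal_bytes_since(older: str, newer: str) -> int:
    """Approximate WAL written between two observed newest-segment names."""
    return (segment_ordinal(newer) - segment_ordinal(older)) * SEG_BYTES
```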

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Different views of remote server

2011-04-21 Thread Bob Pawley

Hi Scott

According to NSAuditor(www.nsauditor.com) there is only one server with port 
5432.


When I enter information into the remote database it shows up on the same 
database that has this problem.


How do I determine my 'connection credentials'?

Bob

-Original Message- 
From: Scott Marlowe

Sent: Wednesday, April 20, 2011 7:38 PM
To: Bob Pawley
Cc: Postgresql
Subject: Re: [GENERAL] Different views of remote server

On Wed, Apr 20, 2011 at 1:09 PM, Bob Pawley rjpaw...@shaw.ca wrote:

Hi

This is probably going to turn out to be me doing something stupid, but-

I have two computers, one of which I use as a remote server for my 
database.


When I connect to the remote database through my interface there are 
errors

that suggest a problem with the data in one of the tables.

Using PgAdmin to view the remote table there is indeed some information
missing (5 out of 16 geoms).

When I use the PGAdmin on the remote computer, using a local connection,
this information is not missing it is intact.

I don’t think it is a PgAdmin problem because when I do a ‘select the_geom’
as a remote query the information is missing, whereas it is not missing in
PgAdmin on the other computer as a local query.

Also connecting to the remote DB using Quantum GIS shows the “missing”
information as being present and normal.

I have also checked with a server display app and the appropriate number 
of

servers are present (only one present on the remote computer).

I’ve run out of ideas - Would anyone have any thoughts of what might be
going on???


You're likely connecting to a different database than you think you
are.  What do your connection credentials look like in each case?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help - corruption issue?

2011-04-21 Thread Tomas Vondra
On 21.4.2011 07:16, Phoenix Kiula wrote:
 Tomas,
 
 I did a crash log with the strace for PID of the index command as you
 suggested.
 
 Here's the output:
 http://www.heypasteit.com/clip/WNR
 
 Also including below, but because this will wrap etc, you can look at
 the link above.
 
 Thanks for any ideas or pointers!
 
 
 
 Process 15900 attached - interrupt to quit

Nope, that's the psql process - you need to attach to the backend
process that's created to handle the connection. Whenever you create a
connection (from a psql), a new backend process is forked to handle that
single connection - this is the process you need to strace.

You can either see that in 'ps ax' (the PID is usually +1 with respect
to the psql process), or you can do this

  SELECT pg_backend_pid();

as that will give you PID of the backend for the current connection.

regards
Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to realize ROW_NUMBER() in 8.3?

2011-04-21 Thread raghu ram
On Thu, Apr 21, 2011 at 9:19 PM, David Fetter da...@fetter.org wrote:

 On Wed, Apr 20, 2011 at 11:51:25AM -0400, Emi Lu wrote:
  Hello,
 
  ROW_NUMBER() is only ready in 8.4. For 8.3, is there a simple way to
  get row_number
 
  select row_number(), col1, col2...
  FROM   tableName
 
  Thanks a lot!
  丁叶

 Your best bet is to upgrade to a modern version of PostgreSQL.  While
 you will of course need to do tests with your applications, 9.0 has no
 significant backward-incompatibility with 8.3.




Hmm, PostgreSQL 8.3 does not support the ROWNUM feature; however, a possible
workaround can be achieved by using the LIMIT and OFFSET options.


psql=# SELECT empno FROM emp LIMIT 10;

The above query will display the first 10 records.


You can also use the (auto incrementing) SERIAL data type as a ROWNUM column
to simulate the ROWNUM feature.


Something like this...


psql=# create table rownumtest(rownum SERIAL, val1 varchar, val2 int4);

psql=# insert into rownumtest(val1,val2) values('abc', '1');

psql=# insert into rownumtest(val1,val2) values('def', '2');

psql=# insert into rownumtest(val1,val2) values('ghi', '3');

psql=# insert into rownumtest(val1,val2) values('jkl', '4');

psql=# select * from rownumtest;

 rownum | val1 | val2
--------+------+------
      1 | abc  |    1
      2 | def  |    2
      3 | ghi  |    3
      4 | jkl  |    4


Hope this helps



--Raghu Ram


Re: [GENERAL] problem with parent/child table and FKs

2011-04-21 Thread Alban Hertroys
On 18 Apr 2011, at 15:53, Karsten Hilbert wrote:

 What is the suggested approach for this situation ? (there
 will be more tables like icd10 holding other coding
 systems of fairly diverse nature but all of them sharing
 .code and .term: LOINC, ATC, ICPC-2, ICD-9, ...).

I think your best bet is to not rely on inheritance here. If you instead 
reference code_root from icd10 using (code, term), you end up with the same 
results, while you then only need to reference code_root from your 
lnk_codes2epi table.

 begin;
 
 create table code_root (
   pk_code_root serial primary key,
   code text not null,
   term text not null
 );
 
 create table icd10 (
   pk serial primary key,
   version text not null
 ) inherits (code_root);

So this would become:

create table code_root (
pk_code_root serial primary key,
code text not null,
term text not null,
UNIQUE (code, term)
);

create table icd10 (
pk serial primary key,
code text not null,
term text not null,
version text not null,
FOREIGN KEY (code, term) REFERENCES code_root (code, term)
);

This does rely on the combination of (code, term) being unique in code_root. If 
it's not, you would need an extra table with just every unique combination of 
(code, term) that both code_root and icd10 would reference.

BTW, do you really need those artificial PK's? If not, you may well be better 
off dropping them. That way (code, term) could be your PK instead. I don't know 
enough about your data to make more than a guess though, I just get itchy when 
I see such designs ;)


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to realize ROW_NUMBER() in 8.3?

2011-04-21 Thread hubert depesz lubaczewski
On Wed, Apr 20, 2011 at 09:27:18PM +0530, raghu ram wrote:
 On Wed, Apr 20, 2011 at 9:21 PM, Emi Lu em...@encs.concordia.ca wrote:
 
  Hello,
 
  ROW_NUMBER() is only ready in 8.4. For 8.3, is there a simple way to get
  row_number
 
  select row_number(), col1, col2...
  FROM   tableName
 
 
 
The link below demonstrates ROW_NUMBER features in pre-8.4 PostgreSQL:
 
 http://www.postgresonline.com/journal/archives/79-Simulating-Row-Number-in-PostgreSQL-Pre-8.4.html

another approach:

http://www.depesz.com/index.php/2007/08/17/rownum-anyone-cumulative-sum-in-one-query/

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem with parent/child table and FKs

2011-04-21 Thread Karsten Hilbert
On Thu, Apr 21, 2011 at 07:53:04AM -0700, Adrian Klaver wrote:

 On Thursday, April 21, 2011 4:36:51 am Karsten Hilbert wrote:
  Does anyone have any suggestions regarding the below ?
 
 The only thing I can come up with is to eliminate the FK :
  fk_code integer not null
references code_root(pk_code_root)
on update restrict
on delete restrict
 on lnk_codes2epi and replace it with a trigger that essentially does the same 
 thing; check for the presence of pk_code_root.

I feared as much. I hoped to get around that somehow but
what you suggest doesn't sound half bad, actually.

I had been thinking to do the typical master-detail tables
for the coding systems instead of the inheritance:


table generic_code
pk serial primary key
code
term
system

table icd10
pk serial primary key
fk_generic_code
references generic_code(pk)
icd10_extra_field

table icd9
pk serial primary key
fk_generic_code
references generic_code(pk)
icd9_extra_field

table disease
pk serial primary key
description

table lnk_code2disease
pk serial primary key
fk_generic_code
references generic_code(pk)
fk_disease
references disease(pk)


But then I'd have to write even more triggers making sure
that rows in, say, the icd10 table don't link to rows in the
generic_code table whose .system <> 'icd10'.

I can't put the .system column into the icd10/icd9/...
tables either because I need that column in generic_code to
ensure:

unique(code, system)

Sheesh :-)

I suppose my underlying problem is that PostgreSQL's
inheritance is not intended to support polymorphism which is
what I seem to be trying to do - link diseases to
polymorphic code tables.

I can't easily think of a better relational solution,
though. The real world requirement for polymorphism is
surely there.

I guess I'll go with your solution unless someone comes up
with a better idea yet.

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Poor performance of btrfs with Postgresql

2011-04-21 Thread Greg Smith

On 04/21/2011 06:16 AM, Henry C. wrote:

Since Pg is already journalling, why bother duplicating (and pay the
performance penalty, whatever that penalty may be) the effort for no real
gain (except maybe a redundant sense of safety)?  ie, use a
non-journalling battle-tested fs like ext2.
   


The first time your server is down and unreachable over the network 
after a crash, because it's run fsck to recover, failed to execute 
automatically, and now requires manual intervention before the system 
will finish booting, you'll never make that mistake again.  On real 
database workloads, there's really minimal improvement to gain for that 
risk--and sometimes actually a drop in performance--using ext2 over a 
properly configured ext3.  If you want to loosen the filesystem journal 
requirements on a PostgreSQL-only volume, use data=writeback on ext3.  
And I'd still expect ext4/XFS to beat any ext2/ext3 combination you can 
come up with, performance-wise.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem with parent/child table and FKs

2011-04-21 Thread Karsten Hilbert
On Thu, Apr 21, 2011 at 07:15:38PM +0200, Alban Hertroys wrote:

 BTW, do you really need those artificial PK's? If not, you
 may well be better off dropping them. That way (code, term)
 could be your PK instead. I don't know enough about your
 data to make more than a guess though, I just get itchy when
 I see such designs ;)

Well, coding system content changes at the whims of
governments, professional boards, the winds of change, and
the hair color of the director of WHO...

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] getting EXPLAIN output from inside a function

2011-04-21 Thread Joseph S
I saw this in the mailing list archives without an answer, so for future
reference:

DECLARE
...
   line TEXT;
BEGIN
...
  FOR line IN EXECUTE ''EXPLAIN ANALYZE statement goes here'' LOOP
   RAISE NOTICE ''%'', line;
  END LOOP;

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-21 Thread Scott Marlowe
On Thu, Apr 21, 2011 at 11:22 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Florian Weimer fwei...@bfk.de writes:
 * Adrian Klaver:
  Interesting.  Is there an easy way to monitor WAL traffic in a way?

 They are found in $DATA/pg_xlog so checking the size of that
 directory regularly would get you the information.

 But log files are recycled, so looking at the directory alone does not
 seem particularly helpful.

 du would be useless, but you could check the name of the newest WAL
 segment file from time to time, and do a bit of math to see how much
 WAL had been written since the previous time.

I'd think using the sysstat package's sar is the way to see how much work
your drives are doing, assuming the sysstat package / daemon is set
to monitor disk block activity.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Poor performance of btrfs with Postgresql

2011-04-21 Thread Greg Smith

On 04/21/2011 02:22 AM, Toby Corkindale wrote:
I also tested btrfs, and was disappointed to see it performed 
*dreadfully* - even with the recommended options for database loads.


Best TPS I could get out of ext4 on the test machine was 2392 TPS, but 
btrfs gave me just 69! This is appalling performance. (And that was 
with nodatacow and noatime set)


I don't run database performance tests until I've tested the performance 
of the system doing fsync calls, what I call its raw commit rate.  
That's how fast a single committing process will be able to execute 
individual database INSERT statements, for example.  Whether or not 
barriers are turned on is the biggest impact on that, and from 
what you're describing it sounds like the main issue here is that you 
weren't able to get btrfs+nobarrier performing as expected.


If you grab 
http://projects.2ndquadrant.it/sites/default/files/bottom-up-benchmarking.pdf 
page 26 will show you how to measure fsync rate directly using 
sysbench.  Other slides cover how to get sysbench working right, you'll 
need to get a development snapshot to compile on your Ubuntu system.
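
If sysbench isn't handy, a rough version of that fsync-rate test is easy to improvise. This is a sketch, not a substitute for the real benchmark; the temp file must live on the filesystem you want to measure:

```python
import os
import tempfile
import time

def fsync_rate(n: int = 100, write_size: int = 8192) -> float:
    """Roughly measure fsync calls per second: write a block,
    fsync it, and repeat, timing the whole loop."""
    fd, path = tempfile.mkstemp()
    try:
        block = b"\0" * write_size
        start = time.perf_counter()
        for _ in range(n):
            os.write(fd, block)
            os.fsync(fd)
        elapsed = time.perf_counter() - start
        return n / elapsed
    finally:
        os.close(fd)
        os.unlink(path)

print(f"{fsync_rate():.0f} fsyncs/sec")
```

On a drive honoring barriers this lands near the disk's rotational rate; numbers far above that usually mean a write cache is lying about durability.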


General fsync issues around btrfs are still plentiful it seems.  
Installing packages with dpkg sometimes does that (I haven't been 
following exactly which versions of Ubuntu do and don't fsync), so there 
are bug reports like 
https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/570805 and 
https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/607632


One interesting thing from there is an idea I'd never thought of:  you 
can link in an alternate system library that just ignore fsync if you 
want to test turning it off above the filesystem level.  Someone has 
released a package to do just that, libeatmydata:  
http://www.flamingspork.com/projects/libeatmydata/


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Column storage (EXTERNAL/EXTENDED) settings for bytea/text column

2011-04-21 Thread Noah Misch
On Mon, Apr 11, 2011 at 03:19:23PM -0700, Joel Stevenson wrote:
 I'm trying to do some comparisons between the EXTERNAL and the EXTENDED 
 storage methods on a bytea column and from the outside the setting doesn't 
 appear to affect the value stored on initial insert, but perhaps I'm looking 
 at the wrong numbers.  If I create two new tables with a single bytea column 
 and set one of them to external storage, then insert an existing bytea value 
 from another table into each one, they appear to be of exactly the same size. 
  This is using PG 9.0.3 on Debian Lenny, using the backports-sloppy deb 
 package of PG 9.
 
 (I've verified that the first table has extended storage via pg_attribute 
 and that the second table has external.)
 
 create table obj1 ( object bytea );
 create table obj2 ( object bytea );
 alter table obj2 alter column object set storage external;
 insert into obj1 ( object ) select object from serialized_content where id = 
 12345;
 insert into obj2 ( object ) select object from obj1;

If the value that shows up for insertion is already compressed, EXTERNAL storage
will not decompress it.  Change this line to

  insert into obj2 ( object ) select object || '' from obj1;

to observe the effect you seek.

Given the purpose of EXTERNAL storage, this might qualify as a bug.

 select pg_total_relation_size('obj1') as o1, pg_total_relation_size( (select 
 reltoastrelid from pg_class where relname = 'obj1' ) ) as otoast1, 
 pg_total_relation_size('obj2') as o2, pg_total_relation_size( (select 
 reltoastrelid from pg_class where relname = 'obj2' ) ) as otoast2;
   o1   | otoast1 |  o2   | otoast2
  ------+---------+-------+---------
  65536 |   57344 | 65536 |   57344

 Can I use the relation size like this to determine whether or not compression 
 is happening for these toast columns?  If not, is there a way that I can 
 confirm that it is or isn't active?  The results appear to be similar for 
 text columns.

Yes; the sizes you're seeing through that method should be accurate.

nm

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Different views of remote server

2011-04-21 Thread Adrian Klaver
On Thursday, April 21, 2011 9:24:57 am Bob Pawley wrote:
 Hi Scott
 
 According to NSAuditor(www.nsauditor.com) there is only one server with
 port 5432.
 
 When I enter information into the remote database it shows up on the same
 database that has this problem.
 
 How do I determine my 'connection credentials'?

In pgAdmin they will be in the server properties. Look to see if you are 
connecting to a port other than 5432.

 
 Bob
 


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Poor performance of btrfs with Postgresql

2011-04-21 Thread mark


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Toby Corkindale
 Sent: Thursday, April 21, 2011 12:22 AM
 To: luv-main; pgsql-general@postgresql.org
 Subject: [GENERAL] Poor performance of btrfs with Postgresql
 
 I've done some testing of PostgreSQL on different filesystems, and with
 different filesystem mount options.
 

{snip}

 
 I'm curious to know if anyone can spot anything wrong with my testing?


 
{snip}


 (Tested on Ubuntu Server - Maverick - Kernel 2.6.35-28)


Don't take this the wrong way - I applaud you asking for feedback. BTW,
have you seen Greg Smith's PG 9.0 High Performance book? It's got some
chapters dedicated to benchmarking.

Do you have a battery-backed write cache and a 'real' hardware RAID card?
Not sure why you're testing with RAID 0, but that is just me.

You also did not provide enough other details for it to be of interest to
many other people as a good data point. If you left everything else at the
defaults then you might just mention that.

Did you play with readahead?
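For the archives, a sketch of checking and raising readahead with blockdev (device name is an example; values are in 512-byte sectors):

```shell
# Show the current readahead, then raise it to 2 MiB (4096 sectors)
# on the volume holding the Postgres data directory.
blockdev --getra /dev/sda
blockdev --setra 4096 /dev/sda
```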


XFS mount options I have used a time or two... for some of our gear at work:

rw,noatime,nodiratime,logbufs=8,inode64,allocsize=16m   

How was the RAID configured? Did you do stripe/block alignment? It might not
make a noticeable difference, but if one is serious maybe it is a good habit
to get into. I haven't done as much tuning work as I should with XFS, but a
primer can be found at:
http://oss.sgi.com/projects/xfs/training/xfs_slides_04_mkfs.pdf
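For illustration, the shape of an aligned mkfs (the numbers are made up: a 4-disk RAID 0 with a 256 KiB chunk; su is the stripe unit, sw the number of data disks):

```shell
# Align XFS allocation to the RAID geometry at mkfs time.
mkfs.xfs -d su=256k,sw=4 /dev/md0
```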



Getting benchmarks with PG 9 would also be interesting because of the changes
to pgbench between 8.4 and 9.0, although at only about 230 tps I don't know
how much of a difference you will see, since the changes only really show up
when you can sustain a much higher tps rate.
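For reference, the rough shape of a pgbench run (the database name bench is an example; the -T duration flag is available from 8.4 on):

```shell
# Initialize at scale factor 100, then run 8 clients for 10 minutes.
pgbench -i -s 100 bench
pgbench -c 8 -T 600 bench
```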


Knowing the PG config would also be interesting, but with so few disks and
the OS, xlogs, and data all on the same spindles... well, yeah, it's not a
Superdome, but it would still be worth noting on your blog for posterity's sake.




Right now I wish I had a lot of time to dig into different XFS setups on
gear matching some of our production systems - but other projects have me too
busy, and I am having trouble getting our QA people to loan me gear for it.

Heck, I haven't tested ext4 at all to speak of - so shame on me for that.

To loosely quote someone else I saw posting to a different thread a while
back: I would walk through fire for a 10% performance gain. IMO, through
proper testing and benchmarking you can make sure you are not giving up 10%
(or more) performance where you don't have to - no matter what hardware you
are running.


-Mark

 
 Cheers,
 Toby
 
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help - corruption issue?

2011-04-21 Thread Phoenix Kiula
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
 On 21.4.2011 07:16, Phoenix Kiula wrote:
 Tomas,

 I did a crash log with the strace for PID of the index command as you
 suggested.

 Here's the output:
 http://www.heypasteit.com/clip/WNR

 Also including below, but because this will wrap etc, you can look at
 the link above.

 Thanks for any ideas or pointers!



 Process 15900 attached - interrupt to quit

 Nope, that's the psql process - you need to attach to the backend
 process that's created to handle the connection. Whenever you create a
 connection (from a psql), a new backend process is forked to handle that
 single connection - this is the process you need to strace.

 You can either see that in 'ps ax' (the PID is usually +1 with respect
 to the psql process), or you can do this

  SELECT pg_backend_pid();

 as that will give you PID of the backend for the current connection.
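As a concrete sketch of that procedure (the PID shown is only an example):

```shell
# In psql:   SELECT pg_backend_pid();   -- suppose it returns 17161
# Then, as root, attach strace to that backend and re-run the
# failing CREATE INDEX in the same psql session:
strace -o /root/crash.log -p 17161
```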





Thanks. Did that.

The crash.log is a large-ish file, about 24KB. Here's the last 10
lines though. Does this help?



 ~  tail -10 /root/crash.log
read(58, `\1\0\0\230\337\0\343\1\0\0\0P\0T\r\0 \3
\374\236\2\2T\215\312\1\354\235\32\2..., 8192) = 8192
write(97, 213.156.60\0\0 \0\0\0\37\0\364P\3\0\34@\22\0\0\000210,
8192) = 8192
read(58, `\1\0\0\274\362\0\343\1\0\0\0T\0\210\r\0 \3
0\217\352\1\240\236\272\0024\235\322\2..., 8192) = 8192
read(58, [\1\0\0\354)c*\1\0\0\0T\0\214\r\0 \3
\254\236\242\2\340\220\342\2\\\235\232\2..., 8192) = 8192
read(58, \\\1\0\0\200\245\207\32\1\0\0\0\\\0\340\r\0 \3
\237\272\1\304\235\262\2\340\215\322\1..., 8192) = 8192
read(58, \350\0\0\0\274\311x\323\1\0\0\0\\\\r\0 \3
\200\236\372\2(\235\252\2\34\234\22\2..., 8192) = 8192
read(58, ;\1\0\0|#\265\30\1\0\0\0`\0h\r\0 \3
\324\236R\2\314\235\n\2h\215\362\1..., 8192) = 8192
read(58, c\1\0\\24%u\1\0\0\0\230\0\210\r\0 \3
\240\226\32\16\260\235\252\1p\222Z\10..., 8192) = 8192
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
Process 17161 detached



The full crash.log file is here if needed:
https://www.yousendit.com/download/VnBxcmxjNDJlM1JjR0E9PQ

Btw, this happens when I try to create an index on one of the columns
in my table.

Just before this, I had created another index on modify_date (a
timestamp column) and it went fine.

Does that mean anything?

Thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help - corruption issue?

2011-04-21 Thread Phoenix Kiula
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
 On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
 On 21.4.2011 07:16, Phoenix Kiula wrote:
 Tomas,

 I did a crash log with the strace for PID of the index command as you
 suggested.

 Here's the output:
 http://www.heypasteit.com/clip/WNR

 Also including below, but because this will wrap etc, you can look at
 the link above.

 Thanks for any ideas or pointers!



 Process 15900 attached - interrupt to quit

 Nope, that's the psql process - you need to attach to the backend
 process that's created to handle the connection. Whenever you create a
 connection (from a psql), a new backend process is forked to handle that
 single connection - this is the process you need to strace.

 You can either see that in 'ps ax' (the PID is usually +1 with respect
 to the psql process), or you can do this

  SELECT pg_backend_pid();

 as that will give you PID of the backend for the current connection.





 Thanks. Did that.

 The crash.log is a large-ish file, about 24KB. Here's the last 10
 lines though. Does this help?



  ~  tail -10 /root/crash.log
 read(58, `\1\0\0\230\337\0\343\1\0\0\0P\0T\r\0 \3
 \374\236\2\2T\215\312\1\354\235\32\2..., 8192) = 8192
 write(97, 213.156.60\0\0 \0\0\0\37\0\364P\3\0\34@\22\0\0\000210,
 8192) = 8192
 read(58, `\1\0\0\274\362\0\343\1\0\0\0T\0\210\r\0 \3
 0\217\352\1\240\236\272\0024\235\322\2..., 8192) = 8192
 read(58, [\1\0\0\354)c*\1\0\0\0T\0\214\r\0 \3
 \254\236\242\2\340\220\342\2\\\235\232\2..., 8192) = 8192
 read(58, \\\1\0\0\200\245\207\32\1\0\0\0\\\0\340\r\0 \3
 \237\272\1\304\235\262\2\340\215\322\1..., 8192) = 8192
 read(58, \350\0\0\0\274\311x\323\1\0\0\0\\\\r\0 \3
 \200\236\372\2(\235\252\2\34\234\22\2..., 8192) = 8192
 read(58, ;\1\0\0|#\265\30\1\0\0\0`\0h\r\0 \3
 \324\236R\2\314\235\n\2h\215\362\1..., 8192) = 8192
 read(58, c\1\0\\24%u\1\0\0\0\230\0\210\r\0 \3
 \240\226\32\16\260\235\252\1p\222Z\10..., 8192) = 8192
 --- SIGSEGV (Segmentation fault) @ 0 (0) ---
 Process 17161 detached



 The full crash.log file is here if needed:
 https://www.yousendit.com/download/VnBxcmxjNDJlM1JjR0E9PQ

 Btw, this happens when I try to create an index on one of the columns
 in my table.

 Just before this, I had created another index on modify_date  (a
 timestamp column) and it went fine.

 Does that mean anything?

 Thanks




Probably a dumb and ignorant question, but should I be resetting the xlog?
http://postgresql.1045698.n5.nabble.com/SIGSEGV-when-trying-to-start-in-single-user-mode-td1924418.html

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general