[GENERAL] problem connecting oracle and postgres database

2009-02-08 Thread Josh Harrison
Hi all,

I'm trying to create a dbi_link connection between Oracle and PostgreSQL. I installed
all the necessary Perl packages,
and I ran dbi_link.sql, which completed without any errors.

This is the SQL that I use to connect:
postg...@garuda:~$ less /home/postgres/dbi-link-2.0.0/examples/oracle/dola.sql

/*
 * Data source: dbi:Oracle:hr;host=localhost;sid=xe
 * User:            hr
 * Password:        foobar
 * dbh attributes:  {AutoCommit => 1, RaiseError => 1}
 * dbh environment: NULL
 * remote schema:   NULL
 * remote catalog:  NULL
 * local schema:    hr
 */

UPDATE
pg_catalog.pg_settings
SET
setting =
CASE WHEN 'dbi_link' = ANY(string_to_array(setting, ','))
THEN setting
ELSE 'dbi_link,' || setting
END
WHERE
name = 'search_path'
;

SELECT make_accessor_functions(
'dbi:Oracle:database=sample;host=111.11.11.11;sid=xxx;port=1521',
'username',
'password',
'---
AutoCommit: 1
RaiseError: 1
',
NULL,
NULL,
NULL,
'sample'
);

And it executed successfully. It didn't complain about anything, and it created
rules for all the tables in the Oracle database.

\d sample.* gives the list of all the tables and their fields

Now when I try to execute this SQL

 select * from sample.ACCESS_METHOD;
I get

NOTICE:  Setting bail in %_SHARED hash. at line 25.

CONTEXT:  SQL statement SELECT dbi_link.dbi_link_init()
NOTICE:  Setting quote_literal in %_SHARED hash. at line 25.

CONTEXT:  SQL statement SELECT dbi_link.dbi_link_init()
NOTICE:  Setting get_connection_info in %_SHARED hash. at line 25.

CONTEXT:  SQL statement SELECT dbi_link.dbi_link_init()
NOTICE:  Setting quote_ident in %_SHARED hash. at line 25.

CONTEXT:  SQL statement SELECT dbi_link.dbi_link_init()
NOTICE:  Setting get_dbh in %_SHARED hash. at line 25.

CONTEXT:  SQL statement SELECT dbi_link.dbi_link_init()
NOTICE:  Setting remote_exec_dbh in %_SHARED hash. at line 25.

CONTEXT:  SQL statement SELECT dbi_link.dbi_link_init()
NOTICE:  SELECT dbi_link.cache_connection( 1 ) at line 12.

NOTICE:  In cache_connection, there's no shared dbh 1 at line 7.

CONTEXT:  SQL statement SELECT dbi_link.cache_connection( 1 )
NOTICE:  Entering get_connection_info at line 44.

CONTEXT:  SQL statement SELECT dbi_link.cache_connection( 1 )
NOTICE:  ref($args) is HASH
---
data_source_id: 1

CONTEXT:  SQL statement SELECT dbi_link.cache_connection( 1 )
NOTICE:  Leaving get_connection_info at line 75.

CONTEXT:  SQL statement SELECT dbi_link.cache_connection( 1 )
NOTICE:  ---
auth: passwd
data_source: dbi:Oracle:database=sample;host=111.11.11.11;sid=xxx;port=1521
dbh_attributes: |
  ---
  AutoCommit: 1
  RaiseError: 1
local_schema: sample
remote_catalog: ~
remote_schema: ~
user_name: sample

CONTEXT:  SQL statement SELECT dbi_link.cache_connection( 1 )
NOTICE:  In get_dbh, input connection info is
---
auth: passwd
data_source: dbi:Oracle:database=sample;host=111.11.11.11;sid=xxx;port=1521
dbh_attributes: |
  ---
  AutoCommit: 1
  RaiseError: 1
local_schema: sample
remote_catalog: ~
remote_schema: ~
user_name: sample

CONTEXT:  SQL statement SELECT dbi_link.cache_connection( 1 )
ERROR:  error from Perl function remote_select: error from Perl function
cache_connection: DBI
connect('database=sample;host=111.11.11.11;sid=xxx;port=1521','sample',...)
failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var  or PATH
(Windows) and or NLS settings, permissions, etc. at line 137 at line 13.

Can you help? I'm able to connect to the Oracle database through a Perl program,
but I get this error when trying to query through dbi_link.

Thanks
Josh
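(The OCIEnvNlsCreate failure usually means the backend process was started
without Oracle's environment. A hedged sketch of one possible fix, using the
fifth argument of make_accessor_functions, which the example header above
labels "dbh environment"; the YAML mapping format and the path are
assumptions, not something confirmed in this thread:)

SELECT make_accessor_functions(
    'dbi:Oracle:database=sample;host=111.11.11.11;sid=xxx;port=1521',
    'username',
    'password',
    '---
AutoCommit: 1
RaiseError: 1
',
    '---
ORACLE_HOME: /opt/oracle/product/10.2.0
',
    NULL,
    NULL,
    'sample'
);

-- Alternatively, export ORACLE_HOME (and LD_LIBRARY_PATH) in the
-- environment of the user that starts the postmaster, then restart it.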


[GENERAL] dbi_link question -problem connecting to oracle

2009-02-06 Thread Josh Harrison
I'm not sure if this is the right place to ask about dbi_link. If not, please
forward it to the right place.

I'm trying to create a dbi_link connection between Oracle and PostgreSQL. I installed
all the necessary Perl packages,
and I ran dbi_link.sql, which completed without any errors.

This is the SQL that I use to connect:
postg...@garuda:~$ less /home/postgres/dbi-link-2.0.0/examples/oracle/dola.sql
/*
 * Data source: dbi:Oracle:hr;host=localhost;sid=xe
 * User:            hr
 * Password:        foobar
 * dbh attributes:  {AutoCommit => 1, RaiseError => 1}
 * dbh environment: NULL
 * remote schema:   NULL
 * remote catalog:  NULL
 * local schema:    hr
 */

UPDATE
pg_catalog.pg_settings
SET
setting =
CASE WHEN 'dbi_link' = ANY(string_to_array(setting, ','))
THEN setting
ELSE 'dbi_link,' || setting
END
WHERE
name = 'search_path'
;

SELECT make_accessor_functions(
'dbi:Oracle:sample;host=111.11.11.11;sid=xxx;port=1521',
'username',
'password',
'---
AutoCommit: 1
RaiseError: 1
',
NULL,
NULL,
NULL,
'sample'
);

And it executed successfully, I think. It didn't complain about anything and
created rules for all the tables in the Oracle database.

Now when I try to select from the remote table using
select r.* from sample.employee

ERROR:  relation "sample.employee" does not exist

What am I missing here?
Also what values should go into remote_schema and remote_catalog? Can they
be nulls?

Thanks
Josh


[GENERAL] Disabling FK for tables in Postgresql

2009-01-28 Thread Josh Harrison
Hi
Is it possible to disable FK constraints in Postgresql?
Thanks
Josh
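(There is no switch for foreign keys alone; a hedged sketch of the common
workaround, with an illustrative table name. Note that DISABLE TRIGGER ALL
also disables the internal FK triggers and requires superuser rights:)

ALTER TABLE orders DISABLE TRIGGER ALL;   -- FK checks stop firing
-- ... bulk load or maintenance work ...
ALTER TABLE orders ENABLE TRIGGER ALL;

-- Or drop the constraint and re-add it afterwards:
-- ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
-- ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
--     FOREIGN KEY (customer_id) REFERENCES customers (id);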


[GENERAL] Question about COPY command

2009-01-08 Thread Josh Harrison
Hi,
A basic question about the COPY command syntax
This is the syntax in the postgres manual.

COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
...

What is the difference between copying from 'filename' and copying
from 'stdin'?

Thanks
Josh


Re: [GENERAL] Question about COPY command

2009-01-08 Thread Josh Harrison
On Thu, Jan 8, 2009 at 8:52 AM, A. Kretschmer 
andreas.kretsch...@schollglas.com wrote:

 In response to Josh Harrison :
  Hi,
  A basic question about the COPY command syntax
  This is the syntax in the postgres manual.
 
  COPY tablename [ ( column [, ...] ) ]
 
  FROM { 'filename' | STDIN }
  ..
  .
 
  What is the difference between copying from 'filename' and copying from
 'stdin'?

 Within psql, stdin is your keyboard. Otherwise, you can use COPY in a
 Unix pipe.
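(Concretely, the two forms look like this; table and file names are
illustrative:)

COPY mytab FROM '/path/on/the/server/data.txt';  -- file read by the backend
-- versus streaming over the client connection, e.g. from a shell pipe:
--   cat data.txt | psql mydb -c "COPY mytab FROM STDIN"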

Thanks... Is it possible to use the COPY command to copy data from an Oracle to a
PostgreSQL database? I currently use JDBC to do this, but it's really
slow, so I wanted to try the COPY command. If I have to do this using
COPY, do I have to 'exp' the Oracle table to a flat file and read it into
PostgreSQL tables using COPY? Is there a way to do this
without dumping the data into a file?

My question is: is it possible to read the data from the Oracle
database and copy it into the PostgreSQL database directly using the COPY
command instead of a JDBC PreparedStatement (INSERT command)?

Thanks
Josh
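(A hedged sketch of the no-intermediate-file route: anything on the Oracle
side that writes delimited rows to stdout can be piped straight into COPY.
The unload step here is a hypothetical placeholder, not a real Oracle tool:)

-- Target statement on the Postgres side:
--   COPY person FROM STDIN WITH CSV;
-- Driven from a shell:
--   oracle_unload_person_as_csv | psql mydb -c "COPY person FROM STDIN WITH CSV"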






[GENERAL] dblink between oracle and postgres?

2009-01-08 Thread Josh Harrison
Hi,
Is there any utility (like Oracle's dblink, etc.) that can establish a
connection between an Oracle and a Postgres database?
Thanks
Josh


Re: [GENERAL] ALTER TABLE .....Error: Must be owner of the table

2008-12-08 Thread Josh Harrison
Thanks all

On 12/7/08, Tom Lane [EMAIL PROTECTED] wrote:
 Josh Harrison [EMAIL PROTECTED] writes:
 How can I give the ALTER permission

 You can't grant ALTER permission --- that's only allowed to the table
 owner.  However, you could make the table be owned by a group role and
 grant membership in that role.

   regards, tom lane
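(Spelled out, that looks something like this; the names are illustrative:)

CREATE ROLE foo_owners NOLOGIN;               -- the group role
ALTER TABLE foo.mytable OWNER TO foo_owners;  -- group owns the table
GRANT foo_owners TO scot;                     -- scot may now ALTER it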




[GENERAL] ALTER TABLE .....Error: Must be owner of the table

2008-12-05 Thread Josh Harrison
Hi,
I had created some tables in the schema Foo and given
GRANT ALL ON SCHEMA FOO TO SCOT;

But when Scot tries to alter a table he gets the error
MUST BE OWNER OF THE TABLE

How can I give the ALTER permission, or is there any other way to let other
users modify or add tables in this schema?

Thanks
Josh


[GENERAL] Limit on number of databases in a Cluster ?

2008-12-03 Thread Josh Harrison
Hi,

1. Is there a limit on the number of databases that can be in a single
postgres cluster?
2. Is there any performance impacts associated with having too many
databases in a cluster?
3. Is there a good magical number for this limit ?


Thanks
Josh


Re: [GENERAL] Limit on number of databases in a Cluster ?

2008-12-03 Thread Josh Harrison
On Wed, Dec 3, 2008 at 11:51 AM, Scott Marlowe [EMAIL PROTECTED]wrote:

 On Wed, Dec 3, 2008 at 8:43 AM, Josh Harrison [EMAIL PROTECTED] wrote:
  Hi,
 
  1. Is there a limit on the number of databases that can be in a single
  postgres cluster?

 No.  I'm sure there's a practical limit into the thousands where
 things start to get slower.

  2. Is there any performance impacts associated with having too many
  databases in a cluster?

 Define too many.  I've run a couple hundred before without it being a
 problem.

  3. Is there a good magical number for this limit ?

 Only the one that your testing tells you there is.  Got a rough guess
 of how many you want to run?  How busy they'll be?  that kind of
 thing.


About 10-15 ?


[GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Josh Harrison
Hi,
I tried the 8.4-devel version, and the CTE (Common Table Expression, WITH
RECURSIVE) patch is working pretty well.
I just have a question.

These are the queries and their plans.

The first query uses the RECURSIVE keyword (and has a recursive and a
non-recursive term in the CTE), while the second query uses only the WITH
keyword (and has no recursive term).
My question is: when I don't use the recursive term, does the optimizer just
consider it a subquery, or does it work like Oracle's WITH clause
(subquery factoring)? Oracle's WITH clause boosts the performance of
queries. So does this do the same?

1. explain analyse
WITH RECURSIVE subdepartment AS
(
-- non-recursive term
SELECT * FROM department WHERE name = 'A'
UNION ALL
-- recursive term
SELECT d.* FROM department AS d
JOIN   subdepartment AS sd   ON (d.parent_department = sd.id)
)
SELECT *
FROM subdepartment

                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 CTE Scan on subdepartment  (cost=17.57..18.99 rows=71 width=40) (actual time=0.044..0.590 rows=5 loops=1)
   InitPlan
     ->  Recursive Union  (cost=0.00..17.57 rows=71 width=10) (actual time=0.034..0.536 rows=5 loops=1)
           ->  Seq Scan on department  (cost=0.00..1.10 rows=1 width=10) (actual time=0.025..0.031 rows=1 loops=1)
                 Filter: (name = 'A'::text)
           ->  Hash Join  (cost=0.33..1.51 rows=7 width=10) (actual time=0.080..0.107 rows=1 loops=4)
                 Hash Cond: (d.parent_department = sd.id)
                 ->  Seq Scan on department d  (cost=0.00..1.08 rows=8 width=10) (actual time=0.004..0.033 rows=8 loops=4)
                 ->  Hash  (cost=0.20..0.20 rows=10 width=4) (actual time=0.023..0.023 rows=1 loops=4)
                       ->  WorkTable Scan on subdepartment sd  (cost=0.00..0.20 rows=10 width=4) (actual time=0.004..0.009 rows=1 loops=4)
 Total runtime: 0.681 ms

2. explain analyse
WITH  subdepartment AS
(
-- non-recursive term
SELECT * FROM department WHERE name = 'A'
)
SELECT id,name FROM subdepartment

                                               QUERY PLAN
-----------------------------------------------------------------------------------------------------------
 CTE Scan on subdepartment  (cost=1.10..1.12 rows=1 width=36) (actual time=0.037..0.050 rows=1 loops=1)
   InitPlan
     ->  Seq Scan on department  (cost=0.00..1.10 rows=1 width=10) (actual time=0.024..0.030 rows=1 loops=1)
           Filter: (name = 'A'::text)
 Total runtime: 0.111 ms

Thanks
Josh


Re: [GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Josh Harrison
Thanks Tom. This is wonderful

On Thu, Nov 20, 2008 at 4:21 PM, Tom Lane [EMAIL PROTECTED] wrote:

 Josh Harrison [EMAIL PROTECTED] writes:
  My question is when I don't use the Recursive term does the optimizer
 just
  consider it as a subquery or does it work like Oracle's WITH CLAUSE
  (Subquery Factoring) ? Oracle's WITH CLAUSE boosts the performance of the
  queries. So does this do the same?

 See the fine manual, for instance last para here:
 http://developer.postgresql.org/pgdocs/postgres/queries-with.html

regards, tom lane



[GENERAL] PostgreSQL 8.4 download?

2008-11-19 Thread Josh Harrison
Hi
Is version 8.4 available for download now ? If so can you direct me to he
website?
Thanks
Josh


Re: [GENERAL] PostgreSQL 8.4 download?

2008-11-19 Thread Josh Harrison
Thanks GJ for the 'he' information. It was just a typo.
If so can you direct me to the website?
-Josh

On Wed, Nov 19, 2008 at 8:33 AM, Grzegorz Jaśkiewicz [EMAIL PROTECTED]wrote:

 he is still in the making, so there's no 8.4 release as such yet.
 You have to wait, for he will come. ;)


 --
 GJ



Re: [GENERAL] PostgreSQL 8.4 download?

2008-11-19 Thread Josh Harrison
On Wed, Nov 19, 2008 at 1:08 PM, Joshua D. Drake [EMAIL PROTECTED]wrote:

 On Wed, 2008-11-19 at 17:19 +0200, Devrim GÜNDÜZ wrote:
  On Wed, 2008-11-19 at 08:35 -0500, Josh Harrison wrote:
   If so can you direct me to the website?

 Just a tip... 8.4 isn't released... it isn't even beta yet.

I know... I just want to try the WITH RECURSIVE feature that is already
committed to 8.4.


 
  Here is yesterday's snapshot:
 
  http://yum.pgsqlrpms.org/srpms/8.4/postgresql-8.4devel_20081118.tar.bz2
 
  and its md5 sum:
 
 
 http://yum.pgsqlrpms.org/srpms/8.4/postgresql-8.4devel_20081118.tar.bz2.md5
 
  ...or maybe better:
 
  ftp://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.bz2
 
 --




[GENERAL] Heikki's Visibility Map for postgres 8.4?

2008-08-05 Thread Josh Harrison
Hi,
Is Heikki's visibility map patch included in the PostgreSQL 8.4 version?
http://archives.postgresql.org/pgsql-hackers/2007-11/msg00142.php

If not, what's the status of that patch? I'm especially interested in the
index-only scan mentioned there!

Thanks


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-22 Thread Josh Harrison
On Jan 18, 2008 4:14 AM, Dorren [EMAIL PROTECTED] wrote:

 Terabytes of data: this is a lot of Oracle data to migrate. You would
 need high-performance tools capable of handling a heterogeneous
 environment.
 People suggested links here, so I will add some that could be very
 appropriate to your case:

 The PostgreSQL loader is limited, by the way. For instance, if you have an
 end-of-line character within your data, then the load into PostgreSQL
 will fail.
 Check this pdf:
 http://www.wisdomforce.com/dweb/resources/docs/OracleToNetezzaWithFastReader.pdf

 Few tools to consider:

 FastReader:  http://www.wisdomforce.com/dweb/index.php?id=23 -
 extracts data from Oracle into ASCII flat files or a pipe and creates
 input for the PostgreSQL loader. Many people use it for fast initial
 synchronization. FastReader performs bulk data extraction; terabytes
 of data can be migrated in hours.

 Database Sync - http://www.wisdomforce.com/dweb/index.php?id=1001  -
 also a fast data transfer tool that operates as change data capture.
 It captures all the latest transactions and could be used for data
 warehouse incremental feeds from OLTP Oracle data. You may need it if you
 don't want to move terabytes of data each time, but only the changed
 data.


Thanks. I'll check out those options. I also have another question in mind:
how good (or fast) would it be to use Java with JDBC to transfer these
terabytes of data from Oracle to PostgreSQL? This worked okay for small
datasets, but I'm not sure how it will behave for large data.
And could we also keep track of the changes in the Oracle production system
using triggers?

Thanks
josh


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-18 Thread Josh Harrison
On Jan 18, 2008 7:45 AM, Gregory Williamson 
[EMAIL PROTECTED] wrote:

  Mayuresh Nirhali wrote:
 
  Josh Harrison wrote:
  
  
   Thanks.
   We have around 3TB of data now running in Oracle. I have done
   replication in PostgreSQL but not much in Oracle. Is there a way you
   can replicate between Oracle and PostgreSQL? For writing the custom
   code do you suggest any preferred language, like Java, Perl, etc.?
  See, if this can help,
  https://daffodilreplicator.dev.java.net/
 
  ... and do let us know if you find it useful.
  Rgds
  Mayuresh

 At least from my browser the links to documentation, comparisons, FAQ and
 download all fail ...

The same for me. I couldn't open anything there
josh


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison


We have an Oracle production database with some terabytes of data. We wanted
to migrate that to PostgreSQL (right now a test database, not a
production database).
What are the good options to do that?
Please advise me on where to look for more information on this topic.

You have two steps to work on.  The first is the DDL, to create
equivalent tables in pgsql as in oracle; the second is to migrate over
your data.
  I had done this with the test database. For DDL generation I used xml/xsl,
  and for data migration I used JDBC. I can get the DDL generated fine. With
  JDBC the data migration is a bit slow.
  My question is about the data migration. I'm not sure how to try this with
  an online Oracle database. We are required to run both the Postgres and
  Oracle databases simultaneously for a couple of months (at least till we
  decide whether we are going to shut down Oracle for good!). Since the
  Oracle database is a production database, it will have updates/inserts
  during this time. How do you manage that?

 About a year ago we converted one of our clients multi-TB ODS systems
 built in
 Oracle over to PostgreSQL. There's a case study about it you can get from
 the
 Sun folks at

 http://www.sun.com/third-party/srsc/resources/postgresql/postgre_success_dwp.pdf

 Now, due to the size of the project, we had to run both the Oracle and
 Postgres systems in parallel for several months. We kept the data up to
 date
 using a slew of custom code, designed to replicate data from either the
 ODS
 system or the OLTP system, depending on various technical and business
 factors.  My guess is that in your case, you'd want a mix of replicating
 data
 from the current Oracle database and your application, as best possible.

 Figuring out how you go about replicating the data is certainly easier if
 you've been through it before, but I don't think it is anything too
 magical; we went through a number of different ideas and ended up using
 multiple methods depending on the data involved.  HTH.

Thanks.
We have around 3TB of data now running in Oracle. I have done replication in
PostgreSQL but not much in Oracle. Is there a way you can replicate between
Oracle and PostgreSQL? For writing the custom code do you suggest any
preferred language, like Java, Perl, etc.?

Thanks
Josh


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison
On Jan 15, 2008 3:58 PM, David Fetter [EMAIL PROTECTED] wrote:

 On Mon, Jan 14, 2008 at 11:42:50AM -0500, Josh Harrison wrote:
  Thanks
 
  On Jan 12, 2008 9:19 AM, David Fetter [EMAIL PROTECTED] wrote:
 
   On Fri, Jan 11, 2008 at 01:02:01PM -0500, Josh Harrison wrote:
Hi
We have an Oracle production database with some terbytes of data. We
wanted to migrate that to Postgresql (rigt now...a test database and
not production) database.  What are the good options to do that?
  
   I have written some Free software, DBI-Link, for just this use case.
   The software is under the BSD license, so you can use it freely.  I
   also offer consulting on such migrations.
  
  I downloaded DBI-Link.
 When I tried to compile Postgres 8.3 with the --with-perl option it gave me
 this error.

 You may have an old or broken version of perl.  What's the output of
 perl -v?

I use version 5.8.8.
Thanks, I sorted that out. It was a linker problem: I installed binutils
and made gcc use that ld. Now I can compile Postgres with the --with-perl
option.


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison
On Jan 16, 2008 10:11 AM, Josh Harrison [EMAIL PROTECTED] wrote:



 On Jan 15, 2008 3:58 PM, David Fetter [EMAIL PROTECTED] wrote:

  On Mon, Jan 14, 2008 at 11:42:50AM -0500, Josh Harrison wrote:
   Thanks
  
   On Jan 12, 2008 9:19 AM, David Fetter [EMAIL PROTECTED] wrote:
  
On Fri, Jan 11, 2008 at 01:02:01PM -0500, Josh Harrison wrote:
 Hi
 We have an Oracle production database with some terbytes of data.
  We
 wanted to migrate that to Postgresql (rigt now...a test database
  and
 not production) database.  What are the good options to do that?
   
I have written some Free software, DBI-Link, for just this use case.
The software is under the BSD license, so you can use it freely.  I
also offer consulting on such migrations.
   
I downloaded DBI-Link.
   When I tried to compile postgres8.3 with-perl option it gives me this
  error.
 
  You may have an old or broken version of perl.  What's the output of
  perl -v?

 I use version 5.8.8
 Thanks. I sorted out that. That was a linker problem. I installed binutils
 and made gcc use that ld. Now I can compile postgres with perl option.


Now a new problem has come up. When I try the createlang command
createlang plperlu test
I get this error:
createlang: language installation failed: ERROR:  could not load library
/export/home/josh/postgres8.3-perl/lib/plperl.so: ld.so.1: postgres:
fatal: relocation error: file
/export/home/josh/postgres8.3-perl/lib/plperl.so: symbol PL_curpad:
referenced symbol not found

perl information:
 perl -v
This is perl, v5.8.8 built for i86pc-solaris-64int

Can you advise, please?
josh


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison
On Jan 16, 2008 1:43 PM, Josh Harrison [EMAIL PROTECTED] wrote:



 On Jan 16, 2008 10:11 AM, Josh Harrison [EMAIL PROTECTED] wrote:

 
 
  On Jan 15, 2008 3:58 PM, David Fetter [EMAIL PROTECTED] wrote:
 
   On Mon, Jan 14, 2008 at 11:42:50AM -0500, Josh Harrison wrote:
Thanks
   
On Jan 12, 2008 9:19 AM, David Fetter [EMAIL PROTECTED] wrote:
   
 On Fri, Jan 11, 2008 at 01:02:01PM -0500, Josh Harrison wrote:
  Hi
  We have an Oracle production database with some terbytes of
   data. We
  wanted to migrate that to Postgresql (rigt now...a test database
   and
  not production) database.  What are the good options to do that?

 I have written some Free software, DBI-Link, for just this use
   case.
 The software is under the BSD license, so you can use it freely.
I
 also offer consulting on such migrations.

 I downloaded DBI-Link.
When I tried to compile postgres8.3 with-perl option it gives me
   this error.
  
   You may have an old or broken version of perl.  What's the output of
   perl -v?
 
  I use version 5.8.8
  Thanks. I sorted out that. That was a linker problem. I installed
  binutils and made gcc use that ld. Now I can compile postgres with perl
  option.


 Now a new problem had come up. When I try createlang command
 createlang plperlu test
 I get this error...
 createlang: language installation failed: ERROR:  could not load library
 /export/home/josh/postgres8.3-perl/lib/plperl.so: ld.so.1: postgres:
 fatal: relocation error: file
 /export/home/josh/postgres8.3-perl/lib/plperl.so: symbol PL_curpad:
 referenced symbol not found

 perl information:
  perl -v
 This is perl, v5.8.8 built for i86pc-solaris-64int

 Can you advise pls
 josh

 Forgot to mention..
My perl information
perl -V
Summary of my perl5 (revision 5 version 8 subversion 8) configuration:
  Platform:
osname=solaris, osvers=2.10, archname=i86pc-solaris-64int
uname='sunos aishwarya 5.10 generic_118844-26 i86pc i386 i86pc '
config_args='-Dcc=gcc -Dprefix=/export/home/josh/perl5 -Duse64bitint
-Duseshrplib'
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef use5005threads=undef useithreads=undef
usemultiplicity=undef
useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
use64bitint=define use64bitall=undef uselongdouble=undef
usemymalloc=n, bincompat5005=undef
  Compiler:
cc='gcc', ccflags ='-fno-strict-aliasing -pipe
-Wdeclaration-after-statement -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
-DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV
-DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV
-DPERL_USE_SAFE_PUTENV',
optimize='-O',
cppflags='-fno-strict-aliasing -pipe -Wdeclaration-after-statement'
ccversion='', gccversion='3.4.5', gccosandvers='solaris2.8'
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=12345678
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
ivtype='long long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t',
lseeksize=8
alignbytes=4, prototype=define
  Linker and Libraries:
ld='gcc', ldflags =' -L/usr/local/lib '
libpth=/usr/local/lib /usr/lib /usr/ccs/lib
libs=-lsocket -lnsl -ldl -lm -lc
perllibs=-lsocket -lnsl -ldl -lm -lc
libc=/lib/libc.so, so=so, useshrplib=true, libperl=libperl.so
gnulibc_version=''
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' -Wl,-E -R
/export/home/josh/perl5/lib/5.8.8/i86pc-solaris-64int/CORE'
cccdlflags='-fPIC', lddlflags=' -Wl,-E -G -L/usr/local/lib'


Characteristics of this binary (from libperl):
  Compile-time options: PERL_MALLOC_WRAP PERL_USE_SAFE_PUTENV
USE_64_BIT_INT USE_LARGE_FILES USE_PERLIO
  Built under solaris
  Compiled at Jan 16 2008 12:13:26
  @INC:
/export/home/josh/perl5/lib/5.8.8/i86pc-solaris-64int
/export/home/josh/perl5/lib/5.8.8
/export/home/josh/perl5/lib/site_perl/5.8.8/i86pc-solaris-64int
/export/home/josh/perl5/lib/site_perl/5.8.8
/export/home/josh/perl5/lib/site_perl


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison
On Jan 16, 2008 1:31 PM, Dawid Kuroczko [EMAIL PROTECTED] wrote:

  On Jan 11, 2008 7:14 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
 
  On Jan 11, 2008 12:02 PM, Josh Harrison [EMAIL PROTECTED] wrote:
   Hi
   We have an Oracle production database with some terabytes of data. We
 wanted
   to migrate that to PostgreSQL (right now a test database and not
   a production database).
   What are the good options to do that?
   Please advise me on where to look for more information on this topic
 
  You're going to need to use your brain for a fair portion of this,
  because how you use oracle will be just different enough from everyone
  else that no boxed solution.
 
  You have two steps to work on.  The first is the DDL, to create
  equivalent tables in pgsql as in oracle, the second is to migrate over
  your data.
 
  I've generally done the ddl conversion by hand in an editor, and
  migrated data over with some scripting language like perl or php.

 If you are migrating terabytes, don't use Perl.  I did some experimental
 just-for-fun migration some time ago and DBD::Oracle worked remarkably
 slowly...  What you need is a program which will export data
 from Oracle as CSV.  As far as I know Oracle does not provide such
 a tool (though it will import CSV happily through sqlldr),
 but you can Google up C code which does just that.  I don't remember
 where I left it... :-(

 From that, you just need to stream the CSV into PostgreSQL's COPY
 command.  It worked FAST.  Really.

 And be wary of data type conversions.

 In this case how do you migrate the simultaneous
updates/deletes/inserts? Since this is a production system, there are
everyday changes in the database.


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-14 Thread Josh Harrison
Thanks

On Jan 12, 2008 9:19 AM, David Fetter [EMAIL PROTECTED] wrote:

 On Fri, Jan 11, 2008 at 01:02:01PM -0500, Josh Harrison wrote:
  Hi
  We have an Oracle production database with some terabytes of data. We
  wanted to migrate that to PostgreSQL (right now a test database and
  not a production database).  What are the good options to do that?

 I have written some Free software, DBI-Link, for just this use case.
 The software is under the BSD license, so you can use it freely.  I
 also offer consulting on such migrations.

 I downloaded DBI-Link.
When I tried to compile Postgres 8.3 with the --with-perl option it gave me
this error:

ld: fatal: relocations remain against allocatable but non-writable sections
collect2: ld returned 1 exit status
gmake[3]: *** [libplperl.so.0.0] Error 1

My OS is SunOS 5.10.
(The same compiles fine in FreeBSD but gives an error in Solaris.) Is this
some kind of Solaris-specific error?

Do you know what causes this error?
Thanks
josh


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-11 Thread Josh Harrison
On Jan 11, 2008 1:22 PM, Erik Jones [EMAIL PROTECTED] wrote:


 On Jan 11, 2008, at 12:14 PM, Scott Marlowe wrote:

  On Jan 11, 2008 12:02 PM, Josh Harrison [EMAIL PROTECTED] wrote:
  Hi
  We have an Oracle production database with some terabytes of data.
  We wanted
  to migrate that to PostgreSQL (right now a test database and not
  a production database).
  What are the good options to do that?
  Please advise me on where to look for more information on this topic
 
  You're going to need to use your brain for a fair portion of this,
  because how you use oracle will be just different enough from everyone
  else that no boxed solution.
 
  You have two steps to work on.  The first is the DDL, to create
  equivalent tables in pgsql as in oracle, the second is to migrate over
  your data.
 
  I've generally done the ddl conversion by hand in an editor, and
  migrated data over with some scripting language like perl or php.

 Another option is to talk to the folks at EnterpriseDB, as
 Oracle-Postgres compatibility is their specialty.


I had done this with the test database. For DDL generation I used xml/xsl,
and for data migration I used JDBC. I can get the DDL generated fine. With
JDBC the data migration is a bit slow.
My question is about the data migration. I'm not sure how to try this with an
online Oracle database. We are required to run both the Postgres and Oracle
databases simultaneously for a couple of months (at least till we decide
whether we are going to shut down Oracle for good!). Since the Oracle
database is a production database, it will have updates/inserts during this
time. How do you manage that?

Thanks
josh


[GENERAL] Online Oracle to Postgresql data migration

2008-01-11 Thread Josh Harrison
Hi
We have an Oracle production database with some terabytes of data. We wanted
to migrate that to PostgreSQL (right now a test database, not a
production database).
What are the good options to do that?
Please advise me on where to look for more information on this topic.
thanks
josh


Re: [GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
On Jan 9, 2008 9:12 AM, A. Kretschmer [EMAIL PROTECTED]
wrote:

 On Wed, 09 Jan 2008 at 09:02:23 -0500, Josh Harrison wrote the following:
  Hi,
  When restoring pg_dumped data through psql, are the rows of the table
  restored in the same order? For example, if table A has rows
  r1,r2,r3,r4,r5 in this order, and I pg_dump and restore it to another
  database, will it have the rows in the same order r1,r2,r3,r4,r5?

 No. If you need an order then you need an ORDER BY in the
 SELECT statement.


Fine. I can use ORDER BY when I want to order it in terms of some columns.
But what if I want to maintain the same order as in database 1? I.e., I
want my rows of table ABC in database 2 to be in the same order as the rows
of table ABC in database 1.

Thanks
josh


[GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
Hi,
When restoring pg_dumped data through psql, are the rows of the table
restored in the same order? For example, if table A has rows r1,r2,r3,r4,r5
in this order, and I pg_dump and restore it to another database, will it
have the rows in the same order r1,r2,r3,r4,r5? Does this apply to big
tables also?
Thanks
josh


Re: [GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
On Jan 9, 2008 9:35 AM, A. Kretschmer [EMAIL PROTECTED]
wrote:

 On Wed, 09 Jan 2008 at 14:07:13 +0000, Raymond O'Donnell wrote the
 following:
  On 09/01/2008 14:02, Josh Harrison wrote:
 
  When restoring pg_dumped data through psql, are the rows of the table
  restored in the same order? For example, if table A has rows
  r1,r2,r3,r4,r5 in this order, and I pg_dump and restore it to another
  database, will it have the rows in the same order
  r1,r2,r3,r4,r5? Does this apply to big tables also?
 
  If you use the text dump format, you can see the order in which the rows
  are restored.

 Right, but within the table the rows don't have an order. You can see this
 when you select the ctid column:

 test=# create table order_test (i int);
 CREATE TABLE
 test=*# insert into order_test values (1);
 INSERT 0 1
 test=*# insert into order_test values (2);
 INSERT 0 1
 test=*# insert into order_test values (3);
 INSERT 0 1
 test=*# select ctid, i from order_test ;
  ctid  | i
 ---+---
  (0,1) | 1
  (0,2) | 2
  (0,3) | 3
 (3 rows)

 test=*# update order_test set i=20 where i=2;
 UPDATE 1
 test=*# update order_test set i=2 where i=20;
 UPDATE 1
 test=*# select ctid, i from order_test ;
  ctid  | i
 ---+---
  (0,1) | 1
  (0,3) | 3
  (0,5) | 2
 (3 rows)



 Now a pg_dump:

 ALTER TABLE public.order_test OWNER TO webmaster;

 --
 -- Data for Name: order_test; Type: TABLE DATA; Schema: public; Owner:
 webmaster
 --

 COPY order_test (i) FROM stdin;
 1
 3
 2
 \.


 Now the question: what is the correct order?

 My only requirement is that the dumped table in database 2 should be in the
same order as the original table (at the time of the dump) in database 1.
Thanks
josh
josh


Re: [GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
On Jan 9, 2008 9:59 AM, A. Kretschmer [EMAIL PROTECTED]
wrote:

 On Wed, 09 Jan 2008 at 09:45:11 -0500, Josh Harrison wrote the following:
  What if I want to maintain the same order as in database 1? I.e., I want
  my rows of table ABC in database 2 to be in the same order as the rows in
  table ABC in database 1?

 For what reason?

 Again: there is no order within the database.

 Thanks...
Another quick question. When you issue a query like this:
select * from dummy limit 10
which 10 rows are fetched? The first 10, the last 10, the first 10 from the
first block, or something else?
And does this query always return the same 10 rows (when there are no
updates/deletes)?


josh


Re: [GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
On Jan 9, 2008 10:27 AM, Alvaro Herrera [EMAIL PROTECTED] wrote:

 Josh Harrison wrote:

  Another quick question...When you issue a query like this
  select * from dummy limit 10
  What 10 rows are fetched? like first 10  or last 10 or  the first 10
 from
  first block or ?

 Any 10.  (First 10 in the physical table _if_ a seqscan is used).
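(In other words, only an explicit sort pins the result down; a minimal
illustration, with a hypothetical unique key column:)

SELECT * FROM dummy ORDER BY id LIMIT 10;  -- same 10 rows each time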


Okay. Here is another scenario where I'm confused.
I have a table with around 30,000,000 records. This is not a production
system but a test system. In the test system we generally upload the rows
in some order, say rows corresponding to a particular patient or something
like that. But in the production system it generally doesn't happen like
that. The rows of one particular patient can be shuffled anywhere (i.e.,
inserted in any order). We are trying to duplicate that by shuffling the
data in the table, so that the rows are not in any order and also not stored
in contiguous blocks.

So now I have a table Dummy with 30,000,000 records and a table
Shuffled_Dummy (create table shuffled_dummy as select * from dummy order by
random()) with the same rows of Dummy, shuffled.

My questions:
1. I pg_dumped Dummy and Shuffled_Dummy (from database 1) to another
database (database 2).
When I issued the query in both databases (database 1 and database 2):

select * from dummy limit 1000 (the planner chooses a seq scan for this
query)
 - the output results from dummy are different in the 2 databases
But
select * from shuffled_dummy limit 1000 (planner chooses a seq scan)
 - the outputs from shuffled_dummy are the same in both databases

Why?

2. Also, when does the planner switch from choosing an index scan to a
bitmap index scan? Is it dependent on the number of rows to be retrieved, the
position of the relevant data in the blocks, or something else?

Thanks
josh


Re: [GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
On Jan 9, 2008 11:39 AM, Andrew Sullivan [EMAIL PROTECTED] wrote:

 On Wed, Jan 09, 2008 at 05:28:15PM +0100, Ivan Sergio Borgonovo wrote:
  Does it make any sense *knowing* how the implementation works to load
  records in a table in a specific order to improve performance?

 Well, this is more or less what CLUSTER does.  There are some cases where
 happening to know about the order the table is in will yield happy
 effects,
 yes.


You are right. Sometimes when I cluster the table according to a
frequently accessed index, it makes queries pretty fast. But it's not always
a feasible solution, since some tables have several indexes which are
accessed frequently, so clustering the table according to one index will
yield poor performance for queries involving the other indexes. An
index-only scan is a good solution for this, I guess, for queries involving
indexed columns (like in Oracle)!

josh
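(For reference, physically reordering a table by one index looks like this;
the names are illustrative. This is the 8.3 spelling; older releases wrote
it as CLUSTER person_pk ON person:)

CLUSTER person USING person_pk;  -- rewrite the table in index order
ANALYZE person;                  -- refresh statistics after the rewrite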


Re: [GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
On Jan 9, 2008 11:28 AM, Ivan Sergio Borgonovo [EMAIL PROTECTED] wrote:

 On Wed, 09 Jan 2008 10:54:21 -0500
 Tom Lane [EMAIL PROTECTED] wrote:

  Alvaro Herrera [EMAIL PROTECTED] writes:
   Josh Harrison escribió:
   Fine. I can use order by when I want  to order it in terms of
   some columns. But What if I want to maintain the same order as
   in the database1? ie., I want my rows of TableABC in Database2
   to be the same order as the rows in TableABC in Database 1 ???
 
   You can't.
 
  According to the SQL standard, a table is an *unordered* collection
  of rows, and the results of any query are produced in an
  unspecified order (unless you use ORDER BY).  The ambiguity about
  row ordering is intentional and is exploited by most DBMSes
  including Postgres to improve implementation efficiency.  If you
  assume there is such a thing as a specific ordering within a table,
  you'll live to regret it eventually.

 Does it make any sense *knowing* how the implementation works to load
 records in a table in a specific order to improve performance?

 And yeah, I know that once you start deleting/updating rows you may
 lose the advantage you gained betting on some peculiarity of the
 implementation... but what if you're dealing with a mostly static
 table?

 E.g., if I'm importing a table does it make any sense to pre-sort it
 before importing it into postgres?


Okay. Let me explain this again.
Let's say you load the data related to a particular person sequentially into
a table in the test database. This results in all of the data for any one
person being located in one or a very few sequential data blocks. Testing
access to the person's data then reveals access to be very fast. However,
the data is never loaded that way in the production database. It is
almost always spread out across many data blocks within the database,
roughly organized by the date-time at which the data arrived. In this case
access to a particular person's data is not as fast as in the
previous case, where the data are located close to each other.

We have this problem when we compare Oracle's performance with Postgres,
since Oracle has an index-only scan, where it can access the data just from
the index when the query involves only indexed columns. But since Postgres
currently doesn't implement index-only scans and goes to the heap for
fetching the tuples, it becomes very slow when the data are shuffled.

Let me know if it makes sense now.
Thanks
josh


Re: [GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
On Jan 9, 2008 11:56 AM, Andreas Kretschmer [EMAIL PROTECTED]
wrote:

 Josh Harrison [EMAIL PROTECTED] wrote:
  My questions
  1.  I pg_dumped dummy and Shuffled_dummy (from database1) to another
 database
  (database2)
  When I issued the query in both database (database1 and database2)
 
  select * from dummy limit 1000 ( the planner chooses seq scan for this
 query)
  select * from shuffled_dummy limit 1000 (planner chooses seq scan)
 
 
  2. Also when does the planner switch from choosing index scan to bitmap
 index
  scan? Is it dependent on the number of rows to be retrieved or the
 position of
  the relevant data in the blocks or something else?

 For a SELECT * ... without a WHERE the db can't use an index; such a query
 forces a seq scan.

 An index is used when:
 - an index has been created
 - there is a proper WHERE or ORDER BY in the query
 - the index is useful
   (an index isn't useful, for instance, for a small table or when almost
   all rows are in the result set)

 A bitmap index scan is performed when:
 - 2 or more proper indexes are available
 - see above

 Thanks
In my database, I have a table 'person' containing roughly 30,000,000
records

explain select count(*) from person where person_id > 114700

                                        QUERY PLAN
------------------------------------------------------------------------------------------
 Aggregate  (cost=307708.20..307708.21 rows=1 width=0)
   ->  Index Scan using person_pk on person  (cost=0.00..307379.79 rows=131364 width=0)
         Index Cond: (person_id > 114700::numeric)

3 record(s) selected [Fetch MetaData: 0/ms] [Fetch Data: 0/ms]

This returns the result
 count
-------
 78718

But for this query, where the condition is slightly different, the query plan
is different. The planner goes for a bitmap index scan:

explain select count(*) from person where person_id > 114600

                                         QUERY PLAN
----------------------------------------------------------------------------------------------
 Aggregate  (cost=342178.51..342178.52 rows=1 width=0)
   ->  Bitmap Heap Scan on person  (cost=3120.72..341806.71 rows=148721 width=0)
         Recheck Cond: (person_id > 114600::numeric)
         ->  Bitmap Index Scan on person_pk  (cost=0.00..3083.53 rows=148721 width=0)
               Index Cond: (person_id > 114600::numeric)

5 record(s) selected [Fetch MetaData: 0/ms] [Fetch Data: 0/ms]

and the result is
 count
-------
 90625

How does the planner choose the plan?
josh


Re: [GENERAL] quick question abt pg_dump and restore

2008-01-09 Thread Josh Harrison
On Jan 9, 2008 12:11 PM, Andrew Sullivan [EMAIL PROTECTED] wrote:

 On Wed, Jan 09, 2008 at 11:51:16AM -0500, Josh Harrison wrote:

  accessed frequently. So clustering the table according to one index will
  yield poor performance to queries involving other indexes.

 Maybe not poor, but certainly not optimised.

  Index-only scan is a good solution for this I guess for queries
 involving
  indexed columns (like in oracle) !!!

 I think I don't know what you mean by index-only scan.  Oracle can't be
 returning you data just by scanning an index, can it?  It needs to get the
 tuple if you need it back.


For example, if I have a query like
select column2 from ABC where column1 > 20
and table ABC is indexed on (column1, column2), then Oracle will not go to
the heap to fetch the tuples. It will return them from the index itself,
since the column information is available in the index. But Postgres always
goes to the heap, even if it has all the queried columns in the index.
For queries that involve all the columns (like select * from ABC where
column1 > 20), of course, Oracle goes to the heap to fetch them.
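(A concrete sketch of that case, with illustrative names:)

-- Composite index covering both the predicate and the output column:
CREATE INDEX abc_c1_c2 ON abc (column1, column2);

-- Oracle can answer this from the index alone; Postgres of this era
-- still visits the heap for every matching row:
SELECT column2 FROM abc WHERE column1 > 20;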


 My bet is that you haven't tuned your vacuums correctly, or you aren't
 doing
 ANALYSE often enough on the affected tables, or you need to SET STATISTICS
 higher on some of the columns in order to get better estimates (and
 therefore better plans).


I had vacuumed and analysed the tables, and tried increasing the statistics
too. But the performance compared to Oracle for these types of queries (that
I mentioned above) is pretty slow.

josh


Re: [GENERAL] Any big slony and WAL shipping users?

2007-12-28 Thread Josh Harrison
Hi,
Thanks for all your responses.
I also thought about having two setups, for backup and replication, so that
even when Slony fails I will always have the standby server (WAL shipping)
to help me out.
I have another question regarding this. I also want to write these to tape.
Right now we have a cron job doing level 0,1,2,... backups of the other
servers to tape regularly. What is a good way to include the Postgres server
backup to tape?
Thanks
josh

On Dec 28, 2007 9:29 AM, Glyn Astill [EMAIL PROTECTED] wrote:

 Hi Josh,

 This is exactly the same setup I'm currently testing. For those
 asking why use both WAL shipping and slony, it's simple; this means
 we have no single point of failure. If slony stops replicating
 because we mess up a replication set or our shipping method (NFS)
 falls on its ass, at least we still have some replication going.

 We have to use slony, like you we need a replica to take the load off
 of our main system, this is mainly for our reporting processes.

 Glyn


 --- Josh Harrison [EMAIL PROTECTED] wrote:

  Hi,
  We are trying to use slony and WAL shipping for warm standby for
  replication
   with PostgreSQL. Currently our systems are in Oracle and we are
   checking the
   feasibility of migrating to Postgres. Replication is one major issue
   here.
  Though everything seems to be working fine in our test environment,
  we just
  want the assurance that Slony and WAL shipping is used by other
  large
  production systems and running successfully.
  What are the other large 24x7 productions systems that use slony
  and the
  other WAL archiving of postgresql successfully?
 
  Thanks
  josh
 







[GENERAL] Applying patch

2007-12-28 Thread Josh Harrison
Hi,
I have PostgreSQL version 8.3beta3 running. I want to apply a patch
(gzn7ImULlK3v.gz, a patch for index-only scans) to it and try it out. How do
I apply this patch? Where should the patch be placed, and what command
should be run? Do I have to stop the Postgres server or run initdb for the
patch to take effect?
This is the first time I'm trying out a Postgres patch...
Thanks
josh


[GENERAL] Any big slony and WAL shipping users?

2007-12-27 Thread Josh Harrison
Hi,
We are trying to use Slony and WAL shipping (for a warm standby) for
replication with PostgreSQL. Currently our systems are in Oracle and we are
checking the feasibility of migrating to Postgres. Replication is one major
issue here. Though everything seems to be working fine in our test
environment, we just want the assurance that Slony and WAL shipping are used
by other large production systems and running successfully.
What are the large 24x7 production systems that use Slony and
PostgreSQL's WAL archiving successfully?

Thanks
josh


Re: [GENERAL] [Slony1-general] Any big slony and WAL shipping users?

2007-12-27 Thread Josh Harrison
On Dec 27, 2007 12:37 PM, Andrew Sullivan [EMAIL PROTECTED] wrote:

 On Thu, Dec 27, 2007 at 10:49:10AM -0500, Josh Harrison wrote:
  Hi,
  We are trying to use slony and WAL shipping for warm standby for
 replication

 It's unusual to use both.  Any reason you want to?


We wanted to have one master and one slave that can be queried, and one warm
standby server that can be brought up in case of a crash. So I thought it
might be better to have WAL shipping for the warm standby, since that's
working pretty well, and Slony for master-slave replication.
Let me know your comments on this setup. What would be better for this setup?



 Anyway. . .

  What are the other large 24x7 productions systems that use slony and the
  other WAL archiving of postgresql successfully?

 Slony was originally written by Jan Wieck, and released by Afilias
 (disclosure: currently my employer).  Afilias wrote it because we needed
 better replication, and there were no community-offered systems.  We run
 the
 registries for several Internet top-level domains, including .info and
 .org.
 We have fairly stringent uptime guarantees, and reasonably high
 transaction
 volumes.  The databases are not immense, however.  Nevertheless, all the
 DNS
 changes for .org (for instance) today are dependent on Slony operating
 correctly.  I wouldn't say the system is perfect, but I think I can safely
 say we've been quite happy with its flexibility.  User-space tools are a
 little, uh, geeky still (with the possible exception of the GUI support --
 our system deployment makes that a little hard for us to use).

 There is someone who is using Slony to operate some rather large
 databases;
 he can post here if he wants to share his experience with you.


That will be very useful, since our people are debating the reliability of
Slony for large databases!

Thanks
Josh


[GENERAL] postgresql long text column

2007-12-26 Thread Josh Harrison
Hi
I have a question about Postgres long text column values.
How does it handle these long text column values? Does it put all the long
text column values from all the tables in one single place, or separately?
Thanks
josh
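(A hedged sketch of where they actually go: each table with potentially
large values gets its own companion TOAST table, so the values are stored
per table rather than in one global place. You can see the companion
relation like this; the table name is illustrative:)

SELECT relname, reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'mytable';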


[GENERAL] pgpool and slony

2007-12-19 Thread Josh Harrison
Hi,
This is a newbie pgpool question. I have 2 servers (master-slave) and Slony
is doing the replication. I installed pgpool for load balancing, and set
replication_mode = false and load_balance_mode = true in
pgpool.conf.

pcp_node_info gives this output.

 -sh-3.00$pcp_node_info 10 localhost 9898 josh postgres 1
172.31.0.67 5432 2 16383.50

-sh-3.00$ pcp_node_info 10 localhost 9898 josh postgres 0
172.31.0.68 5432 2 16383.50

So does this mean that pgpool is working properly with Slony-I? Also, how do
I test whether the load balancing with pgpool is working properly? What
kind of query will help me confirm that it is load balanced?

Also will pgpool automate failover? How can I test that?

sorry if my question is silly
thanks
josh
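(One hedged way to check the load balancing: ask the backend that actually
runs the statement for its own address, repeat the query through pgpool a
number of times, and see whether both server IPs show up:)

SELECT inet_server_addr(), inet_server_port();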


Re: [GENERAL] slony error --need help

2007-12-18 Thread Josh Harrison
On Dec 17, 2007 3:14 PM, Shane Ambler [EMAIL PROTECTED] wrote:

 Not sure I can help much, I only read out of interest, but thought
 clarifying a few things may help.

 SHARMILA JOTHIRAJAH wrote:
  Hi, I had posted this on the Slony mailing list but had no luck in getting
  any answers... Please help me, as I've been stuck on this error for the
  last 4 days.
 
 
 
  I'm trying to replicate between Postgres version 7.4.18 and version
  8.1.10.

 7.4 is the master?

yes



  I configured Postgres 7.4 with the --enable-thread-safety option.
 
  I configured slony1 with this command
 
  ./configure --prefix=/export/home/josh/slony7.4
  --enable-thread-safety
  --with-pgconfigdir=/export/home/josh/postgres7.4/bin
  --with-pgsourcetree=/export/home/josh/postgresql-7.4.18
 
 This is the 7.4 config? is the 8.1 config the same?
 (you're not building Slony for 8.1 against a 7.4 source tree?)

 Are both versions running on the same machine?

Yes



 What version of Slony? same for both servers?

Yes slony version 1.2.12



 
  When i try to execute the this script

 On the 7.4 machine?

yes



  #!/bin/sh
 
  slonik _EOF_ cluster name = slony_example;
 
  node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST
  user=$REPLICATIONUSER'; node 2 admin conninfo = 'dbname=$SLAVEDBNAME
  host=$SLAVEHOST user=$REPLICATIONUSER';
 
  init cluster ( id=1, comment = 'Master Node');
 
  create set (id=1, origin=1, comment='All pgbench tables'); set add
  table (set id=1, origin=1, id=1, fully qualified name =
  'public.sample1', comment='accounts table'); store node (id=2,
  comment = 'Slave node');
 
  store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME
  host=$MASTERHOST user=$REPLICATIONUSER'); store path (server = 2,
  client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST
  user=$REPLICATIONUSER');
 
  _EOF_
 
 
 
  I get theis error
 
  stdin:21: PGRES_FATAL_ERROR load '$libdir/xxid';  - ERROR:  could
  not load library /export/home/josh/postgres7.4/lib/xxid.so:
  ld.so.1: postgres: fatal: relocation error: file
  /export/home/josh/postgres7.4/lib/xxid.so: symbol
  GetTopTransactionId: referenced symbol not found
 
  stdin:21: Error: the extension for the xxid data type cannot be
  loaded in database 'dbname=testdb1 host=172.31.0.67 user=josh'
  stdin:21: ERROR: no admin conninfo for node 134701624
 
 
 
 
  The same works fine between PostgreSQL versions 8.1.10 and 8.2.5.
 
  Why do I get this error when replicating between versions 7.4 and 8.1?
  Does Slony-I replicate between these 2 versions? If so, are there any
  other settings that need to be done?
 

 I sorted out the problem. I think I had too many Postgres installations on
my system. I removed them and compiled both PostgreSQL and Slony from
scratch, and it just worked... Thanks again.


[GENERAL] multiple version installation in the same machine ????

2007-12-18 Thread Josh Harrison
Hi
I have a postgres version 7.4 and version 8.3 installed in my system.
 7.4 uses port 5432 and 8.3 uses port 5433.
I started 7.4 and the database is running fine. Then I started the database
server for version 8.3,
and it started fine.

pg_ctl -D /export/home/josh/postgres8.3/pgsql/data start -l log8.3.log
server starting
-sh-3.00$ pg_ctl status
pg_ctl: server is running (PID: 4408)
/usr4/postgres8.3/bin/postgres  -D  /export/home/josh/postgres8.3/pgsql/data

But when I type psql -l I get this error:
-sh-3.00$ psql -l
psql: FATAL:  database postgres does not exist

Why? Is it not possible to have multiple version installations on the same
machine (on different ports)?

thanks
josh


Re: [GENERAL] multiple version installation in the same machine ????

2007-12-18 Thread Josh Harrison
Thanks...It works...
josh

On Dec 18, 2007 3:06 PM, Richard Huxton [EMAIL PROTECTED] wrote:

 Josh Harrison wrote:
  Hi
  I have a postgres version 7.4 and version 8.3 installed in my system.
   7.4 uses port 5432 and 8.3 uses port 5433.
  I started 7.4 and the database is running fine. Now i started the
 database
  server in version 8.3
  and it started fine.
 
  pg_ctl -D /export/home/josh/postgres8.3/pgsql/data start -l log8.3.log
  server starting
  -sh-3.00$ pg_ctl status
  pg_ctl: server is running (PID: 4408)
  /usr4/postgres8.3/bin/postgres  -D
  /export/home/josh/postgres8.3/pgsql/data
 
  But when I type psql -l   I get this error
  -sh-3.00$ psql -l
  psql: FATAL:  database postgres does not exist

 Yep.

   Why? Is it not possible to have multiple version installations on the
  same
   machine (on different ports)?

 It is, and lots of people do.

 Answer the following questions and you'll find out why you get your
 error (or just guess, and you'll probably figure it out).

 Q1. Do you have two versions of the psql command installed?
 Q2. Which one are you running above?
 Q3. What is the default port, user, database that it is connecting as to
 run -l?
 Q4. Does the version of the PostgreSQL on that port have that user and
 database?

 Personally I wrap my commands in aliases:
   alias psql82='/usr/local/pgsql82/bin/psql -p5433'
   alias psql83='/usr/local/pgsql83/bin/psql -p5483'

 Others prefer a small wrapper script, or even a set default
 installation command.

 --
   Richard Huxton
   Archonet Ltd



Re: [GENERAL] slony question

2007-12-11 Thread Josh Harrison
Thanks. I'm testing Slony 1.2.12 with Postgres 8.3beta3. I tried to
configure Slony but I keep getting this error.

 ./configure --with-pgconfigdir=/export/home/sjothirajah/postgres8.3/bin
--with-enable-thread-safety
or
./configure --with-pgconfigdir=/export/home/sjothirajah/postgres8.3/bin
--enable-thread-safety



checking for correct version of PostgreSQL... 8.3
pg_config says pg_sharedir is /usr4/postgres8.3/share/postgresql/
checking PostgreSQL for enable-thread-safety as required on solaris2.10...
no
configure: error: PostgreSQL needs to be compiled with
--enable-thread-safety on solaris2.10

Can you help?
Thanks
josh

On Dec 10, 2007 8:19 PM, Geoffrey [EMAIL PROTECTED] wrote:

 Scott Marlowe wrote:
  On Dec 10, 2007 2:19 PM, Chris Browne [EMAIL PROTECTED] wrote:
  [EMAIL PROTECTED] (Josh Harrison) writes:
  Does slony support postgres major version upgrade ? ie., will it
  replicate between different major versions?
  Yes, that's one of the major use cases for Slony-I.
 
  Version 1.0 and 1.1 support (loosely) from PostgreSQL 7.3 to 8.1-ish.
 
  Version 1.2 dropped 7.3 support, but supports up to (just about;
  there's a patch not in a 1.2 release that is needed for late-breaking
  8.3 changes) 8.3.
 
  Didn't 1.2 drop support for pg 7.4 as well?  I thought I remembered
  reading that some time ago.

 Not according to the docs:

 http://slony.info/documentation/requirements.html
 Section 3.1, 3rd bullet:

 You also need a recent version of PostgreSQL source. Slony-I depends on
 namespace support so you must have PostgreSQL version 7.3.3 or newer to
 be able to build and use Slony-I.

 It says the same thing in the Slony 1.2.12 docs rpm.

 --
 Until later, Geoffrey

 Those who would give up essential Liberty, to purchase a little
 temporary Safety, deserve neither Liberty nor Safety.
  - Benjamin Franklin




[GENERAL] slony question

2007-12-10 Thread Josh Harrison
Hi,
Does slony support postgres major version upgrades? I.e., will it replicate
between different major versions?
Thanks
josh


Re: [GENERAL] slony question

2007-12-10 Thread Josh Harrison
Thanks... I'll check that list.
josh

On Dec 10, 2007 12:37 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:


 On Mon, 10 Dec 2007 12:20:07 -0500
 Josh Harrison [EMAIL PROTECTED] wrote:

  Hi,
  Does slony support postgres major version upgrade ? ie., will it
  replicate between different major versions?

 Yes. But for further questions on Slony please see the slony lists:

 http://lists.slony.info/mailman/listinfo

 Sincerely,

 Joshua D. Drake

  Thanks
  josh


 --

  === The PostgreSQL Company: Command Prompt, Inc. ===
 Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
 PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
 Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
 PostgreSQL Replication: http://www.commandprompt.com/products/




[GENERAL] Replication using WAL files

2007-12-07 Thread Josh Harrison
Hi list,

I'm a newbie for postgresql replication
This is my requirement.

I have 2 servers on which I need to have data replicated. The master server
should serve read/write queries, and the 2nd server is used mainly for
research queries (read-only), so it doesn't have to be up-to-date.
Read queries should be able to be sent to both servers so that load balancing
is possible. In case the master server fails, the 2nd server should be able
to take over (failover support).

Is it possible to implement this through WAL shipping?

I tried the 'Continuous Archiving and PITR' setup in my test database and it
works fine. But that setup only gives a warm standby server... right?! Is it
possible to make both servers work asynchronously, where the primary server
takes all write queries, read queries can be sent to both the primary and the
2nd server, and the 2nd server is updated asynchronously from the primary
server's WAL files (WAL shipping) while both servers are up?

Hope I explained my requirement clearly.
Please advise.
Thanks
josh


Re: [GENERAL] Replication using WAL files

2007-12-07 Thread Josh Harrison



 On 12/7/07, Josh Harrison  [EMAIL PROTECTED] wrote:
   I have 2 servers on which I need to have data replicated. The master
  server
   should serve for read/write queries and the 2nd server is used mainly
  for
   research queries(read-only queries) and so it doesn't have to be
  up-to-date.
  ...
   Is it possible to implement this thro' WAL shipping?
 
  No. At the moment [1] the WAL shipping system does not permit you to
  query the slave. There are a few options:
 
 
  http://pgfoundry.org/search/?type_of_search=soft&words=replication&Search=Search
 
  As well as this:
 
   http://bucardo.org/
 
  [1] But someone is working on this for 8.4. Don't hold your breath,
  though.
 
 
 Thanks for the info. Just to clarify: so at the moment, for WAL shipping to
 work, the 2nd server can only be a standby server and not a slave (it cannot
 be queried)?
 In your experience, which other replication system (slony, pgpool, etc.) is
 better suited for my requirement?

 Thanks again
 josh


Hi,
I read all the documentation and got a bit confused.
1. I have a primary server which archives the WAL files to a directory on the
secondary server.
2. The recovery.conf in the secondary server is set to copy these files
from this directory. (A sketch of the configuration in steps 1-2 follows
below.)
3. I take a base backup and feed it to the secondary server and start the
secondary server.
4. The secondary server now contains all the base-backed-up data.
5. I set up a trigger (as in this demo:
http://archives.postgresql.org/sydpug/2006-10/msg1.php).
I created/inserted some tables/data in the primary server, 'touch'ed the
trigger file on the secondary server, and the archived files are now
reflected in the secondary server's database; the secondary database is up
(recovery.conf becomes recovery.done) and I can query it as normal.
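
A sketch of what steps 1 and 2 look like in the config files (the shared
archive directory is an assumption):

  # postgresql.conf on the primary (step 1)
  archive_command = 'cp %p /shared/walarchive/%f'

  # recovery.conf on the secondary (step 2)
  restore_command = 'cp /shared/walarchive/%f %p'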


My question is: will any future changes in the primary server get
reflected in the secondary server, or is it just a one-time show? If not, is
it possible to have a setup like that (the secondary server getting
asynchronous updates from the master through WAL files... I guess hot
standby using WAL files)?

Sorry if my question sounds silly.
Thanks for all your help
josh






Re: [GENERAL] initdb - encoding question

2007-12-04 Thread Josh Harrison
initdb -E en_CA.utf-8 -D /export/home/sjothirajah/postgres8.3/pgsql/data
gives this error
initdb: en_CA.utf-8 is not a valid server encoding name
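
For what it's worth, initdb spells the encoding name UTF8 and takes the
territory through the locale option instead; a form that usually works,
assuming the en_CA.UTF-8 locale is installed on the machine:

  initdb -E UTF8 --locale=en_CA.UTF-8 -D /export/home/sjothirajah/postgres8.3/pgsql/data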

Thanks
josh

On Dec 3, 2007 1:01 PM, brian [EMAIL PROTECTED] wrote:

 Josh Harrison wrote:
  Hi
  I'm trying to initialize the cluster using initdb and encoding 'UTF8'
  initdb -E UTF8 -D /export/home/josh/postgres8.3/pgsql/data
 
  But I get this error
  The files belonging to this database system will be owned by user
 josh.
  This user must also own the server process.
 
  The database cluster will be initialized with locales
COLLATE:  en_CA.ISO8859-1
CTYPE:en_CA.ISO8859-1
MESSAGES: C
MONETARY: en_CA.ISO8859-1
NUMERIC:  en_CA.ISO8859-1
TIME: C
  initdb: encoding mismatch
  The encoding you selected (UTF8) and the encoding that the
  selected locale uses (LATIN1) do not match.  This would lead to
  misbehavior in various character string processing functions.
  Rerun initdb and either do not specify an encoding explicitly,
  or choose a matching combination.
 
  Why do I get this error? How can I solve this?
  Thanks
  josh
 

 ISO8859-1 is latin 1. Try using en_CA.utf-8 instead.

 brian




[GENERAL] uninstall postgres 8.2

2007-12-03 Thread Josh Harrison
Hi,
What is the best way to uninstall postgres?
 I'm planning to use postgres 8.3 beta and I would like to uninstall my
earlier version 8.2.3. This is only a  test database and so I'm not
interested in upgrading or preserving anything from this version.
Thanks
josh


Re: [GENERAL] uninstall postgres 8.2

2007-12-03 Thread Josh Harrison
Thanks. I'll just delete them.
josh

On Dec 3, 2007 10:37 AM, Gregory Williamson 
[EMAIL PROTECTED] wrote:

  josh --

 Delete everything under the postgres directory (we put ours on
 /apps/postgres.version.whatever) to remove the executables/libs etc., and
 delete the entire data directory that the initdb pointed to, e.g. if
 $PGDATA is /data/postgres/billing, rm -rf /data/postgres/billing would get
 rid of all the data and supporting tables. Don't forget to wipe the log, if
 any.

 (Sorry for top-posting -- challenged reader)

 Greg Williamson
 Senior DBA
 GlobeXplorer LLC, a DigitalGlobe company

 Confidentiality Notice: This e-mail message, including any attachments, is
 for the sole use of the intended recipient(s) and may contain confidential
 and privileged information and must be protected in accordance with those
 provisions. Any unauthorized review, use, disclosure or distribution is
 prohibited. If you are not the intended recipient, please contact the sender
 by reply e-mail and destroy all copies of the original message.

 (My corporate masters made me say this.)







[GENERAL] initdb - encoding question

2007-12-03 Thread Josh Harrison
Hi
I'm trying to initialize the cluster using initdb and encoding 'UTF8'
initdb -E UTF8 -D /export/home/josh/postgres8.3/pgsql/data

But I get this error
The files belonging to this database system will be owned by user josh.
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_CA.ISO8859-1
  CTYPE:en_CA.ISO8859-1
  MESSAGES: C
  MONETARY: en_CA.ISO8859-1
  NUMERIC:  en_CA.ISO8859-1
  TIME: C
initdb: encoding mismatch
The encoding you selected (UTF8) and the encoding that the
selected locale uses (LATIN1) do not match.  This would lead to
misbehavior in various character string processing functions.
Rerun initdb and either do not specify an encoding explicitly,
or choose a matching combination.

Why do I get this error? How can I solve this?
Thanks
josh


Re: [GENERAL] Recheck condition

2007-11-30 Thread Josh Harrison


*Query1*
SELECT person_id  FROM person   WHERE (column1=1 AND column2='62')
INTERSECT
SELECT person_id  FROM person  WHERE (column1=1 AND column2='189')

  I get the same plan (see below) with 'sort' for the 'intersect all'
  operation too. Why is intersect not an efficient way? Is there any other
  way this query/index can be written/created so that I can get the
  intersect results in an efficient way?

 Set operations are rather inefficient. To find the intersection of two
 arbitrary sets you need to sort them and compare. A query like you
 wrote would be better expressed as a join, something like:

 SELECT a.person_id
 FROM (SELECT person_id FROM person WHERE (column1=1 AND column2='62')) a,
      (SELECT person_id FROM person WHERE (column1=1 AND column2='189')) b
 WHERE a.person_id = b.person_id;

 or perhaps:

 SELECT a.person_id
 FROM person a, person b
 WHERE a.column1=1 AND a.column2='62'
 AND b.column1=1 AND b.column2='189'
 AND a.person_id = b.person_id;

 Which will probably generate a merge join...


Thanks. But this query seems to be more expensive than using the intersect
operator. This is the explain analyze plan for this query; it took 5 1/2
minutes to generate. I also tried disabling mergejoin, and in that case it
uses a hash join and still takes more than 3 minutes (intersect took only 40
sec).

                                  QUERY PLAN
------------------------------------------------------------------------------
 Merge Join  (cost=1610648.92..10280119.99 rows=577856095 width=4) (actual time=30562.630..264534.677 rows=225145385 loops=1)
   Merge Cond: (a.patient_id = b.patient_id)
   ->  Sort  (cost=527974.81..528929.79 rows=381993 width=4) (actual time=3755.361..3845.134 rows=213435 loops=1)
         Sort Key: a.patient_id
         Sort Method:  quicksort  Memory: 15868kB
         ->  Bitmap Heap Scan on clinical_variable2 a  (cost=17886.42..492557.97 rows=381993 width=4) (actual time=315.753..3410.366 rows=327498 loops=1)
               Recheck Cond: ((top_parent_service_sys_id = 1) AND ((top_parent_service_code)::text = '62'::text))
               ->  Bitmap Index Scan on clinical_variable_idx_topserv  (cost=0.00..17790.92 rows=381993 width=0) (actual time=277.185..277.185 rows=327498 loops=1)
                     Index Cond: ((top_parent_service_sys_id = 1) AND ((top_parent_service_code)::text = '62'::text))
   ->  Materialize  (cost=1082674.11..1124546.38 rows=3349781 width=4) (actual time=26807.248..99885.620 rows=225148250 loops=1)
         ->  Sort  (cost=1082674.11..1091048.57 rows=3349781 width=4) (actual time=26807.238..30343.870 rows=3429228 loops=1)
               Sort Key: b.patient_id
               Sort Method:  external merge  Disk: 53552kB
               ->  Bitmap Heap Scan on clinical_variable2 b  (cost=156754.24..679555.96 rows=3349781 width=4) (actual time=2744.126..20106.160 rows=3429228 loops=1)
                     Recheck Cond: ((top_parent_service_sys_id = 1) AND ((top_parent_service_code)::text = '189'::text))
                     ->  Bitmap Index Scan on clinical_variable_idx_topserv  (cost=0.00..155916.80 rows=3349781 width=0) (actual time=2686.456..2686.456 rows=3429228 loops=1)
                           Index Cond: ((top_parent_service_sys_id = 1) AND ((top_parent_service_code)::text = '189'::text))
 Total runtime: 324646.035 ms

18 record(s) selected

Is there any other way you can think of to solve this problem? Maybe
creating the indexes in a different way, or something else?

Thanks
jo


Re: [GENERAL] Recheck condition

2007-11-30 Thread Josh Harrison


 Or using an IN or EXISTS query:

 SELECT person_id
  FROM person
  WHERE column1=1
   AND column2='62'
   AND person_id IN (
 SELECT person_id
   FROM person
  WHERE column1=1
AND column2='189'
   )

 or

 SELECT person_id
  FROM person AS parent
  WHERE column1=1
   AND column2='62'
   AND EXISTS (
 SELECT 1
   FROM person
  WHERE parent.person_id = person_id
AND column1=1
AND column2='189'
   )


Thanks for your reply.
The query with IN gave this plan and took 1m 19s to return the result, which
is about double the intersect query's 40 sec. The other query, with
EXISTS, takes far longer. All these queries do a heap scan
for intermediate results... right? Is there a way to get them not to use the
heap for intermediate results and go to the heap only for the final data? That
would drastically improve the performance, but I'm not sure if postgres can do
that. Would creating the index in a different way and/or rewriting the query
in a different way achieve this?

Thanks
jo

                                  QUERY PLAN
------------------------------------------------------------------------------
 Hash Join  (cost=705823.44..1182434.64 rows=43631 width=4) (actual time=26443.675..52055.698 rows=140464 loops=1)
   Hash Cond: (public.person.patient_id = public.person.patient_id)
   ->  Bitmap Heap Scan on person  (cost=17886.42..492557.97 rows=381993 width=4) (actual time=442.934..25779.601 rows=327498 loops=1)
         Recheck Cond: ((column1 = 1) AND ((column2)::text = '62'::text))
         ->  Bitmap Index Scan on person_idx  (cost=0.00..17790.92 rows=381993 width=0) (actual time=403.869..403.869 rows=327498 loops=1)
               Index Cond: ((column1 = 1) AND ((column2)::text = '62'::text))
   ->  Hash  (cost=687933.35..687933.35 rows=294 width=4) (actual time=26000.635..26000.635 rows=6568 loops=1)
         ->  HashAggregate  (cost=687930.41..687933.35 rows=294 width=4) (actual time=25992.971..25996.471 rows=6568 loops=1)
               ->  Bitmap Heap Scan on person  (cost=156754.24..679555.96 rows=3349781 width=4) (actual time=3202.251..23974.389 rows=3429228 loops=1)
                     Recheck Cond: ((column1 = 1) AND ((column2)::text = '189'::text))
                     ->  Bitmap Index Scan on person_idx  (cost=0.00..155916.80 rows=3349781 width=0) (actual time=3145.912..3145.912 rows=3429228 loops=1)
                           Index Cond: ((column1 = 1) AND ((column2)::text = '189'::text))
 Total runtime: 52094.598 ms


Re: [GENERAL] Recheck condition

2007-11-30 Thread Josh Harrison
On Nov 30, 2007 7:55 AM, Alvaro Herrera [EMAIL PROTECTED] wrote:

 Josh Harrison wrote:

  Thanks...
  I have 1 more question in the same line...
 
  *Query1*
  SELECT person_id  FROM person   WHERE (column1=1 AND column2='62')
  INTERSECT
  SELECT person_id  FROM person  WHERE (column1=1 AND column2='189')

 Hmm, I think INTERSECT (and EXCEPT) is pretty stupid in Postgres in
 general.  Maybe INTERSECT ALL could be a bit faster, because it can
 avoid the sort steps.  Make sure you eliminate duplicates if they are a
 concern.


I get the same plan (see below) with 'sort' for the 'intersect all' operation
too. Why is intersect not an efficient way? Is there any other way this
query/index can be written/created so that I can get the intersect results
in an efficient way?
Thanks
jo

                                  QUERY PLAN
------------------------------------------------------------------------------
 SetOp Intersect All  (cost=1750719.48..1769378.35 rows=373177 width=4) (actual time=41065.459..45469.038 rows=128562 loops=1)
   ->  Sort  (cost=1750719.48..1760048.92 rows=3731774 width=4) (actual time=41065.375..44027.342 rows=3756726 loops=1)
         Sort Key: "*SELECT* 1".patient_id
         Sort Method:  external merge  Disk: 73432kB
         ->  Append  (cost=17886.42..1209431.67 rows=3731774 width=4) (actual time=1445.675..30171.066 rows=3756726 loops=1)
               ->  Subquery Scan "*SELECT* 1"  (cost=17886.42..496377.90 rows=381993 width=4) (actual time=1445.674..8223.061 rows=327498 loops=1)
                     ->  Bitmap Heap Scan on person  (cost=17886.42..492557.97 rows=381993 width=4) (actual time=1445.670..8021.006 rows=327498 loops=1)
                           Recheck Cond: ((column1 = 1) AND ((column2)::text = '62'::text))
                           ->  Bitmap Index Scan on person_idx  (cost=0.00..17790.92 rows=381993 width=0) (actual time=1440.189..1440.189 rows=327498 loops=1)
                                 Index Cond: ((column1 = 1) AND ((column2)::text = '62'::text))
               ->  Subquery Scan "*SELECT* 2"  (cost=156754.24..713053.77 rows=3349781 width=4) (actual time=4183.977..20195.276 rows=3429228 loops=1)
                     ->  Bitmap Heap Scan on person  (cost=156754.24..679555.96 rows=3349781 width=4) (actual time=4183.973..18191.919 rows=3429228 loops=1)
                           Recheck Cond: ((column1 = 1) AND ((column2)::text = '189'::text))
                           ->  Bitmap Index Scan on person_idx  (cost=0.00..155916.80 rows=3349781 width=0) (actual time=4178.644..4178.644 rows=3429228 loops=1)
                                 Index Cond: ((column1 = 1) AND ((column2)::text = '189'::text))
 Total runtime: 45504.425 ms


Re: [GENERAL] Recheck condition

2007-11-30 Thread Josh Harrison
On Nov 29, 2007 8:15 AM, Alvaro Herrera [EMAIL PROTECTED] wrote:

 Josh Harrison wrote:
  
For example if I have a table Person with 3 fields
 (name,city_id,age).
   And
the table contains 1000 rows. The table has 2 indexes city_id and
 age
If I have a query :
SELECT * FROM PERSON WHERE city_id=5 AND AGE=30
 
  Okay... So if I have a query like the above and the query plan shows a
  'recheck condition' and bitmap scan, then does that mean it scans the
  indexes first to get the intermediate results and goes to the heap only
  for the final data?

 Yes.

 If the table actually contains 1000 rows, the most likely outcome is
 that the bitmaps would not be lossy and therefore no rechecking is
 needed at all.  (Tuple bitmaps become lossy only if they have to store a
 lot of tuples, in which case they forget the idea of storing each tuple,
 and instead compress the representation to storing only the page
 numbers where matching tuples are to be found).

 Note however, that even if the bitmaps are not lossy, the visit to the
 heap is still required, because of the need to check visibility.

Thanks...
I have one more question along the same lines...

*Query1*
SELECT person_id  FROM person   WHERE (column1=1 AND column2='62')
INTERSECT
SELECT person_id  FROM person  WHERE (column1=1 AND column2='189')

There is an index created as person_idx(column1,column2)

                                  QUERY PLAN
------------------------------------------------------------------------------
 SetOp Intersect  (cost=1750719.48..1769378.35 rows=373177 width=4) (actual time=42913.626..47247.650 rows=6352 loops=1)
   ->  Sort  (cost=1750719.48..1760048.92 rows=3731774 width=4) (actual time=42913.537..45838.472 rows=3756726 loops=1)
         Sort Key: "*SELECT* 1".patient_id
         Sort Method:  external merge  Disk: 73432kB
         ->  Append  (cost=17886.42..1209431.67 rows=3731774 width=4) (actual time=1474.995..32215.493 rows=3756726 loops=1)
               ->  Subquery Scan "*SELECT* 1"  (cost=17886.42..496377.90 rows=381993 width=4) (actual time=1474.993..4936.240 rows=327498 loops=1)
                     ->  Bitmap Heap Scan on person  (cost=17886.42..492557.97 rows=381993 width=4) (actual time=1474.990..4735.972 rows=327498 loops=1)
                           Recheck Cond: ((column1 = 1) AND ((column2)::text = '62'::text))
                           ->  Bitmap Index Scan on person_idx  (cost=0.00..17790.92 rows=381993 width=0) (actual time=1469.508..1469.508 rows=327498 loops=1)
                                 Index Cond: ((column1 = 1) AND ((column2)::text = '62'::text))
               ->  Subquery Scan "*SELECT* 2"  (cost=156754.24..713053.77 rows=3349781 width=4) (actual time=4142.577..25518.305 rows=3429228 loops=1)
                     ->  Bitmap Heap Scan on person  (cost=156754.24..679555.96 rows=3349781 width=4) (actual time=4142.573..23493.596 rows=3429228 loops=1)
                           Recheck Cond: ((column1 = 1) AND ((column2)::text = '189'::text))
                           ->  Bitmap Index Scan on person_idx  (cost=0.00..155916.80 rows=3349781 width=0) (actual time=4136.948..4136.948 rows=3429228 loops=1)
                                 Index Cond: ((column1 = 1) AND ((column2)::text = '189'::text))
 Total runtime: 47250.501 ms


Question:
This query uses intersection. How does postgres handle this? The steps
in the above query are:
1. find all tuples that match column1=1 AND column2='62'
2. find all tuples that match column1=1 AND column2='189'
3. find the intersection of the above 2
Does it go to the heap even to get the intermediate results (1 and 2)?
or
Does it do the first 2 steps using the index and go to the heap only for the
final data?

Also, what does 'Sort Method: external merge Disk: 73432kB' mean? Should I
modify something to make this query run faster? Postgres takes 4 times
longer than Oracle to return this query. Is there a way to make it
faster?
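
For reference, 'external merge' means the sort spilled to disk because it
outgrew work_mem. Raising work_mem for the session is the usual first
experiment; a sketch, with an illustrative value sized above the 73432kB the
sort wrote to disk:

  SET work_mem = '128MB';
  EXPLAIN ANALYZE
  SELECT person_id FROM person WHERE (column1=1 AND column2='62')
  INTERSECT
  SELECT person_id FROM person WHERE (column1=1 AND column2='189');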

Thanks
jo


Re: [GENERAL] Recheck condition

2007-11-29 Thread Josh Harrison

  For example if I have a table Person with 3 fields (name,city_id,age).
 And
  the table contains 1000 rows. The table has 2 indexes city_id and age
  If I have a query :
  SELECT * FROM PERSON WHERE city_id=5 AND AGE=30

 The answer is: it depends. Postgres has a cost-based planner; it will
 estimate the costs of each different way of getting the result and use
 the cheapest. The important factor is how many rows each
 condition will match.

 Given your table is only 8MB, the system may decide that it's all in
 memory and just do a scan.

 Or it may see that city_id is almost unique and use that index and
 check the matches for the second condition. Or vice-versa.

 Or maybe it will scan both indexes, calculate the intersection and then
 look up the matches in the heap (with a recheck).


Okay... So if I have a query like the above and the query plan shows a
'recheck condition' and bitmap scan, then does that mean it scans the
indexes first to get the intermediate results and goes to the heap only for
the final data?


 Thanks

jo


[GENERAL] Recheck condition

2007-11-28 Thread Josh Harrison
Hi,
Sorry if my question is silly.
When I use explain analyze command I come across 'recheck condition' in some
places.
I googled for this  but didn't get any solid answers.

What is recheck condition and when does the query planner choose this?
Thanks
josh


Re: [GENERAL] Recheck condition

2007-11-28 Thread Josh Harrison
Some indexes are inexact, i.e. they may sometimes return tuples that
don't actually match the index condition.

What causes an index to be inexact? When you create an index and vacuum it
regularly, it is supposed to be correct... right??

This also happens with bitmap
scans, because it'll return anything in the bitmap which will probably
be more than what you asked for. The recheck just means that the
planner retests the index condition on the result to make sure you only
get the rows you wanted

So does the recheck condition affect the performance of queries, since it
basically rechecks the condition?
Also, does it go to the heap to retest?

For example for this query
explain analyze select count(*) from foo where foo_id=1 I get the following
plan

                                  QUERY PLAN
------------------------------------------------------------------------------
 Aggregate  (cost=1949.84..1949.85 rows=1 width=0) (actual time=7.996..7.996 rows=1 loops=1)
   ->  Bitmap Heap Scan on foo  (cost=277.45..1924.94 rows=9959 width=0) (actual time=1.903..5.270 rows=10020 loops=1)
         Recheck Cond: (foo_id = 1::numeric)
         ->  Bitmap Index Scan on foo_pk  (cost=0.00..274.96 rows=9959 width=0) (actual time=1.864..1.864 rows=10020 loops=1)
               Index Cond: (foo_id = 1::numeric)
 Total runtime: 8.062 ms

Can you please explain to me, with respect to this example, what is happening
here? This is a small table, but for big tables the performance is not very
good. Does the recheck condition bring down query performance?

Thanks
josh


Re: [GENERAL] postgresql storage and performance questions

2007-11-20 Thread Josh Harrison
We are working on migrating our database from oracle to postgres.
Postgres's table size is twice oracle's for all my tables, and so
queries also take twice as much time as in oracle. So
we were checking to see what makes postgres slower than oracle even
for basic full-tablescan queries.
There were a couple of things we noted.
1. Table size twice as much as oracle -- I'm not sure if postgres null
columns have any overhead, since we have lots of null columns in our
tables. Does postgresql have lots of overhead for null columns?
2. Oracle seems to be reading larger blocks than postgresql (when we
examined iostat and vmstat; we had set postgres' db block size to
8kB and oracle's is 16kB...)
Do you have any comments on this?

Thanks in advance
josh

On Nov 20, 2007 12:37 AM, Trevor Talbot [EMAIL PROTECTED] wrote:
 On 11/19/07, Josh Harrison [EMAIL PROTECTED] wrote:

  I have 2 tables with 2 cols each( 1 numeric(8,0) and 1 varchar(3) ).
  In table1 both the cols are filled and in table2  the varchar colm is null
 
  So when I checked the tablesize for these two tables (using 
  pg_relation_size)
  table1 - 57344 bytes (no null columns)
  table2 - 49152 bytes (varchar colm is null)
 
  There is not much difference between the two sizes.So even if a column
  is null postgresql still has lots of overhead.
   Does postgres occupy space even when the column is NULL?

 PostgreSQL's disk storage works in pages, where each page is 8KB.
 It will use as much space within each page as it can.  Filip's last
 link details this.

 Is there a specific reason you're looking at this, as in you have some
 requirement to meet?  Or just curious how it works?





Re: [GENERAL] postgresql storage and performance questions

2007-11-20 Thread Josh Harrison
On Nov 20, 2007 8:10 AM, Filip Rembiałkowski [EMAIL PROTECTED] wrote:
 2007/11/20, Josh Harrison [EMAIL PROTECTED]:
  We are working on migrating our database from oracle to postgres.
  Postgres tablesize is twice than oracle tablesize for all my
  tables.
 Interesting. Which postgresql version?
Version 8.2.3

 And so the query also takes twice as much time than oracle.
 This is even more interesting :) What query? can you show it here
 along with EXPLAIN ANALYZE?

explain analyze select count(*) from dummy1

                                  QUERY PLAN
------------------------------------------------------------------------------
 Aggregate  (cost=1192999.60..1192999.61 rows=1 width=0) (actual time=109792.239..109792.239 rows=1 loops=1)
   ->  Seq Scan on dummy1  (cost=0.00..1119539.48 rows=29384048 width=0) (actual time=0.027..101428.016 rows=29384048 loops=1)
 Total runtime: 109792.332 ms

Postgresql takes 1m 40s for this query;
Oracle takes 45 sec.

It is just a count(*) query. I know a count(*) query is slower in
postgres because it doesn't use the index. But in Oracle I give the query as
select /*+full(dummy1)*/ count(*) from dummy1
with the hint, so that oracle uses a full table scan and not an index scan.


 Did you do some index tuning or do you just expect the indexes ported
 from the Oracle schema to work?
I created the indexes and I'm not sure what kind of tuning needs to be
done for them. But the above query doesn't use any indexes.

 Did you run ANALYZE after populating database?
Yes

 What are server parameters and did you tune postgres config to fit them?
I had attached my config file and the table structure

  So
  we were checking to see what makes postgres slower than oracle even
  for basic full tablescan queries.
 I'm curious too :) please let me know if you resolve this mystery :)


  There were a couple of things we noted.
  1. Tablesize twice as much than oracle-- Im not sure if postgres null
  columns has any overhead since  we have lots of null columns in our
  tables.Does postgresql has lots of overhead for null columns?
 I've explained this previously - you have a null bitmap in each tuple.
 Its size is (NATTS + 7) / 8 bytes, e.g. a row with 10 attributes carries
 a 2-byte bitmap.

  2. Oracle seems to be reading larger bocks than postgresql (when we
  examined the iostat and vmstat) (we had set postgres' db block size as
  8 and oracle's is 16kb...)
 yes, 8 kB is the default pg block size. It is not recommended to change it
 - however, it could be useful in some situations - but I doubt it would
 speed up your queries twice, whatever they are.



 --
 Filip Rembiałkowski


Thanks again
josh




# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the postmaster.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect, or use
# pg_ctl reload. Some settings, such as listen_addresses, require
# a postmaster shutdown and restart to take effect.


#---
# FILE LOCATIONS
#---

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'   # use data in another directory
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file

# If external_pid_file is not explicitly set, no extra pid file is written.
#external_pid_file = '(none)'   # write an extra pid file


#---
# CONNECTIONS AND AUTHENTICATION
#---

# - Connection Settings -
#tcpip_socket=true
listen_addresses = '*'  # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
#port = 5432
max_connections = 100
# note: increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).  You
# might also need to raise shared_buffers to support more connections

Re: [GENERAL] postgresql storage and performance questions

2007-11-20 Thread Josh Harrison
On Nov 20, 2007 11:13 AM, Brad Nicholson [EMAIL PROTECTED] wrote:
 On Tue, 2007-11-20 at 07:22 -0500, Josh Harrison wrote:

  There were a couple of things we noted.
  1. Tablesize twice as much than oracle-- Im not sure if postgres null
  columns has any overhead since  we have lots of null columns in our
  tables.Does postgresql has lots of overhead for null columns?

 Did you by any chance have an aborted load of the data?  If you load in
 a table, and that load fails or does not commit, it will still occupy
 the space until you vacuum.  If you try to load again, the table will be
 twice the size.

 If you want to compact the physical space the table occupies, you can
 try running VACUUM FULL on it, and possibly a reindex afterwards.  This
 will bring the physical space down to the minimum.  Both of these
 operations will lock out access to the tables though.
I ran vacuum full on this table already. I haven't reindexed it, but
that should not affect the table size... right... since indexes are stored
separately?
  2. Oracle seems to be reading larger bocks than postgresql (when we
  examined the iostat and vmstat) (we had set postgres' db block size as
  8 and oracle's is 16kb...)
  Do you have any comments on this?

 8k is the default.  You can change the block size if you need to.  You
 need to modify src/include/pg_config_manual.h, recompile, and re-initdb.
Does changing the block size have any side effects on any other
operations in particular?
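
For reference, the rebuild Brad describes looks roughly like this (the 16kB
value and paths are assumptions; a cluster initialized with one block size
cannot be read by a server built with another, so all data must be reloaded):

  # in the source tree: edit src/include/pg_config_manual.h, set BLCKSZ to 16384
  ./configure --prefix=/usr/local/pgsql
  make && make install
  initdb -D /usr/local/pgsql/data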

 Brad Nicholson  416-673-4106
 Database Administrator, Afilias Canada Corp.






[GENERAL] postgresql storage and performance questions

2007-11-19 Thread Josh Harrison
Hi,
I have a few questions about the storage and performance

1. How do you estimate the table size in postgresql?
For example, if I have a table 'Dummy' with 1 varchar(40) and 1
numeric(22,0) field and 1000 rows, what is the table-size estimate for
this (including the row overhead etc.)? How many pages will it
occupy?

2. Also, if the table contains null columns, does postgres allocate
the same space for these null columns? How does it handle nulls in
terms of storage?

3. How does oracle handle these 2 cases?

4. Does increasing the block size in postgres improve query performance?

Thanks in advance
Josh





[GENERAL] postgresql performance and storage questions

2007-11-19 Thread Josh Harrison
Hi,
I have a few questions about the storage and performance

1. How do you estimate the table size in postgresql?
For example, if I have a table 'Dummy' with 1 varchar(3) and 1
numeric(8,0) field and 1000 rows, what is the table-size estimate for
this (including the row overhead etc.)? How many pages will it
occupy?

2. Also, if the table contains null columns, does postgres allocate
the same space for these null columns? How does it handle nulls in
terms of storage?

3. How does oracle handle these 2 cases?

4. Does increasing the block size in postgres improve query performance?

Thanks in advance
Josh





Re: [GENERAL] postgresql storage and performance questions

2007-11-19 Thread Josh Harrison
Thanks Filip.
I have 2 tables with 2 cols each (1 numeric(8,0) and 1 varchar(3)).
In table1 both cols are filled, and in table2 the varchar column is null.

So when I checked the table size for these two tables (using pg_relation_size):
table1 - 57344 bytes (no null columns)
table2 - 49152 bytes (varchar column is null)

There is not much difference between the two sizes. So even if a column
is null, postgresql still has lots of overhead.
Does postgres occupy space even when the column is NULL?
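
For anyone reproducing this, a sketch of the comparison (table names and row
counts are illustrative):

  CREATE TABLE table1 (id numeric(8,0), code varchar(3));
  CREATE TABLE table2 (id numeric(8,0), code varchar(3));
  INSERT INTO table1 SELECT g, 'abc' FROM generate_series(1, 1000) g;
  INSERT INTO table2 SELECT g, NULL  FROM generate_series(1, 1000) g;
  SELECT pg_relation_size('table1') AS table1_bytes,
         pg_relation_size('table2') AS table2_bytes;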

This is not spam -- I posted it twice because my question did not
show up in the mailing list the first time, even after 30 minutes. So I
tried again and then both showed up... kind of strange though!

Thanks again
Josh

On Nov 19, 2007 1:37 PM, Filip Rembiałkowski [EMAIL PROTECTED] wrote:
 2007/11/19, Josh Harrison [EMAIL PROTECTED]:
  Hi,
  I have a few questions about the storage and performance
 
  1. How do you estimate the table size in postgresql?
  For example if I have a table 'Dummy' with 1 varchar (40)  1
  numeric(22,0) fields and 1000 rows, what is the tablesize estimate for
  this (including the row overhead etc)? How many pages will this
  occupy?
 
  2. Also if the table contains null columns, does postgres allocates
  the same space for these nulls columns? How does it handle 'nulls' in
  terms of storage?

 Try these:
 http://www.postgresql.org/docs/current/static/datatype-numeric.html
 http://www.postgresql.org/docs/current/static/datatype-character.html
 http://www.postgresql.org/docs/current/static/storage-page-layout.html

 ad.1 )  There is a fixed-size header (occupying 27 bytes on most
 machines) for each tuple,

 so you will have about 27 + 1 + varchar data + numeric data per row,
 plus some overhead for block headers.

 ad.2 ) there is a null bitmap for each tuple which has nullable fields
 - so every 8 nullable columns occupy one byte of bitmap.


 PS. why do you post same thing many times? this is kinda.. spam?

 --
 Filip Rembiałkowski




[GENERAL] rules and command status question

2007-09-25 Thread Josh Harrison
Hello,
I have a question in the postgres document chapter 34. Rules and Command
Status.
The last paragraph of that page says that

The programmer can ensure that any desired INSTEAD rule is the one that
sets the command status in the second case, by giving it the alphabetically
last rule name among the active rules, so that it gets applied last.

Can someone help me understand what this means? How can I change my INSERT
... INSTEAD rule so that the query returns the number of rows inserted
(instead of '0 rows inserted')?
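
A minimal sketch of the naming trick the docs describe (the view, table, and
rule names here are illustrative): because the alphabetically last INSTEAD
rule is the one whose command status is reported, name the rule that does the
real insert so it sorts last among the view's rules:

  CREATE TABLE items (id int, name text);
  CREATE VIEW items_v AS SELECT * FROM items;

  -- the 'zz_' prefix makes this rule sort alphabetically last,
  -- so its row count is the one the client sees
  CREATE RULE zz_items_insert AS ON INSERT TO items_v
      DO INSTEAD INSERT INTO items VALUES (NEW.id, NEW.name);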

Thanks in advance
Josh


[GENERAL] foreign key on views

2007-09-21 Thread Josh Harrison
Hi,
Where in the source code can I see the error message that says a foreign key
to a view is not allowed?
The exact error message as it appears when I try foreign key reference on a
view is

 referenced relation xyz_view is not a table

I just wanted to know where this error message is thrown in the source
code.
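
A quick way to find it is to grep the backend source for the message text; a
sketch, run from the top of the PostgreSQL source tree:

  grep -rn 'is not a table' src/backend/commands/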
Any help will be appreciated.
Thanks
josh


[GENERAL] postgres catalog files problem

2007-09-17 Thread Josh Harrison
Hi,
I noticed that I have too many records in my pg_* catalog tables with the
same name. For example,
if I give this query, which checks for the triggers that do not have
corresponding tables in the pg_class

select * from pg_trigger where tgrelid not in (select relfilenode from
pg_class),

I get a set of records like this (some fields omitted):

 tgrelid | tgname                      | tgfoid | tgtype | tgconstrname
---------+-----------------------------+--------+--------+---------------
  629324 | RI_ConstraintTrigger_654082 |   1644 |      5 | payment_fk_id ...
  629324 | RI_ConstraintTrigger_654083 |   1645 |     17 | payment_fk_id ...


That is, there are records in my pg_trigger that have no corresponding tgrelid
in pg_class. Why is that? Is it okay to manually delete these records from
the catalog tables?

So I'm getting this error when I try to update some tables:
'could not open relation with oid'

Thanks for your help
josh


Re: [GENERAL] postgres catalog files problem

2007-09-17 Thread Josh Harrison
Thanks Tom.
I tried it using pg_class.oid and I still have some records which are not in
pg_class. What can I do about that?
(BTW, what does relfilenode in pg_class stand for?)
Thanks again
Josh

On 9/17/07, Tom Lane [EMAIL PROTECTED] wrote:

 Josh Harrison [EMAIL PROTECTED] writes:
  if I give this query, which checks for the triggers that do not have
  corresponding tables in the pg_class

  select * from pg_trigger where tgrelid not in (select relfilenode from
  pg_class),

 This query is wrong --- relfilenode is not a join column for any other
 table.  You should be using pg_class.oid there.

 regards, tom lane



Re: [GENERAL] postgres catalog files problem

2007-09-17 Thread Josh Harrison
Thanks Erik,
I'll try to explain as much as possible.

When i try to give this query

UPDATE payment
SET desc='New description'
WHERE payment_id='xyz'

I got the error
ERROR: could not open relation with OID 672178

This error was consistent for this query and it yielded the same OID every time.

When I checked the pg_class with oid=672178, there were no records
found with that OID.
So I checked the other catalog tables to see if there are any references
to that OID in any of them.
I found a couple of rows in pg_trigger that reference that
OID 672178.

That is why I tried the first query ie.,
select * from pg_trigger where tgrelid not in (select oid from pg_class)
to see if there are any other rows in pg_trigger without corresponding
OID in pg_class.

Is it common to have records in pg_trigger and other catalog tables
without a corresponding OID in pg_class?

Do you use CLUSTER on any kind of regular basis?  Have you had any other
kind of abnormal performance issues (other errors, system crashes,
etc...)?
I don't use CLUSTER at all. I'm not sure about the performance since I'm
working with a very small test dataset.
Thanks
Josh

On 9/17/07, Erik Jones [EMAIL PROTECTED] wrote:


 On Sep 17, 2007, at 9:02 AM, Josh Harrison wrote:
 
  On 9/17/07, Tom Lane [EMAIL PROTECTED] wrote: Josh Harrison
  [EMAIL PROTECTED] writes:
   if I give this query, which checks for the triggers that do not have
   corresponding tables in the pg_class
 
   select * from pg_trigger where tgrelid not in (select relfilenode
  from
   pg_class),
 
  This query is wrong --- relfilenode is not a join column for any other
  table.  You should be using pg_class.oid there.
 
  regards, tom lane

  Thanks Tom.
  I tried it using pg_class.oid and I still have some records which r
  not in the pg_class files. What can I do about that?
  (BTW what does relfilenode in pg_class stands for?)
  Thanks again
  Josh

 relfilenode is the name of the actual file that holds the given
 relation's data.  Now that you've got that query right, can you give
 us some more details about what's happening?  Just to be sure, when
 checking triggers, the query should be

 select * from pg_trigger where tgrelid not in (select oid from
 pg_class);

 What are the queries that are generating these errors?  Why are you
 looking at pg_trigger?  In order to help, we'll need to know more
 about the situation that just the error type as different situations
 can yield that.  Do the same queries consistently yield the error?
 Do you see the same oids in the errors, or do they change?  Do you
 use CLUSTER on any kind of regular basis?  Have you had any other
 kind of abnormal performance issues (other errors, system crashes,
 etc...)?  The more info you give, the better help you can receive.


 Erik Jones

 Software Developer | Emma(r)
 [EMAIL PROTECTED]
 800.595.4401 or 615.292.5888
 615.292.0777 (fax)

 Emma helps organizations everywhere communicate & market in style.
 Visit us online at http://www.myemma.com





Re: [GENERAL] postgres catalog files problem

2007-09-17 Thread Josh Harrison
Hi,
Yes... there is a relation in pg_class with the name 'payment', but its oid is
not 672178. So why is it giving me could not open relation with OID 672178
when I try an update statement?
I use version 8.2. I think the problem started when I manually deleted some
rows from the pg_class catalog instead of using the 'drop table' sql
command. Do you think this created the problem?

Thanks
josh

On 9/17/07, Tom Lane [EMAIL PROTECTED] wrote:

 Josh Harrison [EMAIL PROTECTED] writes:
  When i try to give this query

  UPDATE payment
  SET desc='New description'
  WHERE payment_id='xyz'

  I got the error
  ERROR: could not open relation with OID 672178

 Hmm, there apparently *is* a pg_class row for relation 'payment', else
 you'd not get this far, and I'll bet it's got OID 672178 --- try select
 oid from pg_class where relname = 'payment' to see.  If so, it seems
 likely that this is just an index corruption and you can get out of it
 by REINDEXing pg_class_oid_index.  Depending on what PG version you are
 using, that may require special setup --- read the REINDEX reference
 page *for your version* before proceeding.
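
 A sketch of the repair Tom describes (check the REINDEX reference page
 first; reindexing a system index may require starting the server in a
 special mode, depending on version):

   REINDEX INDEX pg_class_oid_index;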

 What version is it, anyway, and what were you doing before you got this
 error the first time?  This isn't exactly an everyday type of problem.

 regards, tom lane



Re: [GENERAL] postgres catalog files problem

2007-09-17 Thread Josh Harrison
Thanks.
Actually I'm currently testing an oracle-to-postgres migration with a small
dataset. I haven't started working with the real dataset; I wanted to check
with a small dataset before I start with the big one.
I know the records that I deleted from pg_class. So what do I do
next? Will it help if I drop the problematic tables and recreate them? (Like
I said, I'm currently testing with a small dataset only; my real data are in
oracle and they are not messed up in any way.) What will happen to my catalog
tables if I do that?
Josh

On 9/17/07, Erik Jones [EMAIL PROTECTED] wrote:

 On Sep 17, 2007, at 11:57 AM, Josh Harrison wrote:
 
  On 9/17/07, Tom Lane [EMAIL PROTECTED] wrote: Josh Harrison
  [EMAIL PROTECTED] writes:
   When i try to give this query
 
   UPDATE payment
   SET desc='New description'
   WHERE payment_id='xyz'
 
   I got the error
   ERROR: could not open relation with OID 672178
 
  Hmm, there apparently *is* a pg_class row for relation 'payment', else
  you'd not get this far, and I'll bet it's got OID 672178 --- try
  select
  oid from pg_class where relname = 'payment' to see.  If so, it seems
  likely that this is just an index corruption and you can get out of it
  by REINDEXing pg_class_oid_index.  Depending on what PG version you
  are
  using, that may require special setup --- read the REINDEX reference
  page *for your version* before proceeding.
 
  What version is it, anyway, and what were you doing before you got
  this
  error the first time?  This isn't exactly an everyday type of problem.
 
  regards, tom lane
  Hi,
  Yes...there is a relation in pg_class with the name 'payment' but
  its oid is not 672178. So why is it giving me could not open
  relation with OID 672178 when i try an update statement ?
  I use version 8.2. I think the problem started when i manually
  deleted some rows from the pg_class catalog file instead of using
  'drop table' sql command. Do you think this created the problem?
 
  Thanks
  josh
 

 Yep, that would do it.  Never manually edit catalog tables unless you
 *really* know what you're doing and then think ten times about it
 first.  My guess is that you deleted an entry for a TOAST table or
 index on that table and there are still entries in pg_depend (as well
 as others) so that when you try to access that table it isn't finding
 the related, dependant objects.  Others may know more, but I don't
 know enough to help you get your catalogs back in order past
 restoring from a backup.  Also, if I were you I'd see if you can get
 a dump of the current database first thing.  Do you know what you
 deleted from pg_class?


 Erik Jones

 Software Developer | Emma(r)
 [EMAIL PROTECTED]
 800.595.4401 or 615.292.5888
 615.292.0777 (fax)

 Emma helps organizations everywhere communicate & market in style.
 Visit us online at http://www.myemma.com





[GENERAL] update problem in partitioned tables

2007-09-12 Thread Josh Harrison
Hi
I have the master-child tables as follows

Master table:
create table foo (a numeric(10,0));

first child table foo1 as:
create table foo1 (check (a >= 0 and a < 10)) inherits (foo);

second child table foo2 as:
create table foo2 (check (a >= 10)) inherits (foo);

Now I create this function to redirect the inserts and updates on the
master table:

CREATE OR REPLACE FUNCTION foo_function () RETURNS trigger AS
'
BEGIN
  IF (TG_OP = ''INSERT'') THEN
    IF NEW.a >= 0 AND NEW.a < 10 THEN
      INSERT INTO foo1 (a) VALUES (NEW.a);
    ELSIF (NEW.a >= 10) THEN
      INSERT INTO foo2 (a) VALUES (NEW.a);
    END IF;
  ELSIF (TG_OP = ''UPDATE'') THEN
    IF (OLD.a >= 0 AND OLD.a < 10 AND NEW.a >= 10) THEN
      INSERT INTO foo2 (a) VALUES (NEW.a);
      DELETE FROM foo1 WHERE foo1.a = OLD.a;
    END IF;
  END IF;
  RETURN NULL;
END
' LANGUAGE 'plpgsql';

And I create a before insert/update trigger on foo

CREATE TRIGGER FOO_TRIGGER
 BEFORE INSERT OR UPDATE
  ON FOO
  FOR EACH ROW
  EXECUTE PROCEDURE foo_function();

The insert statements are working properly, i.e., they insert the rows
into the corresponding child tables.
But the update statements involving migration of a row from one child to
another are not working.

For example,
UPDATE FOO SET A=5 WHERE A=4 works fine because the updated row remains in
the same child.

But
UPDATE FOO SET A=20 WHERE A=4 gives the following error:
new row for relation foo1 violates check constraint foo1_a_check

Why? All I wanted to do in this case is insert that row into the foo2
table and delete it from the foo1 table. What am I doing wrong here?
(Note: If I implement the same using rules it works fine!!!)
(Note: If I implement the same using rules it works fine!!!)

Thanks in advance
josh


Re: [GENERAL] update problem in partitioned tables

2007-09-12 Thread Josh Harrison
Thanks. Check this website that talks about the approach I mentioned:
http://blogs.ittoolbox.com//oracle/guide/archives/comparing-partitioned-tables-in-oracle-and-enterprisedbpostgresql-13261

I also tried your method (triggers on the child table) and it gives the same
error too.

Josh


On 9/12/07, Tom Lane [EMAIL PROTECTED] wrote:

 Josh Harrison [EMAIL PROTECTED] writes:
  Now I create this function to redirect the inserts  and updates on the
  master table.

 Uh ... there never will be any updates in the master table, since it
 hasn't got any entries.

 What you'd need is an insert trigger on the master and update triggers
 on each child table.  I think the logic of this trigger is wrong for
 the update case anyway.

 regards, tom lane
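

For completeness, a sketch of the per-child update trigger Tom suggests
(untested; it follows the thread's foo1/foo2 example):

CREATE OR REPLACE FUNCTION foo1_update() RETURNS trigger AS
'
BEGIN
  IF NEW.a >= 10 THEN
    -- row moved out of foo1''s range: migrate it to foo2
    INSERT INTO foo2 (a) VALUES (NEW.a);
    DELETE FROM foo1 WHERE a = OLD.a;
    RETURN NULL;  -- suppress the original update; the row was migrated
  END IF;
  RETURN NEW;     -- still in range; let the update proceed normally
END
' LANGUAGE 'plpgsql';

CREATE TRIGGER foo1_update_trigger
  BEFORE UPDATE ON foo1
  FOR EACH ROW
  EXECUTE PROCEDURE foo1_update();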