Re: [GENERAL] Stored Procs Vs User Defined Functions vis-a-vis UDF's in Postgresql

2007-10-29 Thread Albe Laurenz
Harpreet Dhaliwal wrote:
 lately I have been looking at the difference between a Stored 
 Proc and User Defined Functions in other RDBMSs like SQL 
 Server / Oracle.

Nomenclature varies wildly between different Database
Management Systems. Be careful.

The SQL standard (2005) speaks of "SQL-invoked routines" and
divides those into "SQL-invoked procedures" and "SQL-invoked
functions" (chapter 11.50).

The difference is mainly that functions are created with
CREATE FUNCTION and have a return value, while
procedures are created with CREATE PROCEDURE and have no
return value.
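
A hedged sketch of that distinction in the standard's terms (the names
add_one, log_event, and event_log are made up; the procedure form follows
the standard and Oracle, since PostgreSQL at this point has no CREATE
PROCEDURE):

```sql
-- SQL-invoked function: created with CREATE FUNCTION, has a return value.
-- (This form works in PostgreSQL.)
CREATE FUNCTION add_one(x integer) RETURNS integer
AS $$ SELECT $1 + 1 $$ LANGUAGE SQL;

-- SQL-invoked procedure: created with CREATE PROCEDURE, no return value.
-- (Standard/Oracle-style sketch; not available in PostgreSQL.)
CREATE PROCEDURE log_event(IN msg VARCHAR(100))
LANGUAGE SQL
INSERT INTO event_log(message) VALUES (msg);
```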

Is that what you are talking about?
If not, maybe you should explain it in more detail.

Oracle indeed uses the term "User-Defined Function", and it
uses it in the above sense of a function (see
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions231.htm#sthref2615).

 However, in postgresql, I think Stored Procs are wrapped 
 around in User Defined functions, if I am not wrong.

In PostgreSQL, there are only functions (in the sense of
the SQL standard).

This will probably answer most of your questions.

 The following is the list of main differences b/w a Stored 
 Proc and a UDF in general.

Did you compile that list by examining the implementation
details of MS SQL Server and Oracle?

 Can anyone please comment on how a 
 postgresql UDF would behave for each of these difference 
 mentioned below ?
 
 1. Stored Procedures are parsed and compiled and stored in
 compiled format in the database. We can also say that
 Stored Procedures are stored as pseudo code in the
 database i.e. compiled form. On the other hand, User Defined 
 Functions are parsed, and compiled at runtime.

PostgreSQL functions are not compiled, but the execution plans
that are generated for SQL statements in the function are
retained until the end of the session.

For Oracle, your statement is not true, because both
functions and procedures (when written in PL/SQL) are
compiled upon first use. The compiled version is retained
until something renders it invalid or the database server
is stopped.

 2. A User Defined Function must return a value whereas a 
 Stored Procedure doesn't need to (they definitely can,
 if required).

If you are talking about output parameters, you are wrong
because the SQL standard allows them for both functions
and procedures.
If you are only talking about return values, you are wrong
because procedures do not have any.

Oracle sticks with the standard here.

In PostgreSQL functions, you can have composite return values,
so you can work around the limitation that a function has only
one return value.
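
A minimal sketch of that workaround (PostgreSQL 8.1 and later support OUT
parameters; divmod is a made-up name):

```sql
-- OUT parameters let one function hand back several typed values,
-- which the caller sees as a composite row.
CREATE FUNCTION divmod(n integer, d integer,
                       OUT quotient integer, OUT remainder integer)
AS $$ SELECT $1 / $2, $1 % $2 $$ LANGUAGE SQL;

SELECT * FROM divmod(7, 3);  -- quotient = 2, remainder = 1
```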

 3. A User Defined Function can be used with any Sql 
 statement. For example, [...]
 On the other hand, Stored Procedures can't be called inside a 
 Sql statement.

Strange.
The SQL standard has the CALL statement to invoke an
SQL-invoked routine (see chapter 15.1 of SQL-Foundation).

Oracle, for one, implements the CALL statement.

 4. Operationally, when an error is encountered, the function 
 stops, while an error is ignored in a Stored Procedure and
 proceeds to the next statement in the code (provided
 one has included error handling support).

That is wrong, at least in Oracle.
The standard will probably only say something about
PSM functions and routines, but I can't be bothered to look
it up.

 5. Functions return values of the same type, Stored 
 Procedures return multiple type values.

Procedures have no return values.
If you mean output parameters, you are wrong because
they are typed.

 6. Stored Procedures support deferred name resolution. To
 explain this, let's say we have a stored procedure in which
 we use named tables tbl x and tbl y, but these tables
 don't actually exist in the database at the time of the
 stored procedure's creation. Creating such a stored
 procedure doesn't throw any error. However, at runtime, it
 would definitely throw an error if tables tbl x and tbl y
 are still not there in the database. On the other hand,
 User Defined Functions don't support such deferred name
 resolution.

In PostgreSQL, the existence of a table in an SQL statement
inside a function will not be checked at function definition
time.
Additionally, you have the parameter check_function_bodies
(default on) that determines whether syntax checking of function
bodies will be performed at definition time.
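
For illustration, a hedged sketch (widgets and count_widgets are made-up
names):

```sql
-- With body checking disabled, the function can be created even though
-- the table it references does not exist yet; the error shows up only
-- when the function is actually called.
SET check_function_bodies = off;

CREATE FUNCTION count_widgets() RETURNS bigint
AS $$ SELECT count(*) FROM widgets $$ LANGUAGE SQL;

SELECT count_widgets();            -- ERROR: relation "widgets" does not exist
CREATE TABLE widgets (id integer);
SELECT count_widgets();            -- now works, returns 0
```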


I hope that answers some of your questions.

I have the feeling that the number of contradictions indicates that
there is a misunderstanding, and you were talking about something
other than I was.

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Automatic schema updates

2007-10-29 Thread Kalra, Ashwani

Hi,
In our thick client application, I want to update the schema on the
user's machine automatically as part of the normal software update
process, using JNLP etc. We will be using Eclipse RCP. Is it possible?

 

Regards
Ashwani




Re: [GENERAL] I want to search my project source code

2007-10-29 Thread Alvaro Herrera
Tom Lane wrote:

 I wouldn't recommend trying to use a standard FTS to index code:
 code is not a natural language and the kinds of searches you usually
 want to perform are a lot different.  As an example, I glimpse for
 "foo" when looking for references to a function foo, but "^foo"
 when seeking its definition (this relies on the coding conventions
 about function layout, of course).  An FTS doesn't think start-of-line
 is significant so it can't do that.

+1.  The nice thing about a tool that understands code is that you can
query it in ways that make sense to code.  For example I can search for
all files that include "foo.h", or all callers of function "bar", or all
occurrences of the symbol "baz".  I use cscope for this, which integrates
nicely into my text editor (vim), and others have told me they use
kscope, which puts it inside a nice GUI window, if you care about such
things.

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
I would rather have GNU than GNOT.  (ccchips, lwn.net/Articles/37595/)



[GENERAL] Problems with PostgreSQL DBI-Link / DBD-ODBC

2007-10-29 Thread Pit M.
I have the following problem with PostgreSQL and hope you guys can help 
me out in this matter: I try to get a remote connection to any database 
(e.g. MS-SQL or MS Access) using DBI-Link and DBD:ODBC. My problem is, 
that everything seems fine so far, however I don't know how to use the 
parameters for make_accessor_functions(). I searched the net for it, 
but can only find samples for Perl-Scripting, but I need a sample for 
using it directly in SQL.


This is what I've come up so far. Hope you can point me to the right 
direction.


SELECT make_accessor_functions(
'dbi:ODBC:Northwind'::dbi_link.data_source,
'sa'::text,
NULL::text,
'---
AutoCommit: 1
RaiseError: 1
'::dbi_link.yaml,
NULL::dbi_link.yaml,
NULL::text,
NULL::text,
'Northwind'::text
);


I have an ODBC connection named Northwind.

The error message I'm always receiving (for the MS-SQL sample) is:

ERROR: error from Perl function: error from Perl function: duplicate key 
violates unique constraint "dbi_connection_data_source_key" at line 94. 
at line 35.

SQL state: XX000

I'm using following components under Windows XP
postgresql-8.2.5-1
pgadmin3-1.8.0
DBI-1.59
dbi-link-2.0.0
DBD-ODBC-1.05

Thanks in advance.



Re: [HACKERS] [GENERAL] WAL archiving idle database

2007-10-29 Thread Kevin Grittner
 On Fri, Oct 26, 2007 at  6:39 PM, in message
[EMAIL PROTECTED], Jeff Davis [EMAIL PROTECTED]
wrote: 
 On Fri, 2007-10-26 at 18:06 -0500, Kevin Grittner wrote:
 Hmmm...  We would actually prefer to get the WAL file at the
 specified interval.  We have software to ensure that the warm
 standby instances are not getting stale, and that's pretty simple
 with the current behavior.
 
 Another thought: when you say it's pretty simple, what do you do now?
 My monitoring scripts for this particular situation employ some pretty
 ugly code.
 
Here's our script:
 
#! /bin/bash

if [ "$1" == "" ] ; then
  savepwd=$PWD
  cd /var/pgsql/data/county/
  find * -maxdepth 0 -type d | xargs -idirname $0 dirname
  cd $savepwd
  exit 0
fi

for countyName ; do
  echo County: $countyName
  /usr/local/pgsql/bin/pg_controldata /var/pgsql/data/county/$countyName/data | 
grep -E '(Database cluster state|pg_control last modified)'
  /etc/init.d/postgresql-${countyName}-cc status
  grep basebackup /var/pgsql/data/county/$countyName/data/basebackup-of-this-instance
  echo ''
done

Here's an example of running it (although the opcenter usually runs
it without a parameter, to get all counties):
 
[EMAIL PROTECTED]:~ sudo pgstatus.sh iowa
County: iowa
Database cluster state:   in archive recovery
pg_control last modified: Mon 29 Oct 2007 09:03:16 AM CDT
pg_ctl: server is running (PID: 15902)
/usr/local/pgsql-8.2.4/bin/postgres -D /var/pgsql/data/county/iowa/data
basebackupcc-2007-10-26_190001
 
This gets parsed by a script in our monitor (python, I think) and
winds up feeding a status display.  It's probably a bit crude, but
it has worked well for us, with very little time required to get it
going.  This thread has made me aware that it is dependent on the
checkpoint frequency as well as the archive frequency.  Our
checkpoint_timeout setting is 30min and our archive_timeout is the
default (one hour).  The monitor shows red if the cluster state
isn't "in archive recovery", or pg_ctl doesn't report "server is
running", or the last-modified time is older than 75 minutes.
 
We are OK with a one hour archive interval because we have a
separate application-oriented transaction stream (independent of
the database product) coming back real-time, which we can replay
to top off a database backup.
 
-Kevin
 




Re: [HACKERS] [GENERAL] WAL archiving idle database

2007-10-29 Thread Kevin Grittner
 On Fri, Oct 26, 2007 at  6:28 PM, in message
[EMAIL PROTECTED], Jeff Davis [EMAIL PROTECTED]
wrote: 
 
 [ of course, there's no guarantee that the archive_command succeeds in
 that time ]
 
Which is one of the things we would want to cause an alert.
 
-Kevin
 





[GENERAL] A few questions

2007-10-29 Thread Samantha Atkins

First on prepared statements:

1) If I am using the libpq are prepared statements tied to a  
connection?  In other words can I prepare the statement once and use  
it on multiple connections?


2) What is the logical scope of prepared statement names?  Can I use  
the same name on different tables without conflict or is the scope  
database wide or something else?


On indices:

3) same as 2 for index names.  I think they are per table but it is  
worth asking.


and last:

4) Is it generally better to have more tables in one database from a  
memory and performance point of view or divide into more databases if  
there is a logical division.  The reason I ask is that I have a  
situation where one app is used by multiple different users each  
running their own copy.  The app uses on the order of 30 tables.  In  
some ways it would be convenient to have one big database and  
specialize the table names per user.   But I am not sure that is most  
optimal.  Is there a general answer to such a question?


Thanks very much for any enlightenment on these questions.

- samantha




Re: [GENERAL] A few questions

2007-10-29 Thread Joshua D. Drake
On Mon, 29 Oct 2007 09:52:55 -0700
Samantha Atkins [EMAIL PROTECTED] wrote:

 First on prepared statements:
 
 1) If I am using the libpq are prepared statements tied to a  
 connection? 

Yes.

  In other words can I prepare the statement once and use  
 it on multiple connections?

No.

 
 2) What is the logical scope of prepared statement names?  Can I use  
 the same name on different tables without conflict or is the scope  
 database wide or something else?

Each prepared statement name must be unique within the session. So
session 1 can have "foo" and session 2 can have "foo", but session 1
can not have two prepared statements named "foo" that refer to two
different objects... 
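
In other words (t1 and t2 stand for any two tables):

```sql
PREPARE foo AS SELECT * FROM t1;
EXECUTE foo;
PREPARE foo AS SELECT * FROM t2;  -- ERROR: prepared statement "foo" already exists
DEALLOCATE foo;                   -- frees the name within this session
PREPARE foo AS SELECT * FROM t2;  -- fine now; other sessions never saw "foo"
```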

 
 On indices:
 
 3) same as 2 for index names.  I think they are per table but it is  
 worth asking.

Indexes are per relation (table)

 
 and last:
 
 4) Is it generally better to have more tables in one database from a  
 memory and performance point of view or divide into more databases
 if there is a logical division.

Uhmm this is more of a normalization and relation theory question :). I

  The reason I ask is that I have a  
 situation where one app is used by multiple different users each  
 running their own copy.

Ahh... use namespaces/schemas:

http://www.postgresql.org/docs/current/static/ddl-schemas.html
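
A minimal sketch of that approach (the user names alice/bob and the
orders table are made up):

```sql
-- One schema per application user; the table name stays the same.
CREATE SCHEMA alice;
CREATE SCHEMA bob;
CREATE TABLE alice.orders (id serial PRIMARY KEY, item text);
CREATE TABLE bob.orders   (id serial PRIMARY KEY, item text);

-- Per session, point the default lookup path at the right schema:
SET search_path TO alice, public;
SELECT * FROM orders;             -- resolves to alice.orders
```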


 Thanks very much for any enlightenment on these questions.
 
 - samantha

Hope this was helpful.

Sincerely,

Joshua D. Drake

 
 
 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/





Re: [GENERAL] A few questions

2007-10-29 Thread David Fetter
On Mon, Oct 29, 2007 at 09:52:55AM -0700, Samantha Atkins wrote:
 First on prepared statements:
 
 1) If I am using the libpq are prepared statements tied to a
 connection?  In other words can I prepare the statement once and use
 it on multiple connections?

Yes they are, and no, you can't.  Because of MVCC, in general, each
connection could see a completely different database, so there's no
way for plans to cross connections.

 2) What is the logical scope of prepared statement names?  Can I use
 the same name on different tables without conflict or is the scope
 database wide or something else?

What happened when you tried it?

 On indices:
 
 3) same as 2 for index names.  I think they are per table but it is
 worth asking.

See above question ;)

 and last:
 
 4) Is it generally better to have more tables in one database from a
 memory and performance point of view or divide into more databases
 if there is a logical division.

Just generally, get correctness first and improve performance if you
need to by finding bottlenecks empirically and figuring out what to do
once you've identified them.

 The reason I ask is that I have a situation where one app is used by
 multiple different users each running their own copy.  The app uses
 on the order of 30 tables.  In some ways it would be convenient to
 have one big database and specialize the table names per user.
^^^
That's a loud warning of a design you didn't think through ahead of
time.  As a general rule, a day not spent in design translates into at
least 10 in testing (if you're lucky enough to catch it there) or
(more usually) 100 or more in production.

 But I am not sure that is most  optimal.  Is there a general answer
 to such a question?

See above :)

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] [GENERAL] WAL archiving idle database

2007-10-29 Thread Jeff Davis
On Mon, 2007-10-29 at 09:56 -0500, Kevin Grittner wrote:
 Here's our script:

Thanks, I think that is better than what I'm doing.

One minor thing: I think it's still dependent on locale though, because
the output of pg_controldata is locale-dependent, right? It would work
fine for me, but it would be nice if there was something that could be
released that anyone could use, including non-English installations.
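
The locale dependence is a general property of localized command-line
tools, not just pg_controldata: forcing the C locale in the wrapper
script pins the output to English. A minimal demonstration using GNU
date as a stand-in (pg_controldata behaves the same way):

```shell
#!/bin/sh
# Force the C locale so localized tools emit stable English output,
# regardless of the user's LANG/LC_* settings.
export LC_ALL=C LANG=C

# Month and weekday names are locale-dependent; pinning the locale
# makes them safe to parse in a monitoring script.
date -u -d @0 '+%a %b %d %Y'    # prints: Thu Jan 01 1970
```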

Also, did you publish your pg_clearxlogtail program anywhere? I think
that would be helpful to many people, but I don't see it on pgfoundry.

Regards,
Jeff Davis




Re: [GENERAL] pg_restore

2007-10-29 Thread Bob Pawley

Following the examples in the docs I've come to this.

I am attempting to restore the existing sql dump using
psql -d PDW -f aurel.sql

I am then asked for a password.

I try every password that the computer knows with no success.

Funny thing: the cursor doesn't move when inputting the password.

I keep getting authentication failure.

When I attempt to do a new pg_dump with -Fc I also get a request for 
password with identical results.


Bob



- Original Message - 
From: Adrian Klaver [EMAIL PROTECTED]

To: pgsql-general@postgresql.org
Cc: Bob Pawley [EMAIL PROTECTED]; Tom Lane [EMAIL PROTECTED]
Sent: Sunday, October 28, 2007 3:58 PM
Subject: Re: [GENERAL] pg_restore



On Sunday 28 October 2007 3:40 pm, Bob Pawley wrote:

This is the dump command

pg_dump -h localhost -d  Aurel -U postgres

Could you suggest a dump command that will match the restore command -

pg_restore  -h localhost -d PDW -U postgres  aurel.sql


Thanks

Bob

It depends on what you want to do. But to use pg_restore you will need
to use one of either -Fc or -Ft after the pg_dump command. My concern
is that you are connecting to a different database name in the dump and
restore commands. This may be what you want, but then again it may not.
I would suggest reading the information at the URL below before
proceeding further.
http://www.postgresql.org/docs/8.2/interactive/app-pgdump.html

--
Adrian Klaver
[EMAIL PROTECTED] 





Re: [GENERAL] Problems with PostgreSQL DBI-Link / DBD-ODBC

2007-10-29 Thread David Fetter
On Mon, Oct 29, 2007 at 03:44:05PM +0100, Pit M. wrote:
 I have the following problem with PostgreSQL and hope you guys can
 help me out in this matter: I try to get a remote connection to any
 database (e.g. MS-SQL or MS Access) using DBI-Link and DBD:ODBC. My
 problem is, that everything seems fine so far, however I don't know
 how to use the parameters for make_accessor_functions(). I
 searched the net for it, but can only find samples for
 Perl-Scripting, but I need a sample for using it directly in SQL.
 
 This is what I've come up so far. Hope you can point me to the right
 direction.
 
 SELECT make_accessor_functions(
 'dbi:ODBC:Northwind'::dbi_link.data_source,
 'sa'::text,
 NULL::text,
 '---
 AutoCommit: 1
 RaiseError: 1
 '::dbi_link.yaml,
 NULL::dbi_link.yaml,
 NULL::text,
 NULL::text,
 'Northwind'::text
 );
 
 
 I have a ODBC connection named Northwind.
 
 The error message I'm always receiving (for the MS-SQL sample) is:
 
 ERROR: error from Perl function: error from Perl function: duplicate key 
 violates unique constraint dbi_connection_data_source_key at line 94. 
 at line 35.
 SQL state: XX000

It looks like you're trying to connect the same data_source and
user_name twice.  Check whether the existing one works :)

Are the documents unclear on the idea that you only run
make_accessor_functions() once per remote (data_source,user_name)
pair per local database?

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] pg_restore

2007-10-29 Thread Douglas McNaught
Bob Pawley [EMAIL PROTECTED] writes:

 Following the examples in the docs I've come to this.

 I am attempting to restore the existing sql dump using
 psql -d PDW -f aurel.sql

 I am then asked for a password.

 I try every password that the computer knows with no success.

 Funny thing the password cursor doesn't move when inputting the password.

This is standard for Unix command-line applications.

 I keep getting authentication failure.

 When I attempt to do a new pg_dump with -Fc I also get a request for
 password with identical results.

Sounds like you need to fix pg_hba.conf then.

-Doug



Re: [GENERAL] A few questions

2007-10-29 Thread Richard Huxton

Samantha Atkins wrote:

First on prepared statements:

1) If I am using the libpq are prepared statements tied to a 
connection?  In other words can I prepare the statement once and use it 
on multiple connections?


Per session (connection).

Temporary tables etc. are the same.

2) What is the logical scope of prepared statement names?  Can I use the 
same name on different tables without conflict or is the scope database 
wide or something else?


Per session.


On indices:

3) same as 2 for index names.  I think they are per table but it is 
worth asking.


Per database (if you count the schema name). We don't have cross-table 
indexes, but the global naming allows it.



and last:

4) Is it generally better to have more tables in one database from a 
memory and performance point of view or divide into more databases if 
there is a logical division.  The reason I ask is that I have a 
situation where one app is used by multiple different users each running 
their own copy.  The app uses on the order of 30 tables.  In some ways 
it would be convenient to have one big database and specialize the table 
names per user.   But I am not sure that is most optimal.  Is there a 
general answer to such a question?


Not really, but...

1. Do you treat them as separate logical entities?
Do you want to backup and restore them separately?
Is any information shared between them?
What are the consequences of a user seeing other users' data?

2. Are you having performance issues with the most logical design?
Can you solve it by adding some more RAM/Disk?
What are the maintenance issues with not having the most logical design?

--
  Richard Huxton
  Archonet Ltd



Re: [HACKERS] [GENERAL] WAL archiving idle database

2007-10-29 Thread Jeff Davis
On Mon, 2007-10-29 at 14:20 -0400, Tom Lane wrote:
 Jeff Davis [EMAIL PROTECTED] writes:
  One minor thing: I think it's still dependent on locale though, because
  the output of pg_controldata is locale-dependent, right? It would work
  fine for me, but it would be nice if there was something that could be
  released that anyone could use, including non-english installations.
 
 Wouldn't export LANG=C fix that?
 

Ah, of course. 

Thanks,
Jeff Davis




Re: [GENERAL] pg_restore

2007-10-29 Thread Thomas Kellerer

Tom Lane wrote on 29.10.2007 00:55:

Thomas Kellerer [EMAIL PROTECTED] writes:
Why is it that pg_dump can write compressed output directly, but
pg_dumpall always produces SQL (i.e. plain text) output?


The custom and tar formats are not designed to support data from more
than one database.  At some point somebody should probably try to
improve that situation, but it's not immediately obvious what the
feature ought to look like.


OK, thanks



If all you need is compression it's certainly easy enough:

pg_dumpall | gzip > mydump.gz


That's what I'm currently doing ;)
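
That pipeline, shown end-to-end with a round trip (a plain file stands
in for the pg_dumpall output here; restoring is just the reverse pipe
into psql):

```shell
#!/bin/sh
# Compress a SQL dump on the fly and verify it round-trips.
# "dump.sql" stands in for what pg_dumpall would emit.
printf 'CREATE TABLE t (id int);\n' > dump.sql

# Equivalent of: pg_dumpall | gzip > mydump.gz
gzip -c dump.sql > mydump.gz

# The restore side would be: gunzip -c mydump.gz | psql postgres
gunzip -c mydump.gz > restored.sql
cmp dump.sql restored.sql && echo "round trip OK"
```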


Thanks




Re: [HACKERS] [GENERAL] WAL archiving idle database

2007-10-29 Thread Tom Lane
Jeff Davis [EMAIL PROTECTED] writes:
 One minor thing: I think it's still dependent on locale though, because
 the output of pg_controldata is locale-dependent, right? It would work
 fine for me, but it would be nice if there was something that could be
 released that anyone could use, including non-english installations.

Wouldn't export LANG=C fix that?

regards, tom lane



Re: [HACKERS] [GENERAL] WAL archiving idle database

2007-10-29 Thread Kevin Grittner
 On Mon, Oct 29, 2007 at 11:50 AM, in message
[EMAIL PROTECTED], Jeff Davis [EMAIL PROTECTED]
wrote: 

 Also, did you publish your pg_clearxlogtail program anywhere? I think
 that would be helpful to many people, but I don't see it on pgfoundry.
 
So far I've just included it with the email on the hackers list.  I
have made one fix since: I found that an explicit close of stdout
speeds the worst-case situation to break-even.  (I'm guessing that's
safer, too.)  So in all of my tests it is now as fast or faster to
pipe through this on the way to gzip than to just pipe through gzip.
 
I'll see about getting that onto pgfoundry soon.
 
-Kevin





[GENERAL] reg. rewrite rules

2007-10-29 Thread bharat u
Hi,

Can anyone tell me how pg_analyze_and_rewrite
works?

-bharat.


Re: [GENERAL] Inheritance problem when restoring db

2007-10-29 Thread Tom Lane
I wrote:
 Sebastjan Trepca [EMAIL PROTECTED] writes:
 This is how to reproduce this issue:
 ...
 inh_test=# alter table capitals inherit cities;

 Fascinating.  pg_dump is almost smart enough to get this right, ...

I've fixed this --- if you need a patch right away, see here:
http://archives.postgresql.org/pgsql-committers/2007-10/msg00448.php

regards, tom lane



Re: [GENERAL] reg. rewrite rules

2007-10-29 Thread Tom Lane
bharat u [EMAIL PROTECTED] writes:
 Can anyone tell me how pg_analyze_and_rewrite works?

That's a rather broad question.  Have you read the overview in the manual?
http://developer.postgresql.org/pgdocs/postgres/overview.html
pg_analyze_and_rewrite covers what that material calls the
"transformation process" and "query rewrite" stages of processing.

If you've got specific questions about pieces of that code,
pgsql-hackers is probably a better place than -general for asking 'em.

regards, tom lane



[GENERAL] DST problem on pg 8.2.5

2007-10-29 Thread Joseph S
On one of my servers postgres thinks that we're back on standard time 
already:


[local]:db=# select current_timestamp;
  now
---
 2007-10-29 15:06:10.049795-05
(1 row)

Time: 0.807 ms
[local]:db=# select version();
  version
---
 PostgreSQL 8.2.5 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 
20060404 (Red Hat 3.4.6-3)

(1 row)

Time: 37.631 ms

But the OS itself doesn't have a problem:
 date
Mon Oct 29 16:05:50 EDT 2007

On another machine with an 8.2.5 install it is working fine.  What could 
have caused this?




[GENERAL] ALTER TRIGGER Before / After?

2007-10-29 Thread Josh Trutwin
On PostgreSQL 8.1 I am guessing there isn't a convenient way to alter
a trigger to change its before/after behavior?  I wrote one of my
first triggers using an AFTER and now I realize I needed to
do BEFORE.  It's used on a couple tables so I was hoping to avoid
dropping it and re-creating it, but if that is my only option, so be
it.

Josh



Re: [GENERAL] DST problem on pg 8.2.5

2007-10-29 Thread Tom Lane
Joseph S [EMAIL PROTECTED] writes:
 On one of my servers postgres thinks that we're back on standard time 
 already:

What's its TimeZone setting?  What is the mod date of the corresponding
file in the $SHAREDIR/timezone/ directory (where $SHAREDIR means
whatever pg_config --sharedir says)?

It sounds to me like you've got obsolete timezone files, but 8.2.5 went
out with the latest info AFAIK.  Maybe you did an incomplete update?

regards, tom lane



Re: [GENERAL] ALTER TRIGGER Before / After?

2007-10-29 Thread Richard Huxton

Josh Trutwin wrote:

On Postgresql 8.1 I am guessing there isn't a convenient way to alter
a trigger to change its before/after behavior?  I wrote one of my
first triggers using an AFTER and now I realize I needed to
do BEFORE.  It's used on a couple tables so I was hoping to avoid
dropping it and re-creating it but if that is my only option, so be
it.


What's the problem with drop/create?

BEGIN;
DROP TRIGGER...
CREATE TRIGGER...
COMMIT;

No other activity needs to be interrupted.

A common trick is to put these changes in a script with a ROLLBACK at 
the end. That way you can run the script, look for errors and only put 
the commit at the end once it all works.



--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] DST problem on pg 8.2.5

2007-10-29 Thread Joseph S

Tom Lane wrote:

Joseph S [EMAIL PROTECTED] writes:
On one of my servers postgres thinks that we're back on standard time 
already:


What's its TimeZone setting?


On the server that is working:
= show TimeZone;
  TimeZone

 US/Eastern
(1 row)

On the one that is broken:

 show TimeZone;
  TimeZone

 posixrules
(1 row)


  What is the mod date of the corresponding

file in the $SHAREDIR/timezone/ directory (where $SHAREDIR means
whatever pg_config --sharedir says)?

The problem seems to have been that universal read permission (and on 
the directories execute permission) was not granted on the 
$SHAREDIR/timezone/ directory structure during install.  I fixed the 
permissions and restarted postgres and now everything is fine.
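
For reference, a hedged sketch of that kind of permission repair, shown
on a scratch directory (on a real install the target would be the
$SHAREDIR/timezone tree, and you should confirm the intended ownership
for your packaging first):

```shell
#!/bin/sh
# Recreate the symptom on a scratch tree, then fix it the same way:
# grant world read on files and read+traverse on directories.
mkdir -p tzdemo/US
printf 'tzdata' > tzdemo/US/Eastern
chmod -R go-rwx tzdemo       # simulate the bad install permissions

chmod -R a+rX tzdemo         # the fix: +r everywhere, +x only where
                             # already executable or a directory
ls -l tzdemo/US/Eastern      # now world-readable
```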




Re: [GENERAL] ALTER TRIGGER Before / After?

2007-10-29 Thread Josh Trutwin
On Mon, 29 Oct 2007 22:33:28 +
Richard Huxton [EMAIL PROTECTED] wrote:

 Josh Trutwin wrote:
  On Postgresql 8.1 I am guessing there isn't a convenient way to
  alter a trigger to change its before/after behavior?  I wrote one
  of my first triggers using an AFTER and now I realize I needed to
  do BEFORE.  It's used on a couple tables so I was hoping to avoid
  dropping it and re-creating it but if that is my only option, so
  be it.
 
 What's the problem with drop/create?
 
 BEGIN;
 DROP TRIGGER...
 CREATE TRIGGER...
 COMMIT;
 
 No other activity needs to be interrupted.
 
 A common trick is to put these changes in a script with a ROLLBACK
 at the end. That way you can run the script, look for errors and
 only put the commit at the end once it all works.

Nothing, other than having to script it to work on about 30 tables.
But I'd have to script an ALTER TABLE as well.

And yeah, ROLLBACK on DDL is sure a nice feature that PostgreSQL
has.

Josh

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Base Backups from PITR Standby

2007-10-29 Thread Brian Wipf
After bringing up a PG 8.2.5 database restored from a base backup  
taken from a warm standby, the following warnings are logged:

...
[2007-10-26 19:21:22 MDT] LOG:  archived transaction log file  
0001017C00E2
WARNING:  relation category_click_history page 250226 is  
uninitialized --- fixing
WARNING:  relation category_click_history page 250227 is  
uninitialized --- fixing
[2007-10-26 19:22:22 MDT] LOG:  archived transaction log file  
0001017C00E3

...
WARNING:  relation category_product page 30364 is uninitialized ---  
fixing


Most warnings are for tables that are constantly being inserted into.  
When I restore the database to the same server using a base backup  
from the main server, I don't get this problem. I'm not sure if it's a  
bug or a problem with my procedures. If it's a bug fixed in 8.3, I can  
wait for that.


The process I use that leads to the warnings is simple:

I use pg_controldata to determine the current checkpoint WAL location  
of the standby server. I ensure I have this WAL file and all newer  
WALs. I backup all files under the standby's database cluster  
directory, including all tablespaces, which are soft linked. I restore  
the database cluster directory to its new location and create soft  
links in pg_tblspc to point to the proper tablespace directories just  
backed up. I create the pg_xlog directory containing an archive_status  
directory.


I bring up the database and it goes into archive recovery mode (using  
the recovery.conf from the original standby's backup). I let it replay  
WALs until it's in sync with the production server. I then bring it  
out of archive recovery mode by having my restore_command script return  
non-zero once and the database is up and running. With the database  
sitting idle, other than autovac, the warnings noted above are logged.  
Other than the warnings, the database seems okay.


I have resumed taking base backups from the main server and it's not  
terribly important that this even works. If anyone could shed any  
insight though, I would appreciate the feedback.


Brian Wipf
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] A few questions

2007-10-29 Thread Samantha  Atkins


On Oct 29, 2007, at 10:14 AM, Richard Huxton wrote:


Samantha Atkins wrote:

First on prepared statements:
1) If I am using the libpq are prepared statements tied to a  
connection?  In other words can I prepare the statement once and  
use it on multiple connections?


Per session (connection).

Temporary tables etc. are the same.

2) What is the logical scope of prepared statement names?  Can I  
use the same name on different tables without conflict or is the  
scope database wide or something else?


Per session.
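
A minimal sketch of the per-session behaviour (statement and table names
are hypothetical):

```sql
-- PREPARE creates a named plan visible only to this connection.
PREPARE get_user (int) AS
    SELECT * FROM users WHERE id = $1;

EXECUTE get_user(42);

-- Optional: all prepared statements vanish when the session ends anyway.
DEALLOCATE get_user;
```

Another connection can PREPARE the same name without conflict, since each
session has its own namespace for prepared statements.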


On indices:
3) same as 2 for index names.  I think they are per table but it is  
worth asking.


Per database (if you count the schema name). We don't have cross-table
indexes, but the global naming allows it.



and last:
4) Is it generally better to have more tables in one database from  
a memory and performance point of view or divide into more  
databases if there is a logical division.  The reason I ask is that  
I have a situation where one app is used by multiple different  
users each running their own copy.  The app uses on the order of 30  
tables.  In some ways it would be convenient to have one big  
database and specialize the table names per user.   But I am not  
sure that is most optimal.  Is there a general answer to such a  
question?


Not really, but...

1. Do you treat them as separate logical entities?


A set of tables per user, yes.  An app process is always for one and  
only one user.




Do you want to backup and restore them separately?


Not necessarily.  Although there is a possibility of wanting separate  
per-user backups, which would pretty much answer the question in this  
specific case.




Is any information shared between them?


Possible sharing of some common id numbers for common items.
Although it is not essential the common items have the same serial  
number on different databases.




What are the consequences of a user seeing other users' data?



Little likelihood unless we expose database username/passwd.   These  
are users not necessarily represented as postgresql database users.



2. Are you having performance issues with the most logical design?


The first prototype has not yet been completed so no, not yet.  :-)



Can you solve it by adding some more RAM/Disk?


???  There is a desire to use as little ram/disk as possible for the  
application.   I would be interested in what the overhead is for  
opening a second database.




What are the maintenance issues with not having the most logical  
design?




What do you consider the most logical, one database per user?

- samantha


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] A few questions

2007-10-29 Thread Gregory Williamson
Samantha Atkins shaped electrons to ask:
 
 What do you consider the most logical, one database per user?
 
 - samantha

Perhaps a schema per user? Then you can have the common tables (look-up 
values, whatever) in the public schema. Each user gets a schema that has all of 
the tables they share in common (accounting or addresses or whatever) plus you 
can add any specialized tables and not worry about other users seeing them. Of 
course, all table references have to be qualified (myschema.mytable) or you 
have to set the search_path.

I'd lean toward making each a real postgres user, then revoking all rights on 
their schema from public and allowing only them access to the schema and the 
underlying tables.
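
A sketch of that setup (user and schema names are hypothetical):

```sql
-- One application user gets one schema of the same name.
CREATE USER alice WITH PASSWORD 'secret';
CREATE SCHEMA alice AUTHORIZATION alice;

-- Keep the schema private: only alice may use it.
REVOKE ALL ON SCHEMA alice FROM PUBLIC;
GRANT USAGE ON SCHEMA alice TO alice;

-- Unqualified table names then resolve to the user's own schema first,
-- falling back to the shared look-up tables in public.
SET search_path TO alice, public;
```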

HTH,

Greg Williamson
Senior DBA
GlobeXplorer LLC, a DigitalGlobe company

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

(My corporate masters made me say this.)



Re: [GENERAL] A few questions

2007-10-29 Thread Richard Huxton

Samantha  Atkins wrote:


On Oct 29, 2007, at 10:14 AM, Richard Huxton wrote:


Samantha Atkins wrote:

First on prepared statements:
1) If I am using the libpq are prepared statements tied to a 
connection?  In other words can I prepare the statement once and use 
it on multiple connections?


Per session (connection).

Temporary tables etc. are the same.

2) What is the logical scope of prepared statement names?  Can I use 
the same name on different tables without conflict or is the scope 
database wide or something else?


Per session.


On indices:
3) same as 2 for index names.  I think they are per table but it is 
worth asking.


Per database (if you count the schema name). We don't have cross-table 
indexes, but the global naming allows it.



and last:
4) Is it generally better to have more tables in one database from a 
memory and performance point of view or divide into more databases if 
there is a logical division.  The reason I ask is that I have a 
situation where one app is used by multiple different users each 
running their own copy.  The app uses on the order of 30 tables.  In 
some ways it would be convenient to have one big database and 
specialize the table names per user.   But I am not sure that is most 
optimal.  Is there a general answer to such a question?


Not really, but...

1. Do you treat them as separate logical entities?


A set of tables per user, yes.  An app process is always for one and 
only one user.


OK, so no data-sharing.


Do you want to backup and restore them separately?


Not necessarily.  Although there is a possibility of wanting separate 
per-user backups, which would pretty much answer the question in this 
specific case.


Yep. Or if you want to prevent other users knowing that they share a 
database.



Is any information shared between them?


Possible sharing of some common id numbers for common items.   Although 
it is not essential the common items have the same serial number on 
different databases.



What are the consequences of a user seeing other users' data?


Little likelihood unless we expose database username/passwd.   These are 
users not necessarily represented as postgresql database users.


Ah, but with separate databases they can (and might as well be) separate 
 db users. It's the simplest way to guarantee no data leakage.


If you have only one db, then you'll want to have separate tables for 
each user, perhaps in their own schema or a column on each table saying 
which row belongs to which user. It's easier to make a mistake here.



2. Are you having performance issues with the most logical design?


The first prototype has not yet been completed so no, not yet.  :-)


Good. In that case, I recommend going away and mocking up both, with 
twice as many users as you expect and twice as much data. See how they 
operate.



Can you solve it by adding some more RAM/Disk?


???  There is a desire to use as little ram/disk as possible for the 
application. 


Don't forget there might well be a trade-off between the two. Caching 
results in your application will increase requirements there but lower 
them on the DB.


 I would be interested in what the overhead is for opening

a second database.


Not much. If you have duplicated data, that can prove wasteful.

Otherwise it's a trade-off between a single 100MB index and 100 1MB 
indexes and their overheads. Now, if only 15 of your 100 users log in at 
any one time that will make a difference too. It'll all come down to 
locality of data - whether your queries need more disk blocks from 
separate databases than from larger tables in one database.



What are the maintenance issues with not having the most logical design?


What do you consider the most logical, one database per user?


You're the only one who knows enough to say. You're not sharing data 
between users, so you don't need one database. On the other hand, you 
don't care about backing up separate users, which means you don't need 
many DBs.


Here's another question: when you upgrade your application, do you want 
to upgrade the db-schema for all users at once, or individually?


Write a list of all these sort of tasks - backups, installations, 
upgrades, comparing users, expiring user accounts etc. Mark each for how 
 often you'll have to deal with it and then how easy/difficult it is 
with each design. Total it up and you'll know whether you want a single 
DB or multiple.


Then come back and tell us what you decided, it'll be interesting :-)

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Stored procedure for generation next sequence value

2007-10-29 Thread Joshua D. Drake
On Mon, 29 Oct 2007 23:59:59 +
Frank Church [EMAIL PROTECTED] wrote:

 Is there a built in function for retrieving the next value of a
 sequence and updating the sequence counter?
 

The docs are amazing:

http://www.postgresql.org/docs/current/static/functions-sequence.html
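
Briefly, the functions described there look like this (the sequence name
is hypothetical):

```sql
CREATE SEQUENCE my_seq;

SELECT nextval('my_seq');     -- advances the counter and returns the new value
SELECT currval('my_seq');     -- last value nextval returned in this session
SELECT setval('my_seq', 100); -- manually repositions the counter
```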

Sincerely,

Joshua D. Drake


 ---(end of
 broadcast)--- TIP 3: Have you checked our
 extensive FAQ?
 
http://www.postgresql.org/docs/faq
 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/



signature.asc
Description: PGP signature


[GENERAL] Collation sequence and use of operatings system's locale

2007-10-29 Thread James Gates
I'm working on an issue where it seems the PostgreSQL server collation 
sequence for certain locales doesn't match the operating system's 
collation sequence for the same locale (set during initdb).


I can reproduce this on both 8.1.9 & 8.2.5 on Solaris (both Nevada & 10).



1) First I have a text file containing some LATIN1 (ISO8859-1)
characters in random order (followed by their decimal byte code):

$ cat barf
Ö   214
A   65
a   97
Å   197
ä   228
Ä   196

2) Perform ascending byte code & dictionary order sorts (on 1st field 
only) with the C locale (Note that the results are the same when using 
a POSIX locale):


$ locale
LANG=C
LC_CTYPE=C
LC_NUMERIC=C
LC_TIME=C
LC_COLLATE=C
LC_MONETARY=C
LC_MESSAGES=C
LC_ALL=

$ sort +1 -0 barf
A   65
a   97
Ä   196
Å   197
Ö   214
ä   228

$ sort +0 -1 -d barf
Ä   196
Å   197
Ö   214
ä   228
A   65
a   97

3) Perform ascending byte code & dictionary order sorts (on 1st field 
only) with the Swedish locale (Note that the results are the same when 
using any ISO8859 locale):


$ locale
LANG=sv_SE.ISO8859-1
LC_CTYPE=sv_SE.ISO8859-1
LC_NUMERIC=sv_SE.ISO8859-1
LC_TIME=sv_SE.ISO8859-1
LC_COLLATE=sv_SE.ISO8859-1
LC_MONETARY=sv_SE.ISO8859-1
LC_MESSAGES=sv_SE.ISO8859-1
LC_ALL=

$ sort +1 -0 barf
A   65
a   97
Ä   196
Å   197
Ö   214
ä   228

$ sort +0 -1 -d barf
a   97
A   65
Å   197
ä   228
Ä   196
Ö   214

(Note that in the CLDR definition for 8859-1 locales, lower case letters
appear before the equivalent upper case letter in a dictionary sort)

4) Now create a db with the same locale:

$ locale
LANG=sv_SE.ISO8859-1
LC_CTYPE=sv_SE.ISO8859-1
LC_NUMERIC=sv_SE.ISO8859-1
LC_TIME=sv_SE.ISO8859-1
LC_COLLATE=sv_SE.ISO8859-1
LC_MONETARY=sv_SE.ISO8859-1
LC_MESSAGES=sv_SE.ISO8859-1
LC_ALL=

$ initdb
The files belonging to this database system will be owned by user
postgres.
This user must also own the server process.

The database cluster will be initialized with locale sv_SE.ISO8859-1.
The default database encoding has accordingly been set to LATIN1.

fixing permissions on existing directory
/var/tmp/postgres/8.2.5/sv_SE.ISO8859-1 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers/max_fsm_pages ... 32MB/204800
creating configuration files ... ok
creating template1 database in
/var/tmp/postgres/8.2.5/sv_SE.ISO8859-1/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
 etc 

5) Check the correct collation sequence & char set in the db:

postgres=# \l
        List of databases
   Name    |  Owner   | Encoding
-----------+----------+----------
 postgres  | postgres | LATIN1
 template0 | postgres | LATIN1
 template1 | postgres | LATIN1
(3 rows)

postgres=# show lc_collate;
   lc_collate
-----------------
 sv_SE.ISO8859-1
(1 row)

6) And try the same sort on a table containing the same characters:

postgres=# select barf,ascii(barf) from jim order by barf asc;
 barf | ascii
------+-------
 Å|   197
 ä|   228
 Ä|   196
 Ö|   214
 a|97
 A|65
(6 rows)

postgres=# \d jim
Table public.jim
 Column | Type | Modifiers
+--+---
 barf   | text |

Notice that the results don't match the operating system's (either byte
code or dictionary) sort order for the same locale, or even the C or
POSIX locales.

In fact, I can't tell where this order is derived from. I thought that
calling initdb with a particular locale meant that the database used all
aspects of the operating system's locale, including collation sequence.
This is implied in the docs at:

http://www.postgresql.org/docs/8.2/interactive/sql-select.html#SQL-ORDERBY


Character-string data is sorted according to the locale-specific
collation order that was established when the database cluster was
initialized.


I'm probably missing (or misunderstanding) something, since I don't
fully understand PostgreSQL's localization. Can someone please explain?

Many thanks.




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Stored procedure for generation next sequence value

2007-10-29 Thread Frank Church
Is there a built in function for retrieving the next value of a
sequence and updating the sequence counter?

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Stored procedure for generation next sequence value

2007-10-29 Thread Michael Glaesemann


On Oct 29, 2007, at 18:59 , Frank Church wrote:


Is there a built in function for retrieving the next value of a
sequence and updating the sequence counter?


http://www.postgresql.org/docs/faqs.FAQ.html#item4.11.2
http://www.postgresql.org/docs/8.2/interactive/functions-sequence.html

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Raid Chunk Sizes for DSS type DB

2007-10-29 Thread Joshua D. Drake
On Tue, 30 Oct 2007 09:42:37 +0800
Ow Mun Heng [EMAIL PROTECTED] wrote:

 It's not an optimal setup but since I only have 3x500G drives to play
 with, I can't build a Raid10, so I'm going for Raid5 to test out
 capability before I decide on Raid5 vs Raid1 tradeoff. (Raid1 = No
 Fault tolerance since 3 drives)
 

Uhhh RAID 1 is your best bet. You get fault tolerance (mirrored) plus
you get a hot spare (3 drives).

RAID 5 on the other hand will be very expensive on writes.

Joshua D. Drake


 Anyway.. I'm trying to figure out the chunk size for the raid. I'm
 using 4k chunks since I'm reading that for DSS type queries, lots of
 Large Reads, I should be using small chunks. [1] and I've aligned the
 disks per [2]
 
 and my stride will be 3 for ext3
 
 mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3
 --chunk=4 /dev/sdd1 /dev/sde1 /dev/sdf1 mkfs.ext3 -E stride=3 -O
 dir_index /dev/md1 mount /dev/md1 /pgsql/ -o noatime,data=writeback
 
 
 [1] http://wiki.centos.org/HowTos/Disk_Optimization
 [2] http://www.pythian.com/blogs/411/aligning-asm-disks-on-linux
 
 Just wondering if there's any suggestions/comments on this from the
 PG ppl here.
 
 Thanks for any/all comments.
 
 
 ---(end of
 broadcast)--- TIP 5: don't forget to increase
 your free space map settings
 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/



signature.asc
Description: PGP signature


[GENERAL] Raid Chunk Sizes for DSS type DB

2007-10-29 Thread Ow Mun Heng
It's not an optimal setup but since I only have 3x500G drives to play
with, I can't build a Raid10, so I'm going for Raid5 to test out
capability before I decide on Raid5 vs Raid1 tradeoff. (Raid1 = No Fault
tolerance since 3 drives)

Anyway.. I'm trying to figure out the chunk size for the raid. I'm using
4k chunks since I'm reading that for DSS type queries, lots of Large
Reads, I should be using small chunks. [1] and I've aligned the disks
per [2]

and my stride will be 3 for ext3

mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 --chunk=4 
/dev/sdd1 /dev/sde1 /dev/sdf1
mkfs.ext3 -E stride=3 -O dir_index /dev/md1
mount /dev/md1 /pgsql/ -o noatime,data=writeback


[1] http://wiki.centos.org/HowTos/Disk_Optimization
[2] http://www.pythian.com/blogs/411/aligning-asm-disks-on-linux

Just wondering if there's any suggestions/comments on this from the PG ppl here.

Thanks for any/all comments.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] subversion support?

2007-10-29 Thread Robert Treat
On Wednesday 24 October 2007 15:11, Roberts, Jon wrote:
 Yeah.  I think having to save the function to disk and then leave pgAdmin
 to execute subversion commands is going through hoops.

 Also, pgAdmin should be integrated so that you are notified if the function
 in the database is different from the last committed version.  A visual
 diff should be there so you can see what the differences are.


We have a script that runs nightly that dumps tables / functions to file, and 
then checks it in automagically to svn, which sends an email of the diffs.  
Perhaps that would work for you? 

-- 
Robert Treat
Database Architect
http://www.omniti.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Extracting Index Creation Script

2007-10-29 Thread Vyacheslav Kalinin
If you know the index name then:
SELECT pg_get_indexdef('your_index_name'::regclass)
will do.

In case you want a full list of indices for a certain table:
SELECT c2.relname, pg_get_indexdef(i.indexrelid, 0, true), c2.reltablespace
  FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
 WHERE c.oid = 'your_table_name'::regclass AND c.oid = i.indrelid AND
i.indexrelid = c2.oid

If you have more questions of that kind try starting psql with -E option
which enables internal queries' display (this is what I did).


Re: [GENERAL] autovacuum and locks

2007-10-29 Thread Dietmar Maurer
  
  Why can't postgres get the RowExclusiveLock in transaction 3369000? 
 
 Probably because the ExclusiveLock'ers are waiting in front 
 of RowExclusiveLock.  Locks are granted in order.
 
 It would help if you didn't mangle the pg_locks output so badly.

Yes, sorry about that.

I was able to reproduce the problem, and the problem is that locks are
granted in order (I wonder why?). Anyway, I am trying to avoid locks now
by using my own merge function to avoid the update/insert race condition.

Or what is the suggested way to avoid the update/insert race condition?

- Dietmar



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] sql

2007-10-29 Thread Eva Elizalde
Hello,
I would like to know which version of SQL Postgres supports.
Thanks.
My email is
[EMAIL PROTECTED]


[GENERAL] Partitioning: how to exclude unrelated partitions?

2007-10-29 Thread Sean Z.
Hi,

I partitioned a table events into 31 tables, based on day of event_time.

I did 3 steps to setup partition, after creating partition tables:

1. Add the constraint to the 31 partition tables like:

ALTER TABLE events_day_1
  ADD CONSTRAINT events_day_1_event_time_check CHECK (date_part('day'::text, 
event_time) = 1::double precision);

2. Add partition rules  like:

CREATE OR REPLACE RULE events_insert_day_1 AS
ON INSERT TO events
   WHERE date_part('day'::text, new.event_time) = 1::double precision
DO INSTEAD 
INSERT INTO events_day_1 (id, event_number, event_source, event_type, 
event_time, event_message)
  VALUES (new.id, new.event_number, new.event_source, new.event_type, 
new.event_time, new.event_message);

3. Set constraint_exclusion = on

But when I run the following query:

explain analyze select *
from events
where event_time > '10/25/2007 20:00:00'
order by event_time
limit 100
offset 3000;

I got the following query plan:

Limit  (cost=12897.77..12898.02 rows=100 width=144) (actual 
time=365.976..366.143 rows=100 loops=1)
  ->  Sort  (cost=12890.27..13031.08 rows=56323 width=144) (actual 
time=362.225..364.929 rows=3100 loops=1)
        Sort Key: public.events.event_time
        ->  Result  (cost=0.00..4207.48 rows=56323 width=144) (actual 
time=0.099..156.586 rows=50091 loops=1)
              ->  Append  (cost=0.00..4207.48 rows=56323 width=144) (actual 
time=0.095..93.748 rows=50091 loops=1)
                    ->  Seq Scan on events  (cost=0.00..17.25 rows=193 
width=106) (actual time=0.003..0.003 rows=0 loops=1)
                          Filter: (event_time > '2007-10-25 
20:00:00-04'::timestamp with time zone)
                    ->  Seq Scan on events_day_1 events  (cost=0.00..17.25 
rows=193 width=106) (actual time=0.001..0.001 rows=0 loops=1)
                          Filter: (event_time > '2007-10-25 
20:00:00-04'::timestamp with time zone)

... repeated lines ignored here

                    ->  Index Scan using events_day_25_idx1 on events_day_25 
events  (cost=0.00..3672.73 rows=50340 width=144) (actual time=0.053..53.129 
rows=49984 loops=1)
                          Index Cond: (event_time > '2007-10-25 
20:00:00-04'::timestamp with time zone)

... repeated lines ignored here

                    ->  Seq Scan on events_day_31 events  (cost=0.00..17.25 
rows=193 width=106) (actual time=0.001..0.001 rows=0 loops=1)
                          Filter: (event_time > '2007-10-25 
20:00:00-04'::timestamp with time zone)



Every partition table is seq-scanned, so I think the unrelated tables are not 
being excluded from the query. Only table events_day_25 should be included in 
the scan, I believe.

Do I miss anything?
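
For reference, constraint exclusion in 8.x can only reason about simple
comparison constraints on the column itself; it cannot prove anything
from a date_part() expression against an `event_time > ...` predicate. A
range-style CHECK of the form the planner can exclude on would look like
this sketch (bounds are hypothetical, one partition per calendar day
rather than per day-of-month):

```sql
-- A plain timestamp range the planner can compare against the query's
-- WHERE clause directly.
ALTER TABLE events_day_25
  ADD CONSTRAINT events_day_25_event_time_check
  CHECK (event_time >= '2007-10-25' AND event_time < '2007-10-26');
```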

Best,
Sean



Re: [GENERAL] Using Postgres as a embedded / SQL Lite database on Windows

2007-10-29 Thread Andy

Ron Johnson wrote:


Hmmm.  Is Paradox that bad?  



Yes.  I'm also a Delphi developer, who has already converted a paradox 
program (was just before pg on windows.. bummer)


Paradox is a file-shared database (like dbase), and when the program 
crashes, it'll hose up your pdox tables too.   Especially if they are 
shared on the network.


You can do pdox apps in two ways, with table operations, or sql.

By table operations I mean:

table.open;
table.IndexName := 'name';
table.findFirst('bob');
table.edit;
table.fieldByName('pay').asInteger := 4;
table.post;

You'd say, why do that when there is SQL?  Because it's not exactly 
stable.  I did code in SQL to update some photos; by looking at the code 
you'd think it would work, but it didn't.  (It was a delete statement, then 
an insert.)  Sometimes it worked, sometimes not.  I replaced it with 
table operations and it always worked.


We ported over to Firebird; it is so much more stable now (we used to 
have to tell people to restore from backup all the time, just because pdox 
is not stable) and it is much simpler to maintain the db because we get to 
fire off SQL.  Our database upgrade was a huge program that would 
restructure the database... now it's just a little SQL script.  It's so 
much nicer.


-Andy

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] autovacuum and locks

2007-10-29 Thread Dietmar Maurer
  Anyways, i am trying to avoid locks now, by using my own merge 
  function to avoid update/insert race condition.
  
  Or what is the suggested way to avoid the update/insert 
 race condition?.
 
 What update/insert race condition?  Maybe you are talking 
 about the subject of example 37-1 here:

Yes, I am talking about Example 37-1.
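
For reference, that example handles the race with a retry loop, roughly
like this (the `db`/`merge_db` names follow the docs' example):

```sql
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS $$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert. If another session inserts the
        -- same key concurrently, we get a unique-key failure and loop
        -- back to retry the UPDATE.
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing; loop and try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```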

- Dietmar


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL and AutoCad

2007-10-29 Thread Andy

Josh Tolley wrote:

On 10/24/07, Bob Pawley [EMAIL PROTECTED] wrote:

Is there any way of converting text from an AutoCad (.dwg ot .dxf) file into
a PostgreSQL  Database??

Bob Pawley


I know nothing of AutoCad, but your message has been sitting for a
while without response, so I'll throw out the suggestion that you
probably want AutoCad to export the text to some more common format
(like an ASCII or UTF8 file or some such) and import that.

-Josh/eggyknap


I think .dxf is text; it's a vector image description (lines, line 
styles, layers, etc.).  But text or not, you can store it in a bytea 
field, no problem.


Bob:  Do you just want to store the file as-is in the database?  Then 
yes you can.  Or are you looking to break it up into tables so that you 
can query it somehow?  In that case, I'd say not likely (or at least not 
cheaply).


Do you want AutoCad to edit the drawings right out of the database?  How 
would you want to put them in/get them out, of the database?


-Andy

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Data cube in PostgreSQL

2007-10-29 Thread Gowrishankar L
Hi All,

I need to make certain changes to the cube.c file, which comes with the cube
contrib (http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/cube/). I am
not sure how to compile it so that I can use those changes in PostgreSQL. Could
you help me?

Thanks
Gowrishankar L
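
One common way, assuming a configured 8.x source tree matching the
installed server (paths and database name are hypothetical), is to
rebuild the contrib module and reload its SQL definitions:

```shell
# From the top of the PostgreSQL source tree used to build the server:
cd contrib/cube
make            # recompiles cube.c into the shared library
make install    # installs cube.so and cube.sql under the install prefix

# Then (re)load the module's SQL objects into your database:
psql -d mydb -f "$(pg_config --sharedir)/contrib/cube.sql"
```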

On 10/19/07, Gowrishankar L [EMAIL PROTECTED] wrote:

 Hi,

 I am using the cube contrib from
 http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/cube/

 I am able to use most of the functions like cube_distance,cube_contains
 etc. I am facing a problem when passing an argument to these functions which
 are table columns.

 For example :

  select cube_distance('(0)','(1)');

  cube_distance
 ---
  1
 (1 row)

 works fine.

 But

 select cube_distance('(0)',(select a from temp));

 ERROR:  function cube_distance(unknown, integer) does not exist
 HINT:  No function matches the given name and argument types. You may need
 to add explicit type casts.

 I am not sure what to typecast to. Could you please help me correct this
 error.

 Thanks
 Gowrishankar L

 On 10/1/07, Dimitri Fontaine  [EMAIL PROTECTED] wrote:
 
  Hi,
 
  On Wednesday 26 September 2007 20:58:38, Gowrishankar wrote:
   Is there any other way of extending postgresql to include cubes?
 
  Something like the cube contrib?
 http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/cube/
 
  Contribs are part of PostgreSQL sources and maintained as such, but not
  included into the 'core' project. They are certainly distributed along
  with
  PostgreSQL in your packaging system of choice.
 
  Regards,
  --
  dim
 




[GENERAL] Pgaccess

2007-10-29 Thread Joanne Salerno

I recently went from Postgres 7.4 to 8.2.4 (Linux platform).

In 7.4 I relied on forms created locally within the utility Pgaccess to 
perform various database changes.  Upon conversion to 8.2.4 I have not 
been able to get Pgaccess to read the Postgres pga_forms table contents.  
Pgaccess recognizes the Postgres 8.2.4 pga tables; however, within 
Pgaccess, when I choose an existing form and select "design a form", a 
blank form comes up instead of the defined widgets and code.  Any clues?


Is Pgaccess compatible with Postgres 8.2.4?  I noticed the Pgaccess 
web-site no longer exists.  If I cannot get Pgaccess to perform with 
Postgres 8.2.4, is there a different package that the supporting 
community recommends?



I am using pgaccess-0_99_0_20040219.


Thank you.
Joanne

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] (Never?) Kill Postmaster?

2007-10-29 Thread Vyacheslav Kalinin
Postgres starts separate process for each connection and you can just `kill`
(not `kill -9`) the unwanted process. Or you can do `select
pg_cancel_backend(pid)` from another session.
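
A minimal sketch (the pid below is hypothetical; in 8.2 the pid column of
pg_stat_activity is named procpid):

```sql
-- Find the backend running the unwanted query...
SELECT procpid, current_query FROM pg_stat_activity;

-- ...then cancel its current query without killing the postmaster.
SELECT pg_cancel_backend(12345);
```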


[GENERAL] SSL Connectivity on Solaris 10 x86

2007-10-29 Thread Dot Yet
Hello Everyone,

I have configured SSL for my postgreSQL setup. The environment is

PostgreSQL 8.2.5 (64-bit) built using Sun tools with ssl flag
OpenSSL 0.9.8d 28 Sep 2006
OpenSolaris 5.11 snv_73

I have configured the server key and certificate file using openssl. The
connection between a remote client and the server works fine. I can see the
following message at the client side

SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)

The problem part now...

I am seeing this Connection reset by peer message in the
postmaster.log file, but the connection between the client and server
does get established and works fine too...

Secondly, how do I enable SSL authentication between client and server? I
think right now it's just communication at the SSL layer.


LOG:  could not load root certificate file root.crt: No such file or
directory
DETAIL:  Will not verify client certificates.
LOG:  database system was shut down at 2007-10-28 01:24:52 EDT
LOG:  checkpoint record is at 0/61799D90
LOG:  redo record is at 0/61799D90; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 0/1183; next OID: 32774
LOG:  next MultiXactId: 1; next MultiXactOffset: 0
LOG:  database system is ready


LOG:  could not receive data from client: Connection reset by peer



The document talks about creating a root certificate file and signing the
client certificates with it, but somehow I am a bit confused about it.

Any help would be much appreciated.

Regards,
dotyet


Re: [GENERAL] function and passing the table name to be used with SQL stmnt

2007-10-29 Thread kamiseq
Another question; I don't want to make a new topic, so I will add it
here.

I've modified my function to:
CREATE OR REPLACE FUNCTION bookid(_tabela varchar) RETURNS integer AS $$
DECLARE
_id integer := 0;
BEGIN
-- an insert without id fires nextval on tabela_columna_seq
EXECUTE 'insert into '|| quote_ident(_tabela) || '(idjm, waga,
stawka_vat) values(0, 0, )';
select into _id lastval();

RETURN _id;
END;
$$ LANGUAGE plpgsql;

And is that a safe operation? Can I rely on lastval() returning the
value of the id that was just inserted?
Can it happen that another user inserts something between my
insert and the select?


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] function and passing the table name to be used with SQL stmnt

2007-10-29 Thread kamiseq
 SELECT INTO _id id FROM _tabela order by id desc limit 1;

OK, I've found this:

EXECUTE 'SELECT id FROM ' || quote_ident(_tabela) ||' order by id desc
limit 1' INTO _id;


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] function and passing the table name to be used with SQL stmnt

2007-10-29 Thread kamiseq
ok so my problem is I have a simple function like this:
CREATE OR REPLACE FUNCTION bookid(_tabela varchar) RETURNS integer AS $$
DECLARE
_id integer := 0;
BEGIN
SELECT INTO _id id FROM _tabela order by id desc limit 1;

RETURN _id+1;
END;
$$ LANGUAGE plpgsql;

and the problem is I can't run this function because I'm passing a string to
the SQL statement and not the table object. What can I do about it to
have a function that will give me the next id number back?
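Note that max(id)+1 is also unsafe under concurrency: two sessions can read the same maximum and insert the same id. The usual sketch is a per-table sequence (the names here are hypothetical):

```sql
-- A sequence hands out values atomically, so two concurrent
-- sessions can never receive the same id
CREATE SEQUENCE mytable_id_seq;

-- nextval() increments and returns in one step
SELECT nextval('mytable_id_seq');
```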


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] reg. rewrite rules

2007-10-29 Thread U P C Bharat
Hi,

Can anyone tell me how rewrite rules are applied in
pg_analyze_and_rewrite?

regs,
bharat.


Re: [GENERAL] postgresql in PHP

2007-10-29 Thread brian

[EMAIL PROTECTED] wrote:

When I try to add or update a postgresql field from php, even though I
escape it, when I
try to include a single quote in a varchar field, the result is the
quote and everything following
is missing after being INSERTed or UPDATEd.  Anybody have any ideas
what is going on?
Thanks in advance!


What is the setting for magic_quotes_gpc in your php.ini?

hint: get_magic_quotes_gpc() will return TRUE if this is enabled, so you 
can use stripslashes() where necessary.


Are you using a database wrapper (eg. PEAR MDB2) or the pg_whatever() 
functions?


If the former, are you using a prepared statement?

brian

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] How can I easily and effectively support (language) translation of column values?

2007-10-29 Thread Dennis Brakhane
Hi,

I have some problems supporting translatable columns in a way that
is both efficient and comfortable for the programmer who has to write
SQL-Queries (we use iBatis)

Suppose I have a simple table that contains an id field and (amongst
others) a name field. Let's call this table foo. The names are english
names.

These English names need to be translatable (and the translations have
to be by id, not by name; for example: the pair (1, 'foobar') has
to be translated as (1, 'barfoo'), but (2, 'foobar') might be (2,
'baz') and not (2, 'foobar')).

I've tried to solve the problem in the following way (if you know of a
better method, please tell me)

I created a table foo_translations, that contains the columns foo_id,
lang_id, and (translated) name:

  SELECT * FROM foo WHERE id = 1;
   id |  name
  ----+--------
    1 | foobar

  SELECT * FROM foo_translations WHERE foo_id=1;
   foo_id | lang_id |  name
  --------+---------+--------
        1 |       1 | barfoo
        1 |       2 | boofoo

Now, whenever I create a query (I use iBatis and therefore more or
less write each query by hand), I can get the result I want by writing
something like this:

  SELECT foo.id, COALESCE(foo_translations.name, foo.name) AS name
FROM foo LEFT JOIN foo_translations ON foo.id=foo_id AND lang_id=?
WHERE id = ?

While this works, it's quite cumbersome having to do this for every
query that involves tables with translatable names.

So my first idea was to create a view that will give me the correctly
translated fields, like

  CREATE VIEW foo1 AS SELECT foo.id, COALESCE(foo_translations.name,
foo.name) AS name FROM foo LEFT JOIN foo_translations ON foo.id =
foo_translations.foo_id AND foo_translations.lang_id = 1;

This way I could rewrite the last SELECT-statement (assuming the
language-code is 1) as simply

  SELECT * FROM foo1 WHERE id = ?

But this would mean I'd have to define (potentially) 50+ VIEWs for
every table with translatable fields, and I'd have to create a new
SQL query every time I use it (because I'd have to insert the language
code right after FROM foo, and I don't think PreparedStatement (we
use Java) can handle this kind of placeholder: FROM foo? )

So what I really need is a view that takes a parameter. Unfortunately,
it seems like these don't exist. I can work around it by using a
stored proc:

  CREATE FUNCTION foo(int) RETURNS SETOF foo AS $$ SELECT foo.id,
COALESCE(foo_translations.name, foo.name) AS name FROM foo LEFT JOIN
foo_translations ON foo.id=foo_id AND lang_id=$1$$ LANGUAGE SQL;

While this works - I can now define a query like

  SELECT * FROM foo(?) WHERE id=?

- the planner has no clue how to optimize this (dummy table contains 1
million entries):

  EXPLAIN ANALYSE SELECT * FROM foo(1) WHERE id=1;
  QUERY PLAN
  
--
   Function Scan on foo  (cost=0.00..15.00 rows=5 width=36) (actual
time=2588.982..3088.498 rows=1 loops=1)
 Filter: (id = 1)
   Total runtime: 3100.398 ms
  (3 rows)

which obviously is intolerable. So, do I have to bite the bullet and
use the ugly COALESCE/JOIN statements everywhere, or is there a
better way?
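One middle ground, sketched below, is to keep the single JOIN but make the language id an ordinary placeholder, so one prepared statement (or one iBatis query text) serves every language and the planner sees plain tables rather than an opaque function scan (the id 42 is a placeholder):

```sql
PREPARE foo_translated(int, int) AS
    SELECT foo.id, COALESCE(t.name, foo.name) AS name
      FROM foo
      LEFT JOIN foo_translations t
             ON foo.id = t.foo_id AND t.lang_id = $1
     WHERE foo.id = $2;

EXECUTE foo_translated(1, 42);
```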

Any help is appreciated. Thanks in advance,
  Dennis

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] [PGSQL v8.2.5] Similar queries behave differently

2007-10-29 Thread Reg Me Please
Hi all, again.

Maybe I've solved the problem, but would like to have some hint on why.

In the second query I've substituted the last join (natural join tt_rice)
with an additional where condition. I can do this as I am sure that
the tt_rice table will always contain just one row with one field.

The main difference with the first query is that in the first case the
single row with a single field is a bigint, while in the second one it
is text.

Is there any deeper tutorial on how to read (and understand) the explain
analyze output?

Many thanks again.

On Thursday 25 October 2007 10:17:23, Reg Me Please wrote:
 Hi all.
 On the very same database and session I have two different (but similar)
 queries behaving in a very different way as far as timings.

 This is the first one:

 prove=# explain analyze select d.* from t_vcol natural join v_dati_attuali
 d natural join tt_elem where vtab_id='TEST';
  QUERY PLAN
 ---
-- Nested Loop 
 (cost=5.65..8562012.60 rows=88104022 width=73) (actual time=36.579..36.772
 rows=7 loops=1)
-  Hash Join  (cost=1.19..442967.06 rows=408730 width=73) (actual
 time=36.547..36.660 rows=7 loops=1)
  Hash Cond: (d.camp_id = t_vcol.camp_id)
  -  Nested Loop  (cost=0.00..430860.08 rows=1603700 width=73)
 (actual time=36.480..36.558 rows=24 loops=1)
-  Seq Scan on tt_elem  (cost=0.00..29.40 rows=1940
 width=8) (actual time=0.012..0.013 rows=1 loops=1)
-  Index Scan using i_dati_0 on t_dati d 
 (cost=0.00..211.74 rows=827 width=73) (actual time=36.461..36.498 rows=24
 loops
  Index Cond: (d.elem_id = tt_elem.elem_id)
  Filter: dato_flag
  -  Hash  (cost=1.12..1.12 rows=5 width=15) (actual
 time=0.039..0.039 rows=5 loops=1)
-  Seq Scan on t_vcol  (cost=0.00..1.12 rows=5 width=15)
 (actual time=0.015..0.026 rows=5 loops=1)
  Filter: (vtab_id = 'TEST'::text)
-  Bitmap Heap Scan on tt_data  (cost=4.46..16.62 rows=216 width=8)
 (actual time=0.009..0.009 rows=1 loops=7)
  Recheck Cond: ((d.dato_validita <= tt_data.data_corr) AND
 (d.dato_scadenza > tt_data.data_corr))
  -  Bitmap Index Scan on tt_data_pkey  (cost=0.00..4.41 rows=216
 width=0) (actual time=0.006..0.006 rows=1 loops=7)
Index Cond: ((d.dato_validita <= tt_data.data_corr) AND
 (d.dato_scadenza > tt_data.data_corr))
  Total runtime: 36.922 ms
 (16 rows)

 And this is the second one:
 prove=# explain analyze SELECT d.* from t_campi_ricerche natural join
 v_dati_attuali d natural join tt_rice where rice_id='CODICE';
  QUERY PLAN
 ---
-- Nested Loop 
 (cost=43.29..38167065.82 rows=409498649 width=73) (actual
 time=2927.890..56922.415 rows=1 loops=1)
-  Hash Join  (cost=38.83..430557.39 rows=1899736 width=73) (actual
 time=2915.990..56910.510 rows=1 loops=1)
  Hash Cond: (d.dato_t = tt_rice.dato_t)
  -  Hash Join  (cost=1.15..402765.04 rows=2335285 width=73)
 (actual time=191.261..55238.816 rows=2394966 loops=1)
Hash Cond: (d.camp_id = t_campi_ricerche.camp_id)
-  Seq Scan on t_dati d  (cost=0.00..326867.12
 rows=14011712 width=73) (actual time=16.612..42797.766 rows=14011712 loops
  Filter: dato_flag
-  Hash  (cost=1.09..1.09 rows=5 width=15) (actual
 time=0.053..0.053 rows=5 loops=1)
  -  Seq Scan on t_campi_ricerche  (cost=0.00..1.09
 rows=5 width=15) (actual time=0.031..0.041 rows=5 loops=1)
Filter: (rice_id = 'CODICE'::text)
  -  Hash  (cost=22.30..22.30 rows=1230 width=32) (actual
 time=0.009..0.009 rows=1 loops=1)
-  Seq Scan on tt_rice  (cost=0.00..22.30 rows=1230
 width=32) (actual time=0.003..0.004 rows=1 loops=1)
-  Bitmap Heap Scan on tt_data  (cost=4.46..16.62 rows=216 width=8)
 (actual time=11.885..11.886 rows=1 loops=1)
  Recheck Cond: ((d.dato_validita <= tt_data.data_corr) AND
 (d.dato_scadenza > tt_data.data_corr))
  -  Bitmap Index Scan on tt_data_pkey  (cost=0.00..4.41 rows=216
 width=0) (actual time=0.033..0.033 rows=1 loops=1)
Index Cond: ((d.dato_validita <= tt_data.data_corr) AND
 (d.dato_scadenza > tt_data.data_corr))
  Total runtime: 56922.563 ms
 (17 rows)


 The v_dati_attuali is a view and is common to both queries.
 The structure of indexes on t_vcol and t_campi_ricerche is very similar,
 and both tt_rice and tt_elem have just one row with one field being the
 primary key.

 Of course I'd like the second query to behave the same as the first one but
 have no clue 

[GENERAL] nonstandard use of

2007-10-29 Thread Peckham, Martin
I recently installed the latest non-beta version 8.2 of postgresql.

 

I get the following warning and hint in the server status GUI:

 

WARNING:  nonstandard use of \\ in a string literal at character 64

HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.

 

The raw string I want to write is:

 

[\x09-\x0A\x0D\x20-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]MI[\x09-\x0A\x0D\x20-
\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]

 

After escaping, I write:

[\\x09-\\x0A\\x0D\\x20-\\x2F\\x3A-\\x40\\x5B-\\x60\\x7B-\\x7E]MI[\\x09-\
\x0A\\x0D\\x20-\\x2F\\x3A-\\x40\\x5B-\\x60\\x7B-\\x7E]

 

Assuming the count starts at 0, the character at position 64 is the [
character.

 

I've read an earlier post that tells of how to SUPPRESS the warning.  I
want to eliminate the warning.
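One way to eliminate (not merely suppress) the warning is the E'' escape-string syntax from the HINT; a sketch on a shortened form of the pattern above:

```sql
-- The E prefix declares that the backslashes are intentional escapes,
-- so the server no longer emits the nonstandard-use warning
SELECT 'x' ~ E'[\\x09-\\x0A\\x0D]';
```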

 

TIA

 

Marty Peckham

 


Note: The information contained in this message may be privileged and
confidential and thus protected from disclosure. If the reader of this
message is not the intended recipient, or an employee or agent responsible 
for delivering this message to the intended recipient, you are hereby
notified that any dissemination, distribution or copying of this
communication is strictly prohibited.  If you have received this
communication in error, please notify us immediately by replying to the 
message and deleting it from your computer.  Thank you.



Re: [GENERAL] print command in sproc

2007-10-29 Thread Jorge Godoy
On Tuesday 23 October 2007 07:17:53, Goboxe wrote:
 Hi,

 What is the equivalent MSSQL 'print' command in pg sproc?

What does the MSSQL 'print' command print?

-- 
Jorge Godoy  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Locale collation issues

2007-10-29 Thread Jim Gates
I'm investigating a problem reported by a customer where it seems the 
PostgreSQL server collation sequence for certain locales doesn't match the 
operating system's collation sequence (for the same locale set during 
initdb).


I can reproduce this on both 8.1.9 & 8.2.5. I'm using Solaris Nevada, 
but I also see the same issue on Solaris 10.




1) First I have a text file containing some LATIN1 (ISO8859-1) 
characters in random order (followed by their decimal byte code):


$ cat barf
Ö   214
A   65
a   97
Å   197
ä   228
Ä   196

2) Perform ascending byte order & dictionary sorts (on the 1st field only) 
with the C locale (note that the results are the same when using a 
POSIX locale):


$ locale
LANG=C
LC_CTYPE=C
LC_NUMERIC=C
LC_TIME=C
LC_COLLATE=C
LC_MONETARY=C
LC_MESSAGES=C
LC_ALL=

$ sort +1 -0 barf
A   65
a   97
Ä   196
Å   197
Ö   214
ä   228

$ sort +0 -1 -d barf
Ä   196
Å   197
Ö   214
ä   228
A   65
a   97

3) Perform ascending byte order & dictionary sorts (on the 1st field only) 
with the customer's locale (note that the results are the same when 
using any ISO8859 locale):


$ locale
LANG=sv_SE.ISO8859-1
LC_CTYPE=sv_SE.ISO8859-1
LC_NUMERIC=sv_SE.ISO8859-1
LC_TIME=sv_SE.ISO8859-1
LC_COLLATE=sv_SE.ISO8859-1
LC_MONETARY=sv_SE.ISO8859-1
LC_MESSAGES=sv_SE.ISO8859-1
LC_ALL=

$ sort +1 -0 barf
A   65
a   97
Ä   196
Å   197
Ö   214
ä   228

$ sort +0 -1 -d barf
a   97
A   65
Å   197
ä   228
Ä   196
Ö   214

(Note that in the CLDR definition for 8859-1 locales, lower case letters 
appear before the equivalent upper case letter in a dictionary sort)


4) Now create a db with the appropriate locale:

$ locale
LANG=sv_SE.ISO8859-1
LC_CTYPE=sv_SE.ISO8859-1
LC_NUMERIC=sv_SE.ISO8859-1
LC_TIME=sv_SE.ISO8859-1
LC_COLLATE=sv_SE.ISO8859-1
LC_MONETARY=sv_SE.ISO8859-1
LC_MESSAGES=sv_SE.ISO8859-1
LC_ALL=

$ initdb
The files belonging to this database system will be owned by user 
postgres.

This user must also own the server process.

The database cluster will be initialized with locale sv_SE.ISO8859-1.
The default database encoding has accordingly been set to LATIN1.

fixing permissions on existing directory 
/var/tmp/postgres/8.2.5/sv_SE.ISO8859-1 ... ok

creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers/max_fsm_pages ... 32MB/204800
creating configuration files ... ok
creating template1 database in 
/var/tmp/postgres/8.2.5/sv_SE.ISO8859-1/base/1 ... ok

initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
 etc 

5) Check the correct collation sequence  char set in psql:

postgres=# \l
List of databases
   Name    |  Owner   | Encoding
-----------+----------+----------
 postgres  | postgres | LATIN1
 template0 | postgres | LATIN1
 template1 | postgres | LATIN1
(3 rows)

postgres=# show lc_collate;
   lc_collate
-
 sv_SE.ISO8859-1
(1 row)

6) And try the same sort on a table containing the same characters:

postgres=# select barf,ascii(barf) from jim order by barf asc;
 barf | ascii
--+---
 Å|   197
 ä|   228
 Ä|   196
 Ö|   214
 a|97
 A|65
(6 rows)

Notice that the results don't match the operating system's (either byte 
code or dictionary) sort order for the same locale, or even the C or 
POSIX locales.


What is most unusual, is that 'Ö' appears before 'a' or 'A', which you 
wouldn't expect for a dictionary style sort, but this clearly isn't a 
byte code order either.


In fact, I can't tell where this order is derived from. I thought that 
calling initdb with a particular locale meant that the database used all 
aspects of the operating system's locale, including the collation sequence. 
This is implied in the docs at:


http://www.postgresql.org/docs/8.2/interactive/sql-select.html#SQL-ORDERBY

 Character-string data is sorted according to the locale-specific
 collation order that was established when the database cluster was
 initialized.

I'm probably missing (or misunderstanding) something, since I don't 
fully understand PostgreSQL's localization. Can someone please explain?


Many thanks.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Postgres SQL Client for Suse Linux

2007-10-29 Thread Jorge Godoy
Em Tuesday 23 October 2007 09:03:23 yogesh escreveu:
 Hello Frnds,

 I have installed postgres on Suse Linux using the YAST--Software.
 I have to check and Manipulate the tables and check the data..is
 there are any client application to work on the Suse
 Linux.

Sure!

A lot of them.  They include psql, OpenOffice.org, gda, ODBC and others.  If 
you can be more specific it will be easier to help you.

-- 
Jorge Godoy  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Migration questions for upcoming 8.3 release and fts

2007-10-29 Thread Robert Treat
On Wednesday 24 October 2007 21:10, Chris Travers wrote:
 Hi all;

 I know -hackers is the preferred place to discuss beta releases, but I
 suspect that this is a further-reaching question among the general
 community than is typical, so I am posting it here.

 I have been looking at the changes expected for 8.3 and have noticed
 that tsearch2 has been modified and merged with core.  I think this is a
 great idea, but it poses problems for applications which must run
 on both 8.2 and 8.3 with fts capabilities.

 I am trying to determine the best way forward so that LedgerSMB can
 support 8.3 as soon as it is released.  Reading through various email
 list archives it looks like the function names have changed.  I guess I
 am trying to determine the best way forward.

 1)  How safe is it likely to be to create a set of functions in 8.3
 which mimic 8.2/tsearch2 interfaces?  Is this likely to create any
 serious conflicts?   Would such a project be sufficiently useful that a
 pg_foundry project might be helpful?

 2)  If this is a problem, is the community interested in (possibly as a
 pg-foundry project) an abstraction layer for supporting both sets of
 interfaces?


The talk is that the tsearch2 contrib module will be redone to access the 
built-in functions for 8.3.  If this is done, backwards compatibility should 
be pretty easy to work out.  Help in that area would be greatly appreciated 
(and it may not get done if no one offers to help)

That said, I think it is in a DBA's best interest to migrate to the built in 
stuff asap. From what I've seen most of the data types are easily compatible, 
the biggest problem is removing the cruft from your schema dumps for doing 
the upgrade. Not that much different than going from 8.1 to 8.2, which I have 
blogged about previously.   HTH. 

-- 
Robert Treat
Database Architect
http://www.omniti.com

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] postgres bash prompt error while user creation

2007-10-29 Thread Pingale . amol
hello group
  I want to create a shell script that creates a PostgreSQL user and
database. Both tasks are performed and executed properly, but after the
user and DB creation I am left at ( -bash-3.00$ ) and I want to come
back to [EMAIL PROTECTED]:localdomain ~]#. My script is:
#!/bin/bash
su -l postgres
psql -U postgres -c "CREATE USER admin27 WITH PASSWORD 'admin27' CREATEDB;" template1
psql -U admin27 -c "CREATE DATABASE emcc27;" template1
exit
Please help me get back to root from the postgres bash prompt.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] postgresql in PHP

2007-10-29 Thread davidrboyer
When I try to add or update a postgresql field from php, even though I
escape it, when I
try to include a single quote in a varchar field, the result is the
quote and everything following
is missing after being INSERTed or UPDATEd.  Anybody have any ideas
what is going on?
Thanks in advance!

Dave


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] plperl ubuntu problem

2007-10-29 Thread rski

I've installed postgres v. 8.2.4 on Ubuntu with a 2.6.18.1 kernel with plperl
support (the --with-perl option to configure). But when I try to create the
plperl language (createlang plperl) I get an error:

createlang: language installation failed: ERROR: could not load library
/usr/local/pgsql/lib/plperl.so: /usr/local/pgsql/lib/plperl.so: undefined
symbol: Perl_croak


Can somebody explain where the problem is? The file /usr/local/pgsql/lib/plperl.so
exists.
-- 
View this message in context: 
http://www.nabble.com/plperl-ubuntu-problem-tf4706685.html#a13452633
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Postgres SQL Client for Suse Linux

2007-10-29 Thread yogesh
Hello Frnds,

I have installed postgres on Suse Linux using YAST--Software.
I have to check and manipulate the tables and check the data... Is
there any client application to work on Suse
Linux?


Regards,
 Yogesh Arora.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] print command in sproc

2007-10-29 Thread Goboxe
Hi,

What is the equivalent MSSQL 'print' command in pg sproc?

Thanks,
G


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] PostgreSQL .msi Installation Failure

2007-10-29 Thread Johnson Jesse
I am trying to install postgreSQL v8.2.5 as a prerequisite for Firefox
Bugzilla application on one of our network servers (Microsoft Server
2000).  It goes through the install and everything runs ok to a point
where it is Initializing Database Cluster (this may take a minute or
two) and it pops up an error:

PostgreSQL 8.2
Failed to run initdb: 1!
Please see the logfile in 'C:\Program
Files\PostgreSQL\8.2\tmp\initdb.log'.
Note! You must read/copy this logfile before you click OK, or it will
automatically be removed.


The logfile reads:

The files belonging to this database system will be owned by user
postgres.
This user must also own the server process.

The database cluster will be initialized with locale C.

fixing permissions on existing directory C:/Program
Files/PostgreSQL/8.2/data ... ok
creating subdirectories ... initdb: could not create directory
C:/Program Files: File exists
initdb: removing contents of data directory C:/Program
Files/PostgreSQL/8.2/data



ANY SUGGESTIONS??

Thanks!


Jesse Johnson 
 http://www.bnymellon.com 
Asset Servicing - Information Technology
500 Grant St, Pittsburgh, PA 15258
412-236-6608
412-234-2019 (fax)
[EMAIL PROTECTED]

The information and attachments contained in this communication should
be treated as Confidential and/or Restricted.
 
The information contained in this e-mail may be confidential and is intended 
solely for the use of the named addressee.
Access, copying or re-use of the e-mail or any information contained therein by 
any other person is not authorized.
If you are not the intended recipient please notify us immediately by returning 
the e-mail to the originator.(16b)
 
Disclaimer Version MB.US.1

Re: [GENERAL] execute pg_dump via python

2007-10-29 Thread Andy

Garry Saddington wrote:
I am using zope on windows with an external python method to backup my 
database. I am struggling to run the following command:


pg_dump.exe database > file

I have tried using os.popen - no luck
and also subprocess.Popen.

eg:
import subprocess

subprocess.Popen(['c:/dir/dir/pg_dump.exe','database','>','c:/dir/dir/output 
file'])


The command string works perfectly in a terminal. Does anyone know how I 
should be doing this? I get no errors or traceback when I try the method 
through Zope.


regards
garry


the redirect operation '>' is done by cmd.exe.  I assume popen is 
directly spawning pg_dump, but pg_dump does not know the command line 
argument '>'.


you need to run:  'cmd.exe /c pg_dump database > out.txt'

(is it /c or /k, I always get them confused)

-Andy

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Migration questions for upcoming 8.3 release and fts

2007-10-29 Thread Andy

Chris Travers wrote:

Hi all;

I know -hackers is the preferred place to discuss beta releases, but I 
suspect that this is a further-reaching question among the general 
community than is typical so I am posting it here.


I have been looking at the changes expected for 8.3 and have noticed 
that tsearch2 has been modified and merged with core.  I think this is a 
great idea, but it poses problems for applications which must run 
on both 8.2 and 8.3 with fts capabilities.


I am trying to determine the best way forward so that LedgerSMB can 
support 8.3 as soon as it is released.  Reading through various email 
list archives it looks like the function names have changed.  I guess I 
am trying to determine the best way forward.


1)  How safe is it likely to be to create a set of functions in 8.3 
which mimic 8.2/tsearch2 interfaces?  Is this likely to create any 
serious conflicts?   Would such a project be sufficiently useful that a 
pg_foundry project might be helpful?


2)  If this is a problem, is the community interested in (possibly as a 
pg-foundry project) an abstraction layer for supporting both sets of 
interfaces?


Best Wishes,
Chris Travers


I don't really have answers... but more questions:

How many fts functions do you use?  I had two different functions 
(to_query and rank I think).


Are you thinking of always using the abstraction layer, or at some 
point, dropping it and converting to the new names?


How hard would it be to test for the existence of the function and use 
the one you find?  (test for the old fts function name and use that if 
it exists, else use the new fts function name)  Ahh, shoot, that would 
assume the arguments are exactly the same though... (they might be, 
guess it depends on which functions you're using).


-Andy

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Delete/Update with order by

2007-10-29 Thread Evandro Andersen
In Oracle you can use this:

DELETE FROM A WHERE A1 = 10 ORDER BY A2

There is something in the Postgresql ?

Evandro Andersen
Brazil
Postgresql 8.2

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Which index can i use ?

2007-10-29 Thread Abandoned
Hi..
I want to create an index in postgresql & python.
My table:
id(int) | id2(int) | w(int) | d(int)

My query:
select id, w from test where id=x and id2=y (sometimes and d=z)

I have too many insert and select operations on this table.
Which index type can I use? Btree, Rtree, Gist or Hash?
Also I want (id, id2) to be unique.
These are my current indexes. Do they give me good performance?
CREATE UNIQUE INDEX ind1 ON test USING btree (id, id2)
CREATE INDEX ind2 ON test USING btree (id)
CREATE INDEX ind3 ON test USING btree (id2)
CREATE INDEX ind4 ON test USING btree (w)
CREATE INDEX ind5 ON test USING btree (d)

I'm sorry for my bad english.
Kind regards..


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Selecting K random rows - efficiently!

2007-10-29 Thread Patrick TJ McPhee
In article [EMAIL PROTECTED], cluster  [EMAIL PROTECTED] wrote:
%  How important is true randomness?
% 
% The goal is an even distribution but currently I have not seen any way 
% to produce any kind of random sampling efficiently. Notice the word 

How about generating the ctid randomly? You can get the number of pages
from pg_class and estimate the number of rows either using the number
of tuples in pg_class or just based on what you know about the data.
Then just generate two series of random numbers, one from 0 to the number
of pages and the other from 1 to the number of rows per page, and keep
picking rows until you have enough numbers. Assuming there aren't too
many dead tuples and your estimates are good, this should retrieve n rows
with roughly n look-ups.
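A sketch of the probing step (table name, page and tuple numbers are placeholders; probing an empty or dead slot simply returns zero rows):

```sql
-- Page count and row estimate for sizing the random ranges
SELECT relpages, reltuples FROM pg_class WHERE relname = 'mytable';

-- Fetch the tuple at one randomly generated tuple identifier
SELECT * FROM mytable WHERE ctid = '(123,7)'::tid;
```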

If your estimates are low, there will be tuples which can never be selected,
and so far as I know, there's no way to construct a random ctid in a stock
postgres database, but apart from that it seems like a good plan. If
efficiency is important, you could create a C function which returns a
series of random tids and join on that.
-- 

Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] select count() out of memory

2007-10-29 Thread Paul Boddie
On 25 Oct, 17:36, [EMAIL PROTECTED] wrote:

 The design is based on access patterns, i.e. one partition represents a
 group of data along a discrete axis, so the partitions are the perfect for
 modeling that. Only the last partition will be used on normal cases. The
 previous partitions only need to exists until the operator deletes them,
 which will be sometime between 1-6 weeks.

This has been interesting reading because I'm working on a system
which involves a more batch-oriented approach in loading the data,
where I've found partitions to be useful both from a performance
perspective (it looks like my indexes would be inconveniently big
otherwise for the total volume of data) and from an administrative
perspective (it's convenient to control the constraints for discrete
subsets of my data). However, if all but the most recent data remains
relatively stable, why not maintain your own statistics for each
partition or, as someone else suggested, use the pg_class statistics?

I'd just be interested to hear what the best practices are when tables
get big and where the access patterns favour the most recently loaded
data and/or reliably identifiable subsets of the data, as they seem to
in this case and in my own case. The various tuning guides out there
have been very useful, but isn't there a point at which partitioning
is inevitable?

Paul


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Delete/Update with order by

2007-10-29 Thread Scott Marlowe
On 10/25/07, Evandro Andersen [EMAIL PROTECTED] wrote:
 In Oracle you can use this:

 DELETE FROM A WHERE A1 = 10 ORDER BY A2

 There is something in the Postgresql ?

That doesn't work in Oracle 9i; I tried it.

No, there's nothing exactly like it in pgsql.
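A common workaround (a sketch, not an exact equivalent of the Oracle syntax; table and column names taken from the example above) is to push the ORDER BY into a subquery and delete by ctid. This only makes a practical difference together with LIMIT, since the order of a complete delete is irrelevant:

```sql
-- Delete only the first 100 matching rows, taken in a2 order.
DELETE FROM a
WHERE ctid IN (
    SELECT ctid
    FROM a
    WHERE a1 = 10
    ORDER BY a2
    LIMIT 100
);
```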

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Collation sequence and use of operatings system's locale

2007-10-29 Thread Tom Lane
James Gates [EMAIL PROTECTED] writes:
 I'm working with an issue where it seems the PostgreSQL server collation 
 sequence for certain locales doesn't match the operating system's 
 collation sequence for the same locale (set during initdb).
 I can reproduce this on both 8.1.9 & 8.2.5 on Solaris (both Nevada & 10).

FWIW, your example works as expected for me with 8.3 CVS tip on Fedora
Core 6:

postgres=# \encoding
LATIN1
postgres=# show lc_collate ;
   lc_collate   

 sv_SE.iso88591
(1 row)

postgres=# select barf,ascii(barf) from jim order by barf asc;
 barf | ascii 
------+-------
 a    |    97
 A    |    65
 Å    |   197
 ä    |   228
 Ä    |   196
 Ö    |   214
(6 rows)

 In fact, I can't tell where this order is derived from?

In this context Postgres believes whatever strcoll() tells it.  I don't
see any obvious hole in your methodology (except that I'm dubious about
the exhibited arguments for sort(1)) so it seems possible you've got a
bug in Solaris' strcoll().  But you should probably triple-check the
question of whether what's arriving at strcoll() is in the encoding it
expects.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL and AutoCad

2007-10-29 Thread Richard Broersma Jr
--- On Thu, 10/25/07, Andy [EMAIL PROTECTED] wrote:
  Is there any way of converting text from an
  AutoCad (.dwg or .dxf) file into
  a PostgreSQL database?
 Do you want AutoCad to edit the drawings right out of the
 database?  How would you want to put them in/get them out
 of the database?

I think the more traditional problem is to extract information embedded (within 
blocks) in a drawing to produce a bill of material.  As long as the text is 
stored in a block it is a trivial task.  On the other hand, if the text is free 
floating in the drawing, finding it is a little more difficult but still 
possible using Lisp or VBA.

AutoCAD has prebuilt tools to extract/link data from blocks to any ODBC 
compliant database.  Of course, the holy grail would be to eliminate AutoCAD 
altogether and then render drawings from the data stored in the database. :-)
Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Pgaccess

2007-10-29 Thread Scott Marlowe
On 10/25/07, Joanne Salerno [EMAIL PROTECTED] wrote:
 I recently went from Postgres 7.4 to 8.2.4 (Linux Platform).

 In 7.4 I relied on forms created locally within the utility Pgaccess to
 perform various database changes.  Upon conversion to 8.2.4 I have not
 been able to get Pgaccess to read Postgres pga_forms table contents.
 Pgaccess recognizes the Postgres 8.2.4 pga tables, however, within
 Pgaccess, when I choose an existing form and select design a form, a
 blank form comes up instead of the defined widgets and code.  Any clues?

 Is Pgaccess compatible with Postgres 8.2.4?  I noticed the Pgaccess
 web-site no longer exists.  If I cannot get Pgaccess to perform with
 Postgres 8.2.4, is there a different package that the supporting
 community recommends?

pgaccess is pretty old.  Try pgadmin3.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Raid Chunk Sizes for DSS type DB

2007-10-29 Thread Scott Marlowe
On 10/29/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
 On Tue, 30 Oct 2007 09:42:37 +0800
 Ow Mun Heng [EMAIL PROTECTED] wrote:

  It's not an optimal setup but since I only have 3x500G drives to play
  with, I can't build a Raid10, so I'm going for Raid5 to test out
  capability before I decide on Raid5 vs Raid1 tradeoff. (Raid1 = No
  Fault tolerance since 3 drives)
 

 Uhhh RAID 1 is your best bet. You get fault tolerance (mirrored) plus
 you get a hot spare (3 drives).

 RAID 5 on the other hand will be very expensive on writes.

I agree. Note that at least in Linux, you can have 2 disks in a
mirror.  It makes reads faster; writes usually aren't affected too negatively.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] how to unsubscribe from this group?

2007-10-29 Thread Troy Rasiah

http://archives.postgresql.org/pgsql-general/

-- 
Troy Rasiah

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] how to unsubscribe from this group?

2007-10-29 Thread Amrita Prasad
  




Re: [GENERAL] SSL Connectivity on Solaris 10 x86

2007-10-29 Thread Tom Lane
Dot Yet [EMAIL PROTECTED] writes:
 I am seeing this Connection reset by peer message in the
 postmaster.log file, but the connection between the client and server
 does get established and works fine too...

What pg_hba.conf setup are you using?  I'm thinking that this might be
expected behavior if it's password-based, because psql drops the
connection before prompting the user for a password when the server
tells it a password is needed.
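For example, with a password-based entry like this in pg_hba.conf (addresses and method illustrative), psql connects once, learns that a password is needed, drops that connection to prompt for it, and then reconnects:

```
# TYPE    DATABASE  USER  CIDR-ADDRESS    METHOD
hostssl   all       all   192.168.0.0/24  md5
```

That dropped first connection would account for the reset-by-peer message in the server log.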

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Base Backups from PITR Standby

2007-10-29 Thread Tom Lane
Brian Wipf [EMAIL PROTECTED] writes:
 The process I use that leads to the warnings is simple:

 I use pg_controldata to determine the current checkpoint WAL location  
 of the standby server. I ensure I have this WAL file and all newer  
 WALs. I backup all files under the standby's database cluster  
 directory, including all tablespaces, which are soft linked. I restore  
 the database cluster directory to its new location and create soft  
 links in pg_tblspc to point to the proper tablespace directories just  
 backed up. I create the pg_xlog directory containing an archive_status  
 directory.

This seems not exactly per spec.  Why are you not using pg_start_backup
and pg_stop_backup around the copying of the database files?
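The documented sequence (label text arbitrary) brackets the file copy like this, assuming the server whose files are being copied accepts queries:

```sql
SELECT pg_start_backup('nightly base backup');
-- ... copy the cluster directory and tablespaces with tar/rsync ...
SELECT pg_stop_backup();
```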

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Raid Chunk Sizes for DSS type DB

2007-10-29 Thread Richard Broersma Jr
--- On Mon, 10/29/07, Ow Mun Heng [EMAIL PROTECTED] wrote:
 (Raid1 = No Fault tolerance since 3 drives)
Raid1 with three drives will have fault tolerance.  You will have three disks 
with the same image.  This is triple redundancy.  This could greatly improve 
select performance.

Having said this, I've used software raid5 and am currently using raid10 
implemented with PCI IDE cards, but have had data loss errors occur with both 
setups.  I am not sure if the problem is in the drives, the PCI cards, or the 
software raid setup. (Thank goodness that this is my toy computer.) 

However, I've used RAID1 with great success for my OS partitions and haven't 
had any problems over the last couple of years.

Regards,
Richard Broersma

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings