Re: [GENERAL] How to implement a value alias or synonym

2013-07-10 Thread Carlos Oliva
http://www.postgresql.org/message-id/440d446e.7040...@cybertec.at

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Gauthier, Dave
Sent: Wednesday, July 10, 2013 3:49 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] How to implement a value alias or synonym

 

Hi:

 

v9.0.1 on linux

 

Is there a way to query on synonyms of a value transparent to the user?

For example, a column called animal can have any text value, including
'horse' and 'cow' and 'pig'.  But I want the user to find all the
animal='pig' records if they specify 'hog' instead.  

So.. 

   select * from thetable where animal = 'pig'

..would generate the same results as...

   select * from thetable where animal = 'hog'

 

There are no 'hog' records in the table, just 'pig' records.  And I want
to have control over the synonyms, perhaps in some sort of ref table ?

 

Thanks in Advance for any ideas.
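A minimal sketch of the reference-table idea (table and column names here are hypothetical): keep a synonym table that maps each alias to the value actually stored, and join it into the query.

   create table animal_synonym (alias text primary key, canonical text not null);
   insert into animal_synonym values ('hog', 'pig'), ('pig', 'pig');

   select t.*
     from thetable t
     join animal_synonym s on s.canonical = t.animal
    where s.alias = 'hog';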






Re: [GENERAL] How to implement a value alias or synonym

2013-07-10 Thread Carlos Oliva
Also this:

http://www.postgresql.org/docs/9.2/static/textsearch-configuration.html
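If full text search is an option, the synonym dictionary described there can do the mapping inside a text search configuration. A rough sketch (all names are hypothetical; the file $SHAREDIR/tsearch_data/animal_syn.syn would contain lines such as "hog pig", and this only helps if matching is done with @@ rather than plain equality):

   CREATE TEXT SEARCH DICTIONARY animal_syn (TEMPLATE = synonym, SYNONYMS = animal_syn);
   CREATE TEXT SEARCH CONFIGURATION animal_search (COPY = english);
   ALTER TEXT SEARCH CONFIGURATION animal_search
       ALTER MAPPING FOR asciiword WITH animal_syn, english_stem;

   SELECT * FROM thetable
    WHERE to_tsvector('animal_search', animal) @@ to_tsquery('animal_search', 'hog');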

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Gauthier, Dave
Sent: Wednesday, July 10, 2013 3:49 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] How to implement a value alias or synonym

 

Hi:

 

v9.0.1 on linux

 

Is there a way to query on synonyms of a value transparent to the user?

For example, a column called animal can have any text value, including
'horse' and 'cow' and 'pig'.  But I want the user to find all the
animal='pig' records if they specify 'hog' instead.  

So.. 

   select * from thetable where animal = 'pig'

..would generate the same results as...

   select * from thetable where animal = 'hog'

 

There are no 'hog' records in the table, just 'pig' records.  And I want
to have control over the synonyms, perhaps in some sort of ref table ?

 

Thanks in Advance for any ideas.






[GENERAL] Dynamic File Name for COPY TO in Stored Procedure

2012-02-23 Thread Carlos Oliva
Hi,
What would be the correct format for using a variable in a stored
procedure that uses COPY TO?
 
I have the current stored procedure:
CREATE FUNCTION Table_To_File(text) RETURNS void AS $delimeter$
   COPY (SELECT * FROM table_to_xml('table', true, false, '')) TO '$1'
WITH CSV QUOTE ' ';
$delimeter$
LANGUAGE SQL;
 
When I run the stored procedure with: psql -d dbname -c "select
Table_To_File('some_absolute_file_name');"  I get the error that I must
use absolute file names.
 
When I replace TO '$1' with TO '/tmp/toto.xml', the stored procedure
runs fine and creates the local file.
 
I have tried several different ways to escape the $1 variable to no
avail.  At best, I get the same error.
 
 





[GENERAL] COPY TO File: Using dynamic file name in stored procedure

2012-02-23 Thread Carlos Oliva
Hi,
What would be the correct format for using a variable in a stored
procedure that uses COPY TO?
 
I have the current stored procedure:
CREATE FUNCTION Table_To_File(text) RETURNS void AS $delimeter$
   COPY (SELECT * FROM table_to_xml('table', true, false, '')) TO '$1'
WITH CSV QUOTE ' ';
$delimeter$
LANGUAGE SQL;
 
When I run the stored procedure with: psql -d dbname -c "select
Table_To_File('some_absolute_file_name');"  I get the error that I must
use absolute file names.
 
When I replace TO '$1' with TO '/tmp/toto.xml', the stored procedure
runs fine and creates the local file.
 
I have tried several different ways to escape the $1 variable to no
avail.  At best, I get the same error.
 
 





Re: [GENERAL] Dynamic File Name for COPY TO in Stored Procedure

2012-02-23 Thread Carlos Oliva
That worked.  Thank you Adrian


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Thursday, February 23, 2012 9:58 AM
To: pgsql-general@postgresql.org
Cc: Carlos Oliva
Subject: Re: [GENERAL] Dynamic File Name for COPY TO in Stored Procedure

On Thursday, February 23, 2012 6:42:53 am Carlos Oliva wrote:
 Hi,
 What would it be the correct format for using a variable in a stored
 procedure that uses COPY TO?
 
 I have the current stored procedure:
 CREATE FUNCTION Table_To_File(text) RETURNS void AS $delimeter$
COPY (SELECT * FROM table_to_xml('table', true, false, '')) TO '$1'
 WITH CSV QUOTE ' ';
 $delimeter$
 LANGUAGE SQL;
 
 When I run the stored procedure: psql -d db name -c select
 Table_To_File('some_absolute_file_name');  I get the error that I
must
 use absolute file names.
 
 When I replace TO '$1' with TO '/tmp/toto.xml', the stored procedure
 runs fine and creates the local file.
 
 I have tried several different ways to escape the $1 variable to no
 avail.  At best, I get the same error.

You will need to switch to using plpgsql and its EXECUTE statement to build
the COPY statement:

http://www.postgresql.org/docs/9.0/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
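A minimal sketch of what that looks like (not the poster's exact function; the table name and CSV options are carried over from the question, and COPY to a file still requires an absolute path and superuser rights):

   CREATE OR REPLACE FUNCTION table_to_file(path text) RETURNS void AS $func$
   BEGIN
       -- build the COPY command as a string so the target file can vary
       EXECUTE $q$COPY (SELECT * FROM table_to_xml('table', true, false, ''))
                  TO $q$ || quote_literal(path) || $q$ WITH CSV QUOTE ' '$q$;
   END;
   $func$ LANGUAGE plpgsql;

   -- example call:  SELECT table_to_file('/tmp/toto.xml');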


-- 
Adrian Klaver
adrian.kla...@gmail.com






Re: [GENERAL] Schema, databse, or tables in different system folder

2009-06-05 Thread Carlos Oliva

Would the backup be unrecoverable if I shut down the database first?
Chris Browne cbbro...@acm.org wrote in message 
news:87ab4qfs48@dba2.int.libertyrms.com...

Carlos Oliva car...@pbsinet.com writes:

Is there a way to create a database or a table of a database in its own
folder?  We are looking for ways to backup the sytem files of the 
database
to tape and one to exclude some tables from this backup.  We can 
selectively

backup folders of the file system so we figure that if we can create a
schema or database or table in its own folder, we can backup our database
and exclude the tables selectively.  We are using Linux RedHat.  Thank 
you.


What you are describing is the use of tablespaces, which are documented 
here:

 http://www.postgresql.org/docs/8.3/static/manage-ag-tablespaces.html
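For illustration, a minimal tablespace setup might look like this (the directory and names are hypothetical; the directory must already exist and be owned by the postgres user):

   CREATE TABLESPACE archive_space LOCATION '/mnt/archive/pg_tblspc';
   CREATE TABLE archived_orders (id integer, payload text) TABLESPACE archive_space;
   ALTER TABLE some_existing_table SET TABLESPACE archive_space;   -- move an existing table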

There is, however, a severe problem with your backup plans, namely
that an attempt to selectively include/exclude tables in a physical
file backup is more or less certain to result in a totally
unrecoverable database.

Quoting from the section on filesystem level backup:
http://www.postgresql.org/docs/8.3/static/backup-file.html

 If you have dug into the details of the file system layout of the
 database, you might be tempted to try to back up or restore only
 certain individual tables or databases from their respective files
 or directories. This will not work because the information contained
 in these files contains only half the truth. The other half is in
 the commit log files pg_clog/*, which contain the commit status of
 all transactions. A table file is only usable with this
 information. Of course it is also impossible to restore only a table
 and the associated pg_clog data because that would render all other
 tables in the database cluster useless. So file system backups only
 work for complete backup and restoration of an entire database
 cluster.

Let me reiterate that last sentence:

 So file system backups only work for complete backup and restoration
 of an entire database cluster.

Your attempt to selectively backup specific directories will render
the backup effectively useless.
--
cbbrowne,@,linuxdatabases.info
http://linuxdatabases.info/info/unix.html
Microsoft   builds   product  loyalty   on   the   part  of   network
administrators and consultants, [these are] the only people who really
count  in the  Microsoft scheme  of  things. Users  are an  expendable
commodity.  -- Mitch Stone 1997 





Re: [GENERAL] Schema, databse, or tables in different system folder

2009-06-05 Thread Carlos Oliva
Thank you for your response.  The tablespace should work for us.  Perhaps you 
can help me with the following questions:
1) If we were to create a different tablespace for a database that has 
archival tables -- they will be backed up only once -- is it sufficient to back up the 
tablespace folder once?  We want to make sure that if we restore the 
tablespace folder from tape, we will in fact restore the full database and data.
2) Can a schema have its own tablespace in a database that has more than one 
schema?

Thank you very much.
-Original Message-
From: Grzegorz Jaśkiewicz [mailto:gryz...@gmail.com] 
Sent: Tuesday, June 02, 2009 10:27 AM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Schema, databse, or tables in different system folder

yes, it is called tablespace.



[GENERAL] Schema, databse, or tables in different system folder

2009-06-05 Thread Carlos Oliva
Thank you for your response.  The tablespace should work for us.  Perhaps you 
can help me with the following questions:

1) If we were to create a different tablespace for a database that has 
archival tables -- they will be backed up only once -- is it sufficient to back up the 
tablespace folder once?  We want to make sure that if we restore the 
tablespace folder from tape, we will in fact restore the full database and data.

2) Can a schema have its own tablespace in a database that has more than one 
schema?

 

Thank you very much.

-Original Message-

From: Grzegorz Jaśkiewicz [mailto:gryz...@gmail.com]

Sent: Tuesday, June 02, 2009 10:27 AM

To: Carlos Oliva

Cc: pgsql-general@postgresql.org

Subject: Re: [GENERAL] Schema, databse, or tables in different system folder

 

yes, it is called tablespace.

 



[GENERAL] Schema, database, or tables in different folders?

2009-06-05 Thread Carlos Oliva
Is there a way to create a database or a table of a database in its own
folder?  We are looking for a way to back up the system files of the
database to tape and to exclude some tables from this backup.  We
can selectively back up folders of the file system, so we figure that if
we can create a schema or database or table in its own folder, we can
back up our database and exclude the tables selectively.  We are using
Linux RedHat.  Thank you.

 



[GENERAL] Creating schema, database, or table in different folder.

2009-06-05 Thread Carlos Oliva
Is there a way to create a database or a table of a database in its own
folder?  We are looking for a way to back up the system files of the
database to tape and to exclude some tables from this backup.  We
can selectively back up folders of the file system, so we figure that if
we can create a schema or database or table in its own folder, we can
back up our database and exclude the tables selectively.  We are using
Linux RedHat.  Thank you.



Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-04 Thread Carlos Oliva
Thank you for your response, Grzegorz.  It is helping us a great deal to
understand the issues around backups.  Would any of the pg_xlog, pg_clog,
etc. files change for a table that has a stable structure and data?  That is, the
table undergoes several inserts and then it is never updated.
Grzegorz Jaskiewicz gryz...@gmail.com wrote in message 
news:2f4958ff0906031217h2a0bfe0t674f266d4397e...@mail.gmail.com...
On Wed, Jun 3, 2009 at 8:14 PM, Bill Moran wmo...@potentialtech.com wrote:
 In response to Carlos Oliva car...@pbsinet.com:

 Woudl it be possible to keep the current postgresql version running in a
 different port, install a new version of postgresql, and copy the data 
 from
 one version to the other while both versions are running? This might give
 us time to copy the tables and databases one at a time and reconfigure 
 the
 database access for parts of the application until we complete the 
 migration
 to the new version.

 Your best bet would be to install Slony-I. One of the main design goals
 for Slony is to allow interruption-free upgrades.

I don't think it is easy, but will do if you need to synchronize
data before switching.

-- 
GJ






Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-04 Thread Carlos Oliva
Thank you for your response, Bill.  It is helping us a great deal to
understand the issues around backups.  Would any of the pg_xlog, pg_clog,
etc. files change for a table that has a stable structure and data?  That is, the
table undergoes several inserts and then it is never updated.
Bill Moran wmo...@potentialtech.com wrote in message 
news:20090603161817.131e706e.wmo...@potentialtech.com...
 In response to Carlos Oliva car...@pbsinet.com:

 Can the synchronization with Slony run while the old database is still 
 being
 updated daily?  I am wondering if we can just let Slony run until the
 databases are fully synchronized and then switch databases.

 Yes, and yes.  That's exactly what Slony is for.

 -- 
 Bill Moran
 http://www.potentialtech.com
 http://people.collaborativefusion.com/~wmoran/

 





Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-04 Thread Carlos Oliva
Thanks again, Grzegorz, for your expeditious reply.  Would anything else change 
in the database for a table once it ceases to be updated?  We have several 
tables into which a number of records is inserted and never again updated -- 
the data is never updated, deleted, or added to again.  We are moving these 
tables into their own tablespace and getting some kind of snapshot copy of 
these tables.  We were thinking that we can restore them in the future 
without having to back them up several times.
Grzegorz Jaskiewicz gryz...@gmail.com wrote in message 
news:2f4958ff0906040446s6b5a38b7ka1a4b1f3a4ce...@mail.gmail.com...
pg_xlog and clog is something that is used during operation, and for
point in time recovery.
It doesn't go to database dump at all, not needed.


On Thu, Jun 4, 2009 at 12:32 PM, Carlos Oliva car...@pbsinet.com wrote:
 Thank you for your response Grzegorx. It is helping us a great deal to
 understand the issues around backups. Would any of the pg_xlog, pg_clog,
 etc change for a table that has a stable structure and data? That is, the
 table undergoes several inserts and then it is never updated
 Grzegorz Jaskiewicz gryz...@gmail.com wrote in message
 news:2f4958ff0906031217h2a0bfe0t674f266d4397e...@mail.gmail.com...
 On Wed, Jun 3, 2009 at 8:14 PM, Bill Moran wmo...@potentialtech.com 
 wrote:
 In response to Carlos Oliva car...@pbsinet.com:

 Woudl it be possible to keep the current postgresql version running in a
 different port, install a new version of postgresql, and copy the data
 from
 one version to the other while both versions are running? This might 
 give
 us time to copy the tables and databases one at a time and reconfigure
 the
 database access for parts of the application until we complete the
 migration
 to the new version.

 Your best bet would be to install Slony-I. One of the main design goals
 for Slony is to allow interruption-free upgrades.

 I don't think it is easy, but will do if you need to synchronize
 data before switching.

 --
 GJ








-- 
GJ






Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-04 Thread Carlos Oliva
In which state do we need to put the db?  We can use both types of backup 
strategy.  We can pg_dump the table and copy the tablespace folder along 
with anything else that we may need.
Grzegorz Jaskiewicz gryz...@gmail.com wrote in message 
news:2f4958ff0906040518l190af73dpff180755d567f...@mail.gmail.com...
On Thu, Jun 4, 2009 at 1:07 PM, Carlos Oliva car...@pbsinet.com wrote:
 Thanks again Grzgorz for your expedicious reply. Would anything else 
 change
 in the database for a table once it ceases to be updated? We have several
 tables for which a number of records is inserted and never again 
 updated --
 data is never updated, deleted, or inserted again. We are moving these
 tables into their own tablespace and getting some kind of snapshot copy of
 these tables. We were thinking that we can restore them in the future
 without having to back them up several times.


Well, if you do pg_dump, that data will be there.
If you copy the data directory, that isn't going to work unless you
put the db in the proper state.
If you do that, you need to copy the pg_xlog directory with it.

So you have to specify which backup strategy you are going to use.

-- 
GJ






Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-04 Thread Carlos Oliva
I think that I understand.  Would we need to stop the database and then do 
the copy?  Is this the state to which you are referring?  If the tables never 
changed after a week or so, what else would change in the database for these 
tables after a month, two months, or a year?  Would we need to put the 
database in the correct state a week later, a month later, a year later?
Grzegorz Jaskiewicz gryz...@gmail.com wrote in message 
news:2f4958ff0906040549u53bafe7br772033214d43e...@mail.gmail.com...
On Thu, Jun 4, 2009 at 1:23 PM, Carlos Oliva car...@pbsinet.com wrote:
 In which state do we need to put the db? We can use both types of backup
 strategy. We can pg_dump the table and copy the tablespace folder along
 with anyhting else that we may need.
Well, not quite. Pg_dump is fine, but you can't just copy data itself,
when server is running.


-- 
GJ






Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-04 Thread Carlos Oliva
Thank you for the link to the document.  It provides a wealth of 
information that reinforces your statements.  It is still somewhat unclear to 
me what it is that would change in the database for tables that are never 
modified (no inserts, updates, or deletes) after a certain point in time. 
That is, if a table is unchanged after a week, what in the database would 
change for the table later on?  We have some tables that we will use as a 
type of archive into which we would just insert some data for about a week 
or so and that will never again be updated.
Bill Moran wmo...@potentialtech.com wrote in message 
news:20090604095554.c2d57008.wmo...@potentialtech.com...
 In response to Carlos Oliva car...@pbsinet.com:

 I think that I understand.  Would we need to stop the databse and then do
 the copy?  Is this the state to which you are refering?  If the tables 
 never
 changed after a week or so, what else would change in the database for 
 these
 tables after a month, two months, or a year?  Would we need to put the
 databse in the correct state a week later, a month later, a year later?

 You really need to work on your posting etiquette a bit.  This thread is
 painful to read because everything is jumbled together.

 There are two supported methods for backing up data.  These are separate,
 you can do either or both, they have advantages and disadvantages.

 You should really read this chapter:
 http://www.postgresql.org/docs/8.3/static/backup.html

 It seems to me that all of the questions you're asking are answered in
 there.

 But, specifically, if you're using pg_dump, you can specify to only back
 up certain tables, or to back up everything _except_ certain tables, and
 that would allow you to back up tables that don't change much infrequently
 and tables that change a lot more often.  That will work fine from a
 database server standpoint.  Whether it works for your data in particular,
 is a question that only someone familiar with your data can answer.  My
 opinion: if you can't answer that question yourself, just back up 
 everything
 to be safe.
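 For example, pg_dump's -t and -T switches include or exclude specific
 tables (the names below are hypothetical):

    pg_dump -t archive_table mydb > archive_table.sql      # just the archival table
    pg_dump -T archive_table mydb > everything_else.sql    # everything except it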

 With filesystem level backup (or PITR, which is just filesystem backup
 without having to stop the server and a few other cool perks) you back up
 the entire database or nothing.

 Hope this helps.

 -- 
 Bill Moran
 http://www.potentialtech.com
 http://people.collaborativefusion.com/~wmoran/

 





Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-04 Thread Carlos Oliva
We will probably pg_dump the data for backups and look at using Slony for 
replication.  We thank you and Grzegorz for the time that you spent sharing 
your insights with us.

Bill Moran wmo...@potentialtech.com wrote in message 
news:20090604104302.50e23318.wmo...@potentialtech.com...
 In response to Carlos Oliva car...@pbsinet.com:

 Thank you for the link to the document.  It provides  a wealth of
 information that re-inforces your stements.  It is still somewhat unclear 
 to
 me what it is that would change in the database for tables that are never
 updated (not inserts, updates, or deltes) after a certain point in time.
 That is, if a table is unchanged after a week, what in the database would
 change for the table later on?  We have some tables that we will use as a
 type of archive into which we woudl just insert some data for about a 
 week
 or so and that will never again be updated.

 Your question is ambiguous, thus it's difficult to answer.  What do you 
 mean
 by change?  At what level are you looking a things?

 If you're talking about doing a pg_dump, then nothing changes.  If you 
 don't
 update/delete from that table, then it's going to be the same table every
 time you pg_dump it.

 If you're talking about doing a filesystem-level backup, then I wouldn't
 assume anything.  Depending on various maintenance schedules, a vacuum
 or reindex could change the files around (although the data doesn't
 change).

 Hope that clarifies.

 Bill Moran wmo...@potentialtech.com wrote in message
 news:20090604095554.c2d57008.wmo...@potentialtech.com...
  In response to Carlos Oliva car...@pbsinet.com:
 
  I think that I understand.  Would we need to stop the databse and then 
  do
  the copy?  Is this the state to which you are refering?  If the tables
  never
  changed after a week or so, what else would change in the database for
  these
  tables after a month, two months, or a year?  Would we need to put the
  databse in the correct state a week later, a month later, a year 
  later?
 
  You really need to work on your posting etiquette a bit.  This thread 
  is
  painful to read because everything is jumbled together.
 
  There are two supported methods for backing up data.  These are 
  separate,
  you can do either or both, they have advantages and disadvantages.
 
  You should really read this chapter:
  http://www.postgresql.org/docs/8.3/static/backup.html
 
  It seems to me that all of the questions you're asking are answered in
  there.
 
  But, specifically, if you're using pg_dump, you can specify to only 
  back
  up certain tables, or to back up everything _except_ certain tables, 
  and
  that would allow you to back up tables that don't change much 
  infrequently
  and tables that change a lot more often.  That will work fine from a
  database server standpoint.  Whether it works for you data in 
  particular,
  is a question that only someone familiar with your data can answer.  My
  opinion: if you can't answer that question yourself, just back up
  everything
  to be safe.
 
  With filesystem level backup (or PITR, which is just filesystem backup
  without having to stop the sever and a few other cool perks) you back 
  up
  the entire database or nothing.
 
  Hope this helps.
 
  -- 
  Bill Moran
  http://www.potentialtech.com
  http://people.collaborativefusion.com/~wmoran/
 
 





 -- 
 Bill Moran
 http://www.potentialtech.com
 http://people.collaborativefusion.com/~wmoran/

 





[GENERAL] Upgrading Database: need to dump and restore?

2009-06-03 Thread Carlos Oliva
If I were to upgrade the database version, would I need to dump and restore 
all the data? 





Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-03 Thread Carlos Oliva
For large databases or tables, this will take a long time to do.  Are there 
any alternatives to benefit from the improved versions of PostgreSQL without 
having to dump some large tables that never change?
Bill Moran wmo...@potentialtech.com wrote in message 
news:20090603114544.434edcce.wmo...@potentialtech.com...
 In response to Carlos Oliva car...@pbsinet.com:

 If I were to upgrade the database version, would I need to dump and 
 restore
 all the data?

 If you upgrade patch releases (i.e. from 8.3.4 - 8.3.5) then usually no,
 but see the release notes for the versions you're upgrading, because there
 are occasionally exceptions.

 If you're updating to a major release (8.2.x - 8.3.x), then yes.
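 A typical major-version upgrade therefore looks roughly like this (paths
 are illustrative):

    pg_dumpall > /backup/cluster.sql        # dump taken while the old server is running
    # install the new major version, initdb a fresh data directory, start the new server
    psql -f /backup/cluster.sql postgres    # restore into the new server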

 -- 
 Bill Moran
 http://www.potentialtech.com
 http://people.collaborativefusion.com/~wmoran/

 





Re: [GENERAL] Schema, databse, or tables in different system folder

2009-06-03 Thread Carlos Oliva
Thank you for your response, Chris.  It is helping us a great deal to 
understand the issues around backups.  Would any of the pg_xlog, pg_clog, 
etc. files change for a table that has a stable structure and data?  That is, the 
table undergoes several inserts and then it is never updated.
Chris Browne cbbro...@acm.org wrote in message 
news:87prdlgurv@dba2.int.libertyrms.com...
 Carlos Oliva oli...@earthlink.net writes:
 Would the backup be unrecoverable if I shutdown the databse first?

 If the backup includes pg_xlog and pg_clog, as well as all of the
 database metadata files, then whatever portions *are* included are
 likely to be somewhat usable.

 The portions not included in the backup will obviously not be usable.

 But supposing your backup does not include *all* metadata (e.g. -
 pg_catalog), pg_xlog and pg_clog, then it is unlikely that you'll be
 able to recover *anything at all* from it.

 The intent is that filesystem backups only work for complete backup
 and restoration of an entire database cluster.

 Backing up *part* of your filesystem is Not Recommended in the
 documentation.  If you set a policy of doing so, you have to accept
 that you are doing this contrary to documented recommendations, and at
 considerable peril to your data.
 -- 
 (reverse (concatenate 'string ofni.secnanifxunil @ enworbbc))
 http://cbbrowne.com/info/multiplexor.html
 Rules of the Evil Overlord #25.  No matter how well it would perform,
 I  will never  construct any  sort  of machinery  which is  completely
 indestructible  except  for   one  small  and  virtually  inaccessible
 vulnerable spot. http://www.eviloverlord.com/ 





Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-03 Thread Carlos Oliva
Would I need to upgrade the entire cluster or can I just upgrade database 
wise?
Grzegorz Jaskiewicz gryz...@gmail.com wrote in message 
news:2f4958ff0906030845t526db574q464c17072fadd...@mail.gmail.com...
 if you upgrade to a different major version, yes.  If between minor
 releases (say 8.1.4 to 8.1.5) then no, with a few exceptions.  Make sure
 you read the release notes between versions to find out.

 





Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-03 Thread Carlos Oliva
Would it be possible to keep the current postgresql version running on a 
different port, install a new version of postgresql, and copy the data from 
one version to the other while both versions are running?  This might give 
us time to copy the tables and databases one at a time and reconfigure the 
database access for parts of the application until we complete the migration 
to the new version.
Grzegorz Jaskiewicz gryz...@gmail.com wrote in message 
news:2f4958ff0906031157v32fb9810j9476fd950e494...@mail.gmail.com...
 On Wed, Jun 3, 2009 at 5:11 PM, Carlos Oliva car...@pbsinet.com wrote:
 Would I need to upgrade the entire cluster or can I just upgrade database 
 wise?
 Obviously the whole installation has to be dumped and restored.  Whether you
 are going to dump database by database, or do pg_dumpall, is up to
 you, and mainly depends on cluster size.
 It's just that the binary format of stored files changes from major
 release to release, and postgresql will refuse to start with an old-format
 data directory (where it stores all databases).

 -- 
 GJ

 





Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-03 Thread Carlos Oliva
Grzegorz Jaskiewicz gryz...@gmail.com wrote in message 
news:2f4958ff0906031214k3dfaa4b2mae5733d7345f7...@mail.gmail.com...
On Wed, Jun 3, 2009 at 8:03 PM, Carlos Oliva car...@pbsinet.com wrote:
 Woudl it be possible to keep the current postgresql version running in a
 different port, install a new version of postgresql, and copy the data 
 from
 one version to the other while both versions are running? This might give
 us time to copy the tables and databases one at a time and reconfigure the
 database access for parts of the application until we complete the 
 migration
 to the new version.
Yes, it is possible. You can set up the new version of postgresql with a new
data directory in a different location (say /var/pg_new/data), run it
on a different port, and have multiple postgresql installations running
side by side.
What OS do you run, and where do you get your postgresql binaries from?
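A rough outline of that setup (paths and port numbers are illustrative):

   initdb -D /var/pg_new/data                     # run with the new version's binaries
   # set port = 5433 in /var/pg_new/data/postgresql.conf, then start the new server
   pg_dumpall -p 5432 | psql -p 5433 postgres     # pipe the old cluster into the new one

This moves everything in one pass rather than table by table, but it shows the two servers coexisting on different ports.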


_Please stop replying on top_. Top posting is a bad thing on mailing lists.


Probably this will work for us.  We are using RedHat and I think that we may 
be getting binaries from a couple of sources. 





Re: [GENERAL] Upgrading Database: need to dump and restore?

2009-06-03 Thread Carlos Oliva
Can the synchronization with Slony run while the old database is still being 
updated daily?  I am wondering if we can just let Slony run until the 
databases are fully synchronized and then switch databases.
Bill Moran wmo...@potentialtech.com wrote in message 
news:20090603153556.f05e6bd2.wmo...@potentialtech.com...
 In response to Grzegorz Jaskiewicz gryz...@gmail.com:

 On Wed, Jun 3, 2009 at 8:14 PM, Bill Moran wmo...@potentialtech.com 
 wrote:
  In response to Carlos Oliva car...@pbsinet.com:
 
  Woudl it be possible to keep the current postgresql version running in 
  a
  different port, install a new version of postgresql, and copy the data 
  from
  one version to the other while both versions are running? This might 
  give
  us time to copy the tables and databases one at a time and reconfigure 
  the
  database access for parts of the application until we complete the 
  migration
  to the new version.
 
  Your best bet would be to install Slony-I. One of the main design goals
  for Slony is to allow interruption-free upgrades.

 I don't think it is easy, but will do if you need to synchronize
 data before switching.

 easy was not the point.  I gathered from his comments that downtime is
 an issue, and I know (from experience) that Slony provides the ability
 to upgrade with almost no downtime, even with very large databases.

 His plan of migrating tables one at a time may work, but it's 
 frighteningly
 error-prone.  If he copies a table, how does he know the data hasn't
 changed during the copy?  What if he doesn't quite get all the clients
 switched over all at once?  How do you do a JOIN when one table is in
 one database and the other somewhere else?

 Once the DBs are in sync with Slony, a single command will switch to the
 new server.  If it doesn't go well (because he has a client compatibility
 problem, for example -- casts anyone?) it's a simple process to switch
 back, all with a guarantee that his data will never be lost, out of sync
 or corrupted.

 And if his application requires small downtime windows, this is a process
 he will benefit from getting familiar with anyway.

 -- 
 Bill Moran
 http://www.potentialtech.com
 http://people.collaborativefusion.com/~wmoran/

 





[GENERAL] Schema, databse, or tables in different system folder

2009-06-02 Thread Carlos Oliva
Is there a way to create a database or a table of a database in its own 
folder?  We are looking for a way to back up the system files of the database 
to tape and to exclude some tables from this backup.  We can selectively 
back up folders of the file system, so we figure that if we can create a 
schema or database or table in its own folder, we can back up our database 
and exclude the tables selectively.  We are using Linux RedHat.  Thank you.







Re: [GENERAL] Unexplained case insensitive results

2007-04-02 Thread Carlos Oliva
Hi Lloyd,

Please ignore searches on names that are in lower case.  The PBSI-EMR will
search only on first and last names that are in upper case.  In this way, we
are synchronized with PBSI-DOC which uses and sends only upper case names,
and we avoid searching problems.  Teresa and I have discussed adding extra
fields to the patient tables in which the users can enter names with any
capitalization that they wish to use.  These will be the names that they
will see on the screen but the queries and searches will be done on the
upper case names.  I have discussed this with Graydon and Teresa several
times in the past two years, so I would prefer that you postpone the
research on this and devote your energies to other endeavors.

 

  _  

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Lloyd Mason
Sent: Friday, March 30, 2007 4:44 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Unexplained case insensitive results

 

There appears to be a difference in the results from the following SQL
statement when run on an 8.1.5 version versus a 8.1.8 version:

SELECT T1.MRPtPro AS MRPtPro, T1.MRPtDOB, T1.MRPtSSN, T1.MRPtNbr, T2.MRMDNam
AS MRPtProN, T1.MRPtChtCmpFlg, T1.MRPtPhA, T1.MRPtPhP, T1.MRPtPhS,
T1.MRPtFNam, T1.MRPtLNam, T1.MRPtSts, T1.PrtNbr FROM (MR0011 T1 LEFT JOIN
MR00051 T2 ON T2.PrtNbr = T1.PrtNbr AND T2.MRMDNbr = T1.MRPtPro) WHERE
(T1.PrtNbr = 1 and T1.MRPtSts = 'A' and T1.MRPtLNam = 'person
') AND (T1.MRPtLNam  'persoo') ORDER BY T1.PrtNbr,
T1.MRPtSts,

T1.MRPtLNam, T1.MRPtFNam

 

The column MRPtLNam has both upper and lower case data.

 

The 8.1.5 version shows all matches both upper and lower case.  While the
8.1.8 version shows only those entries that are lower case.  Any ideas as to
why this is?  

 

We would like it to show all matches (case-insensitive) that are returned in
8.1.5 but would like to be on a newer version of the database 8.1.8 and
eventually 8.2.X.

 

Both databases are UTF-8 encoding and running on REDHAT EL 4.0.  Both
databases were created from a pg_dump that came out of a 7.4.5 version of
Postgresql with SQL_ASCII encoding.  I don't think the encoding should
matter but included it for your review.

 



[GENERAL] Backwards index scan

2006-06-06 Thread Carlos Oliva








Are there any configurations/flags that we should re-set for
the database (v 7.4.x) in order to enable a backwards scan on an index?
We are trying to query a table in descending order. We added an index
that we were hoping would be scanned backwards but EXPLAIN never indicates that
the optimizer will carry out a backwards scan on the index that we added to the
table. EXPLAIN indicates that the optimizer will always use a sequential
scan if we order the query in descending order.



OUR TESTS

We are conducting a simple test to assess whether the optimizer
ever uses the index. The table has several columns and the select statement
is as follows: select * from ord0007 order by prtnbr, ordschdte desc. The
index that we added is ord0007_k btree (prtnbr, ordschdte). Prtnbr
is numeric(10,0) not null, and ordschdte is date.



We find that the optimizer uses the index for the query if
we set enable_sort to off and the query uses ordschdte in ascending order as
follows: select * from ord0007 order by prtnbr, ordschdte. For this query,
EXPLAIN returns the following output:


                                   QUERY PLAN

 Index Scan using ord0007_k on ord0007  (cost=0.00..426.03 rows=232 width=1816)
(1 row)



However the optimizer uses a sequential scan if we order by
a descending ordschdte as follows: select * from ord0007 order by prtnbr,
ordschdte desc. For this query, whether we set the enable_sort to on or
off, EXPLAIN returns the following output:


                         QUERY PLAN

 Sort  (cost=10155.44..10156.02 rows=232 width=1816)
   Sort Key: prtnbr, ordschdte
   ->  Seq Scan on ord0007  (cost=0.00..146.32 rows=232 width=1816)
(3 rows)








Re: [GENERAL] Backwards index scan

2006-06-06 Thread Carlos Oliva
Thanks for your response, Alan.  This indeed corrects the problem as long as
we configure the database with enable_seqscan=false.

Perhaps you can help me with a side effect of using this index:  Rows with
null dates seem to fall off the index.  When ordschdte is null, the
query fails to return the rows for which ordschdte is null.  We had to
resort to a second query that uses a sequential scan to retrieve the rows
that have a null ordschdte.  Is there any kind of index that we can create
that would allow us to order by ordschdte and which would retrieve rows with
null dates?

Thanks in advance for your response.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alan Hodgson
Sent: Tuesday, June 06, 2006 11:05 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Backwards index scan

On June 6, 2006 07:59 am, Carlos Oliva [EMAIL PROTECTED] wrote:
 We are conducting a simple test to asses if the optimizer ever uses the
 index.  The table has several columns and the select statement is as
 follows: select * from ord0007 order by prtnbr, ordschdte desc.  The
 index that we added is ord0007_k btree (prtnbr, ordschdte).  Prtnbr is
 numeric(10,0) not null, and ordschdte is date.

You have to order by prtnbr desc, ordschdte desc to have the index used 
the way you want.  You can re-order in an outer query if you need to.
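In other words, using the same table and index as above, the descending query only matches the index when both keys descend:

   set enable_sort to off;   -- as in the test above, just to expose the plan
   explain select * from ord0007 order by prtnbr desc, ordschdte desc;
   -- expected plan node: Index Scan Backward using ord0007_k on ord0007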

-- 
Alan



[GENERAL] Reindexdb

2006-05-11 Thread Carlos Oliva








Hi,

Where can I find information about installing and
running contrib/reindexdb? I have searched the manuals and appendixes without
much luck. I just know that there is a contrib for reindexing an entire
database.



We run postgresql (v 7.4.x) in Linux RedHat.








Re: [GENERAL] Reindexdb

2006-05-11 Thread Carlos Oliva
Hi Joshua,
I think that it was installed from a RedHat distribution and the
installation left out reindexdb from the contrib folder.  I will look into
this.

In the meantime maybe I can ask you:  Would a VACUUM FULL ANALYZE of a
database also reindex the user indexes of the database?  I am trying to
eliminate index bloating.
 
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joshua D. Drake
Sent: Thursday, May 11, 2006 11:27 AM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Reindexdb

Carlos Oliva wrote:
 --
 
 Hi,
 
 Where can I find information about installing and running 
 contrib/reindexdb?  I have searched the manuals and appendixes without 
 much luck.  I just know that there is a contrib for reindexing an entire 
 database.
 
  
 
 We run postgresql (v 7.4.x) in Linux RedHat.
 

Did you install from source? If so... it is in the source tree under 
contrib.

If you are running an RPM it should be in the rpm postgresql-contrib

Sincerely,

Joshua D. Drake



-- 

=== The PostgreSQL Company: Command Prompt, Inc. ===
  Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
  Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [GENERAL] Reindexdb

2006-05-11 Thread Carlos Oliva
Thank you again Joshua.

Perhaps I can ask you another question: How does a VACUUM FULL ANALYZE
differ from a reindex?  A VACUUM FULL ANALYZE outputs messages about
pages reclaimed for indexes, so I was under the impression that
obsolete index pages were reclaimed by the VACUUM FULL ANALYZE.

Clustering could be tricky for me because in my multi-developer environment,
I will have difficulty controlling which indexes are deleted and created.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joshua D. Drake
Sent: Thursday, May 11, 2006 12:01 PM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Reindexdb

Carlos Oliva wrote:
 Hi Joshua,
 I think that it was installed from a RedHat distribution and the
 installation left out reindexdb from the contrib folder.  I will look into
 this.
 
 In the mean time maybe I can ask you:  Does VACUUM FULL ANALYZE of a
 database would also reindex the user indexes of a database?  I am trying
to
 eliminate index bloating.

Well on 7.4 vacuum full analyze will deal with part of the problem but 
not all. If you are willing to run a full, I would suggest looking into 
cluster instead.
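For reference, the two options on a 7.4 server look roughly like this (object names are hypothetical):

   REINDEX TABLE mytable;            -- rebuilds every index on the table
   CLUSTER mytable_idx ON mytable;   -- 7.4 syntax: rewrites the table in index order and rebuilds its indexes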

Sincerely,

Joshua D. Drake

  
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Joshua D. Drake
 Sent: Thursday, May 11, 2006 11:27 AM
 To: Carlos Oliva
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Reindexdb
 
 Carlos Oliva wrote:
 --

 Hi,

 Where can I find information about installing and running 
 contrib/reindexdb?  I have searched the manuals and appendixes without 
 much luck.  I just know that there is a contrib for reindexing an entire 
 database.

  

 We run postgresql (v 7.4.x) in Linux RedHat.

 
 Did you install from source? If so... it is in the source tree under 
 contrib.
 
 If you are running an RPM it should be in the rpm postgresql-contrib
 
 Sincerely,
 
 Joshua D. Drake
 
 
 


-- 

=== The PostgreSQL Company: Command Prompt, Inc. ===
  Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
  Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





[GENERAL] ExclusiveLock without a relation in pg_locks

2006-02-23 Thread Carlos Oliva








Would connections to a database require creating an extra
ExclusiveLock? We have some connections to the database that happen to be idle
in transaction, and their pids have a granted ExclusiveLock
in pg_locks. I cannot discern the tables where the ExclusiveLock is being
held because the relation field is blank.



Moreover, there are other connections to the database coming
from the same ip address as that of the connection with the ExclusiveLock.
Some of the pids of these other connections seem to have different kinds of
locks (AccessShareLock), so I am not quite sure why the pids with the
ExclusiveLocks are necessary.



How could I find out which tables are being locked when I
see an ExclusiveLock in pg_locks?








Re: [GENERAL] ExclusiveLock without a relation in pg_locks

2006-02-23 Thread Carlos Oliva
Thank you very much for your answer.  I think that I am seeing those self
transaction id locks as ExclusiveLocks.

Would you expect to see an ExclusiveLock with a query of type SELECT (not
SELECT ... FOR UPDATE, UPDATE, or INSERT)?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael Fuhr
Sent: Thursday, February 23, 2006 10:05 AM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ExclusiveLock without a relation in pg_locks

On Thu, Feb 23, 2006 at 08:54:36AM -0500, Carlos Oliva wrote:
 Would connections to a database require crating an extra ExclusiveLock? We
 have some connections to the database that happen to be idle in
 transaction and their pids have a granted Exclusive Lock in pg_locks.
I
 cannot discern the tables where the ExclusiveLock is being held because
the
 relation field is blank.

http://www.postgresql.org/docs/8.1/interactive/view-pg-locks.html

Every transaction holds an exclusive lock on its transaction ID
for its entire duration.  If one transaction finds it necessary to
wait specifically for another transaction, it does so by attempting
to acquire share lock on the other transaction ID.  That will succeed
only when the other transaction terminates and releases its locks.

If the relation column is null then you're probably seeing these
transaction ID locks.

 How could I find out the tables that are being locked when I see an
 ExclusiveLock in pg_locks. 

An easy way to convert a relation's oid to its name is to cast it
to regclass:

SELECT relation::regclass AS relname, * FROM pg_locks;

-- 
Michael Fuhr



Re: [GENERAL] ExclusiveLock without a relation in pg_locks

2006-02-23 Thread Carlos Oliva
Thank you very much for your answer.  I think that I am seeing those self
transaction id locks as ExclusiveLock.

Would you expect to see an ExclusiveLock with a query of type SELECT (not
SELECT ... FOR UPDATE, UPDATE, or INSERT)?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Martijn van
Oosterhout
Sent: Thursday, February 23, 2006 10:04 AM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ExclusiveLock without a relation in pg_locks

On Thu, Feb 23, 2006 at 08:54:36AM -0500, Carlos Oliva wrote:
 Would connections to a database require crating an extra ExclusiveLock? We
 have some connections to the database that happen to be idle in
 transaction and their pids have a granted Exclusive Lock in pg_locks.
I
 cannot discern the tables where the ExclusiveLock is being held because
the
 relation field is blank.

AIUI each backend has an exclusive lock on its own transaction. If
you're idle in transaction you've acquired a lock on your transaction
so other people can wait on you if necessary. That's why there's
nothing in the relation field, because it's not a table lock.

 Moreover, there are other connections to the database coming from the same
 ip address as that of the connection with the ExclusiveLock.  Some of the
 pids of these other connections seem to have different kinds of locks
 (AccessShareL0ck) so I am not quite sure why the pids with the
 ExclusiveLock's are necessary.

AccessShareLock is the normal lock you acquire when selecting data.
It doesn't really do much other than say I'm using this table, don't
delete it. See the documentation for all the details.

 How could I find out the tables that are being locked when I see an
 ExclusiveLock in pg_locks. 

It's rare to see exclusive locks on tables except for things like
VACUUM FULL and CLUSTER and other such admin commands...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.





Re: [GENERAL] ExclusiveLock without a relation in pg_locks

2006-02-23 Thread Carlos Oliva
Yes.  I am seeing that situation often in our database.

The query field of pg_stat_activity is SELECT ..., not SELECT ... FOR UPDATE or
UPDATE or INSERT or DELETE.  I was expecting the query to say something like
SELECT ... FOR UPDATE or something like that.  Also the query seems to have just
columns in the select statement; not functions.

I will look further into these queries in case that they are using
functions.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael Fuhr
Sent: Thursday, February 23, 2006 1:09 PM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ExclusiveLock without a relation in pg_locks

On Thu, Feb 23, 2006 at 11:08:07AM -0500, Carlos Oliva wrote:
 Thank you very much for your answer.  I think that I am seeing those self
 transaction id locks as ExclusiveLocks
 
 Would you expect to see an ExclusiveLock with a query of type Select
(not
 Select Update or Update or Insert)?

Not in general, unless perhaps the select called a function that
acquired such a lock.  The Concurrency Control chapter in the
documentation has a section on lock types and the commands that
acquire them:

http://www.postgresql.org/docs/8.1/interactive/explicit-locking.html

Are you just curious or are you seeing such a situation?

-- 
Michael Fuhr



Re: [GENERAL] ExclusiveLock without a relation in pg_locks

2006-02-23 Thread Carlos Oliva
The ExclusiveLock seems to be granted on the transaction id instead of
tables.  So I am guessing that, for a connection, the first lock is granted
to the transaction id and later other locks are granted on specific tables.

I am running the following from the console:
psql -d emrprod -c "select pg_stat_activity.datname, pg_class.relname,
  pg_locks.transaction, pg_locks.mode, pg_locks.granted,
  pg_stat_activity.usename,
  substr(pg_stat_activity.current_query,1,30) as query,
  pg_stat_activity.query_start,
  age(now(), pg_stat_activity.query_start) as age,
  pg_stat_activity.procpid
from pg_stat_activity, pg_locks
  left outer join pg_class on (pg_locks.relation = pg_class.oid)
where pg_locks.pid = pg_stat_activity.procpid
order by query_start;" | grep -v IDLE

Typical outputs are the following:

1) First example
 emrprod |          |     950 | ExclusiveLock   | t | emruser | SELECT PrtNbr, BilGurID, BilGu | 2006-02-23 14:02:28.369815-05 | 00:00:03.001737 | 6193
 emrprod | mr0011   |         | AccessShareLock | t | emruser | SELECT PrtNbr, BilGurID, BilGu | 2006-02-23 14:02:28.369815-05 | 00:00:03.001737 | 6193
 emrprod | sy0001a  |         | AccessShareLock | t | emruser | SELECT PrtNbr, BilGurID, BilGu | 2006-02-23 14:02:28.369815-05 | 00:00:03.001737 | 6193
 emrprod | mr0050   |         | AccessShareLock | t | emruser | SELECT PrtNbr, BilGurID, BilGu | 2006-02-23 14:02:28.369815-05 | 00:00:03.001737 |

2) Second Example
 emrprod |          | 9509136 | ExclusiveLock   | t | emruser | SELECT SYTmpWks, SYTmpDir, SYT | 2006-02-23 14:04:49.498836-05 | 00:00:00.558588 | 9667
 emrprod | sy0001   |         | AccessShareLock | t | emruser | SELECT SYTmpWks, SYTmpDir, SYT | 2006-02-23 14:04:49.498836-05 | 00:00:00.558588 | 9667
 emrprod | sy0001a  |         | AccessShareLock | t | emruser | SELECT SYTmpWks, SYTmpDir, SYT | 2006-02-23 14:04:49.498836-05 | 00:00:00.558588 | 9667
 emrprod | sy0004   |         | AccessShareLock | t | emruser | SELECT SYTmpWks, SYTmpDir, SYT | 2006-02-23 14:04:49.498836-05 | 00:00:00.558588 | 9667

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael Fuhr
Sent: Thursday, February 23, 2006 1:36 PM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ExclusiveLock without a relation in pg_locks

On Thu, Feb 23, 2006 at 01:23:36PM -0500, Carlos Oliva wrote:
 Yes.  I am seeing that situation often in our database.
 
 The query field of pg_stat_activity is SELECT ..., not SLECT UPDATE or
 UPDATE or INSERT or DELETE.  I was expecting the query to say something
like
 SLECT UPDATE or something like that.  Also the query seems to have just
 columns in the select statement; not functions.
 
 I will look further into these queries in case that they are using
 functions.

Are the ExclusiveLock locks for relations or for transaction IDs?
Also, once a lock is acquired it's held until the transaction
completes, so if the transaction ever acquired that lock then the
transaction would still be holding it.

If you can't figure out what's happening then it might be useful
to see the output of

SELECT relation::regclass, * FROM pg_locks;

A self-contained test case might also be useful.  If you show what
commands you're running and what pg_locks output you don't understand,
then somebody might be able to explain what's happening.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Performance of autovacuum and full vacuum of database

2005-11-11 Thread Carlos Oliva
Hi Matthew,
Would rebooting the server interfere with the work of pg_autovacuum?  I
imagine that pg_autovacuum would lose the information that it gathered
prior to the reboot.

Would not pg_autovacuum need to gather data about the database again before
it can judge if it should analyze/vacuum a table of the database?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Matthew T. O'Connor
Sent: Thursday, November 10, 2005 3:44 PM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance of autovacuum and full vacuum of database

Carlos Oliva wrote:
 Thank you for your response Matthew.  Currently I run pg_autovacuum with
the
 following scripts.
 su -l postgres -c "pg_autovacuum -D -U postgres > /dev/null 2>&1"

 Do you suggest that I could change it to something like the following:
 su -l postgres -c "pg_autovacuum -d2 -D -U postgres > /tmp/vacuum.log 2>&1"

Yes exactly, and if you find that pg_autovacuum is never or not often 
enough firing off vacuum comands, then you will need to play with the 
threshold settings.  The default thresholds for pg_autovacuum are too 
conservative for most people, so you may very well have to do this.
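
For example, a sketch of an invocation with explicit thresholds (the -v/-V
vacuum and -a/-A analyze values below are only illustrative, not
recommendations):

# base threshold 500 rows and scaling factor 0.2 for vacuum; 250 / 0.1 for analyze
su -l postgres -c "pg_autovacuum -D -d2 -v 500 -V 0.2 -a 250 -A 0.1 -U postgres > /tmp/vacuum.log 2>&1"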


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Performance of autovacuum and full vacuum of database

2005-11-10 Thread Carlos Oliva








Hi Forum,

Should autovacuum reclaim most of the free space of a
database? We are trying to configure our database and running
pg_autovacuum to streamline our database. We have increased the
max_fsm_pages to a value larger than the total pages needed (see the output
from a full vacuum below LAST FEW LINES OF FULL VACUUM)
and turned on pg_autovacuum.



Nevertheless, it seems that a full vacuum that we run at
night finds a lot of free space (see EXCERPT FROM THE FULL VACUUM TO SHOW THE
VACUUM OF ONE TABLE). I would have expected
that with the configuration of our database and with autovacuum working during
the day, the amount of space that a full vacuum would find would be minimal.



We are running pg_autovacuum with its defaults
parameters. I can see that autovacuum is working because the CPU
utilization for the autovacuum PID goes up every five minutes or so and then it
goes down to almost nothing.



LAST FEW LINES OF FULL VACUUM

INFO: free space map: 483 relations, 219546 pages stored; 153104 total pages needed

DETAIL: Allocated FSM size: 1000 relations + 17 pages = 1057 kB shared memory.





EXCERPT FROM THE FULL VACUUM TO SHOW THE VACUUM OF ONE TABLE

INFO: en0029: found 66035 removable, 1310162 nonremovable row versions in 41787 pages

DETAIL: 0 dead row versions cannot be removed yet.

Nonremovable row versions range from 233 to 1165 bytes long.

There were 1746 unused item pointers.

Total free space (including removable row versions) is 20825932 bytes.

1453 pages are or will become empty, including 0 at the end of the table.

2345 pages containing 16260040 free bytes are potential move destinations.

CPU 2.20s/0.22u sec elapsed 62.59 sec.








Re: [GENERAL] Performance of autovacuum and full vacuum of database

2005-11-10 Thread Carlos Oliva
Thank you for your response Matthew.  Currently I run pg_autovacuum with the
following scripts.
su -l postgres -c "pg_autovacuum -D -U postgres > /dev/null 2>&1"

Do you suggest that I could change it to something like the following:
su -l postgres -c "pg_autovacuum -d2 -D -U postgres > /tmp/vacuum.log 2>&1"

Thank you in advance for your response.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Matthew T. O'Connor
Sent: Thursday, November 10, 2005 2:46 PM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance of autovacuum and full vacuum of database

Couple of thing here:
1) Just because autovacuum is running, doesn't mean that it has actually 
tried to vacuum a table. 5 minutes is the time that it sleeps in between 
investigating activity to see if a vacuum is needed. If you want to see 
if pg_autovacuum has actually tried to do anything you should up the the 
logging with a -d2 switch on the pg_autovacuum command line.

2) pg_autovacuum only performs normal VACUUM commands, not VACUUM FULL. 
As such, there will usually be some freespace left in the table. This is 
not a bad thing as normal table activity will usually result in that 
free space being reused, and it can be faster when it can use existing 
free space rather than having to add space to the end off a table.

Matt


Carlos Oliva wrote:

 Hi Forum,

 Should autovacuum reclaim most of the free space of a database? We are 
 trying to configure our database and running pg_autovacuum to 
 streamline our database. We have increased the max_fsm_pages to a 
 value larger than the total pages needed (see the output from a full 
 vacuum below *_LAST FEW LINES OF FULL VACUUM_*) and turned on 
 pg_autovacuum.

 Nevertheless, it seems that a full vacuum that we run at night finds a 
 lot of free space (see *_EXCERPT FROM THE FULL VACUUM TO SHOW THE 
 VACUUM OF ONE TABLE_*). I would have expected that with the 
 configuration of our database and with autovacuum working during the 
 day, the amount of space that a full vacuum would find would be 
 minimal.

 We are running pg_autovacuum with its defaults parameters. I can see 
 that autovacuum is working because the CPU utilization for the 
 autovacuum PID goes up every five minutes or so and then it goes down 
 to almost nothing.

 *_LAST FEW LINES OF FULL VACUUM_*

 INFO: free space map: 483 relations, 219546 pages stored; 153104 total 
 pages needed

 DETAIL: Allocated FSM size: 1000 relations + 17 pages = 1057 kB 
 shared memory.

 *_EXCERPT FROM THE FULL VACUUM TO SHOW THE VACUUM OF ONE TABLE_*

 INFO: en0029: found 66035 removable, 1310162 nonremovable row 
 versions in 417

 87 pages

 DETAIL: 0 dead row versions cannot be removed yet.

 Nonremovable row versions range from 233 to 1165 bytes long.

 There were 1746 unused item pointers.

 Total free space (including removable row versions) is 20825932 bytes.

 1453 pages are or will become empty, including 0 at the end of the table.

 2345 pages containing 16260040 free bytes are potential move destinations.

 CPU 2.20s/0.22u sec elapsed 62.59 sec.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Setting max_fsm_pages

2005-11-07 Thread Carlos Oliva
Hi Jim,
Thank you for your help.  We are going to increase the max_fxm_pages
according to the test I have been running through out the week.

If we increase the max_fsm_pages, do we need to bump up the shared_buffers
and the size of the shared memory segment of the Linux kernel(shmmax)?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim C. Nasby
Sent: Monday, November 07, 2005 2:38 PM
To: Carlos Oliva
Cc: 'Tom Lane'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Setting max_fsm_pages

On Sun, Nov 06, 2005 at 08:05:29PM -0500, Carlos Oliva wrote:
 Thank you for your response Tom.
 
 Should I set the max_fsm_pages to the total pages needed obtained from a
 full vacuum or from an analyze vacuum?  When I run a vacuum analyze
(vacuumdb
 -z -v), I get a smaller number of pages needed than when I run a full
vacuum
 with analyze (vacuumdb -f -z -v)

There shouldn't be any difference because of analyze. But remember that
as the tables change in size (as well as in the number of dead tuples),
total pages needed can change. For example, if you run a vacuum
immediately after a vacuum full on a system with no other activity,
you'll get:

INFO:  free space map: 0 relations, 0 pages stored; 0 total pages needed

That's because there's no dead space to be reclaimed.

Your best bet is to do a vacuum verbose (vacuumdb -v) after the database
has been running for a while using whatever vacuuming scheme you're
going to use (such as pg_autovacuum). That will give you a pretty good
estimate of how many pages you really need. Even that's not 100%
reliable though, so you still need to include extra space as a safety
margin.
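
A quick way to pull out just that summary (a sketch, assuming GNU grep and
that vacuumdb can connect to every database):

vacuumdb -a -v 2>&1 | grep -A1 "free space map"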
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Setting max_fsm_pages

2005-11-06 Thread Carlos Oliva
Thank you for your response Tom.

Should I set the max_fsm_pages to the total pages needed obtained from a
full vacuum or from a analize vacuum?  When I run a vacuum analyze (vacuumdb
-z -v), I get a smaller number of pages needed than when I run a full vacuum
with analyze (vacuumdb -f -z -v)

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Saturday, November 05, 2005 10:23 PM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Setting max_fsm_pages 

Carlos Oliva [EMAIL PROTECTED] writes:
 Should I set the max_fsm_pages to the value reported (vacuum verbose) as
 pages stored or the value reported as total pages needed?

Total pages needed ... if not more.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Setting max_fsm_pages

2005-11-05 Thread Carlos Oliva








Should I set the max_fsm_pages to the value reported (vacuum verbose) as
pages stored or the value reported as total pages needed? I ran full +
analyze vacuums in my database (vacuumdb -f -z -v database name) a couple of
times and I got the following reports:

INFO: free space map: 454 relations, 22274 pages stored; 147328 total pages needed

DETAIL: Allocated FSM size: 1000 relations + 2 pages = 178 kB shared memory

And

INFO: free space map: 454 relations, 22242 pages stored; 147328 total pages needed

DETAIL: Allocated FSM size: 1000 relations + 2 pages = 178 kB shared memory

I am not sure if I need to raise the max_fsm_pages to something larger than
22242 (e.g. 4) or larger than 147,328.
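
For reference, a sketch of the postgresql.conf settings sized from the
figures above (the 200000 is just the "total pages needed" value rounded up
with some headroom; the server must be restarted for the change to take
effect):

max_fsm_pages = 200000       # comfortably above the 147328 total pages needed
max_fsm_relations = 1000     # well above the 454 relations reported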










[GENERAL] pg_dump fails when it gets to table containing bytea

2005-10-27 Thread Carlos Oliva








Could anyone suggest something that we can check to
ascertain why pg_dumps fail? The pg_dump for our database just started to fail
this week. Dumps of the same database succeeded just last week. Moreover, we
can create a new database using the database (that we are trying to dump) as a
template and the data is copied into the new database.



We are getting the following error message when we run pg_dump
-Ft database name > database.tar:

pg_dump: ERROR: canceling query due to user request

pg_dump: SQL command to dump the contents of table blob failed: PQendcopy() failed.

pg_dump: Error message from server: ERROR: canceling query due to user request

pg_dump: The command was: COPY public.blob (prtnbr, bkey, bdsc, btypnbr, bcrtdte, bcrttme, bcrtusr, bflepath, bflenam, bfleext, bsetnbr, cblob) TO stdout;








[GENERAL] Installing soundex, metaphone, lenshtein

2005-08-19 Thread Carlos Oliva








Where can I find instructions to install these functions in
my 7.4 version? I am aware that they are in the contrib folder but I have
never installed functions from the contrib folder. Thanks in advance.

Carlos








Re: [GENERAL] Installing soundex, metaphone, lenshtein

2005-08-19 Thread Carlos Oliva
Thank you Phillip.  I was able to install the functions with the command
that you posted.  I will look for the documents on the usage of these
functions.
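
For reference, once contrib/fuzzystrmatch is loaded into a database, a
minimal usage sketch looks like this:

-- soundex/metaphone return phonetic codes; levenshtein returns an edit distance
SELECT soundex('Robert'), metaphone('Robert', 4), levenshtein('kitten', 'sitting');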

-Original Message-
From: Philip Hallstrom [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 19, 2005 2:48 PM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Installing soundex, metaphone, lenshtein

 Where can I find instructions to install these functions in my 7.4
version?
 I am aware that they are in the contrib folder but I have never installed
 functions from the contrib folder.  Thanks in advance.

From postgresql-7.4.2/contrib/README:

Each subdirectory contains a README file with information about the
module.  Most items can be built with `gmake all' and installed with
`gmake install' in the usual fashion, after you have run the `configure'
script in the top-level directory.  Some directories supply new
user-defined functions, operators, or types.  After you have installed
the files you need to register the new entities in the database system
by running the commands in the supplied .sql file.  For example,

 $ psql -d dbname -f module.sql

See the PostgreSQL documentation for more information about this
procedure.


good luck!



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Installing soundex, metaphone, lenshtein

2005-08-19 Thread Carlos Oliva
Thank you Philip.  I found everything and installed the functions
successfully.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Philip Hallstrom
Sent: Friday, August 19, 2005 2:48 PM
To: Carlos Oliva
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Installing soundex, metaphone, lenshtein

 Where can I find instructions to install these functions in my 7.4
version?
 I am aware that they are in the contrib folder but I have never installed
 functions from the contrib folder.  Thanks in advance.

From postgresql-7.4.2/contrib/README:

Each subdirectory contains a README file with information about the
module.  Most items can be built with `gmake all' and installed with
`gmake install' in the usual fashion, after you have run the `configure'
script in the top-level directory.  Some directories supply new
user-defined functions, operators, or types.  After you have installed
the files you need to register the new entities in the database system
by running the commands in the supplied .sql file.  For example,

 $ psql -d dbname -f module.sql

See the PostgreSQL documentation for more information about this
procedure.


good luck!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Bytea to File

2005-04-08 Thread Carlos Oliva








Hi Forum,

Is there a SQL statement that one could build in order to
pull a bytea out of a table and save it into a file? Currently I do this by
getting a record set and outputting it to a stream. I am using a developer
interface which would let me send a SQL statement easily without having to write
any code.
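
For reference, one way to do it from the shell with just psql (a sketch,
assuming a table blob with a bytea column cblob keyed by bkey, and a base64
decoder on the client; encode() makes the value safe to pass through text
output and the decoder rebuilds the original bytes):

psql -d mydb -t -A -c "SELECT encode(cblob, 'base64') FROM blob WHERE bkey = 'some-key'" | base64 -d > /tmp/blob.bin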








Re: [GENERAL] Size of data stored in bytea record?

2005-01-26 Thread Carlos Oliva
Thank you Michael.  I am using the BIT_LENGTH function which seems to return 
the size in bits of the stored blob.
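For reference, octet_length() reports the stored size in bytes directly, and
bit_length() is just eight times that. A sketch against a hypothetical table
images with a bytea column img:

SELECT octet_length(img) AS bytes, bit_length(img) AS bits FROM images;
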
Michael Fuhr [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On Tue, Jan 25, 2005 at 04:58:33PM -0500, Carlos wrote:

 Is there a way that I can find out the size of a bytea record?  I am 
 storing
 images into bytea fields and I just want to know the size of the image 
 that
 it has been saved in the field

 See the Binary String Functions and Operators section of the
 Functions and Operators chapter in the documentation:

 http://www.postgresql.org/docs/8.0/static/functions-binarystring.html

 -- 
 Michael Fuhr
 http://www.fuhr.org/~mfuhr/

 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])
 



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Changing access permissions without re-starting the database

2004-10-27 Thread Carlos Oliva
Thank you Michael.  This should work

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael Glaesemann
Sent: Wednesday, October 27, 2004 9:27 AM
To: Carlos
Cc: '[EMAIL PROTECTED]'
Subject: Re: [GENERAL] Changing access permissions without re-starting the
database


On Oct 27, 2004, at 9:23 PM, Carlos wrote:

 PostgreSQL?  I often want to preclude all IP addresses but one from 
 accessing the database.  Currently, I change the pg_hba.conf file and 
 re-start the database but I would like to be able to do the same thing 
 without having to re-start.

Try pg_ctl reload instead of pg_ctl restart
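
For example, a sketch of the whole workflow (addresses and paths are only
illustrative): put a single host line for the one allowed client in
pg_hba.conf, then tell the postmaster to re-read it. This assumes a
7.4-or-later pg_hba.conf; older releases use an address/netmask pair and
have no user column.

# pg_hba.conf -- allow only 192.168.1.50
#   host    all    all    192.168.1.50/32    md5
pg_ctl reload -D /var/lib/pgsql/data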

Michael Glaesemann
grzm myrealbox com


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Eliminating start error message: unary operator

2003-06-24 Thread Carlos Oliva
Hi Tom,
This log has the error but I do not understand why it occurs.  Attached
is the file also

+ PGVERSION=7.3
+ INITD=/etc/rc.d/init.d
+ . /etc/rc.d/init.d/functions
++ TEXTDOMAIN=initscripts
++ TEXTDOMAINDIR=/etc/locale
++ umask 022
++ export PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin
++ PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin
++ '[' -z '' ']'
++ COLUMNS=80
++ '[' -f /etc/sysconfig/i18n -a -z '' ']'
++ . /etc/sysconfig/i18n
+++ LANG=en_US.iso885915
+++ SUPPORTED=en_US.iso885915:en_US:en
+++ SYSFONT=lat0-sun16
+++ SYSFONTACM=iso15
+++ /sbin/consoletype
++ '[' en_US.iso885915 = ja_JP.eucJP -a pty '!=' pty ']'
+++ /sbin/consoletype
++ '[' en_US.iso885915 = ko_KR.eucKR -a pty '!=' pty ']'
+++ /sbin/consoletype
++ '[' en_US.iso885915 = zh_CN.GB2312 -a pty '!=' pty ']'
+++ /sbin/consoletype
++ '[' en_US.iso885915 = zh_TW.Big5 -a pty '!=' pty ']'
++ export LANG
++ '[' -z '' ']'
++ '[' -f /etc/sysconfig/init ']'
++ . /etc/sysconfig/init
+++ BOOTUP=color
+++ RES_COL=60
+++ MOVE_TO_COL=echo -en \033[60G
+++ SETCOLOR_SUCCESS=echo -en \033[1;32m
+++ SETCOLOR_FAILURE=echo -en \033[1;31m
+++ SETCOLOR_WARNING=echo -en \033[1;33m
+++ SETCOLOR_NORMAL=echo -en \033[0;39m
+++ LOGLEVEL=3
+++ PROMPT=yes
++ '[' -x /sbin/consoletype ']'
+++ consoletype
++ '[' pty = serial ']'
++ '[' color '!=' verbose ']'
++ INITLOG_ARGS=-q
++ typeset -F
+ TYPESET=declare -f action
declare -f checkpid
declare -f confirm
declare -f daemon
declare -f echo_failure
declare -f echo_passed
declare -f echo_success
declare -f echo_warning
declare -f failure
declare -f killproc
declare -f passed
declare -f pidfileofproc
declare -f pidofproc
declare -f status
declare -f strstr
declare -f success
declare -f warning
+ . /etc/sysconfig/network
++ NETWORKING=yes
++ HOSTNAME=buyemr.pbsinet.com
++ GATEWAY=209.4.117.159
++ basename /etc/rc.d/init.d/postgresql
+ NAME=postgresql
+ PGPORT=5432
+ export PGDATA=/var/lib/pgsql
+ PGDATA=/var/lib/pgsql
+ '[' -f /var/lib/pgsql/PG_VERSION ']'
+ export PGDATA=/var/lib/pgsql/data
+ PGDATA=/var/lib/pgsql/data
+ '[' -f /etc/sysconfig/pgsql/postgresql ']'
+ export PGDATA
+ export PGPORT
+ export PGOPTS
+ '[' yes = no ']'
+ '[' -f /usr/bin/postmaster ']'
+ start
+ PSQL_START=Starting postgresql service: 
+ '[' -f /var/lib/pgsql/data/PG_VERSION ']'
+ '[' -d /var/lib/pgsql/data/base ']'
++ cat /var/lib/pgsql/data/PG_VERSION
+ '[' 7.3 '!=' 7.3 ']'
++ pidof -s /usr/bin/postmaster
+ pid=
+ '[' ']'
+ rm -f /tmp/.s.PGSQL.5432
+ echo -n 'Starting postgresql service: '
Starting postgresql service: + su -l postgres -s /bin/sh -c
'/usr/bin/pg_ctl  -D /var/lib/pgsql/data -p /usr/bin/postmaster -o
'\''-p 5432'\'' start  > /dev/null 2>&1'
-sh: [: ==: unary operator expected
+ sleep 1
++ pidof -s /usr/bin/postmaster
+ pid=6350
+ '[' 6350 ']'
+ echo 'declare -f action
declare -f checkpid
declare -f confirm
declare -f daemon
declare -f echo_failure
declare -f echo_passed
declare -f echo_success
declare -f echo_warning
declare -f failure
declare -f killproc
declare -f passed
declare -f pidfileofproc
declare -f pidofproc
declare -f status
declare -f strstr
declare -f success
declare -f warning'
+ grep 'declare -f success'
+ success 'Starting postgresql service: '
+ '[' -z '' ']'
+ initlog -q -n /etc/rc.d/init.d/postgresql -s 'Starting postgresql
service: ' -e 1
+ '[' color '!=' verbose -a -z '' ']'
+ echo_success
+ '[' color = color ']'
+ echo -en '\033[60G'
+ echo -n '[  '
[  + '[' color = color ']'
+ echo -en '\033[1;32m'
+ echo -n OK
OK+ '[' color = color ']'
+ echo -en '\033[0;39m'
+ echo -n '  ]'
  ]+ echo -ne '\r'

+ return 0
+ return 0
+ touch /var/lock/subsys/postgresql
+ echo 6350
+ echo

+ exit 0

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Monday, June 23, 2003 5:18 PM
To: Carlos
Cc: [EMAIL PROTECTED]; Dain
Subject: Re: [GENERAL] Eliminating start error message: unary operator 


Carlos Oliva [EMAIL PROTECTED] writes:
 I got this in standard out.  I also included the output in a text file

I don't see the complaint anywhere in there, though?

Also, it looks like this failed because postmaster was already running.
You probably need to stop the postmaster and then try the sh -x. Don't
forget to pipe both stdout and stderr into the same file, eg
sh -x /etc/rc.d/init.d/postgresql start > mylog 2>&1

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly
+ PGVERSION=7.3
+ INITD=/etc/rc.d/init.d
+ . /etc/rc.d/init.d/functions
++ TEXTDOMAIN=initscripts
++ TEXTDOMAINDIR=/etc/locale
++ umask 022
++ export PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin
++ PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin
++ '[' -z '' ']'
++ COLUMNS=80
++ '[' -f /etc/sysconfig/i18n -a -z '' ']'
++ . /etc

Re: [GENERAL] Eliminating start error message: unary operator

2003-06-24 Thread Carlos Oliva
Hi Tom,
Thank you very much for your help.  In order to get the pg_ctl trace at
start up, I would appreciate it if you could advise me on how to modify
the postscrpt script.  The line in question in the script is:
su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl  -D $PGDATA -p
/usr/bin/postmaster -o '-p ${PGPORT}' start  > /dev/null 2>&1" < /dev/null

Should it be something like:
su -l postgres -s /bin/sh -c "/bin/sh -x /usr/bin/pg_ctl  -D $PGDATA -p
/usr/bin/postmaster -o '-p ${PGPORT}' start  > mylog 2>&1" < /dev/null

Thanks in advance for your response.
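
For what it's worth, a sketch of what that traced line could look like,
assuming the stock Red Hat init script layout (the inner sh -x traces pg_ctl
itself, and mylog will land in the postgres user's home directory because of
su -l):

su -l postgres -s /bin/sh -c "/bin/sh -x /usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster -o '-p ${PGPORT}' start > mylog 2>&1" < /dev/null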

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Tuesday, June 24, 2003 10:32 AM
To: Carlos
Cc: [EMAIL PROTECTED]; Dain
Subject: Re: [GENERAL] Eliminating start error message: unary operator 


Carlos Oliva [EMAIL PROTECTED] writes:
 This log has the error but I do not understand why it occurs.

 Starting postgresql service: + su -l postgres -s /bin/sh -c
 '/usr/bin/pg_ctl  -D /var/lib/pgsql/data -p /usr/bin/postmaster -o 
 '\''-p 5432'\'' start  > /dev/null 2>&1'
 -sh: [: ==: unary operator expected

Hm.  Apparently the error is actually occuring inside the pg_ctl script.
Try changing the above line in the initscript to do
/bin/sh -c "/bin/sh -x /usr/bin/pg_ctl ..."
so we can get tracing of the pg_ctl script too.

(I am now thinking that the error probably occurs because pg_ctl is
expecting some environment variable to be set that is not set when run
from the boot script?  If so, you might not see the failure if you try
to run pg_ctl by hand with sh -x.  But you could try that first if you
like.)

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Eliminating start error message: unary operator

2003-06-23 Thread Carlos Oliva
Postgresql was installed from RPMS, downloaded from the following url:
ftp://ftp5.us.postgresql.org/pub/PostgreSQL/binary/v7.3.2/RPMS/redhat-7.
3/
The startup script was created when the rpms were installed.  


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of scott.marlowe
Sent: Monday, June 23, 2003 3:35 PM
To: Carlos
Cc: '[EMAIL PROTECTED]'
Subject: Re: [GENERAL] Eliminating start error message: unary operator


On Mon, 23 Jun 2003, Carlos wrote:

 Hi Forum,
 What should I correct in order to eliminate the following error 
 message on
 start:
 Executing /etc/rc.d/init.d/postgresql start .. 
 Starting postgresql service: -sh: [: ==: unary operator expected
 [  OK  ]
 
 We installed a v7.3.2 and PostgreSQL seems to be running very well but

 we want to get rid of this message.

How did you install postgresql (rpms, source code) and where did the 
startup script come from?


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Eliminating start error message: unary operator

2003-06-23 Thread Carlos Oliva
 LC_TIME  $PGDATA/../initdb.i18n
# Initialize the database
su -l postgres -s /bin/sh -c "/usr/bin/initdb --pgdata=$PGDATA > /dev/null 2>&1" < /dev/null
[ -f $PGDATA/PG_VERSION ] && echo_success
[ ! -f $PGDATA/PG_VERSION ] && echo_failure
echo
fi

# Check for postmaster already running...
  # note that pg_ctl only looks at the data structures in PGDATA
  # you really do need the pidof()
pid=`pidof -s /usr/bin/postmaster`
if [ $pid ] && /usr/bin/pg_ctl status -D $PGDATA > /dev/null 2>&1
then
echo $"Postmaster already running."
else
#all systems go -- remove any stale lock files
rm -f /tmp/.s.PGSQL.${PGPORT} > /dev/null
echo -n "$PSQL_START"
su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl  -D $PGDATA -p /usr/bin/postmaster -o '-p ${PGPORT}' start  > /dev/null 2>&1" < /dev/null
sleep 1
pid=`pidof -s /usr/bin/postmaster`
if [ $pid ]
then
if echo "$TYPESET" | grep "declare -f success" > /dev/null
then
success $PSQL_START
else
echo   [ OK ]
fi
touch /var/lock/subsys/${NAME}
echo $pid > /var/run/postmaster.${PGPORT}.pid
echo
else
if echo "$TYPESET" | grep "declare -f failure" > /dev/null
then
failure $PSQL_START
else
echo  [ FAILED ]
fi
echo
fi
fi
}

stop(){
PSQL_STOP=$"Stopping ${NAME} service: "
su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl stop -D $PGDATA -s -m fast" > /dev/null 2>&1
ret=$? 
if [ $ret -eq 0 ]
then
if echo "$TYPESET" | grep "declare -f success" > /dev/null
then
success $PSQL_STOP
else
echo   [ OK ]
fi

else
if echo "$TYPESET" | grep "declare -f failure" > /dev/null
then
failure $PSQL_START
else
echo   [ FAILED ]
fi

fi
echo
rm -f /var/run/postmaster.${PGPORT}.pid
rm -f /var/lock/subsys/${NAME}
}

restart(){
stop
start
}

condrestart(){
[ -e /var/lock/subsys/${NAME} ] && restart
}

reload(){
su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl reload -D $PGDATA -s" > /dev/null 2>&1
}

# This script is slightly unusual in that the name of the daemon
(postmaster)
# is not the same as the name of the subsystem (postgresql)

# See how we were called.
case $1 in
  start)
start
;;
  stop)
stop
;;
  status)
status postmaster
;;
  restart)
restart
;;
  condrestart)
condrestart
;;
  reload|force-reload)
reload
;;
  *)
echo $"Usage: $0 {start|stop|status|restart|condrestart|reload|force-reload}"
exit 1
esac

exit 0


-Original Message-
From: scott.marlowe [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 23, 2003 4:20 PM
To: Carlos
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Eliminating start error message: unary operator


Could you post a copy of the /etc/rc.d/init.d/postgresql file here?  It 
may have gotten corrupted or have a simple syntax error in it.

On Mon, 23 Jun 2003, Carlos Oliva wrote:

 Postgresql was installed from RPMS, downloaded from the following url:

 ftp://ftp5.us.postgresql.org/pub/PostgreSQL/binary/v7.3.2/RPMS/redhat-
 7.
 3/
 The startup script was created when the rpms were installed.  
 
 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of scott.marlowe
 Sent: Monday, June 23, 2003 3:35 PM
 To: Carlos
 Cc: '[EMAIL PROTECTED]'
 Subject: Re: [GENERAL] Eliminating start error message: unary 
 operator
 
 
 On Mon, 23 Jun 2003, Carlos wrote:
 
  Hi Forum,
  What should I correct in order to eliminate the following error
  message on
  start:
  Executing /etc/rc.d/init.d/postgresql start .. 
  Starting postgresql service: -sh: [: ==: unary operator expected
  [  OK  ]
  
  We installed a v7.3.2 and PostgreSQL seems to be running very well 
  but
 
  we want to get rid of this message.
 
 How did you install postgresql (rpms, source code) and where did the
 startup script come from?
 
 
 ---(end of 
 broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faqs/FAQ.html
 
 
 ---(end of 
 broadcast)---
 TIP 2: you can get off all lists at once

Re: FW: [GENERAL] Allowing user to connect to a database?

2003-06-19 Thread Carlos Oliva
Thanks again for your help.  I think that the picture is getting clearer
in my mind.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Paul Thomas
Sent: Thursday, June 19, 2003 10:37 AM
To: pgsql-general @ postgresql . org
Subject: Re: FW: [GENERAL] Allowing user to connect to a database?



On 19/06/2003 14:04 Carlos Oliva wrote:
 Hi Paul,
 This worked very well.  Perhaps, you can help me with another 
 question: If I write an aaa bbb ccc of 200.200.200.0, can all clients 
 with IP addresses between 0.0.0.0 and 200.200.200.0 connect?
 

No. The client ips would have to be in the range 200.200.200.1 to 
200.200.200.254. The rules for sub-nets and sub-net masks can be a bit 
confusing and I've yet to find a tutorial on-line which is easy to
follow. 
It might help you to think of 200.200.200.0 as 200.200.200.*.
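
A sketch of what that looks like as a pg_hba.conf entry (addresses are only
illustrative; the 255.255.255.0 mask is what makes 200.200.200.1 through
200.200.200.254 match; check the commented header of your own pg_hba.conf
for the exact column layout on your release):

# TYPE  DATABASE  IP-ADDRESS     IP-MASK        METHOD
host    all       200.200.200.0  255.255.255.0  md5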

HTH

-- 
Paul Thomas
+------------------------------+----------------------------------------------+
| Thomas Micro Systems Limited | Software Solutions for the Smaller Business  |
| Computer Consultants         | http://www.thomas-micro-systems-ltd.co.uk    |
+------------------------------+----------------------------------------------+

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly