Re: [GENERAL] Question about OID and TCID

2010-11-28 Thread Matthew Walden
> So the in-place update is a vital part of the program, because a range of
> values of column col should be stored together on the disk. Is there any
> way to do such an in-place update without generating much extra overhead?
>

Although in-place update is not possible, can he not use partitioning to at
least try to store the data together based on the value of column col?
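
On 9.0 that would be constraint-based partitioning with inheritance.  A
minimal sketch, where "t" and "col" stand in for the poster's actual table and
column:

```sql
-- Parent table; each child holds one range of col, so rows in the same
-- range end up stored together on disk.
CREATE TABLE t (col integer, payload text);

CREATE TABLE t_0_999     (CHECK (col >= 0    AND col < 1000)) INHERITS (t);
CREATE TABLE t_1000_1999 (CHECK (col >= 1000 AND col < 2000)) INHERITS (t);

-- Route inserts on the parent to the matching child.
CREATE OR REPLACE FUNCTION t_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.col < 1000 THEN
        INSERT INTO t_0_999 VALUES (NEW.*);
    ELSE
        INSERT INTO t_1000_1999 VALUES (NEW.*);
    END IF;
    RETURN NULL;   -- row already redirected, don't insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_partition
    BEFORE INSERT ON t
    FOR EACH ROW EXECUTE PROCEDURE t_insert();
```

With constraint_exclusion enabled, queries restricted to a range of col then
only touch the relevant children.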


Re: [GENERAL] Postgres 9.01 and WAL files issue

2010-11-25 Thread Matthew Walden
I wondered the same when I first read about WAL archiving, but the
documentation explains it quite well.  Basically, it is there to stop a
successful result being returned when the file already exists in the archive
destination (i.e. to cause an error rather than silently overwrite a file).
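
The behaviour is easy to reproduce outside PostgreSQL.  The sketch below
(plain cp on scratch files, nothing PostgreSQL-specific) shows that with stdin
redirected from /dev/null, cp -i can never receive a "yes" and so never
overwrites an existing file; note that whether cp also returns a nonzero exit
status in that case varies between platforms, which is why later documentation
recommends a 'test ! -f ... && cp ...' style command instead.

```shell
# Simulate an archive destination that already holds the segment.
dir=$(mktemp -d)
printf 'original\n' > "$dir/segment"   # the already-archived copy
printf 'incoming\n' > "$dir/new"       # a segment arriving with the same name

# stdin from /dev/null means cp -i gets EOF instead of "y" at its prompt.
cp -i "$dir/new" "$dir/segment" < /dev/null 2>/dev/null
result=$(cat "$dir/segment")
echo "$result"                         # still "original": nothing overwritten
rm -rf "$dir"
```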

On Thu, Nov 25, 2010 at 2:43 PM, Vick Khera  wrote:

> On Wed, Nov 24, 2010 at 12:52 PM, DM  wrote:
> > Here is my Archive Command:
> > archive_command = 'cp -i %p /mnt/nfs/primary/%f < /dev/null'
> >
>
> Just curious... why would you turn on the interactive version of cp
> for an automated script?  Is that why you feed it /dev/null as input?
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>





Re: [GENERAL] Postgres 9.01 and WAL files issue

2010-11-24 Thread Matthew Walden
Deepak,

Does your backup script exclude the pg_xlog directory?  There is no point in
backing up the live WAL files, but you will want to make sure the archived WAL
files are added to the backup set afterwards.

How many files are in pg_xlog at the point where you stop the backup?  It
may actually be that it takes a while to copy them all to the NFS if your
database has been active overnight.  I would hope that it is a low traffic
system if you are leaving it in backup mode all night though.

Ideally you would build the stop and start SQL into a backup script which
handles the rsync and then also configure the script to add the archived WAL
files to the backup set afterwards.

Another thing I would try is to run your exact archive command manually
(using one of the completed WAL files as an example) against the NFS mount.
See what feedback you get as to why it isn't returning a clear success or
failure result.
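
As a sketch of that manual test (the temporary file and directory here stand
in for a real completed WAL segment and the NFS destination; substitute your
actual %p path and /mnt/nfs/primary):

```shell
# Stand-ins for %p (a completed WAL segment) and the archive directory.
src=$(mktemp)
dest=$(mktemp -d)

# Run the same command archive_command would run, then inspect the exit
# status PostgreSQL sees: it keeps retrying the segment until this is 0.
cp -i "$src" "$dest/segment" < /dev/null
status=$?
echo "archive command exit status: $status"

rm -rf "$src" "$dest"
```

On the real NFS mount, a hang or permission error here would explain why
pg_stop_backup never sees the segments archived.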

On Wed, Nov 24, 2010 at 5:52 PM, DM  wrote:

> Hi All,
>
> pg_stop_backup is not stopping...
>
> Postgresql version 9.01
> OS: Centos
> Postgresql.conf ==> default configuration did not change anything
>
> Steps
>
> 1. Initiated psql -c "SELECT pg_start_backup('label', true)"
> 2. Started RSYNC job to sync slave box
>  rsync -a /var/lib/pgsql/data/ dbpg9-test-02:/var/lib/pgsql/data
>
> Left over night to rsync - it was only around 10 GB of data to be rsynced,
> but i left overnight...
>
> 3. psql -c "SELECT pg_stop_backup()"
>
>
> *Getting Error Message as below*
>
> NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments to
> be archived
> WARNING:  pg_stop_backup still waiting for all required WAL segments to be
> archived (60 seconds elapsed)
> HINT:  Check that your archive_command is executing properly.
> pg_stop_backup can be cancelled safely, but the database backup will not be
> usable without all the WAL segments.
> WARNING:  pg_stop_backup still waiting for all required WAL segments to be
> archived (120 seconds elapsed)
> HINT:  Check that your archive_command is executing properly.
> pg_stop_backup can be cancelled safely, but the database backup will not be
> usable without all the WAL segments.
>
> WARNING:  pg_stop_backup still waiting for all required WAL segments to be
> archived (240 seconds elapsed)
> HINT:  Check that your archive_command is executing properly.
> pg_stop_backup can be cancelled safely, but the database backup will not be
> usable without all the WAL segments.
>
> I waited quite long and it was not stopping, I had to issue Ctrl+c to
> cancel it.
>
>
> 
>
> I tried to start backup and stop backup again, same problem.
>
> Here is my Archive Command:
> archive_command = 'cp -i %p /mnt/nfs/primary/%f < /dev/null'
>
> Directory /mnt/nfs/primary ==> is an NFS mount.
>
>
> Any solution how to fix it.
>
>
> Thanks
> Deepak Murthy
>
>
>
>
>


Re: [GENERAL] postmaster.pid does not exist. cannot start postgres server on ubuntu

2010-11-23 Thread Matthew Walden
Jen,

Regarding the first point, is postgres actually running?  You can check this
by typing the following at a terminal -

ps -ef|grep postgres

Reload configuration is used to tell PostgreSQL to re-read the configuration
file for any changes made since the database started.  If you want to start
the database you can use pg_ctl; more info on its use is best gained from the
documentation.  It looks like the feedback is correct and you don't have
postgres running.  Either that, or you need to set the $PGDATA environment
variable or specify the data directory when using pg_ctl reload.

I can't really comment on the login script because I don't know its contents,
but if you're trying to start the database automatically I would look at
init.d and the associated run level folders (again, more info in the Linux
documentation - it's far too lengthy to explain here).


On Tue, Nov 23, 2010 at 4:12 PM, Jennifer Trey wrote:

> I am getting the following error when I go to Application -> PostgreSQL ->
> Reload Configuration
>
> "
> Please enter your password if requested.
> pg_ctl: PID file "/home/Jen/Installed/PostgreSQL/data/postmaster.pid" does
> not exist
> Is server running?
>
> Press  to continue...
> "
>
> I have installed Postgresql from the Enterprise DB installer for Ubuntu.
>
> It has worked before. I just restarted my computer and now face this
> issue.
>
> Off topic:
> I have also had a different kind of issue, where a script added to
> /etc/profile.d  called Startup.sh
> causes postgre to fail to startup. This script is used on login to set
> global environment variables, but I get the error that Postgre cannot
> execute the script.
> I tried chmod 777 on the script but still no luck. If I remove it, postgre
> will start up so its not a biggy..
> but is not a bug ?
>
> Cheers, Jen
>
>
>


Re: [GENERAL] tablespace restore

2010-11-19 Thread Matthew Walden
The problem is that there is a lot of metadata outside the tablespace you
created (information the catalog tables keep about your new database and its
objects), and this can only be restored with a full restore, which would
overwrite the pre-existing databases on your target.  I hate to say it can't
be done, because someone will prove me wrong, but I think pg_dump is your only
answer, even if you do need to give it some time and then rebuild the indexes.

That said, perhaps someone else can confirm or deny this too.

On Fri, Nov 19, 2010 at 1:30 PM, Vangelis Katsikaros wrote:

> On 11/19/2010 03:12 PM, Matthew Walden wrote:
>
>> Vangelis,
>>
>> I don't believe you can do file level copying of single databases
>> (especially as they are different versions).
>>
>
> Hi Matthew, thanks for your answer.
>
> If the different versions is a problem, I can downgrade one server and then
> upgrade afterwards.
>
>
>  Take a look at pg_dump in the documentation.  This will do what you need I
>> think but at a logical level rather than physical.
>>
>
> Hm, from what I understand pg_dump doesn't dump indexes (ie the indexing
> information - not an SQL clause in CREATE TABLE). Also I have lots of Giga
> of data, so I imagine that pg_restore will take quite some time to execute
> the INSERTs and reindex.
>
> Regards
> Vangelis


Re: [GENERAL] tablespace restore

2010-11-19 Thread Matthew Walden
Vangelis,

I don't believe you can do file level copying of single databases
(especially as they are different versions).

Take a look at pg_dump in the documentation.  This will do what you need I
think but at a logical level rather than physical.
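
A sketch of what that might look like from the command line (the database name
and host names are made up; dumping an 8.3 server with the newer 8.4 pg_dump
is the usual recommendation, and the custom format lets pg_restore rebuild
indexes in parallel with -j on 8.4):

```shell
# Dump from the 8.3 server (machineA) using the 8.4 client tools.
pg_dump -h machineA -Fc mydb > mydb.dump

# Restore into the existing 8.4 cluster on machineB without touching
# its other databases; -j parallelises the index builds.
createdb -h machineB mydb
pg_restore -h machineB -d mydb -j 4 mydb.dump
```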

On Fri, Nov 19, 2010 at 11:52 AM, Vangelis Katsikaros wrote:

> Hello
>
> I use postgres 8.3.12 on machineA and 8.4.5 on machineB.
>
> On machineA I have created a tablespace with
> CREATE TABLESPACE tablelocation_name LOCATION '/my/location/machineA';
>
> I then created a database with
> CREATE DATABASE db_name TABLESPACE tablelocation;
>
> I created tables, inserted data and created indexes.
>
> I now want to "move" the db from '/my/location/machineA' of machine A to
> '/other/location/machineB' of machine B. My question is how I can do a
> filesystem backup/restore (I want to move the indexes too - too time
> consuming to reindex).
>
> Machine B already has a postgres running, and postgres on machine B already
> has other databases. During this process I have no problem of shutting down
> postgres.
>
>
> I have tried some things unsuccessfully:
> 1)  - stop postgres on machine B
>- copy dir of tablelocation of machine A to '/other/location/machineB'
> on machine B
>- start postgres on machine B
>- CREATE TABLESPACE tablelocation_name LOCATION
> '/other/location/machineB';
>  ERROR:  directory "/other/location/machineB" is not empty
>
>
> Regards
> Vangelis
>
>


Re: [GENERAL] interactive pager off

2010-11-17 Thread Matthew Walden
On Wed, Nov 17, 2010 at 2:45 PM, Gauthier, Dave wrote:

>  How does one set pager off in interactive sql ?
>
> I tried \set pager off, doesn't seem to work.
>
>
>
> Thanks in Advance !
>

\pset pager off


Re: [GENERAL] Storing old and new tuple values after an UPDATE, INSERT or DELETE

2010-11-17 Thread Matthew Walden
You can create a trigger on the table to store the old/new values in an
audit table if that is what you mean?
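
A minimal sketch of that (the table "accounts" and its columns are invented
here for illustration):

```sql
-- Hypothetical table to be audited.
CREATE TABLE accounts (id integer PRIMARY KEY, balance numeric);

-- Audit table storing whole OLD/NEW rows as composite-type columns.
CREATE TABLE accounts_audit (
    changed_at timestamp NOT NULL DEFAULT now(),
    operation  text      NOT NULL,
    old_row    accounts,
    new_row    accounts
);

CREATE OR REPLACE FUNCTION accounts_audit_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO accounts_audit (operation, new_row) VALUES (TG_OP, NEW);
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO accounts_audit (operation, old_row, new_row)
        VALUES (TG_OP, OLD, NEW);
    ELSE  -- DELETE
        INSERT INTO accounts_audit (operation, old_row) VALUES (TG_OP, OLD);
    END IF;
    RETURN NULL;   -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE accounts_audit_fn();
```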


Re: [GENERAL] Expected frequency of auto_vacuum activity

2010-11-15 Thread Matthew Walden
Dave,

Does your application use temporary tables?  This may explain high
autovacuum activity in the catalog tables.

With regards to your tables, I wrote the script below to give an indication of
the amount of vacuum activity required on the user tables.  I deliberately
keep it simple and it doesn't include the associated TOAST rows, but you get
the idea (it works on 9.0; you may need to test on 8.x).

select relid,
       schemaname,
       relname,
       n_live_tup as live_rows,
       n_dead_tup as dead_rows,
       round(cast(n_dead_tup as decimal)
             / nullif(cast(n_live_tup as decimal), 0),  -- nullif avoids dividing by zero on empty tables
             3) as fraction_changed,
       to_char(greatest(last_autovacuum, last_vacuum),
               'DD/MM/YYYY HH24:MI:SS') as last_vacuum,
       to_char(greatest(last_autoanalyze, last_analyze),
               'DD/MM/YYYY HH24:MI:SS') as last_analyze
from pg_stat_all_tables a
where schemaname not in ('pg_catalog', 'pg_toast', 'information_schema');

select name,
   setting
from pg_settings
where name in
('autovacuum_vacuum_threshold','autovacuum_vacuum_scale_factor','autovacuum_analyze_scale_factor','autovacuum_analyze_threshold');

If your tables are particularly large then the number of dead rows may never
reach the autovacuum_vacuum_scale_factor threshold.  What is the ratio between
live and dead rows?
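
For what it's worth, autovacuum vacuums a table once dead rows exceed
autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, so
with the defaults (50 and 0.2) a table of 100,000 rows is only vacuumed after
roughly 20,050 dead rows have accumulated.  A rough sketch of a query
comparing each table against that trigger point (default settings assumed):

```sql
-- Dead rows vs. the autovacuum trigger point, assuming the default
-- threshold (50) and scale factor (0.2).
select s.relname,
       s.n_dead_tup,
       50 + 0.2 * c.reltuples as vacuum_trigger_point
from pg_stat_user_tables s
join pg_class c on c.oid = s.relid
order by s.n_dead_tup desc;
```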


On Mon, Nov 15, 2010 at 6:55 PM, Dave Jennings
wrote:

> Hi there,
>
> I'm wondering if I'm seeing the appropriate amount of vacuuming in my 8.4
> database.
>
> I have a database with about twenty tables ranging from small, mostly
> static, tables to tables with tens or hundreds of thousands of rows and a
> fair number of inserts and updates (but very few deletes).
>
> I have auto_vacuum with the default values (except
> "log_autovacuum_min_duration=0") but when I check the logs I see lots of
> vacuuming of a few pg_catalog tables - every 30 mins or so - but very few
> vacuums of my public tables. Just two occurences in the last week.
>
> According to pg_stat_all_tables only two tables in the public schema have
> ever been auto-vacuumed.
>
> Is this expected?
>
> Thanks,
>
> Dave.
>
>


Re: [GENERAL] Indexes on individual columns of composite primary key

2010-11-15 Thread Matthew Walden
"Apparently (after reading the documentation link provided by Tom) there can
be value to indexes when accessed with leading columns missing".

That is a new one on me too - interesting.

I suppose it comes down to testing at the end of the day - if you "set
enable_seqscan to false" and "EXPLAIN ANALYSE" your query then you will see
whether the indexes you create are used.  Whether they are useful will
require you to set enable_seqscan back to true and see whether the optimizer
chooses to use them (but that will change as your data does).
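
A sketch of that test (the table, column, and predicate are placeholders):

```sql
set enable_seqscan to false;   -- temporarily discourage sequential scans
explain analyse
    select col2 from t1 where col2 = 42;

set enable_seqscan to true;    -- back to normal: does the planner still pick the index?
explain analyse
    select col2 from t1 where col2 = 42;
```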


Re: [GENERAL] Indexes on individual columns of composite primary key

2010-11-15 Thread Matthew Walden
Dan,

It depends on your application.  There is no point in creating an index with
the same 3 columns in the primary key (in the same order).

If you have an index on COL1, COL2 and COL3 (in that order) then a query such
as SELECT COL1, COL2, COL3 FROM T1 will have the index considered.  The same
goes for a query listing the same columns in a different order, i.e. SELECT
COL2, COL1, COL3 FROM T1, and for one selecting just the first column, i.e.
SELECT COL1 FROM T1.  The index won't be considered for a query such as
SELECT COL2 FROM T1, so if your application runs queries like that you may
wish to consider additional indexes.

Also bear in mind the order in which you list the columns when you create the
index or primary key.  They should be ordered by uniqueness, starting with the
most unique.
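
As a sketch (T1 and its columns here are stand-ins for Dan's four-column
table):

```sql
CREATE TABLE t1 (
    col1 integer,
    col2 integer,
    col3 integer,
    col4 text,
    PRIMARY KEY (col1, col2, col3)   -- implicitly indexed on (col1, col2, col3)
);

-- Likely served by the primary-key index (leading column in the predicate):
SELECT * FROM t1 WHERE col1 = 1;
SELECT * FROM t1 WHERE col1 = 1 AND col2 = 2;

-- A predicate on col2 alone misses the leading column, so a separate index
-- may be worth testing if this access path matters:
CREATE INDEX t1_col2_idx ON t1 (col2);
SELECT * FROM t1 WHERE col2 = 2;
```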

On Mon, Nov 15, 2010 at 8:01 PM, Dan Halbert  wrote:

> I have a table with four columns. Three of those columns are defined as the
> composite primary key. Does it make sense to create indexes on any or all of
> those three columns individually for performance reasons? PG does let me
> create the indexes. But perhaps it's redundant, since there's an
> implicitly-created index for the composite primary key.
>
>
>
> Thanks,
>
> Dan
>

