Re: [GENERAL] Most effective settings for deleting lots of data?

2015-08-26 Thread Alan Hodgson
On Wednesday, August 26, 2015 08:25:02 PM Cory Tucker wrote:
> What settings would you recommend?  Also, it just occurred to me that I
> should try to disable/drop all indexes (especially since they will be
> recreated) later so that those are not updated in the process.

Don't drop the indexes your foreign keys use to find cascading deletes.
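To make that concrete, a hedged sketch (table, column, and index names are hypothetical): drop only the indexes that play no part in the delete, and keep the one the cascading foreign key needs.

```sql
-- Hypothetical schema: child.parent_id REFERENCES parent(id) ON DELETE CASCADE.
-- Keep child_parent_id_idx -- the cascade uses it to find child rows quickly.
-- An index on an unrelated column is safe to drop and rebuild afterwards:
DROP INDEX IF EXISTS child_created_at_idx;

DELETE FROM parent WHERE created_at < now() - interval '1 year';

-- Recreate the dropped index once the bulk delete is done:
CREATE INDEX child_created_at_idx ON child (created_at);
```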



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Alan Hodgson
On Tuesday, May 31, 2016 10:13:14 AM Jim Longwill wrote:
> I am trying to setup a 2nd, identical, db server (M2) for development
> and I've run into a problem with starting up the 2nd Postgres installation.
> 
> Here's what I've done:
>1) did a 'clone' of 1st (production) machine M1 (so both machines on
> Cent OS 7.2)
>2) setup an rsync operation, did a complete 'rsync' from M1 to M2
>3) did a final 'CHECKPOINT' command on M1 postgres
>4) shutdown postgres on M1 with 'pg_ctl stop'
>5) did final 'rsync' operation  (then restarted postgres on M1 with
> 'pg_ctl start')
>6) tried to startup postgres on M2
> 
> It won't start, & in the log file gives the error message:
> ...
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid primary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid secondary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >PANIC:  could not locate a valid
> checkpoint record
> < 2016-05-31 09:02:53.184 PDT >LOG:  startup process (PID 26680) was
> terminated by signal 6: Aborted
> < 2016-05-31 09:02:53.184 PDT >LOG:  aborting startup due to startup
> process failure
> 
> I've tried several times to do this but always get this result.  So, do
> I need to do a new 'initdb..' operation on machine M2 + restore from M1
> backups?  Or is there another way to fix this?

What you describe should work fine. In order of likelihood of why it doesn't, I 
would guess:

1 - you're not waiting for the database to fully shut down before running the 
last rsync
2 - you're not in fact rsync'ing the entire data directory
3 - the target server is running a different version of PostgreSQL or has a 
different machine architecture
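For reference, a sketch of the procedure being described (paths, hostname, and the exact pg_ctl invocations are assumptions; the key detail is `-w`, which makes pg_ctl wait for a clean shutdown before the final rsync):

```shell
# Bulk copy while the primary is still running:
rsync -a --delete /var/lib/pgsql/data/ m2:/var/lib/pgsql/data/

# Stop the primary and WAIT for it to finish shutting down:
pg_ctl -D /var/lib/pgsql/data -w -m fast stop

# Final pass picks up everything that changed, including the last checkpoint:
rsync -a --delete /var/lib/pgsql/data/ m2:/var/lib/pgsql/data/

# Restart the primary:
pg_ctl -D /var/lib/pgsql/data start
```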





Re: [GENERAL] Help on recovering my standby

2016-06-22 Thread Alan Hodgson
On Tuesday 21 June 2016 19:34:18 Ramalingam, Sankarakumar wrote:
> Hi I have my standby (streaming replication) down due to missing wal files.
> You would see the same error in the logs stating "cannot find the wal file
> ..." What is the best way to get it going so that when we switch between
> standby and primary once in a while they are in sync?
> 
> Currently I am working on a CERT server and hence there is no outage
> concerns. I need to repeat the same process on prod once I get it going
> successfully. Any help is appreciated.
> 

You should keep your WAL files from the master for at least as long as the 
slave might be offline (plus startup time), somewhere the slave can copy them 
from when needed (shared file system, object store, scp target, whatever).

See the postgresql.conf parameter archive_command and the corresponding
recovery.conf parameter restore_command.
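A minimal sketch of that pairing (the shared archive path and conninfo are assumptions; `%p` and `%f` are expanded by the server):

```
# postgresql.conf on the master:
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# recovery.conf on the standby:
standby_mode = 'on'
primary_conninfo = 'host=master.example.com user=replication'
restore_command = 'cp /mnt/wal_archive/%f %p'
```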





Re: [GENERAL] streaming replication and WAL

2016-10-25 Thread Alan Hodgson
On Tuesday 25 October 2016 17:08:26 t.dalpo...@gmail.com wrote:
> Hi,
> let's suppose I have:
>   - a primary server with its own local archive location, configured for
> continuous archiving
>   - a standby server without archive.
> These servers are configured for Sync streaming replication .
> Let's suppose that the standby stays down for a long time, then it
> restarts, goes into catchup mode and now needs some old WALs from the
> server archive location.
> Will the standby be able to automatically drain those files through the
> replication or only the WALs being currently updated by the primary ?
> 

It would need its own direct access to the master's archive.





Re: [GENERAL] HELP!!! The WAL Archive is taking up all space

2015-12-09 Thread Alan Hodgson
On Wednesday, December 09, 2015 07:55:09 AM FattahRozzaq wrote:
> archive_mode = on
> archive_command = 'cp -i %p /home/postgres/archive/master/%f'
> 
> 
> The WAL archive folder is at /home/postgres/archive/master/, right?
> This directory consumes around 750GB of Disk-1.
> Each segment in the /home/postgres/archive/master/ is 16MB each
> There are currently 47443 files in this folder.
> 
> If I want to limit the total size use by WAL archive to around 200-400
> GB, what value should I set for the wal_keep_segments,
> checkpoint_segments?

PostgreSQL doesn't clean up files copied by your archive_command. You need to 
have a separate task clean those out. Settings like wal_keep_segments and 
checkpoint_segments only control the active WAL in the data/pg_xlog directory, 
not the archive.
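A hedged sketch of such a cleanup task, using the archive path from the post (the 7-day retention is an arbitrary assumption — keep at least as far back as your oldest base backup requires; the contrib tool pg_archivecleanup is an alternative if you know the oldest segment still needed):

```shell
# Cron job: prune archived WAL segments older than 7 days.
find /home/postgres/archive/master -type f -mtime +7 -delete
```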




Re: [GENERAL] Does PostgreSQL support to write glusterfs by Libgfapi

2015-12-16 Thread Alan Hodgson
On Tuesday, December 15, 2015 11:26:40 PM zh1029 wrote:
> Hi,
>   It seems low performance PostgreSQL(9.3.6) while writing data to glusterFS
> distributed file system. libgfapi is provide since GlusterFS version 3.4 to
> avoid kernel visits/data copy which can improve its performance. But I
> didn't find out any instruction from the PostgreSQL web page. Do you know
> if PostgreSQL later release supports using libgfapi to optimized write data
> to ClusterFS file system.
> 

Putting a database on Gluster is a horrible idea in any case. But no, 
PostgreSQL does not have Gluster-specific support.

I'm actually trying to think of a use case where it would make sense; I 
suppose a large data warehouse could theoretically see some sequential read 
improvements from a scale-out cluster file system. But you could only have one 
client node accessing it.



Re: [GENERAL] Problem after replication switchover

2016-04-05 Thread Alan Hodgson
On Tuesday, April 05, 2016 12:55:04 PM Lars Arvidson wrote:
> Is there something I missed in the switchover or could this be a bug?
> 

I'd guess it's probably more like option 3 - Glusterfs ate my database.





Re: [GENERAL] Problem after replication switchover

2016-04-06 Thread Alan Hodgson
On Wednesday, April 06, 2016 10:33:16 AM Lars Arvidson wrote:
> > I'd guess it's probably more like option 3 - Glusterfs ate my database.
> 
> Hi, thanks for your reply!
> We do archive logs on a distributed Glusterfs volume in case the streaming
> replication gets too far behind and the transaction logs have been removed.
> Would a restore of a corrupt archived log file give the symptoms we are
> seeing? Would not Postgresql detect that the logfile was corrupt? Are there
> some way I can analyze archived logs files to see if this is the problem?
> 

If it's just storing the logs, I doubt it's the cause of the problem. You can 
ignore my message. I had too much fun fighting with Gluster recently.

I reread your original full post, and the one thing that stuck out for me was 
"the clusters are now replicating from each other". I feel like that could be 
a problem, but someone more intimate with the replication code might want to 
weigh in on that.

Other than that, I wonder if you just have a hardware problem with your 
storage.




Re: [GENERAL] Any difference between commit/rollback when only temp tables and \copy are used

2016-05-16 Thread Alan Hodgson
On Monday, May 16, 2016 03:41:23 PM David G. Johnston wrote:
> I have a psql script that obtains data via the \copy command and loads it
> into a temporary table.  Additional work is performed possibly generating
> additional temporary tables but never any "real" tables.  Then the script
> outputs, either to stdout or via \copy, the results.
> 
> Does it matter whether I issue a ROLLBACK or a COMMIT at the end of the
> transaction?  More basically: does it matter whether I issue a BEGIN?
> 
> The script runs on Ubuntu inside a bash shell's heredoc.
> 

Some things will complete faster if you use BEGIN to start, as PostgreSQL will 
otherwise issue an implicit BEGIN and COMMIT before and after every statement.

If you don't need anything saved at the end it probably doesn't matter if you 
use ROLLBACK or COMMIT.
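A sketch of the pattern in question (file and table names are hypothetical):

```sql
-- load.sql, run as: psql -f load.sql  (or inside a bash heredoc)
BEGIN;                    -- one transaction instead of an implicit one per statement
CREATE TEMP TABLE staging (id int, val text);
\copy staging FROM 'input.csv' WITH CSV
-- ... further work, possibly generating more temp tables ...
\copy (SELECT * FROM staging ORDER BY id) TO 'output.csv' WITH CSV
ROLLBACK;                 -- nothing permanent was created, so COMMIT would do equally well
```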






Re: [GENERAL] Migrate 2 DB's - v8.3

2016-05-27 Thread Alan Hodgson
On Friday, May 27, 2016 05:32:08 PM Melvin Davidson wrote:
> Well, Slony certainly will do the trick.
> Keep in mind you will need to do schema only first to the slave.
> You set up replication from the old server with the db on the new server as
> the slave. Then you initiate replication. It will probably take a long time
> to
> replicate, but then you have the option to promote the slave at your time
> preference (IE: your 2 hr window). It should only take a few minutes for
> Slony to do the switchover, but the best thing to do is a dry run first.
> IOW, you'll have to do the whole thing twice to get an accurate switch time,
> but you won't need to change your network until you are ready to go live.

Slony doesn't do BLOBs, afaik, unless he's using BYTEA fields.

Otherwise I believe dump/reload is the OP's only choice. He should be able to do 
90GB in 2 hours on fast enough hardware; just pipe the dump over the network so 
the restore runs simultaneously with the dump.

Also remove as many indexes as possible beforehand and use CREATE INDEX 
CONCURRENTLY afterwards to add them back in.
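A sketch of the piped dump/restore (host, database, index, and table names are assumptions):

```shell
# Restore runs simultaneously with the dump; nothing is written to disk in between.
pg_dump -h oldhost olddb | psql -h newhost newdb

# Afterwards, add the removed indexes back without taking long exclusive locks:
psql -h newhost newdb -c 'CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);'
```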




Re: [GENERAL] DBI/DBD::Pg and transactions

2007-07-18 Thread Alan Hodgson
On Wednesday 18 July 2007 14:29, "Roderick A. Anderson" <[EMAIL PROTECTED]> 
wrote:
> In the mean time if the script gets triggered again and the first
> instance isn't finished the second needs to not be able to select those
> records already being handled.

select for update won't do that. It will sit waiting for locks on the same 
rows the first process is handling.

-- 
"Remember when computers were frustrating because they did exactly what
you told them to?  That actually seems sort of quaint now." --J.D. Baldwin


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] EXPLAIN ANALYZE not considering primary and unique indices!

2007-08-27 Thread Alan Hodgson
On Monday 27 August 2007 05:21, Sanjay <[EMAIL PROTECTED]> wrote:
>Wondering why it is not using the index, which would have
> been
> automatically created for the primary key.

Because you have only one row in the whole table, 100% of the rows will 
match the query. In short, one page fetch for a seq scan is faster than 
first looking it up in an index and then fetching the same page.

You can see the index get used by disabling seq scans for a test:

set enable_seqscan=false;


-- 
"Government big enough to supply everything you need is big enough to take
everything you have ... the course of history shows that as a government
grows, liberty decreases." -- Thomas Jefferson




Re: [GENERAL] Inherited FK Indexing

2007-09-14 Thread Alan Hodgson
On Friday 14 September 2007, Ketema Harris <[EMAIL PROTECTED]> wrote:
> as expected I can do select * from states and get everything out of
> the child table as well.  What I can't do is create a FK to the
> states table and have it look in the child table as well.  Is this on
> purpose?  Is it possible to have  FK that spans into child tables?

No.


-- 
The only difference between conservatives and liberals regarding budget cuts
is tense. Conservatives say they will cut the budget, and then they increase
it. After the budget has increased, liberals say that it has been cut.




Re: [GENERAL] Problem dropping table

2007-09-18 Thread Alan Hodgson
On Tuesday 18 September 2007 10:30, Ken Logan <[EMAIL PROTECTED]> wrote:
> When we try to drop the table we get the error:
> ERROR:  "member_pkey" is an index

You have to remove the table from its Slony set before you can drop it. 
Slony does some hackish things to subscriber tables that make them 
unusable for normal DDL operations.

Your master probably isn't too thrilled either, since it likely still 
thinks the table is in a set.

In any case, you'd probably have better luck with this on the Slony 
list. I'm not at all sure your problem can be fixed without discarding 
the slave, but someone there might know.

-- 
"Corruptissima republica, plurimae leges" (The more corrupt the state, 
the more laws.) - Tacitus




Re: [GENERAL] more problems with count(*) on large table

2007-10-01 Thread Alan Hodgson
On Monday 01 October 2007, Mike Charnoky <[EMAIL PROTECTED]> wrote:
> This is strange... count(*) operations over a period of one day's worth
> of data now take ~1-2 minutes to run or ~40 minutes.  It seems that the
> first time the data is queried it takes about 40 minutes.  If I try the
> query again, it finishes in 1-2 minutes!
>
> Again, nothing else is happening on this db server except for a constant
> insertion into this table and a few others.  I have done "set statistics
> 100" for the evtime field in this table.

The first time, you're reading from disk. The second time, you're reading 
from cache. Tens of millions of disk seeks don't come cheap.

-- 
We're Microsoft. Everything else is just patent infringement.




Re: [GENERAL] Upgrading PG

2007-10-01 Thread Alan Hodgson
On Monday 01 October 2007, "Gauthier, Dave" <[EMAIL PROTECTED]> wrote:
> I'm going to move from v7.4.13 to v8.2.0 (suse-64). I have 2
> pre-existing DBs.  Do I need to "convert" or port them to v8 in any way
> after I start up with a v8 postmaster?
>

All major version upgrades require a dump and reload. You should read the 
release notes for all intermediate versions.

You may also run into some bad-unicode data issues moving from 7.4 to 8.x. I 
would do a test run on a separate machine before committing to a conversion 
date (always a good idea IMO).

-- 
Ghawar is dying




Re: RES: [GENERAL] 8.2.4 selects make applications wait indefinitely

2007-10-11 Thread Alan Hodgson
On Thursday 11 October 2007, "Carlos H. Reimer" 
<[EMAIL PROTECTED]> wrote:
> Don't know but apparently the problem is not an issue in the client, as
> I'm able to connect and do the select * in other 8.2.4 servers.
>
> Don't know what kind of tests I should do to help fixing this problem.
>
> Any suggestions?

It kind of sounds like a network problem, perhaps a duplex setting or driver 
issue on the server, or some sort of firewall problem.

-- 
The global consumer economy can best be described as the most efficient way 
to convert natural resources into waste.




Re: [GENERAL] DB errors

2007-10-12 Thread Alan Hodgson
On Friday 12 October 2007, "Akash Garg" <[EMAIL PROTECTED]> wrote:
> We had a crash of one of our db systems last night.  After doing a fsck
> of he file system and getting the db backup, we're getting a lot of these
> messages in our logs.  The DB will also occasionally crash now.
>
> Oct 12 07:40:16 postgres[30770]: [3-1] 2007-10-12 07:40:16 PDT ERROR: 
> could not access status of transaction 2259991368
> Oct 12 07:40:16 postgres[30770]: [3-2] 2007-10-12 07:40:16 PDT DETAIL: 
> Could not read from file "pg_clog/086B" at offset 73728: Success.
>
> Any ideas?
>

Restore from backup. And find out why your server doesn't respect fsync.

-- 
The global consumer economy can best be described as the most efficient way 
to convert natural resources into waste.




Re: [GENERAL] Importing MySQL dump into PostgreSQL 8.2

2007-10-15 Thread Alan Hodgson
On Friday 12 October 2007, wido <[EMAIL PROTECTED]> wrote:
>
> hi! but what happens when somebody sent you a dump file and you can't
> convert the tables? all i have is a 116MB sql file, and i won't
> convert it by hand :P

Restore it into MySQL and then extract it in whatever form you like. Free 
Software is a wonderful thing.

-- 
Ghawar is dying




Re: [GENERAL] Resetting SEQUENCEs

2007-10-18 Thread Alan Hodgson
On Thursday 18 October 2007, Laurent ROCHE <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am quite surprised I could not find a way to automatically reset the
> value of a sequence for all my tables.
>
> Of course, I can write:
> SELECT setval('serial', max(id)) FROM distributors;
> But if I reload data into all my tables, it's a real pain to have to write
> something like this for every single table with a sequence.
>
> I would expect PostgreSQL to provide some command like:
> resynchAllSequences my_schema;
>
> Can this be a feature in the future ?
>

Restoring backups normally will set sequences to the correct values; you're 
doing something wrong if yours are not.
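That said, if sequences do need resynchronizing in bulk, one hedged approach is to generate the setval calls from the system catalogs rather than writing them by hand. This is a sketch: it only finds sequences owned by their columns, the schema filter is an assumption, and unusual relation names may need extra quoting.

```sql
-- Emit one "SELECT setval(...)" statement per serial column;
-- feed the output back into psql to execute it.
SELECT 'SELECT setval(' ||
       quote_literal(pg_get_serial_sequence(n.nspname || '.' || c.relname, a.attname)) ||
       ', COALESCE(MAX(' || quote_ident(a.attname) || '), 1)) FROM ' ||
       quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE c.relkind = 'r'
  AND n.nspname = 'public'          -- assumption: adjust schema as needed
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND pg_get_serial_sequence(n.nspname || '.' || c.relname, a.attname) IS NOT NULL;
```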






Re: [GENERAL] relations does not exist

2007-10-18 Thread Alan Hodgson
On Tuesday 16 October 2007, ctorres <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm doing a simple insert into a table via Perl/DBI
> "INSERT INTO party (party_id, party_type_id, description, status_id)
>VALUES ($partyId, 'PERSON', 'Initial 
> Import','PARTY_ENABLED')
>
> and I'm getting a
> "ERROR: relations "party" does not exist"
> I get the same error message in pgadmin.
>
> The table "party" certainly exists.

Are you really connecting to the database you think you are?

-- 
Peak Oil is now accepted as inevitable, and the debate only becomes as 
to when - James Schlesinger, former US Secretary of Energy 




Re: [GENERAL] SQL spec/implementation question: UPDATE

2007-10-22 Thread Alan Hodgson
On Sunday 21 October 2007, Kevin Hunter <[EMAIL PROTECTED]> wrote:
> Heh.  And as Tom points out downthread, that "shortcut" probably doesn't
> gain anything in the long run.

Considering how expensive updates are in PostgreSQL, I suspect that isn't 
true.

However, the current behaviour does seem to be logical; we did in fact ask 
for the row to be updated ...

-- 
Ghawar is dying




Re: [GENERAL] "Resurrected" data files - problem?

2007-11-08 Thread Alan Hodgson
On Thursday 08 November 2007, "Albe Laurenz" <[EMAIL PROTECTED]> 
wrote:
> Can you give me a good reason why?
>
> > Try re-reading the instructions on backup in the manual.
>
> I know them well. That is why I ask if this questionable procedure
> could lead to damage.

You cannot backup a live database with a filesystem backup and expect it to 
work afterwards, unless you take all the steps required to produce a PITR 
base backup.

It's not even "questionable". It's a fundamental misunderstanding of what is 
required to backup a database.
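For the record, a sketch of the minimal PITR-safe filesystem backup of that era (paths are assumptions; archive_command must already be configured):

```shell
# Tell the server a base backup is starting (forces a checkpoint):
psql -c "SELECT pg_start_backup('nightly');"

# Copy the data directory; pg_xlog contents can be excluded:
rsync -a /var/lib/pgsql/data/ /backup/pgdata/

# End the backup; the WAL archived between start and stop makes it consistent:
psql -c "SELECT pg_stop_backup();"
```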

-- 
Peak Oil is now accepted as inevitable, and the debate only becomes as 
to when - James Schlesinger, former US Secretary of Energy 




Re: [GENERAL] How to automate password requests?

2007-11-28 Thread Alan Hodgson
On Wednesday 28 November 2007, Marten Lehmann <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm trying to automate some postgresql scripts, but I cannot find a way
> to pass a password directly to commands like pg_dump psql and so on.
> Even a
>
> echo "password" | psql
>
> doesn't work, the password prompt of psql is still waiting.
>
> mysql has the -p option. What would be the postgresql equivalent? I
> don't want to enter passwords dozend times.
>

create a .pgpass file.
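The format, as a sketch (values are placeholders; the file must be mode 0600 or libpq ignores it):

```
# ~/.pgpass -- one line per entry: hostname:port:database:username:password
localhost:5432:mydb:myuser:s3cret
# * is a wildcard in any of the first four fields:
*:*:*:backup_user:anotherpassword
```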

-- 
The global consumer economy can best be described as the most efficient way  
to convert natural resources into waste.




Re: [GENERAL] Disk configurations....

2008-01-24 Thread Alan Hodgson
On Thursday 24 January 2008, Nathan Wilhelmi <[EMAIL PROTECTED]> wrote:
> Hello - Typically case of a software guy needing to spec hardware for a
> new DB server. Further typified case of not knowing exact amount of data
> and I/O patterns. So if you were to spec a disk system for a new general
> purpose PostgreSQL server any suggestions of what to start with?
>
> Details I have:
>
> 1) We think 500GB is enough storage for the DB.
> 2) I would guess 66% read 33% write patterns, maybe closer to 50/50.
>
> I would assume that we would want to go SCSI raid. Would you do the
> whole disk array as Raid 5 or whole you partition it up differently?
> Would you go with 3x300gb disks or would you use more smaller disks to
> get there?

RAID-5 is probably about the worst choice for a high-write activity 
database. Certainly pg_xlog should not be on a RAID-5 array. Also, RAID-5 is 
worse on smaller arrays than on larger arrays.

SCSI (or SAS) is probably only worthwhile if you need 15K RPM drives.

More disks are usually better than fewer expensive disks.

Personally, I would get 8 new SATA-II drives, put them on a good SAS 
controller with a battery-protected write-back cache, and set them up in 
RAID-10. If you later find out you need more IOPs you can replace them with 
15K RPM SAS drives without throwing out the rest of the server.

Without any idea as to your transaction volume, though, this is a very 
general recommendation.

--
Alan




Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?

2010-10-26 Thread Alan Hodgson
On October 26, 2010 10:18:41 am Ozz Nixon wrote:
> I have hung off indexes for each column, to resolve my previous
> "performance" issue from 3+ weeks ago. However, COUNT() is still dog slow
> - this table is a write once, read many... *never* update, nor delete.
> 
> Any suggestions?

If you need to do count(*) on 60 million row tables, you will probably need 
faster hardware.
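When an exact figure isn't required, the planner's estimate is essentially free, which can be a hedged alternative to a full count:

```sql
-- Approximate row count, as maintained by VACUUM and ANALYZE:
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';   -- hypothetical table name
```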



Re: [GENERAL] 9.0 replication -- multiple hot_standby servers

2010-10-29 Thread Alan Hodgson
On October 29, 2010, "Dean Gibson (DB Administrator)" 
 wrote:
> My question is, how do I configure the other three (still) hot_standby
> boxes to now use the new primary?  Clearly I can change the
> "recovery.conf" file on each standby box, but that seems like an
> unnecessary nuisance.

I'm curious about this too. It seems that currently I'd have to rebuild any 
additional slaves basically from scratch to use the new master.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.



Re: [GENERAL] 9.0 replication -- multiple hot_standby servers

2010-10-29 Thread Alan Hodgson
On October 29, 2010, "Dean Gibson (DB Administrator)" 
 wrote:
> On 2010-10-29 11:17, Alan Hodgson wrote:
> > I'm curious about this too. It seems that currently I'd have to
> > rebuild any additional slaves basically from scratch to use the new
> > master.
> 
> I think so long as you "pointed" (via primary_conninfo) the additional
> slaves to the new (pending) master, before you "touch"ed the pending
> master's trigger file, you should be OK, as all the DBs should be in
> sync at that point.

Yeah they're in sync data-wise, but do they think they're the same WAL 
stream for continuity? Would be nice.


-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.



Re: [GENERAL] Restore problem

2010-12-28 Thread Alan Hodgson
On December 28, 2010, Adrian Klaver  wrote:
> On 12/28/2010 07:40 PM, Bob Pawley wrote:
> >> Open the file in Wordpad and see if it looks better.
> >> 
> >> I downloaded an sql editor and it looks the same in it as well.
> >> 
> >> At least the editor will make it easier to fix the problem. However I
> >> would like to know what happened so I can avoid it in the future.
> > 

It's often a good idea to maintain function definitions outside the database, 
under version control, and apply them to the database from there.

Also, try a unix2dos utility on the text of the functions before giving up 
and hand editing them.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.



Re: [GENERAL] Restore problem

2010-12-28 Thread Alan Hodgson
On December 28, 2010, "Bob Pawley"  wrote:
> It's often a good idea to maintain function definitions outside the
> database,
> under version control, and apply them to the database from there.
> 
> I would appreciate a more detailed explanation of this.

Treat them like source code.

> 
> Bob
> 
> Also, try a unix2dos utility on the text of the functions before giving
> up and hand editing them.
> 
> I'll look at that - I'm also looking at something called Vim
> http://www.vim.org/download.php

vim is an excellent open source text editor. Which may fix your problem if 
it's related to line endings.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.



Re: [GENERAL] Backup and restore sequences

2011-01-07 Thread Alan Hodgson
On January 7, 2011, gvim  wrote:
> PostgreSQL 9.0.1/pgAdminIII 1.12.1
> 
> I want to copy selected tables from one database to another and maintain
> the sequences which I originally setup with:
> 
> CREATE SEQUENCE venues_id_seq START WITH 1122;
> ALTER TABLE venues ALTER COLUMN id SET DEFAULT nextval('venues_id_seq');
> 
> ... along with their current values, which have been augmented since the
> database was setup. When I backup via pgAdminIII the sequences are not
> even included. I also can't find anything in:
> 
> man pg_dump
> 
> ... which specifies sequences.
> 
> gvim

--table=table
       Dump only tables (or views or sequences) matching table. Multiple
       tables can be selected by writing multiple -t switches. Also, the
       table parameter is interpreted as a pattern according to the same
       rules used by psql's \d commands (see Patterns [psql(1)]), so
       multiple tables can also be selected by writing wildcard characters
       in the pattern. When using wildcards, be careful to quote the
       pattern if needed to prevent the shell from expanding the wildcards.
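Since the sequence here was created separately (not as a serial column, so pg_dump won't pull it in with the table automatically), naming both objects works. A sketch, with database names as assumptions:

```shell
# Dump the table and its manually created sequence together:
pg_dump -t venues -t venues_id_seq sourcedb | psql targetdb
```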




-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.


Re: [GENERAL] HA solution

2011-01-14 Thread Alan Hodgson
On January 14, 2011, "Jaiswal Dhaval Sudhirkumar"  
wrote:
> Hi,
> 
> I am looking for active-active clustering solution.
> 
> I have one SAN box and two separate NODES, where I need to create
> active-active cluster. My data directory would be one and mounted to the
> SAN box for both the nodes. (There will be one sharable data directory
> for both the nodes) So the query which will come to the load balancer
> (pgpool) it will route to the node which has a less load. However, it
> will use the same data directory. It is nothing but the RAC kind of
> structure.  Now, my question is.
> 
> 1)Is it possible above implementation in PostgreSQL?
> 2)Has someone implemented cluster in their production environment?
> 
> Please experts share your thought/comments/experience how I shall achieve
> that.
> 

You cannot run multiple PostgreSQL instances against the same data 
directory.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.


Re: [GENERAL] upgrade

2011-02-03 Thread Alan Hodgson
On February 2, 2011, "William Bruton"  wrote:
> How do I know which version to upgrade to from 8.1.4?
> 

Well, 8.1 is no longer supported, it seems. So an upgrade to any supported 
version will likely require application changes, or at least thorough 
testing. You might as well go right to 9.0.3 to get all the latest features.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.


Re: [GENERAL] bytea Issue - Reg

2014-06-06 Thread Alan Hodgson
On Wednesday, June 04, 2014 10:49:18 PM sramay wrote:
> relation "public.file_attachments" does not exist

.. is almost certainly not a size problem. What does your PostgreSQL log say? 
I suspect your app is connecting to the wrong database.




Re: [GENERAL] pg_standby replication problem

2014-06-09 Thread Alan Hodgson
On Monday, June 09, 2014 04:28:53 PM Khangelani Gama wrote:
> Please help me with this, my secondary server shows a replication problem.
> It stopped at the file called *00054BAF00AF …*then from here
> primary server kept on sending walfiles, until the walfiles used up the
> disc space in the data directory. How do I fix this problem. It’s postgres
> 9.1.2.
> 

It looks to me like your archive_command is probably failing on the primary 
server. If that fails, the logs will build up and fill up your disk as 
described. And they wouldn't be available to the slave to find.





Re: [GENERAL] pg_standby replication problem

2014-06-09 Thread Alan Hodgson
On Monday, June 09, 2014 08:05:41 PM Khangelani Gama wrote:
> Hi All
> 
> I would like to re-post the problem we have. The secondary server ran out
> the disc space due the replication problem (Connection Time out).

The secondary server would not (could not) run out of drive space due to a 
problem on the primary. You probably need to figure out why that server is out 
of drive space and fix it, and then I expect your replication problem will fix 
itself. If you do not have a process cleaning up old archived WAL files, and 
those are stored on the secondary, that could be the source of your problem.

If you also have a separate networking issue (for the connection timeout), 
then you might need to fix that, too.






Re: [GENERAL] How to store fixed size images?

2014-06-19 Thread Alan Hodgson
On Thursday, June 19, 2014 10:21:56 AM Andy Colson wrote:
> On 06/16/2014 08:10 PM, sunpeng wrote:
> > We have many small size(most fixed size) images, how to store them? There
> > are two options: 1. Store images in folders, managed by os file system,
> > only store path in postgresql 2. Store image as bytea in postgresql
> > How do you usually store images?
> > Thanks!
> > 
> > peng
> 
> I think it depends on how you are going to use them.  I, for example, have
> lots of images that are served on a web page, after benchmarks I found it
> was faster to store them on filesystem and let apache serve them directly.
> 
> -Andy

That will always be the (much) faster option. There's basically no CPU 
overhead; the web server can tell the kernel to copy the image from the 
filesystem cache directly into a network buffer, and off it goes. Even Apache 
can serve at line speed like that.

It's a lot easier to manage the images if they're in the database, though, 
especially if you run off multiple web servers. If CPU overhead is actually an 
issue, you can eliminate most of the speed hit by sticking a caching proxy 
server like Varnish in front of your site, or by offloading the image serving 
to a pass-through CDN. Just make sure images get a new URL path if they change 
content.
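
For the in-database route, one hedged sketch of a content-addressed layout 
(table and column names are illustrative, not from the thread) that makes the 
"new URL path on change" rule automatic, because the hash is the path:

```sql
CREATE TABLE images (
    id      bigserial PRIMARY KEY,
    sha256  text NOT NULL UNIQUE,   -- content hash, doubles as the URL path
    data    bytea NOT NULL
);
-- The proxy/CDN caches /img/<sha256>; changed content gets a new hash and URL.
```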




Re: [GENERAL] Upgrade to 9.3

2014-07-21 Thread Alan Hodgson
On Monday, July 21, 2014 09:05:18 PM Karthik Iyer wrote:
> Hello,
> 
>  We are planning to upgrade Postgres from 9.0 to 9.3. Was wondering
> if there are any serious changes that I have to look out for
> (syntax/datatypes changes) so that my code does not break.
> 

http://www.postgresql.org/docs/9.1/static/release-9-1.html
http://www.postgresql.org/docs/9.3/static/release-9-2.html
http://www.postgresql.org/docs/9.3/static/release-9-3.html





Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-28 Thread Alan Hodgson
On Friday, August 29, 2014 04:14:35 AM Yogesh. Sharma wrote:
> Dear David,
> 
> > Are you currently using PostgreSQL?
> 
> Currently we are using PostgreSQL 8.1.18 version on RHEL 5.8.
> Now we plan to update this to PostgreSQL 9.0 version with  RHEL6.5. As in
> verion 9.0 I found least Compatibilities.
> 

Any of the currently maintained PostgreSQL versions will run fine on RHEL 6.5 - 
that would be the latest release of any version from 9.0 up. 

Only you can test and find out if your application(s) will need changes to work 
with those versions.





Re: [GENERAL] large table

2014-09-22 Thread Alan Hodgson
On Monday, September 22, 2014 11:17:05 AM Luke Coldiron wrote:
> The actual size of the table is around 33 MB.
> The myFunc function is called every 2.5 seconds and the wasUpdated function
> every 2 seconds by separate processes. I realize that running a FULL VACUUM
> or CLUSTER command on the table will resolve the issue but I am more
> interested in a root cause that explains why this table would end up in
> this state. I have tried to reproduce this issue by running the exact setup
> and have not been able to get the table to grow like this example. Any
> plausible cause'es or explanations would be much appreciated. Luke

I'd guess that some other process held a transaction open for a couple of 
weeks, and that prevented any vacuuming from taking place.
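
A quick way to check for that, sketched against the 9.2+ pg_stat_activity 
columns (older releases use procpid and current_query instead of pid and 
query):

```sql
-- Transactions that have been open the longest; old ones pin dead rows
SELECT pid, state, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;
```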




Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Alan Hodgson
On Tuesday, September 23, 2014 02:05:48 PM Nick Guenther wrote:
> I uninstalled all the postgres subpackages and rebuilt them from
> ports, and ended up with an identical plpython2.so, which has these
> checksums:
> SHA256 (/usr/local/lib/postgresql/plpython2.so) =
> 8c7ff6358d9bf0db342e3aca1762cd7c509075a6803b240291d60a21ca38198b
> MD5 (/usr/local/lib/postgresql/plpython2.so) =
> bb6122f27f48f0b3672dbc79cef40eea
> SHA1 (/usr/local/lib/postgresql/plpython2.so) =
> 4dd79641cbad3f71466648559d74e6b0c4f174a3
> 
> Any other OpenBSD users that have run into this, here?

I'm not a current BSD user, but I'd suggest running ldd on that library to 
check whether it references any shared libraries it can't find, whether any of 
them live outside whatever OpenBSD uses for a library search path (i.e. the 
equivalent of Linux's ld.so.conf), or whether any of those shared libraries 
have permissions issues.





Re: [GENERAL] Processor usage/tuning question

2014-10-03 Thread Alan Hodgson
On Friday, October 03, 2014 11:24:31 AM Israel Brewster wrote:
> I have a Postgresql 9.3.5 server running on CentOS 6.5. In looking at some
> stats today, I saw that it was handling about 4-5 transactions/second
> (according to the SELECT sum(xact_commit+xact_rollback) FROM
> pg_stat_database; query), and an instance of the postmaster process was
> consistently showing 40%-80% utilization to handle this. I didn't think
> anything of that (the machine has plenty of capacity) until I mentioned it
> to a friend of mine, who said that utilization level seemed high for that
> many transactions. So if that level of utilization IS high, what might I
> need to tune to bring it down to a more reasonable level?
> 

You probably have some read queries not properly indexed that are sequentially 
scanning that 1.2 million row table over and over again. Enable slow query 
logging and see what's going on.
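
A minimal slow-query-logging setup, assuming access to postgresql.conf (the 
threshold is an arbitrary example):

```
# postgresql.conf; takes effect on reload (pg_ctl reload)
log_min_duration_statement = 250   # log any statement slower than 250 ms
```

Statements over the threshold then show up in the server log with their 
runtime, which makes repeated unindexed sequential scans easy to spot.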





Re: [GENERAL] Storing Video's or vedio file in DB.

2014-12-17 Thread Alan Hodgson
On Wednesday, December 17, 2014 01:55:46 PM Thomas Kellerer wrote:
> Albe Laurenz schrieb am 17.12.2014 um 11:07:
> > and the performance will be worse than reading files from the file system.
> 
> There is a Microsoft research [1] (from 2006) which tested this "myth" using
> SQL Server. It showed that the database might actually be faster than the
> file system.
> 
> As this topic comes up at my workplace every now and then as well, I created
> a little web application (Java/JDBC) to test this on Postgres and possibly
> other DBMS.
> 
> Turns out the Postgres as well isn't really slower at this than the file
> system.
> 
> For small files around 50k both perform similar: the average time to read
> the blob from a bytea column was around 2ms whereas the average time to
> read the blob from the filesystem was around 1ms. The test uses 50 threads
> to read the blobs using the PK of the table.
> 
> "Reading from the filesystem" means looking up the path for the file in the
> database table and then reading the file from the filesystem.
> 
> For larger files around 250k Postgres was actually faster in my tests: 130ms
> reading the bytea column vs. 260ms reading the file from disk.
> 
> The tests were done locally on my Windows laptop.
> I didn't have time yet to do this on a Linux server. I expect the filesystem
> to have some impact on the figures and NTFS is not known for being
> blazingly fast. So maybe those figures will change.
> 
> 
> My tests however do not take into account the actual time it takes to send
> the binary data from the server to the client (=browser). It might well be
> possible that serving the file through an Apache Web Server directly is
> faster than serving the file through a JEE Servlet. My intention was to
> measure the raw read speed of the binary data from the medium where it is
> stored.

You can get the data from disk about as fast, but actually serving it from the 
database incurs a large CPU hit that isn't present when serving plain files.

And if you're using bytea, your app server has to allocate memory to hold at 
least one full copy of the file (I seem to recall that it works out to 2 
copies, actually, but it's been a while since I tried it). Most languages 
aren't good about releasing that memory, so that hit stays around until the 
process gets recycled.

For a low volume app, both might be acceptable - any modern CPU can swamp most 
outbound bandwidth even while decoding bytea. But it is a large amount of 
overhead compared to a web server just dumping files into a network buffer 
straight from disk cache.

Also, maintaining large tables still sucks. You can partition them to make 
things friendlier. pg_upgrade makes things nicer, but it can't always be used, 
so major version upgrades can still be a problem. 

On the plus side, all your data is in one place, which makes it cluster-
friendly and easy to delete files when needed, and makes taking consistent 
backups much simpler.




Re: [GENERAL] pg_Restore

2013-01-21 Thread Alan Hodgson
On Monday, January 21, 2013 08:40:05 PM bhanu udaya wrote:
> Hello,Thanks alot for all your replies. I tried all settings suggested, it
> did not work. pg_restore is very slow. It does not come out less than 1 1/2
> hour. Can you please let me know the procedure for Template. Will it
> restore the data also . Please update. I need the database (metadata +
> data) to be restored back after certain Java executions.  Ex:- I have 9.5
> gb database (with 500 schemas + data). This is treated as base database and
> it needs to be restored every time after certain transactions. 

Don't use pg_restore; do the backups/restorations outside of PostgreSQL:

- run on a server with a snapshot-capable volume manager, use that for quick 
restores 
- just try using rsync from a backup copy of the base data directory 

(either of the above require PostgreSQL to not be running during the 
restorations)





Re: [GENERAL] Trouble with Postgresql RPM

2013-01-23 Thread Alan Hodgson
On Wednesday, January 23, 2013 09:10:40 AM Ian Harding wrote:
> The System:
> 
> Linux beta 2.6.32-279.19.1.el6.x86_64 #1 SMP Wed Dec 19 07:05:20 UTC 2012
> x86_64 x86_64 x86_64 GNU/Linux
> 

That looks like a CentOS 6 system.

Go to http://yum.postgresql.org/repopackages.php

Find the repo appropriate for your distribution. Download the repo rpm, 
install that, which will add the repository to your yum configuration.

Install PostgreSQL from there.





Re: [GENERAL] Postgres DB crashing

2013-06-20 Thread Alan Hodgson
On Thursday, June 20, 2013 07:52:21 AM Merlin Moncure wrote:
> OP needs to explore use of connection pooler, in particular pgbouncer.
>  Anyways none of this explains why the server is actually crashing.

It might be hitting file descriptor limits. I didn't dig into the earlier part 
of this thread much, though.





Re: [GENERAL] "Database does not exist" weirdness

2017-02-17 Thread Alan Hodgson
On Friday 17 February 2017 18:18:20 Michael Tyson wrote:
> postgres=# \q
> pi@raspi ~ $ sudo -u postgres psql testdb
> psql: FATAL:  database "testdb" does not exist
> pi@raspi ~ $ sudo -u postgres createdb testdb
> createdb: database creation failed: ERROR:  duplicate key value violates
> unique constraint "pg_database_datname_index" DETAIL:  Key
> (datname)=(testdb) already exists.
> 

Something's seriously borked in your PostgreSQL data files. The message it 
should give you if the database exists is:

createdb: database creation failed: ERROR:  database "testdb" already exists

It looks like it partially exists in the catalog but not really. I can't guess 
how you got to such a state, but you should probably nuke your data directory 
and start over with a fresh initdb.




Re: [GENERAL] Recover PostgreSQL database folder data

2017-04-21 Thread Alan Hodgson
> On Fri, Apr 21, 2017 at 12:40 PM, Edson Lidorio 
> 
> wrote:
> > Hi,
> > There was a disaster in my development note. I was able to recover the
> > data folder. PostgreSQL 9.6.2, was installed in Centos 7.
> > 
> > Here are the procedures I'm trying to initialize Postgresql for me to do a
> > backup.
> > 
> > 1- I installed PostgreSQL 9.6.2 on a VM with Centos 7.
> > 2- I stopped the PostgreSQL service: sudo systemctl stop postgresql-9.6
> > 3- I renamed the /var/lib/pgsql/9.6/data folder to date data_old and
> > copied the old date folder
> > 
> > 4- I gave permission in the folder date:
> >  sudo chown postgres: postgres /var/lib/pgsql/9.6/data;

Ensure you actually have a folder named "/var/lib/pgsql/9.6/data" with your 
database in it (not, possibly, a folder named "date" per your original 
message).

as root:

chown -R postgres:postgres /var/lib/pgsql/9.6/data
restorecon -R /var/lib/pgsql/9.6/data

If it still won't start after that, please do an "ls -alRZ 
/var/lib/pgsql/9.6", pastebin the output, and send the link.




Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-04 Thread Alan Hodgson
On Thursday 04 May 2017 14:21:00 John R Pierce wrote:
> or EBS, and I've heard from more than a few people that EBS can be
> something of a sand trap.
> 

Sorry for following up off-topic, but EBS has actually improved considerably 
in the last few years. You can get guaranteed (and very high) IOPS on SSD 
storage, and many instance types come with high-speed throughput to EBS. It's 
much much better for databases than it was 5 years ago.




Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-04 Thread Alan Hodgson
On Thursday 04 May 2017 14:47:54 John R Pierce wrote:
> On 5/4/2017 2:28 PM, Alan Hodgson wrote:
> > On Thursday 04 May 2017 14:21:00 John R Pierce wrote:
> >> or EBS, and I've heard from more than a few people that EBS can be
> >> something of a sand trap.
> > 
> > Sorry for following up off-topic, but EBS has actually improved
> > considerably in the last few years. You can get guaranteed (and very
> > high) IOPS on SSD storage, and many instance types come with high-speed
> > throughput to EBS. It's much much better for databases than it was 5
> > years ago.
> 
> has it become more stable when Amazon has their occasional major hiccups?

No ... when they have a major problem it generally cascades across the region, 
and several outages have hit EBS in particular. In us-east-1, at least. Other 
regions seem more reliable.




Re: [GENERAL] Upgrading postgresql minor version

2017-05-10 Thread Alan Hodgson
On Wednesday 10 May 2017 17:13:50 Ron Ben wrote:
> Not possible 
> https://www.postgresql.org/download/linux/debian/
>  
> To upgrade I do: apt-get install postgresql-9.3
> There is no way to "roll back" from here.
> I can not choose which version to install, it install the latest version
> packed for debian. Currently its 9.3.16 
> The documentation says that upgrading minor version is just "replacing the
> executables", so there has to be a way to save them on the side for roll
> back. If not, the documentation is insufficient. We all know that sometimes
> even the smallest change can cause troubles.
> 

Download the old packages from the repository. You can use dpkg to manually 
downgrade them if needed.  You can also use dpkg to get a list of the files in 
any particular package if you want to save them manually.

Or you should ask your system administrator for help.

This is not a PostgreSQL issue, which is why it (like many other things) is 
not covered in the docs. Every version of Linux uses a different package 
manager, and the system administrator needs to know how to use it.




Re: [GENERAL] installing in another directory

2008-02-08 Thread Alan Hodgson
On Friday 08 February 2008, "Hua-Ying Ling" <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm trying to install postgresql in a custom directory.  when I run rpm
> --prefix I'm getting a package is not relocatable error.  Suggestions on
> how do I get around this?

Build your own package, or install from source.

-- 
Alan

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] WAL Log Size

2008-02-28 Thread Alan Hodgson
On Thursday 28 February 2008, John Evans <[EMAIL PROTECTED]> wrote:
>I've heard that upgrading to 8.2 or 8.3 will allow me to setup a
> timeout value for WAL log creation, but upgrading at this time is not an
> option for various reasons.
>
>Any insight that you can provide will be greatly appreciated!

Write a script that does "something" that results in 16MB of WAL logging and 
run it whenever you want a rotation to occur. Inserting a few hundred 
thousand rows into an otherwise empty table should work. If it's not 
indexed it won't impact your server much, especially if your normal 
transaction volume is that low.
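
On 8.1 that "something" can be a throwaway table populated from cron; roughly 
16 MB of inserts forces a segment switch. A sketch under those assumptions 
(the table and row counts are illustrative, not from the thread):

```sql
-- One-time setup: a scratch table with no indexes
CREATE TABLE wal_filler (junk text);

-- Run whenever a WAL rotation is wanted (e.g. from cron):
TRUNCATE wal_filler;                      -- keeps the table from growing
INSERT INTO wal_filler
SELECT repeat('x', 1000)
FROM generate_series(1, 20000);           -- ~20 MB of rows, more than one 16 MB segment
```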


-- 
Alan



Re: [GENERAL] WAL Log Size

2008-02-28 Thread Alan Hodgson
On Thursday 28 February 2008, Erik Jones <[EMAIL PROTECTED]> wrote:
> Or, even simpler:
>
> SELECT pg_switch_xlog();

The original poster is using 8.1.


-- 
Alan



Re: [GENERAL] Documenting a DB schema

2008-03-04 Thread Alan Hodgson
On Tuesday 04 March 2008, "Shahaf Abileah" <[EMAIL PROTECTED]> wrote:
>
> However, Postgres doesn't support the "comment" keyword.  Is there an
> alternative?

comment on table table_name is 'comment';
comment on column table_name.column_name is 'comment';
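
Comments can be read back with psql's \dd and \d+, or in plain SQL via the 
catalog helper functions (the table name here is illustrative):

```sql
SELECT obj_description('my_table'::regclass, 'pg_class');  -- the table comment
SELECT col_description('my_table'::regclass, 1);           -- comment on column attnum 1
```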


-- 
Alan



Re: [GENERAL] GRANT ALL ON recursive for all tables in my scheme?

2008-03-04 Thread Alan Hodgson
On Tuesday 04 March 2008, "A Lau" <[EMAIL PROTECTED]> wrote:
> I recently searched for a way to grant permissions to a new created user
> for all tables in a scheme or database. I just found ways who uses psql
> and scripts. But I'm astonished that there is no way to do it with the
> "grant all on database [schema]..."-option. Actually i thought that a
> grant on a schema or database would recusivly set the accoding
> permissions to the corresponding objects (eg. tables, views...). Is there
> a way to do it easily in SQL-Syntax without psql and scripting? Why it
> can't be done with the grant-operator?
>

Normally you would have a group role or roles that have appropriate 
permissions already, and then just grant role to new_user for group 
membership.
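
That pattern looks roughly like this (role and table names are illustrative):

```sql
CREATE ROLE report_readers NOLOGIN;          -- the group role
GRANT SELECT ON my_table TO report_readers;  -- done once per object
GRANT report_readers TO new_user;            -- membership confers the rights
```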

-- 
Alan



Re: [GENERAL] using warm standby with drbd???

2008-03-04 Thread Alan Hodgson
On Tuesday 04 March 2008, "libra dba" <[EMAIL PROTECTED]> wrote:
> How else can i replicate the wal_files? ( i don't want to user common
> file system ,,, NFS,,, etc.)?

scp

>
> Another thing which i want to ask is that if we are generating archives
> every 1 minute. then what happens to the data which was written to the
> server A after 35 seconds after the last wal file generation.(server A
> crashes).
>

It's gone.

If that isn't acceptable then I would suggest putting the active pg_xlog 
directory on drbd. That may have speed implications.

-- 
Alan



Re: [GENERAL] searching using indexes 8.3

2008-03-10 Thread Alan Hodgson
On Monday 10 March 2008, LARC/J.L.Shipman/jshipman 
<[EMAIL PROTECTED]> wrote:
> Hi,
>   When I do a search such as:
>
>   EXPLAIN ANALYZE SELECT * FROM itemsbyauthor;
>  QUERY PLAN
> 
> ---
>   Seq Scan on itemsbyauthor  (cost=0.00..971.34 rows=53634 width=34)
> (actual time=0.029..25.831 rows=53634 loops=1)
>   Total runtime: 34.851 ms
> (2 rows)
>
>
> I have an index for the table, but it is not being utilized.  Why is
> this?  How do I get the index to be used?
>

You asked for the whole table. How would an index help?


-- 
Alan



Re: [GENERAL] select any table

2008-03-26 Thread Alan Hodgson
On Tuesday 25 March 2008, "Roberts, Jon" <[EMAIL PROTECTED]> wrote:
> We are adding tables and schemas all of the time and we need to grant
> auditors read-only access to the database.

Make a "grant select on table to auditors;" a standard part of your table 
creation process.
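
As a sketch, with the table name illustrative; note that releases after this 
2008 thread (9.0+) can automate it with default privileges, which apply to 
tables later created by the role that issues the command:

```sql
GRANT SELECT ON newly_created_table TO auditors;

-- 9.0 and later: future tables created by this role in the schema
-- become readable automatically
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO auditors;
```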

--
Alan



Re: [GENERAL] Survey: renaming/removing script binaries (createdb, createuser...)

2008-03-26 Thread Alan Hodgson
On Wednesday 26 March 2008, Zdeněk Kotala <[EMAIL PROTECTED]> wrote:
> 1) What type of names do you prefer?
> ---
>
> a) old notation - createdb, createuser ...
> b) new one with pg_ prefix - pg_createdb, pg_creteuser ...
> c) new one with pg prefix - pgcreatedb, pgcreateuser ...
> d) remove them - psql is the solution
> e) remove them - pgadmin is the solution

c

Actually, I like a) because I'm used to them and they're in a lot of 
scripts, but I can see advantages to making them look pg-specific.

> 2) How often do you use these tools?
> ---
>
> a) every day (e.g. in my cron)
> b) one per week
> c) one time
> d) never

a

> 3) What name of initdb do you prefer?
> -- --
>
> a) initdb
> b) pg_initdb
> c) pg_init
> d) pg_ctl -d  init  (replace initdb with pg_ctl new functionality)
> e) What is initdb? My start/stop script does it automatically.

b

See notes on #1 though.

>
> 4) How do you perform VACUUM?
> -
>
> a) vacuumdb - shell command
> b) VACUUM - SQL command
> c) autovacuum
> d) What is vacuum?

c,a,b

-- 
Alan



Re: [GENERAL] slow pgsql tables - need to vacuum?

2008-04-07 Thread Alan Hodgson
On Monday 07 April 2008, Dan99 <[EMAIL PROTECTED]> wrote:
> Does TRUNCATE TABLE keep all necessary table
> information such as indexes, constraints, triggers, rules, and
> privileges?

Yes. It does require an exclusive lock on the table very briefly, though, 
which DELETE does not.

> Currently a mass DELETE is being used to remove the data. 

And that's why the table is bloating. Especially if you aren't VACUUMing it 
before loading the new data.

> Since VACUUM has never been done on the tables before, should a VACUUM
> FULL be done first?  If so, approximately how long does a VACUUM FULL
> take on a database with 25 tables each having anywhere form 1,000 to
> 50,000 rows?  

Honestly, you'd be better off dumping and reloading the database. With that 
little data, it would be pretty quick. Although, VACUUM is pretty fast on 
tables with no indexes.

> The reason I ask is because this is a live website, and 
> any down time is very inconvenient.  Also, would it be sufficient
> (after the first VACUUM FULL) to simply use a VACUUM ANALYZE after the
> tables are repopulated (ie. every night)?

If you do a TRUNCATE and then a fresh load, a simple ANALYZE is sufficient.

You really should create some indexes though. Right now your queries are 
looping through the whole table for every SELECT. The only reason you're 
not dying is your tables are small enough to completely fit in memory, and 
presumably your query load is fairly low.
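
Putting that advice together, a nightly reload might look like this (table, 
column, and path names are illustrative, not from the thread):

```sql
-- One-time: index the hot lookup column so SELECTs stop scanning the table
CREATE INDEX items_name_idx ON items (name);

-- Nightly reload:
TRUNCATE items;            -- brief exclusive lock; indexes and constraints survive
COPY items FROM '/path/to/fresh_data.csv' WITH CSV;  -- server-side path
ANALYZE items;             -- fresh planner stats; no VACUUM needed after TRUNCATE
```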

-- 
Alan



Re: [GENERAL] Problem after VACUUM ANALYZE

2008-04-08 Thread Alan Hodgson
On Tuesday 08 April 2008, [EMAIL PROTECTED] wrote:
> The problem is that we have peaktimes were everything is running fine. It
> has something to do with the vacuum process running. To simplify my
> problem:
>
> - I run vaccum analyze concurrently with some few user queries: slows
> down to a crawl after vacuum is finished.
>
> - if i run it while no user is connected, everything runs fine
> afterwards.
>
> It has something to do with the vacuum analyze process.

It's probably running the machine into swap. Check swap usage and disk I/O 
when it happens. 


-- 
Alan



Re: [GENERAL] In the belly of the beast (MySQLCon)

2008-04-18 Thread Alan Hodgson
On Friday 18 April 2008, Chris Browne <[EMAIL PROTECTED]> wrote:
> I note in the blog that the "in place upgrade" issue came up.
> (Interesting to observe that it *also* came up pretty prominently in
> the intro session at PG East...  This is obviously a matter of Not
> Inconsiderable Interest...) 

Upgrades are certainly the biggest PostgreSQL operational issue for me. 

Uhh, boss, I need to bring the database down for 3 days to do an upgrade. 
Yes, that means clients can't do anything during that time. Boss - [EMAIL 
PROTECTED]@#$#

Hence why I'm still running 8.1 

-- 
Alan



Re: [GENERAL] In the belly of the beast (MySQLCon)

2008-04-18 Thread Alan Hodgson
On Friday 18 April 2008, Geoffrey <[EMAIL PROTECTED]> wrote:
> What about the:
>
> 8.1 -> slony -> 8.3
> switch users to 8.3 databases
>
> solution.

15+ million row inserts/updates a day across 1000+ tables. Oh, and an 
extensive existing Slony structure for some portions of the database.

I could conceivably construct Slony sets for the currently non-replicated 
tables and iteratively subscribe them (so the initial subscribe doesn't 
take a week ...).  I'm not at all sure Slony could keep up with our update 
load, though, especially not while maintaining current database operations 
and also handling those subscriptions. Slony doesn't really work well with 
high transaction volumes, in my experience.

I am going to play with this and see where it breaks, but it's going to be 
an enormous time investment to babysit it.

-- 
Alan



Re: [GENERAL] Backup setup

2008-04-23 Thread Alan Hodgson
On Wednesday 23 April 2008, "Gabor Siklos" <[EMAIL PROTECTED]> wrote:
> I need to back up our database off-site for disaster recovery. If I just
> back up the entire database data directory (i.e. /var/lib/pgsql/data)
> will I be able to restore from there?

Technically you can do this, if you do it per the PITR instructions in the 
manual (ie. you also need all the WAL logs produced during and around the 
backup window).
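
The PITR route boils down to bracketing the filesystem copy with backup 
markers (the label is arbitrary; the copy step is whatever your backup agent 
does):

```sql
SELECT pg_start_backup('nightly');  -- forces a checkpoint, marks the start
-- ... copy /var/lib/pgsql/data at the OS level while the server runs ...
SELECT pg_stop_backup();            -- ensures the WAL needed for consistency is archived
```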

> Or should I instead just dump the 
> data, using pg_dump, and back up the dump?

This is easier to restore from.

> The advantage of the first method would be that I would not have to wait
> for pg_dump (it takes quite long on our 60G+ database) and would just be
> able to configure the backup agent to monitor the data directory and do
> differential backups of the files there every hour or so.

Differential backups of a running database will produce garbage. The proper 
way to keep running backups of the database is through the PITR mechanism.

Very few applications that write to disk can be safely backed up while 
running. You might want to look at your whole backup strategy to see if 
you're really backing up what you think you're backing up.

-- 
Alan



Re: [GENERAL] auto-vacuum questions

2008-05-08 Thread Alan Hodgson
On Thursday 08 May 2008, John Gateley <[EMAIL PROTECTED]> wrote:
> But the new database, mydbtest, always has slow queries.
> I run an analyze and they speed up. 

Do the query plans actually change, or are you just seeing caching effects 
from running the analyze?
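
One way to tell the two apart: capture the plan while the query is slow, run 
ANALYZE, and capture it again; if the plan text is unchanged, the speedup was 
cache warming rather than a better plan (the query is illustrative):

```sql
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- On 8.2+ you can also check whether autovacuum has analyzed the tables at all:
SELECT relname, last_analyze, last_autoanalyze FROM pg_stat_user_tables;
```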

-- 
Alan




Re: [GENERAL] Need Help Recovering from Botched Upgrade Attempt

2008-06-18 Thread Alan Hodgson
On Wednesday 18 June 2008, Craig Ringer <[EMAIL PROTECTED]> wrote:
> >   Every file from /var/lib/pgsql/ before I started this is on the
> > weekly backup tape from last Friday night. If need be I can restore
> > from that and start over.
>
> Well, no worries then. I'm sure you can understand that for many people
> - way TOO many people - that is not the case, so it's well worth
> stressing the point.

If the database was in use when _that_ backup was taken, it may also not be 
usable.

You can't just backup a live database from the filesystem level and expect 
it to work ...

-- 
Alan



Re: [GENERAL] Losing data

2008-06-19 Thread Alan Hodgson
On Thursday 19 June 2008, Garry Saddington <[EMAIL PROTECTED]> 
wrote:
> I read in a
> Postgres manual that the hard disk may report to the OS that a write has
> occured when it actually has not, is this possible?

Yeah. But unless the power suddenly turned off that wouldn't cause data 
loss.

> Oh, and the problem 
> has been intermittant. Another thing that happened this morning is that
> Postgres had today as 18/06/2008 when in fact it was 19/06/2008 and the
> OS reported this correctly. Restarting postgres sorted it, could this be
> the problem?

I strongly suspect the problem is between the keyboard and the chair. 

In any case, however, if PostgreSQL reported the transaction complete and 
the machine didn't experience any hardware problems (like sudden power or 
disk failure), I would certainly not suspect PostgreSQL as the source of 
the problem.

-- 
Alan



Re: [GENERAL] Schema, database, or tables in different folders?

2009-06-05 Thread Alan Hodgson
On Tuesday 02 June 2009, "Carlos Oliva"  wrote:
> Is there a way to create a database or a table of a database in its own
> folder?  We are looking for ways to backup the sytem files of the
> database to tape and one to exclude some tables from this backup.  We
> can selectively backup folders of the file system so we figure that if
> we can create a schema or database or table in its own folder, we can
> backup our database and exclude the tables selectively.  We are using
> Linux RedHat.  Thank you.

You can put them in their own tablespace, but backing them up would not be 
useful. Filesystem backups of the cluster must include the whole database 
to be useful (and see the documentation on PITR for how to do it right).

You can easily back up single databases with pg_dump, though.


-- 
I contend that for a nation to try to tax itself into prosperity is like a  
man standing in a bucket and trying to lift himself up by the handle. 
 -- Winston Churchill



Re: [GENERAL] Schema, databse, or tables in different system folder

2009-06-05 Thread Alan Hodgson
On Tuesday 02 June 2009, "Carlos Oliva"  wrote:
> Thank you for your response.  The tablespace should work for us.  Perhaps
> you can help me with the following questions:
>
> 1) If we were to create a different table space for a database that has
> archival tables -- they will be backed up once, is it sufficient to
> backup the tablespace folder once?  We want to make sure that we can
> restore from tape the tablespace folder and we will in fact restore the
> full database and data
>

This will absolutely not work.



Re: [GENERAL] How to store text files in the postgresql?

2009-06-12 Thread Alan Hodgson
On Friday 12 June 2009, Greg Stark  wrote:
> Also, it makes backups a pain since it's a lot easier to back up a
> file system than a database. But that gets back to whether you need
> transactional guarantees. The reason it's a pain to back up a database
> is precisely because it needs to make those guarantees.

It's far easier to backup and restore a database than millions of small 
files. Small files = random disk I/O. The real downside is the CPU time 
involved in storing and retrieving the files. If it isn't a show stopper, 
then putting them in the database makes all kinds of sense.

-- 
WARNING:  Do not look into laser with remaining eye.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Having trouble restoring our backups

2009-06-12 Thread Alan Hodgson
On Friday 12 June 2009, Bryan Murphy  wrote:
> What am I doing wrong?  FYI, we're running 8.3.7.

See the documentation on PITR backups for how to do this correctly.



Re: [GENERAL] How to store text files in the postgresql?

2009-06-12 Thread Alan Hodgson
On Friday 12 June 2009, Scott Ribe  wrote:
> > It's far easier to backup and restore a database than millions of small
> > files. Small files = random disk I/O. The real downside is the CPU time
> > involved in storing and retrieving the files. If it isn't a show
> > stopper, then putting them in the database makes all kinds of sense.
>
> On the contrary, I think backup is one of the primary reasons to move
> files *out* of the database. Decent incremental backup software greatly
> reduces the I/O & time needed for backup of files as compared to a pg
> dump. (Of course this assumes the managed files are long-lived.)

We'll have to just disagree on that. You still have to do level 0 backups 
occasionally. Scanning a directory tree of millions of files to decide what 
to back up for an incremental can take forever. And restoring millions of 
small files can take days.

But I concede there are good arguments for the filesystem approach; 
certainly it's not a one size fits all problem. If your files are mostly 
bigger than a few MB each, then the filesystem approach is probably better. 

And of course big database tables get unwieldy too, for indexing and 
vacuuming - I wouldn't necessarily put most files into the large object 
interface, just the ones too big to want to fetch all in one piece.



Re: [GENERAL] Disaster recovery (server died)

2009-06-19 Thread Alan Hodgson
On Friday 19 June 2009, Miguel Miranda  wrote:
> Hi, the worst have ocurred, my server died (cpu), so i reinstalled
> another server with the same postgres version.
> I have the old data directory from the old server, how can i restore my
> databases from this directory to the new one?
> I dont have a backup (pg_dump,etc), just the main previus live data
> directory from the old server.
> best regards

Just copy it into place where the new postmaster expects it to be.



Re: [GENERAL] Controlling proliferation of postgres.exe processes

2009-06-23 Thread Alan Hodgson
On Tuesday 23 June 2009, Radcon Entec  wrote:
> Greetings!
>
> At the current moment, our customer's computer has 22 instances of
> postgres.exe running.  When a colleague checked a few minutes ago, there
> were 29.  Our contract specifies that we cannot consume more than 40% of
> the computer's memory, and we're over that level.  When does an instance
> of postgres.exe get created, and how can we make sure we create only the
> minimum number necessary?
>

Most of the memory reported in use by PostgreSQL is shared ... the actual 
per-process memory use is fairly low unless your work_mem is set high and 
you have queries that use it.





Re: [GENERAL] Postgres online backup and restore has errors that are concerning

2009-06-25 Thread Alan Hodgson
On Thursday 25 June 2009, Chris Barnes  
wrote:
>   I started an online backup of postgres, tar’d my data folder, copy to
> usb drive in production
> and restored it into my RC environment. Have I missed
> something important?
>

You need the transaction logs archived during and immediately after the 
backup, and you need to setup a recovery.conf for the initial startup that 
can access those files. See the PITR backup notes in the documentation, 
they're really quite complete.
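
A minimal recovery.conf for that first startup might look like this (the archive path is a placeholder):

```
# recovery.conf - placed in the restored data directory before startup
restore_command = 'cp /archive/wal/%f "%p"'
```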



Re: [GENERAL] planned recovery from a certain transaction

2009-06-25 Thread Alan Hodgson
On Thursday 25 June 2009, "Chris Spotts"  wrote:
> The transaction itself works flawlessly, but every once and awhile the
> data the it uploads from comes in flawed and we have to find a way to
> reset it. This reset involves restoring a backup that was taken right
> before the proc started.   If we had the xid of the long running
> transaction, is there a better way to reset it right before that
> transaction happened?  Restoring the backup is a lengthy process because
> several of the tables that are affected are rather large.
>

PITR would permit you to restore the database to a point in time or a 
transaction ID. You could probably do something with filesystem snapshots  
to minimize backup/restore time for it.
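
A hedged sketch of what the recovery.conf for such a reset could look like (archive path, timestamp, and xid are placeholders):

```
# recovery.conf - replay the base backup up to just before the bad load
restore_command = 'cp /archive/wal/%f "%p"'
recovery_target_time = '2009-06-25 03:00:00'
# or, if you captured the transaction ID instead:
# recovery_target_xid = '1234567'
```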




Re: [GENERAL] Data corruption (8.2.5 Windows XP)

2009-06-30 Thread Alan Hodgson
On Tuesday 30 June 2009, regis.boum...@steria.com wrote:
> SELECT * FROM t_table t WHERE t.id=1;  => no result
>
> Is there a reason for this?
> Is there a way to "repair" the database?
>

reindex
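
That is, something along these lines (names are placeholders; REINDEX TABLE rebuilds every index on the table):

```sql
REINDEX INDEX t_table_pkey;   -- rebuild just the suspect index
REINDEX TABLE t_table;        -- or rebuild them all
```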




Re: [GENERAL] REINDEX "is not a btree"

2009-07-10 Thread Alan Hodgson
On Friday 10 July 2009, Vanessa Lopez  wrote:
> What do you mean by we can't simply take a filesystem copy of a
> running database? :-O ... How should we then do the backups (so next
> time I will not have the same problem again) ?

There is extensive documentation on how to do backups. For filesystem 
backups, see PITR.

You might also want to examine all your backup strategies - most running 
applications are not happy about being backed up without taking special 
steps to ensure data consistency.

-- 
Anyone who believes exponential growth can go on forever in a finite world, 
is either a madman or an economist.



Re: [GENERAL] Storage of Foreign Keys

2009-09-21 Thread Alan Hodgson
On Monday 21 September 2009, Christian Koetschan 
 wrote:
> Is everything I insert into mycolA and mycolB stored twice, or
> is there something like a pointer/reference from mycolA to the things
> stored in mycolB?
>

It's stored twice and for performance you need to index it in both tables. 
If you put ON UPDATE CASCADE on it, an update to the master will hit both 
tables though.
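
A sketch with made-up names:

```sql
CREATE TABLE master (mycol text PRIMARY KEY);   -- the PK gives master an index

CREATE TABLE detail (
    mycol text REFERENCES master (mycol) ON UPDATE CASCADE
);

-- The referencing side gets no index automatically; add one so FK
-- checks and cascaded updates don't sequential-scan detail:
CREATE INDEX detail_mycol_idx ON detail (mycol);
```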

-- 
"No animals were harmed in the recording of this episode. We tried but that 
damn monkey was just too fast."



Re: [GENERAL] bulk inserts

2009-09-29 Thread Alan Hodgson
On Tuesday 29 September 2009, Sam Mason  wrote:
> ?? I'm not sure what you're implying about the semantics here, but it
> doesn't seem right.  COPY doesn't somehow break out of ACID semantics,
> it's only an *optimization* that allows you to get large quantities of
> data into the database faster.  The main reason it's faster is because
> parsing CSV data is easier than parsing SQL.
>
> At least I think that's the only difference; anybody know better? 

I think a big reason is also that the client can stream the data without 
waiting for a network round trip ack on every statement.
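
For example, a single COPY carries all the rows in one stream (table and rows are made up):

```sql
COPY items (id, name) FROM STDIN CSV;
1,apple
2,banana
\.
```

The client pushes the whole data stream and gets one acknowledgement at the end, instead of one round trip per INSERT.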




Re: [GENERAL] bulk inserts

2009-09-29 Thread Alan Hodgson
On Tuesday 29 September 2009, Sam Mason  wrote:
> > I think a big reason is also that the client can stream the data
> > without waiting for a network round trip ack on every statement.
>
> I don't think so.  I'm pretty sure you can send multiple statements in a
> single round trip.  libpq is defined to work in such cases anyway:
>
>   http://www.postgresql.org/docs/current/static/libpq-exec.html
>

I'm sure you probably _can_, but how many programming loops do so?




Re: [GENERAL] Partitioned table question

2009-10-14 Thread Alan Hodgson
On Wednesday 14 October 2009, Reid Thompson  wrote:
> So we know have data in ~30 partitioned tables.
> Our requirements now necessitate adding some columns to all these tables
> ( done ) which will get populated via batch sql for the older tables and
> by normal processing as we move forward.
>
> The batch update is going to result in dead tuples in the older tables.
> What would be the recommended way to recover this dead space?
> Vacuum full children tables + reindex children tables?  or
>

cluster's faster.
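
For example (names are placeholders; note CLUSTER takes an exclusive lock, and the USING form needs 8.4+):

```sql
CLUSTER child_2009q1 USING child_2009q1_pkey;  -- rewrites the table and all its indexes
ANALYZE child_2009q1;                          -- CLUSTER doesn't refresh statistics
```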





Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-27 Thread Alan Hodgson
On Tuesday 27 October 2009, fox7  wrote:
> I have tries this:
> CREATE INDEX View1_index
>   ON View1
>   USING btree
>   (term1);
>
> It isn't correct because this syntax is for tables, instead View1 is a
> view. Do you know the syntax to create view index?
> thanks a lot

You can't create indexes on views. They will make use of appropriate indexes 
on the underlying tables, however.



Re: [GENERAL] PostgreSQL Point In Time Recovery

2013-10-24 Thread Alan Hodgson
On Thursday, October 24, 2013 11:31:38 AM John R Pierce wrote:
> On 10/24/2013 9:47 AM, Jeff Janes wrote:
> > I restore from my base backup plus WAL quite often.  It is how I get a
> > fresh dev or test instance when I want one.  (It is also how I have
> > confidence that everything is working well and that I know what I'm
> > doing should the time come to do a real restore).  When that starts to
> > take an annoyingly long time, I run a new base backup.  How often that
> > is, can be anywhere from days to months, depending on what's going on
> > in the database.
> 
> hey, silly idea formed on half a cup of coffee  if that base backup
> is in the form of a copy of the data directory (as opposed to tar.gz or
> something), could you 'update' it by pointing an instance of postgres at
> it, and then playing the WAL archive into it, then shutting that
> instance down?   or would it be impossible to synchronize the ongoing
> new WAL's from the master with the timeline of this?

That's basically what warm standbys do, isn't it? As long as they stay in 
recovery it should work.

You can also use rsync to take your base backup - just update the rsync copy. 
That's what I do (and keep a separate tarball of that rsync copy, made on the 
backup host).





Re: [GENERAL] PostgreSQL Point In Time Recovery

2013-10-25 Thread Alan Hodgson
On Thursday, October 24, 2013 11:13:34 PM Jayadevan wrote:
> Alan Hodgson wrote
> 
> > That's basically what warm standby's do, isn't it? As long as they keep
> > recovery open it should work.
> 
> A warm standby will be almost in sync with the primary, right? So recovery
> to a point in time (like 10 AM this morning) won't be possible. We need a
> base, but it shouldn't be so old that it takes hours to catch up - that was
> my thought. As John mentioned, the WAL/transaction numbers, time to recover
> etc. need to be looked at.
> 

Well, yeah. The point was that you possibly could run it for a while to "catch 
up" without taking a new base backup if you desired. You should also keep 
copies of it for PITR.





Re: [GENERAL] Is it possible to "pip" pg_dump output into new db ?

2014-03-25 Thread Alan Hodgson
On Tuesday, March 25, 2014 02:56:48 PM Frank Foerster wrote:
> Hi,
> 
> we are currently in the process of upgrading a production/live 1 TB
> database from 9.2 to 9.3 via pg_dump, which is quite a lengthy process.
> 
> Fortunately we have a capable spare-server so we can restore into a clean,
> freshly setup machine.
> 
> I just wondered wether the intermediate step of writing the dump-file and
> re-reading it to have it written to the database is really necessary. Is
> there any way to "pipe" the dump-file directly into the new
> database-process or would such functionality make sense ?
> 
> I can only speak for us, but each time we do a dump/restore we need to
> extract/copy/move very large files and piping directly into something like
> psql/pg_restore on another machine etc. would greatly reduce
> upgrade-time/pain.
> 
> Thanks and best regards,
> 
> Frank

Sure. For maximum speed, something like:

pg_dump [options] source_db | pigz - | ssh -e none user@target "gunzip - | 
psql [options] target_db"

Depending on your hardware, though, doing a custom-format backup to a target 
file and then using it for a parallel restore would probably end up being 
faster overall, plus you get to keep the backup if needed. In my experience, 
the restore is a lot slower than the backup.
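
A sketch of that route (paths and job count are placeholders; pg_restore's -j option needs 8.4+):

```
pg_dump -Fc -f /backup/source_db.dump source_db
pg_restore -j 4 -d target_db /backup/source_db.dump
```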

Slony is also great, to save most of the downtime. At the expense of a lot of 
setup and testing time.




Re: [GENERAL] Is it safe to stop postgres in between pg_start_backup and pg_stop_backup?

2014-04-03 Thread Alan Hodgson
On Thursday, April 03, 2014 02:48:03 PM Steven Schlansker wrote:
> On Apr 2, 2014, at 3:08 PM, Jacob Scott  wrote:
>   • pg_start_backup
>   • Take a filesystem snapshot (of a volume containing postgres data but 
> not
> pg_xlog) • pg_stop_backup
>   • pg_ctl stop
>   • Bring a new higher performing disk online from snapshot
>   • switch disks (umount/remount at same mountpoint)
>   • pg_ctl start

... with a recovery.conf in place when starting the new instance.

> 
> Assuming you ensure that your archived xlogs are available same to the new
> instance as the old

And make sure they're archived to a different disk.

> Another option you could consider is rsync.  I have often transferred
> databases by running rsync concurrently with the database to get a “dirty
> backup” of it.  Then once the server is shutdown you run a cleanup rsync
> which is much faster than the initial run to ensure that the destination
> disk is consistent and up to date.  This way your downtime is limited to
> how long it takes rsync to compare fs trees / fix the inconsistencies.
> 

This would be simpler.
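
In outline (paths are placeholders):

```
rsync -a /var/lib/pgsql/data/ /new/data/           # pass 1: live, dirty copy
pg_ctl -D /var/lib/pgsql/data stop
rsync -a --delete /var/lib/pgsql/data/ /new/data/  # pass 2: short, now consistent
```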





Re: [GENERAL] Linux vs FreeBSD

2014-04-09 Thread Alan Hodgson
On Wednesday, April 09, 2014 09:02:02 PM Brent Wood wrote:
> Given the likely respective numbers of each OS actually out there, I'd
> suggests BSD is very over-represented in the high uptime list which is
> suggestive.

Suggestive of ... sysadmins who don't do kernel updates?





Re: [GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Alan Hodgson
On Monday, April 14, 2014 09:13:51 AM Steve Clark wrote:
> How did you deal with binaries and libraries, as well as third party apps
> like perl modules or php/apache modules?

The 8.4 library package usually ends up installed to satisfy other package 
requirements.

Binaries get handled through the alternatives system.






Re: [GENERAL] pgsql 9.0.1 table corruption

2011-04-14 Thread Alan Hodgson
On April 14, 2011 08:10:47 am Dan Biagini wrote:

> I suspect that it may have occurred during a filesystem level backup
> (ie pg_start_backup(), tar -czf..., pg_stop_backup()), as I performed
> a backup and moved the database to a different system.  After
> restoring the files and starting postgres I began getting these
> errors.  I have tried restoring multiple times with the same tar
> archive with the same results (on different systems).

Did you perform a PITR restore using that tar as a base backup? Do any errors 
occur?  The tar file alone will not be an intact backup, as I'm sure you're 
aware.

http://www.postgresql.org/docs/9.0/static/continuous-archiving.html#BACKUP-PITR-RECOVERY



Re: [GENERAL] Locale and UTF8 for template1 in 8.4.4

2011-05-06 Thread Alan Hodgson
On May 6, 2011, Iain Barnett  wrote:
> Would anyone be able to point out to me how I can get the template1
> database to be utf8 and en_GB? (or US, I'm not *that* fussed)

Use the --encoding and --locale options to initdb.
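
Something like this when creating the cluster (the data directory path is a placeholder, and the locale must be installed on the system):

```
initdb --encoding=UTF8 --locale=en_GB.UTF-8 -D /path/to/data
```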


Re: [GENERAL] replication problems 9.0

2011-06-07 Thread Alan Hodgson
On June 7, 2011 04:38:16 PM owen marinas wrote:
> Thx, Merci, Gracias Rodrigo
> it worked indeed, Im wondering why replication is not included in "All"
> 

Probably because it gives access to all the data being written to the 
database.

-- 
Obama has now fired more cruise missiles than all other Nobel Peace prize 
winners combined.



Re: [GENERAL] Write performance on a large database

2011-06-09 Thread Alan Hodgson
On June 9, 2011 05:15:26 AM Håvard Wahl Kongsgård wrote:
> Hi, I have performance issues on very large database(100GB). Reading from
> the database is no problem, but writing(or heavy writing) is a nightmare.
> I have tried tuning postgresql, but that does not seem to improving the
> writing performance.
> To improve the write performance, what are my options?

Buy fast disks (many), attach them to a good raid controller with a battery-
backed write cache, setup in RAID-10. Or move to SSD.



Re: [GENERAL] Invalid byte sequence for encoding "UTF8": 0xedbebf

2011-06-15 Thread Alan Hodgson
On June 15, 2011 01:18:27 PM BRUSSER Michael wrote:
> Unless there's no other options I don't want to use sed or break file into
> pieces, if possible,

iconv loads everything into RAM. You can use "split", convert the pieces, and 
then recombine; I did that when converting a large database to utf-8 and it 
worked.
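
A sketch of that split/convert/recombine flow, shrunk to a toy file (file names are examples; splitting by lines avoids cutting a line in half):

```shell
# Stand-in for the big LATIN1 dump ("café", "naïve" as LATIN1 octal bytes):
printf 'caf\351\nna\357ve\n' > dump.sql
# Split into line-based pieces so each one fits in iconv's memory:
split -l 1 dump.sql piece_
# Convert the pieces one at a time:
for f in piece_*; do iconv -f LATIN1 -t UTF-8 "$f" > "$f.utf8"; done
# Recombine in order (split's suffixes sort correctly):
cat piece_*.utf8 > dump.utf8.sql
```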



Re: [GENERAL] Add Foreign Keys To Table

2011-07-07 Thread Alan Hodgson
On July 7, 2011 10:40:11 AM Rich Shepard wrote:
> alter table station_information add column sta_type varchar(50)
> unique not null references station_type(sta_type);
> NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index
> "station_information_sta_type_key" for table "station_information"
> ERROR:  there is no unique constraint matching given keys for referenced
> table "station_type"
>Reading the alter table document page for 9.x does not show me what I'm
> doing incorrectly.

You need a unique index on station_type.sta_type




Re: [GENERAL] Add Foreign Keys To Table

2011-07-07 Thread Alan Hodgson
On July 7, 2011 11:55:25 AM Rich Shepard wrote:
> On Thu, 7 Jul 2011, Alan Hodgson wrote:
> > You need a unique index on station_type.sta_type
> 
> Alan,
> 
>station_type(sta_type) is part of a composite primary key. Doesn't
> primary key automatically imply unique and not null?

It implies the composite is unique. Not sta_type.



Re: [GENERAL] Add Foreign Keys To Table

2011-07-07 Thread Alan Hodgson
On July 7, 2011 12:30:35 PM Rich Shepard wrote:
> On Thu, 7 Jul 2011, Alan Hodgson wrote:
> > It implies the composite is unique. Not sta_type.
> 
>OK. Now I understand. How, then, do I add a unique constraint to each
> component of the composite key so I can add them as foreign keys to the
> station_information table? Or, is there another way to add those two
> columns to station_information and then add the foreign key constraints?
> 
> Thanks for clarifying,
> 

create unique index index_name on table (column).

Or I think you can create a foreign key on a composite like "foreign key 
(column1,column2) references table (column1,column2)" which probably makes 
more sense if that is a natural key.
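
Two hedged options, using made-up column names (assume the composite key is (sta_id, sta_type)):

```sql
-- Option 1: make sta_type independently unique so it can be referenced
-- on its own (only valid if sta_type really is unique by itself):
CREATE UNIQUE INDEX station_type_sta_type_key ON station_type (sta_type);

-- Option 2: reference the composite key as a whole:
ALTER TABLE station_information
    ADD FOREIGN KEY (sta_id, sta_type)
    REFERENCES station_type (sta_id, sta_type);
```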



Re: [GENERAL] Need linux uid in pg-psql

2011-08-19 Thread Alan Hodgson
On August 19, 2011 07:01:33 AM Gauthier, Dave wrote:
> Is there a way to get the linux idsid of a user, even for a remote network
> connection?
> 
> I could write a pg-perlu to get this, but I suspect it won't give me the
> original user when there's a remote connect.
> 
> Thanks for any suggestions !

There's an identd protocol for this. It's not commonly used anymore, and when 
present it tends to deliberately obscure the results.



Re: [GENERAL] Streaming Replication: Observations, Questions and Comments

2011-08-24 Thread Alan Hodgson
On August 24, 2011 08:33:17 AM Samba wrote:
> One strange thing I noticed is that the pg_xlogs on the master have
> outsized the actual data stored in the database by at least 3-4 times,
> which was quite surprising. I'm not sure if 'restore_command' has anything
> to do with it. I did not understand why transaction logs would need to be
> so many times larger than the actual size of the database, have I done
> something wrong somewhere?

If you archive them instead of keeping them in pg_xlog, you can gzip them. 
They compress reasonably well.
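
A sketch of compressed archiving (paths are placeholders; the restore_command belongs in recovery.conf on the restoring side):

```
# postgresql.conf on the master:
archive_command = 'gzip -c %p > /archive/wal/%f.gz'

# recovery.conf on the standby:
restore_command = 'gunzip -c /archive/wal/%f.gz > "%p"'
```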


