Re: [GENERAL] Oracle to PostgreSQL transition?

2013-04-05 Thread Robert Treat
On Fri, Apr 5, 2013 at 2:12 AM, Szymon Guz mabew...@gmail.com wrote:
 On 4 April 2013 16:16, Roy Anderson roy.ander...@gmail.com wrote:

 Hey all,

 We have a very robust Oracle and SQL Server presence at work but we're
 looking to farm out some of the load to PostgreSQL to limit costs. I'm
 curious if there are any DBAs out there who have gone down this route
 before. Any tips, tricks, failures, successes, etc.? I would just like to
 hear some first-hand commentary on this topic.

 Thank you so much!

 Roy


 Hi Roy,
 maybe this would be helpful:

 http://wiki.postgresql.org/wiki/PostgreSQL_for_Oracle_DBAs
 http://wiki.postgresql.org/wiki/Oracle_to_Postgres_Conversion
 http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#Oracle


Yeah, it's worth reading through those links to get an idea of things;
you'll find much less literature (and tooling) for MSSQL, but the
principles are mostly the same. One thing to decide is whether you are
going to port applications wholesale, or run some kind of
hybrid Oracle/MSSQL - Postgres mix. If it's the latter, you'll
probably need to write your own tools; at least we've always done that,
as we've never found anything that worked reliably with both Oracle and
MSSQL into Postgres. It shouldn't be too difficult though;
it's basically just ETL or some home-brew replication scripting to glue
things together.

Robert Treat
conjecture: xzilla.net
consulting: omniti.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] upgrading from V8.3.4 to V9.2.4

2013-04-05 Thread Robert Treat
On Wed, Apr 3, 2013 at 10:02 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
 On 04/03/2013 10:14 AM, Paul Tilles wrote:

 We are going to be upgrading our postgres version from 8.3.4 to 9.2.4 in
 the near future.

 Can anyone give me a short list of gotchas concerning problems we
 might step into?

 We have a number of functions written in PL/pgSQL.  Will they work in
 9.2.4 without modification?


 Starting at 8.3.x means you have cleared the type casting changes. The issue
 I remember in moving functions from 8.3 had to do with the changes in
 string escaping; see here for a start:

 http://www.postgresql.org/docs/9.2/interactive/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE



Yeah, there was also some subtle breakage around keywords used as
variable names when plpgsql was ported to use the core lexer. Ideally
you'll have some kind of test suite / regression you can run to verify
all of this; if not, maybe you can set up some replication between the
old and new servers (we use mimeo for that when server versions are this
far apart), point your app at both, and see what happens.
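As a hedged illustration of the class of breakage meant here (the function and names are made up, and exact behavior varies by minor version): since PL/pgSQL began using the core lexer in 9.0, fully reserved SQL words can no longer be used as variable names, so code like the following that compiled on 8.x fails afterward:

```sql
-- "all" is a fully reserved SQL keyword; on 9.0+ this declaration is a
-- syntax error and the variable must be renamed (e.g. to n_rows).
CREATE FUNCTION total_rows() RETURNS bigint AS $$
DECLARE
    all bigint;
BEGIN
    SELECT count(*) INTO all FROM pg_class;
    RETURN all;
END;
$$ LANGUAGE plpgsql;
```

A test suite that simply re-creates every function on the new server catches this category of problem immediately, since the errors appear at function creation or first execution.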


Robert Treat
conjecture: xzilla.net
consulting: omniti.com




Re: [GENERAL] Default timezone changes in 9.1

2012-12-22 Thread Robert Treat
On Sat, Dec 22, 2012 at 3:41 AM, Jasen Betts ja...@xnet.co.nz wrote:
 On 2012-12-16, Terence Ferraro terencejferr...@gmail.com wrote:

 With the exception of a few parameters (max_connections and the ssl related
 variables that we enable), the default configuration file (circa 9.0) has
 worked extremely well across 100+ machines so far over the last two years
 and counting. However, we are simply deploying these on commodity machines
 ($300-400 off the shelf). Spec wise such machines have not changed
 significantly (I suppose the shift away from higher clock speeds to more
 cores can be thanked for that).

 You can possibly get some of what you want using SQL like:

  alter database DB_NAME set timezone = 'localtime';

  You can do similarly with other connection parameters on a
 per-user or per-database basis too.


If the goal is just to use a single config and have the tz match the
system, then setting timezone = 'localtime' in postgresql.conf should
suffice. IIRC this is what we've started doing, since we were bitten by
this as well. (I think the first systems we noticed it on were ones
where the system was UTC and Postgres was GMT, which was mostly a
cosmetic problem, but it surprised us elsewhere too.) It makes me wonder
whether enough thought was put into the backwards compatibility angle of
this; either what the default should be, or making sure people were
aware of the change.
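For reference, the single-config approach described above is just one line in postgresql.conf (a sketch; the alternative is pinning an explicit zone name per machine):

```conf
# postgresql.conf: follow the operating system's zone setting rather
# than pinning a fixed zone such as GMT or UTC
timezone = 'localtime'
```

The per-database ALTER DATABASE approach quoted above achieves the same thing at a finer granularity, at the cost of having to remember to apply it to each new database.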

Robert Treat
play: xzilla.net
work: omniti.com




Re: [GENERAL] PG 9.0 EBS Snapshot Backups on Slave

2012-01-24 Thread Robert Treat
On Mon, Jan 23, 2012 at 8:02 PM, Alan Hodgson ahodg...@simkin.ca wrote:
 On Monday, January 23, 2012 07:54:16 PM Andrew Hannon wrote:
 It is worth noting that, the slave (seemingly) catches up eventually,
 recovering later log files with streaming replication current. Can I trust
 this state?


 Should be able to. The master will also actually retry the logs and eventually
 ship them all too, in my experience.


Right, as long as the failure case is temporary, the master should
retry, and things should work themselves out. It's good to have some
level of monitoring in place for such operations to make sure replay
doesn't get stalled.

That said, have you tested this backup? I'm a little concerned you'll
have ended up with something unusable, because you aren't capturing the
xlog files that are generated during the snapshot window. It's possible
that you won't need them in most cases (we have a script called
zbackup [1] which goes through similar motions using ZFS, though on ZFS
the snapshot really is instantaneous), and I can't remember a time when
we got stuck by that, but that might just be faulty memory. A better
approach would probably be to take the omnipitr code [2], which
already has provisions for taking backups from slaves and catching the
appropriate WAL files, and rewrite the rsync bits to use snapshots
instead, which would give you some assurance against possibly missing
files.

[1] this script is old and crufty, but provides a good example:
http://labs.omniti.com/labs/pgtreats/browser/trunk/tools/zbackup.sh

[2] https://github.com/omniti-labs/omnipitr


Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] General performance/load issue

2011-11-26 Thread Robert Treat
On Fri, Nov 25, 2011 at 6:48 PM, Gaëtan Allart gae...@nexylan.com wrote:
 Here are the latest checkpoint logs :

 LOG:  checkpoint complete: wrote 842 buffers (0.1%); 0 transaction log
 file(s) added, 0 removed, 0 recycled; write=168.970 s, sync=0.005 s,
 total=168.977 s
 LOG:  checkpoint starting: time
 LOG:  checkpoint complete: wrote 318 buffers (0.0%); 0 transaction log
 file(s) added, 0 removed, 2 recycled; write=63.818 s, sync=0.006 s,
 total=63.825 s
 LOG:  checkpoint starting: time
 LOG:  checkpoint complete: wrote 744 buffers (0.1%); 0 transaction log
 file(s) added, 0 removed, 0 recycled; write=147.035 s, sync=0.006 s,
 total=147.043 s
 LOG:  checkpoint starting: time
 LOG:  checkpoint complete: wrote 108 buffers (0.0%); 0 transaction log
 file(s) added, 0 removed, 3 recycled; write=35.410 s, sync=14.921 s,
 total=54.811 s
 LOG:  checkpoint starting: time
 LOG:  checkpoint complete: wrote 393 buffers (0.0%); 0 transaction log
 file(s) added, 0 removed, 0 recycled; write=88.835 s, sync=43.210 s,
 total=135.728 s
 LOG:  checkpoint starting: time
 LOG:  checkpoint complete: wrote 914 buffers (0.1%); 0 transaction log
 file(s) added, 0 removed, 1 recycled; write=148.162 s, sync=14.249 s,
 total=170.481 s
 LOG:  checkpoint starting: time
 LOG:  checkpoint complete: wrote 202 buffers (0.0%); 0 transaction log
 file(s) added, 0 removed, 1 recycled; write=53.152 s, sync=0.004 s,
 total=53.159 s
 LOG:  checkpoint starting: time
 LOG:  checkpoint complete: wrote 897 buffers (0.1%); 0 transaction log
 file(s) added, 0 removed, 1 recycled; write=42.414 s, sync=1.175 s,
 total=58.957 s
 LOG:  checkpoint starting: shutdown immediate
 LOG:  checkpoint complete: wrote 666 buffers (0.1%); 0 transaction log
 file(s) added, 0 removed, 1 recycled; write=0.027 s, sync=1.600 s,
 total=1.630 s
 LOG:  checkpoint starting: time
 LOG:  checkpoint complete: wrote 627 buffers (0.1%); 0 transaction log
 file(s) added, 0 removed, 1 recycled; write=125.856 s, sync=0.006 s,
 total=125.864 s
 LOG:  checkpoint starting: time


 And pg_stat_bgwriter :

 postgres=# select *,now() from pg_stat_bgwriter;
  checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean
 | maxwritten_clean | buffers_backend | buffers_alloc |              now

 ---+-++---+
 --+-+---+--
 -
               388 |              13 |             494948 |       4306591
 |            13555 |         7458743 |    7835244602 | 2011-11-26
 00:43:47.232924+01
 (1 row)

 postgres=# select *,now() from pg_stat_bgwriter;
  checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean
 | maxwritten_clean | buffers_backend | buffers_alloc |              now

 ---+-++---+
 --+-+---+--
 -
               389 |              13 |             501802 |       4352198
 |            13809 |         7469220 |    7839778941 | 2011-11-26
 00:49:00.680779+01
 (1 row)

 Processes that were writing were SELECT queries against database.


What was the filesystem involved? What is the underlying disk layout
(you said it's SSDs, but what type, how many, and in what
configuration)? Also, how is Postgres set up on top of the disks (all of
$PGDATA and the OS on one volume? Split up?)

Also, how many active connections do you typically have? Can you
reduce your work_mem to something like 4MB, and set log_temp_files to
0?
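As a sketch, the two suggested diagnostic changes would look like this in postgresql.conf (the 4MB figure is just the value proposed in this thread for troubleshooting, not a general recommendation):

```conf
work_mem = 4MB        # temporarily lowered from 128MB to expose sort spills
log_temp_files = 0    # log every temporary file created, with its size
```

With log_temp_files = 0, any query that spills a sort or hash to disk shows up in the server log, which makes it easy to tell whether the SELECT queries doing the writing are the source of the I/O.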


Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] General performance/load issue

2011-11-24 Thread Robert Treat
On Thu, Nov 24, 2011 at 9:09 AM, Tomas Vondra t...@fuzzy.cz wrote:
 On 24 Listopad 2011, 14:51, Gaëtan Allart wrote:
 Hello everyone,

 I'm having some troubles with a Postgresql server.
 We're using PG has a database backend for a very big website (lots of data
 and much traffic).

 The issue : server suddenly (1H after restart) becomes slow (queries not
 responding), load rises (20 instead of 1), iowait rises (20 to 70%)

 Version : 9.0.5
 Server : Dual Xeon X5650 (24  cores total)
 Memory : 48 GB
 Disks : SSD


 Top when overloaded :

 Top is not the most useful tool here, I guess. Use iotop (will show you
 which processes are doing the I/O) and tools like vmstat / iostat.

 Postgresql.conf :

 max_connections = 50
 shared_buffers = 12G
 temp_buffers = 40MB
 work_mem = 128MB
 maintenance_work_mem = 256MB
 max_files_per_process = 8192
 checkpoint_segments = 256
 checkpoint_timeout = 30min
 checkpoint_completion_target = 0.9

 Fine. Let's see the options that look suspicious.


I think you missed some suspicious settings... I'd recommend setting
shared_buffers to 8GB, and I'd likely reduce checkpoint_segments to
30 and set the checkpoint timeout back to 5 minutes. Everything about
the way this server is configured (including those vm settings) is
pushing it towards delaying the WAL/buffer/checkpoint work as long as
possible, which matches the pattern of good performance initially
followed by a period of poor performance and heavy I/O.

On a side note, I'd guess your work_mem is probably too high. 50
(connections) x 128 (MB work_mem) x 2 (sorts per query) = ~12GB RAM,
which is 25% of the total RAM on the box. That doesn't necessarily mean
game over, but it seems like it wouldn't be that hard to get into
thrashing with it set up that way. YMMV.
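The back-of-the-envelope figure above can be reproduced directly; note the factor of two sorts per query is an assumption made in this thread, not a fixed rule:

```shell
connections=50       # max_connections
work_mem_mb=128      # work_mem, applied per sort/hash operation
sorts_per_query=2    # assumed concurrent sort/hash ops per backend
total_mb=$((connections * work_mem_mb * sorts_per_query))
echo "worst case: ${total_mb} MB"   # 12800 MB, roughly the ~12GB cited
```

Since work_mem is a per-operation limit rather than a per-connection one, a single complex query can use it several times over, which is why the multiplier matters.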

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] pg_standby for postgresql8.2

2011-11-22 Thread Robert Treat
On Tue, Nov 22, 2011 at 4:09 AM, khizer khi...@srishtisoft.com wrote:
 Hi,

    May i know how to install pg_standby for postgresql8.2 in ubuntu 10.10
  OS
 I copied the pg_standby folder for compilation which has the files
 pg_standby.c, Makefile

 initially i tried with make, make install inside contrib/pg_standby folder
 but i got an err Makefile.global no such file or directory ...

 so i compiled and reinstalled postgresql8.2 but not able to find pg_standby,
 How can i solve this guys?

IIRC, the way to do this is to compile 8.3 for your platform,
including the pg_standby contrib module, and then just copy the binary
over to your system. Since it doesn't integrate directly, it will work
against 8.2, save for the %r macro (for removing unneeded xlog
segments). If you really need the %r capability, you might want to
look at OmniPITR, which doesn't require compiling and implements that
feature.

Robert Treat
play: xzilla.net
work: omniti.com



Re: [GENERAL] synchronous replication + fsync=off?

2011-11-22 Thread Robert Treat
On Tue, Nov 22, 2011 at 12:16 PM, Bruce Momjian br...@momjian.us wrote:
 Tomas Vondra wrote:
 On 17 Listopad 2011, 17:07, Jaime Casanova wrote:
  On Thu, Nov 17, 2011 at 7:52 AM, Schubert, Joerg jschub...@cebacus.de
  wrote:
  Hello,
 
  I have two servers with battery backed power supply (USV). So it is
  unlikely, that both will crash at the same time.
 
  Will synchronous replication work with fsync=off?
  That means we will commit to system cache, but not to disk. Data will
  not
  survive a system crash but the second system should still be consistent.
 
 
  you should never use fsync=off (in production at least)
 
  the appropiate parameter to use is synchronous_commit which is the one
  that controls synchronous replication:
  off = no local nor remote synchronous commit
  local = local synchronous commit but no remote
  on = both, local and remote, synchronous commit
 
  synchronous commit = flushed to disk

 While I don't recommend it, fsync=off definitely is an option, especially
 with sync replication. The synchronous_commit is not a 1:1 replacement.

 Imagine for example a master with a lot of I/O, and a sync standby. By
 setting fsync=off on the master and fsync=on on the slave the master does
 not need to wait for the fsync (so the I/O is not that stressed and can
 handle more requests from clients), but the slave actually does fsync.

 So you don't force local fsync, but you're waiting for fsync from the
 standby. But standby doesn't need to handle all the I/O the primary has.

 You can't do this with synchronous_commit - that basically forces you to
 do local fsync on commit, or not to wait for the commit at all.

 Tomas

 Disclaimer: I haven't actually tried this, so maybe I missed something.

 I think you did.  synchronous_commit really means fsync so that the
 system is always consistent --- there is no waiting for the fsync to
 happen on the master (unless I am totally missing something).

+1, synchronous_commit has (pretty much) nothing to do with
synchronous replication; it's all about controlling the relationship
between local commits and fsync.
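A small sketch of that relationship: unlike fsync (which is server-wide), synchronous_commit can be relaxed per session or even per transaction, trading durability of the most recent commits for speed without ever risking consistency:

```sql
BEGIN;
-- This commit may be lost if the server crashes before the WAL flush,
-- but the database itself stays consistent either way.
SET LOCAL synchronous_commit TO off;
-- ... non-critical work here ...
COMMIT;
```

This is generally the supported alternative to fsync=off for workloads that can tolerate losing the last few transactions.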

   With
 fsync off, you can get into cases where the heap/index files are pushed
 to disk before the wal gets written to disk, causing the system to be
 inconsistent in case of a crash replay.


I think it's worth saying that this doesn't guarantee you will lose
your master, as someone claimed upthread; more correctly, it just
introduces the possibility that your database will be corrupt after a
server or OS crash (which is something most people should avoid).

 I think the only use of fsync off is for performance testing, to see how
 expensive fsync is.


Never speak in absolutes! ;-)

It's not unheard of to run with fsync = off when you have asynchronous
replicated failover. Given you've already decided that you're OK with
data loss, the extra amount that you lose with fsync off can be
trivial compared to the performance boost you get, especially if
system crashes in your environment are rare (which hopefully they
are).

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] autovacuum stuck on a table for 18+ hours, consuming lots of CPU time

2011-11-22 Thread Robert Treat
On Tue, Nov 22, 2011 at 11:00 PM, Lonni J Friedman netll...@gmail.com wrote:
 On Tue, Nov 22, 2011 at 7:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Lonni J Friedman netll...@gmail.com writes:
 I suspect you're right.  I just ran strace against that PID again, and
 now all the lseek  read FD's are referrring to a different number
 (115), so that means its moved onto something new since I looked a few
 hours ago?

 Anyway, I think this is what you were referring to:
 /proc/30188/fd/115 -   /var/lib/pgsql/data/base/64793/72633.10

 How do I correlate that file to an actual database object?

 64793 is the pg_database.oid of the database, and 72633 is the
 pg_class.relfilenode value of the table/index.

 Its definitely an index.    Thanks for your help, I just need to be
 patient now that I understand how to better monitor this.


Well, it sounds like you have things set up with both a cost limit and
a cost delay, which means if you manually vacuumed the table, it would
probably go quicker at the cost of more I/O; given the CPU overhead,
that's probably a trade worth making. Personally, I'd throw out those
vacuum cost settings entirely, as they cause more trouble than they're
worth (IMNSHO), and otherwise you'll likely see this again in the
future.
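One hedged way to act on that advice without changing the global settings (the table name here is hypothetical):

```sql
-- Disable cost-based delay for the problem table only (8.4+ syntax),
-- so autovacuum runs on it at full speed.
ALTER TABLE big_table SET (autovacuum_vacuum_cost_delay = 0);

-- Or do a one-off manual pass; manual VACUUM uses vacuum_cost_delay,
-- which defaults to 0, so it is not throttled unless you set it.
VACUUM ANALYZE big_table;
```

Scoping the change to the one oversized table keeps the throttling in place for everything else, if you'd rather not drop the cost settings globally.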


Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] Is this safe to perform on PostgreSQL 8.3.7 - Resize a column in a PostgreSQL table without changing data

2011-11-22 Thread Robert Treat
On Tue, Nov 22, 2011 at 10:21 AM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Tue, Nov 22, 2011 at 7:50 AM, Reid Thompson reid.thomp...@ateb.com wrote:
 Note that I manually added the 4 to the desired size of 35..again, for
 some legacy reasons inside PG. Done. That's it. Should we check?

 \d TABLE1

 TABLE public.TABLE1
 COLUMN  |  TYPE                  | Modifiers
 --------+------------------------+-----------
 COL1    | CHARACTER VARYING(35)  |

 Such a simple yet effective trick. Of course it'd be nicer if this is
 somehow included in a more proper way in the database, but this does the
 job.

 Note that this method works around all the safe guards etc that make
 sure your data is safe and coherent.  It works, as long as you're
 careful what you're doing.


And by careful, I mean be aware that there are certainly considerations
you need to have for indexes and/or partition mismatches that might be
involved here. At a minimum I'd suggest upgrading to 8.3.$latest
(ideally the next release, which will likely be out in a couple of
weeks), as there are some bugs in this area in older releases (and 8.3.7
certainly qualifies).

 the real solution, to me, is to stop using varchar limits unless
 there's a real reason for them.  I.e. arbitrary limits on things like
 name lengths make no sense in the db.


Yeah, I have often subscribed to this idea in the past, though there
is a valid argument that even when you don't have a specific limit you
care about, there are lengths long enough that they probably indicate
garbage data or something gone wrong. In a world where Postgres
actually handles this problem gracefully (and I think 9.1 does), I
don't think this rule is as clear-cut as it used to be.
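A middle-ground sketch of that argument: use an unbounded type with a generous sanity check, so obvious garbage is still rejected but the limit can be loosened later without the catalog surgery described above (names are made up):

```sql
CREATE TABLE people (
    -- sanity limit, not a business rule; dropping and re-adding the
    -- CHECK constraint is cheap compared to altering a varchar(n)
    name text NOT NULL CHECK (length(name) <= 1000)
);
```

On pre-9.1 servers this sidesteps the whole "resize a varchar" problem, since there is no typmod to change.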

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] wal archiving on a hot-standby server

2011-11-22 Thread Robert Treat
On Mon, Nov 21, 2011 at 5:58 AM, Enrico Sirola enrico.sir...@gmail.com wrote:
 Hello,
 is it possible to archive the WAL files received by a hot-standby server? I
 noticed nothing about this in the pgsql docs. The idea
 is to archive logs in two locations, at the primary site and at the replica 
 site (over a wan) in order to be able to perform a PITR also
 at the replica site.

You can do this two different ways; one is by adding multiple
destinations to your archive_command to send the WAL file to several
places; here is an example using omnipitr (the -dr options are the
remote destinations; we gzip the first for long-term archiving):

/opt/OMNIpitr/bin/omnipitr-archive -dr
gzip=db4:/mnt/db/prod/walarchive/ -dr db2:/mnt/db/prod/db2-walarchive/
  %p

The other way to do this is to add something into your
archive_cleanup_command of your recovery.conf to archive the files to
the other destination.

Which method you want depends on the version / setup of postgres you
have, and whether you want the slave to be in the chain of the replica
site. (I probably wouldn't, which would make me lean towards something
like omnipitr)

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] upgrading from 8.3 to 9.0

2011-11-20 Thread Robert Treat
You could also look into upgrading via pg_upgrade, if you don't want
to go through the dump/restore cycle. Even in that case, it's worth
doing a test pg_dump (one for schema, one for data) and making sure it
loads into the new db without any issues. 8.3 to 9.0 is pretty
harmless (you might want to think about 9.1 instead, btw). Oh, it's
worth mentioning that you should really skim through the release notes
and make sure nothing in the incompatibilities applies to you; here are
the links to the docs:
http://www.postgresql.org/docs/9.1/interactive/release-9-0.html#AEN108545
http://www.postgresql.org/docs/9.1/interactive/release-8-4.html#AEN111313
http://www.postgresql.org/docs/9.1/interactive/release-8-3.html#AEN114593

Robert Treat
conjecture: xzilla.net
consulting: omniti.com

On Thu, Nov 17, 2011 at 8:14 PM, David Morton davidmor...@xtra.co.nz wrote:
 I've performed a very similar upgrade including postgis upgrade at the same
 time, we used the following command examples ... also put some simple
 scripting together to dump multiple databases in parallel as downtime was
 critical:
 Dump database data: pg_dump -Fc database --compress=1 >
 /mnt/dumps/database.dump
 Dump global data: pg_dumpall -g > /mnt/dumps/globals.sql
 Parse the global file and create a script to create new directory structure
 for table spaces etc (also changed paths to new mount points here)
 Run the global sql script: psql -f /mnt/dumps/globals.sql postgres
 Restore databases without GIS functionality: pg_restore -j 2 -C -d postgres
 /mnt/dumps/database.dump
 Restore databases with GIS functionality (upgrade of postgis version
 requires this): sh /tmp/postgis_restore.pl
 /usr/share/postgresql/contrib/postgis-1.5/postgis.sql database_user
 /mnt/dumps/gisdatabase.dump -E=UTF8
 Those were the basic essential steps ... there are other supporting things
 we did around the outside to streamline the transition, it all worked
 perfectly on the day.
 Best advice is that if it's more than a scratch environment, test test test
 !!
 
 From: Pedro Doria Meunier pdo...@netmadeira.com
 To: pgsql-general@postgresql.org
 Sent: Friday, 18 November 2011 12:40 AM
 Subject: [GENERAL] upgrading from 8.3 to 9.0

 Hi,

 I'm on the verge of upgrading a server (Fedora 8 ehehe) running postgresql
 8.3

 It also has postgis 1.3 installed.

 Thinking of using pgadmin3 to perform the backup and then restore it after
 I've upgraded the server to fedora 15/16 and thus upgrading postgresql to
 9.0.

 I seem to remember problems with restoring from a pgadmin's .backup file in
 the
 past... :S

 Any pitfalls I should be aware of?

 Btw: it's a reasonably large DB with 30mil+ rows...

 Already thankful for any insight,

 --
 Pedro Doria Meunier
 GSM: +351 91 581 88 23
 Skype: pdoriam






Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-20 Thread Robert Treat
On Mon, Nov 14, 2011 at 12:45 AM, Venkat Balaji venkat.bal...@verse.in wrote:
 Question: what can I do to rsync only the new additions in every table
 starting 00:00:01 until 23:59:59 for each day?

 A table level replication (like Slony) should help here.
 Or
 A trigger based approach with dblink would be an-other (but, a bit complex)
 option.

If you don't actually care about the rows of data specifically, and
just want an incremental data diff, you might look at what options your
filesystem gives you. We often use incremental snapshots on ZFS to
give us smaller copies that can be shipped off to the backup server
and used to reconstruct the server if needed.

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] SIGNALNAME in pg_ctl kill

2011-11-09 Thread Robert Treat
If you are trying to kill one specific connection/backend, I'd
recommend using the pg_terminate_backend(pid_goes_here) function.
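For example (the pid is hypothetical, and on 9.0/9.1 the matching column in pg_stat_activity is named procpid):

```sql
-- end one specific connection
SELECT pg_terminate_backend(12345);

-- or, gentler: cancel the running query but keep the connection alive
SELECT pg_cancel_backend(12345);
```

This avoids sending signals by hand and the risk of taking down more than the one backend you intended.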

Robert Treat
conjecture: xzilla.net
consulting: omniti.com

On Wed, Nov 9, 2011 at 5:18 PM, Mike Blackwell mike.blackw...@rrd.com wrote:
 The manual section on the postmaster process has some info:
 http://www.postgresql.org/docs/current/static/app-postgres.html

 __
 Mike Blackwell | Technical Analyst, Distribution Services/Rollout Management
 | RR Donnelley
 1750 Wallace Ave | St Charles, IL 60174-3401
 Office: 630.313.7818
 mike.blackw...@rrd.com
 http://www.rrdonnelley.com



 On Wed, Nov 9, 2011 at 16:02, Gauthier, Dave dave.gauth...@intel.com
 wrote:

 pg_ctl --help lists the various SIGNALNAME options to use with pg_ctl
 kill...



 Allowed signal names for kill:

   HUP INT QUIT ABRT TERM USR1 USR2



 I can't find a description of what each does.  I'd like to abort just the
 procpid I enter, but using ABRT has a tendency to kill lots of other stuff
 too.



 Thanks for any insights !




Re: [GENERAL] Masquerading a unique index as a primary key in 8.4?

2011-11-08 Thread Robert Treat
On Tue, Nov 8, 2011 at 11:28 AM, Vick Khera vi...@khera.org wrote:
 On Tue, Oct 18, 2011 at 6:21 PM, David Pirotte dpiro...@gmail.com wrote:
 The underlying purpose is to get Londiste to acknowledge the table's key,
 and this strategy seems to work without any problems.  Londiste doesn't seem
 to care that the primary key is only reflected in pg_index and isn't
 accompanied by the relevant pg_constraint entry.  Is modifying the
 underlying pg_catalog tables like this Very Bad?  Will it have mysterious
 and unintended consequences, or can I get away with it?  Thanks!

 The badness I see that will eventually come back to bite you is that
 your unique constraint is lacking NOT NULL, and a PK by definition
 has NOT NULL.  Therefore some other part of the system is permitted
 to make that assumption, and when stuff fails because you lied to the
 system, you will probably never figure out why, or even know.


Agreed. I'd be more inclined to change londiste, or just ditch it for
something else that will recognize the unique index as a unique enough
identifier to enable replication. That limitation is kind of lame.

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] Why is there no 8.3.16 rpm with _id ?

2011-11-03 Thread Robert Treat
2011/11/3 Devrim GÜNDÜZ dev...@gunduz.org:
 On Wed, 2011-11-02 at 13:16 -0400, Robert Treat wrote:
 

 Hey Devrim, any chance you have published your rpm spec files you used
 on the earlier 8.3 -id builds? I looked around and couldn't find one.

 They were in the previous repo -- anyway, I just update the spec file to
 8.3.16:

 http://svn.pgrpms.org/browser/rpm/redhat/8.3/postgresql-intdatetime

 It also includes the patches.

 Anyway, here are the 8.3.16-id packages. I had some free cycles this
 morning, so I built them:

 http://yum.postgresql.org/8.3/redhat/rhel-5-x86_64-id/repoview/
 http://yum.postgresql.org/8.3/redhat/rhel-5-i386-id/repoview/


Oh, nice. Thanks Devrim!

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] Recommendations for SSDs in production?

2011-11-03 Thread Robert Treat
On Wed, Nov 2, 2011 at 11:02 PM, Benjamin Smith
li...@benjamindsmith.com wrote:
 On Wednesday, November 02, 2011 11:39:25 AM Thomas Strunz wrote:
 I guess go Intel
 route or some other crazy expensive enterprise stuff.

 It's advice about some of the crazy expensive enterprise stuff that I'm
 seeking...? I don't mind spending some money if I get to keep up this level of
 performance, but also am not looking to make somebody's private plane payment,
 either.

There's a pretty varied mix of speed, durability, and price with any
SSD-based architecture, but the two that have proven best in our
testing and production use (for ourselves and our clients) are
Intel (mostly the 320 series, IIRC) and Fusion-io. I'd start by looking
at those.

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] FATAL: password authentication failed for user postgres

2011-11-03 Thread Robert Treat
On Thu, Nov 3, 2011 at 9:02 AM, Sabn Coanda s.coa...@deuromedia.ro wrote:
 Hi,

 I have been using trust authentication for some years, but I now need to
 authenticate the user for a specific application when it connects to the
 postgresql database. I found trust ignores the password, so I changed it
 in pg_hba.conf to password.


You should probably switch this to md5. It gives the same basic
functionality, but it's more secure.
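Concretely, the change is just swapping the method column in pg_hba.conf; with md5 the client sends a salted hash of the password instead of the cleartext (a sketch, mirroring the two-line layout used in this thread):

```conf
# pg_hba.conf
local  all  all                md5
host   all  all  127.0.0.1/32  md5
```

A reload (pg_ctl reload) is enough to pick up the change; no restart is needed.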

 My application is working now, but I have problems running the usual
 maintenance scripts. For instance, when I am running pg_dump with postgres
 user, it requires the password. The problem is it doesn't accept the
 password I used in pgAdmin to connect to the same database for the same
 database user.

 What is wrong ?

 My pg_hba.conf looks like:
 local all all password
 host all all 127.0.0.1/32 password

 The command I am running is:
 pg_dump my_db_name -F c -v -X disable-triggers -U postgres -h 127.0.0.1 -p
 5432 -f ./test.backup


Does pgAdmin also connect via 127.0.0.1? What happens if you dump via
the local socket? Also, do you get an error from pg_dump, or does it
just prompt for a password and then refuse what you submit?

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] Server move using rsync

2011-11-02 Thread Robert Treat
On Tue, Nov 1, 2011 at 11:08 AM, Alan Hodgson ahodg...@simkin.ca wrote:
 On October 31, 2011 03:01:19 PM Stephen Denne wrote:
 I'm wondering whether it's worth doing anyway, simply to check that it
 doesn't do something completely unexpected, which would presumably alert
 us to something we hadn't considered.


 Testing is always worthwhile, if only to ensure that PostgreSQL will actually
 run with your configuration on the new machine (sufficient shared memory, IP
 addresses specified in postgresql.conf, etc).

 However, assuming the PostgreSQL binary packages you're using are identical,
 and assuming that you aren't changing tablespace pointers around, the rsync /
 restart is pretty fool-proof in terms of reliably copying PostgreSQL itself.
 PostgreSQL is good about updating time stamps on modified files, so you don't
 have to worry about needing the full compare options on rsync or anything;
 -avr --delete is generally sufficient.

 You might disable WAL archiving during a test startup to avoid sending
 duplicates to your backup server.


You know, this looks like it will work, but if I were you, I would set
up the database as a PITR standby on the new box, and have WAL
shipping in place. When you're ready to move, you shutdown the old
database, synch up the xlogs, and then failover to the new database.
Not only should this be faster, it seems less error prone, and you can
actually test the failover and launch bits while the original server is
up and running.
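
In 9.0-era terms, the standby side is roughly a recovery.conf like the following on the new box, with failover triggered by creating the trigger file once the old master is shut down (a sketch only; paths are placeholders):

```
# recovery.conf on the new server
standby_mode = 'on'
restore_command = 'cp /path/to/wal_archive/%f "%p"'
trigger_file = '/tmp/pg_failover_trigger'
```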

Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] Why is there no 8.3.16 rpm with _id ?

2011-11-02 Thread Robert Treat
2011/10/30 Devrim GÜNDÜZ dev...@gunduz.org:

 [Moving to pgsql-general]

 On Sun, 2011-10-30 at 07:24 +0100, hubert depesz lubaczewski wrote:
 we'd like to upgrade to newest 8.3, and we're on 8.3.11 _id, but it
 looks like 8.3.11 is the newest version of 8.3 built with integer
 datetimes:
 http://yum.postgresql.org/8.3/redhat/rhel-5-x86_64-id/repoview/

 Is there any reason for this, and will there be any newer versions
 built with integer datetimes?

 I have no intention to build the -id packages again, given the lack of
 request (first request since 8.3.11...). You can build your own packages
 quite easily, though.


Hey Devrim, any chance you have published your rpm spec files you used
on the earlier 8.3 -id builds? I looked around and couldn't find one.


Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] PostgreSQL benchmarked on XFS vs ZFS vs btrfs vs ext4

2011-09-14 Thread Robert Treat
Can you go into some more detail on how you set up ZFS on these systems?

Robert Treat
conjecture: xzilla.net
consulting: omniti.com

On Tue, Sep 13, 2011 at 10:56 PM, Andy Colson a...@squeakycode.net wrote:
 On 09/13/2011 08:15 PM, Toby Corkindale wrote:

 Hi,
 Some months ago, I ran some (probably naive) benchmarks looking at how
 pgbench performed on an identical system with differing filesystems. (on
 Linux).

 Since then the kernel-level version of ZFS became usable, and there have
 been improvements to btrfs, and no doubt various updates in the Linux kernel
 and PostgreSQL that should help performance.

 I ran the tests on Ubuntu 11.04 with Pg 9.0 first, then upgraded the
 system to Ubuntu 11.10 (beta) with Pg 9.1 and ran them again.

 The latter combination showed a considerable performance improvement
 overall - although I didn't investigate to find out whether this was due to
 kernel improvements, postgres improvements, or virtio improvements.

 The results are measured in transactions-per-second, with higher numbers
 being better.

 Results:

 ext4 (data=writeback,relatime):
 natty: 248
 oneiric: 297

 ext4 (data=writeback,relatime,nobarrier):
 natty: didn't test
 oneiric: 1409

 XFS (relatime):
 natty: didn't test
 oneiric: 171

 btrfs (relatime):
 natty: 61.5
 oneiric: 91

 btrfs (relatime,nodatacow):
 natty: didn't test
 oneiric: 128

 ZFS (defaults):
 natty: 171
 oneiric: 996


 Conclusion:
 Last time I ran these tests, xfs and ext4 pulled very similar results, and
 both were miles ahead of btrfs. This time around, ext4 has managed to get a
 significantly faster result than xfs.

 However we have a new contender - ZFS performed *extremely* well on the
 latest Ubuntu setup - achieving triple the performance of regular ext4!
 I'm not sure how it achieved this, and whether we're losing some kind of
 data protection (eg. like the barrier options in XFS and ext4).
 If ext4 has barriers disabled, it surpasses even ZFS's high score.

 Oddly, ZFS performed wildly differently on ubuntu 11.04 vs 11.10b. I can't
 explain this. Any ideas?


 Cheers,
 Toby


 Did you test unplugging the power cable in the middle of a test to see which
 would come back up?

 -Andy





Re: [GENERAL] md5 of table

2011-09-01 Thread Robert Treat
2011/9/1 Merlin Moncure mmonc...@gmail.com:
 2011/9/1 Grzegorz Jaśkiewicz gryz...@gmail.com:
 On Thu, Sep 1, 2011 at 11:14 AM, Sim Zacks s...@compulab.co.il wrote:
 On 09/01/2011 12:26 PM, Pavel Stehule wrote:

 Hello

 postgres=# create table tt(a int, b varchar);
 CREATE TABLE
 postgres=# insert into tt values(10,'hello');
 INSERT 0 1

 postgres=# select md5(array_to_string(array_agg(md5(tt::text)),'')) from
 I do that as well, but it might have questionable performance when
 your table has 16M rows, and is 50GB +

 you need order by for that to work.  I would do it like this:
 select md5(array(select foo from foo order by foo_pkey)::text);

 it's great quick'n'dirty, but not much scalable beyond millions.

I've always liked doing this with my pager:

 [robert@client-168] export PAGER=md5
-=[11:40:25 Thu Sep 01]=---=[ pagila-0.10.1 ]=-
 [robert@client-168] psql -hlocalhost -dpagila
psql (9.0.4, server 9.1beta3)
WARNING: psql version 9.0, server version 9.1.
 Some psql features might not work.
Type "help" for help.

pagila=# select * from actor order by actor_id;
f381ebdefe0aada9c0bc14e657962c1f
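
Putting the ordering advice together in one query (a sketch against the pagila actor table used above; string_agg with ORDER BY needs 9.0+, and it still builds the whole concatenation in memory, so it won't scale to very large tables):

```sql
-- hash each row, concatenate the hashes in a stable order, hash the result
SELECT md5(string_agg(md5(a::text), '' ORDER BY a.actor_id))
FROM actor a;
```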


Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] SSDs with Postgresql?

2011-04-29 Thread Robert Treat
On Thu, Apr 28, 2011 at 7:00 PM, Mark Felder f...@feld.me wrote:
 On Thu, 28 Apr 2011 17:27:04 -0500, Basil Bourque basil.l...@me.com wrote:

 So, while I can't specifically recommend their products, I certainly
 suggest considering them.

 Customer of ours is probably lurking on here. We host their servers in our
 datacenter -- we had a UPS go pop after an amazing surge and their servers
 all went down (weren't paying for N+1 power). They had several FusionIO
 cards in servers running Postgres and experienced zero corruption. YMMV.


Yeah, we're running fusion-io on some pretty heavily traffic'd
servers, and the performance has been good, and durability there when
needed. It's certainly worth checking out for those investigating
these options.


Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] SSDs with Postgresql?

2011-04-28 Thread Robert Treat
On Thu, Apr 21, 2011 at 12:10 PM, Greg Smith g...@2ndquadrant.com wrote:
 On 04/21/2011 11:33 AM, Florian Weimer wrote:

 Is there an easy way to monitor WAL traffic in away? It
 does not have to be finegrained, but it might be helpful to know if
 we're doing 10 GB, 100 GB or 1 TB of WAL traffic on a particular
 database, should the question of SSDs ever come up.


 You can use functions like pg_current_xlog_location() :
 http://www.postgresql.org/docs/9.0/interactive/functions-admin.html

 Save a copy of this periodically:

 select now(),pg_current_xlog_location();

 And you can see WAL volume over time given any two points from that set of
 samples.

 To convert the internal numbers returned by that into bytes, you'll need to
 do some math on them.  Examples showing how that works and code in a few
 languages:

 http://archives.postgresql.org/pgsql-general/2010-10/msg00077.php (by hand)
 http://munin-monitoring.org/browser/trunk/plugins/node.d/postgres_streaming_.in?rev=3905
 (in Perl)
 http://archives.postgresql.org/pgsql-general/2010-10/msg00079.php (in C)
 http://postgresql.1045698.n5.nabble.com/How-can-we-tell-how-far-behind-the-standby-is-td3252297.html
 (in bash with bc(!), other links)

 What I keep meaning to write is something that does that as part of the SQL
 itself, so it gets pulled out of the database already in bytes.


We have an open task to work on this same problem. What we had cobbled
together so far was some sql which converted the xlog value into an
integer (it's pretty ugly, but I could send it over if you think it
would help), which we could then stick in a monitoring system and
graph. To get an idea of traffic, I just multiplied this by 16MB. End
result ended up looking like this:
https://circonus.com/shared/graphs/9497d906-4c5b-e6d2-bf91-d8869e7c1668/OnxdZG

Couldn't decide on exactly where to go from there. That's graphing
MB/sec, which maps easily in my mind, since xlog files ship in
16MB bursts. It would make more sense for WAL streaming though (but in
that case we'd probably want to measure it more precisely).
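
For what it's worth, the core of that conversion can be done in plain SQL along these lines (a sketch for the pre-9.3 numbering, where each xlog id covers 0xFF000000 usable bytes; on 9.2+, pg_xlog_location_diff(pg_current_xlog_location(), '0/0') does the same job directly):

```sql
-- 4278190080 = 0xFF000000, bytes per xlog id in pre-9.3 releases
SELECT ('x' || lpad(split_part(loc, '/', 1), 16, '0'))::bit(64)::bigint
           * 4278190080
     + ('x' || lpad(split_part(loc, '/', 2), 16, '0'))::bit(64)::bigint
       AS wal_bytes
FROM (SELECT pg_current_xlog_location()::text AS loc) s;
```

Sampling wal_bytes periodically and taking deltas gives the MB/sec graphed above.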

Robert Treat
play: http://xzilla.net
work: http://omniti.com



Re: [GENERAL] PG on two nodes with shared disk ocfs2 drbd

2011-02-27 Thread Robert Treat
On Sun, Feb 27, 2011 at 7:17 PM, Andrew Sullivan a...@crankycanuck.ca wrote:
 On Mon, Feb 28, 2011 at 12:13:32AM +0100, Jasmin Dizdarevic wrote:
 My idea was the one, that john described: DML and DDL are done on the small
 box and reporting on the big mama with streaming replication and hot
 stand-by enabled. the only problem is that we use temp tables for reporting
 purposes. i hope that the query duration impact with not using temp tables
 will be equalized through running dml/ddl on the small box.

 By the way, despite my flip comment, it is entirely possible that what
 you need would be better handled by one of the other replication
 systems.  Slony is actually well-suited to this sort of thing, despite
 the overhead that it imposes.  This is a matter of trade-offs, and you
 might want to think about different roles for different boxes --
 especially since hardware is so cheap these days.


Yeah, it's possible one of the async master-master systems like
bucardo or rubyrep would also fit his needs. There are options here,
just no full on pony/unicorn/pegasus mix like everyone hopes for.

Oh, I guess if someone is looking to fund/help development of such a
thing, it might be worth pointing people to Postgres-XC
(http://wiki.postgresql.org/wiki/Postgres-XC). It's got a ways to go,
but they are at least trying.

Robert Treat
play: xzilla.net
work: omniti.com
hiring: l42.org/Lg



Re: [GENERAL] Question about switchover with PG9 replication

2011-02-27 Thread Robert Treat
On Sun, Feb 27, 2011 at 3:46 AM, Cyril Scetbon cyril.scet...@free.fr wrote:
 Le 07/02/2011 09:57, Wouter D'Haeseleer a écrit :

 Question 1 : is it possible to have such a replication configuration with
 the streaming replication of PG9 (cascaded replication) ?

 Nope, as far as I have tested pg only has 1 master and can have a number of
 slaves, so having 2 masters is not possible.

 The second host named master2 is a slave which has slaves too, not a master
 on which you can write.


It's good to point that out for those that didn't pick up on that, but
unfortunately for you it doesn't change the equation wrt your
scenario.

 Question 2 : All the procedures I have seen describing a switchover between
 2 PG servers require to copy (or rsync) the database from the new master
 (old slave) to the new slave (old master).
 Is it possible to do switchover between sites (between PGMaster1 and
 PGMaster2) whithout copying all the database from the new PG master to the
 new PG slave ?
 If it is not possible yet, shall it be possible in future releases ?

 Nope this is not possible, pg requires to have an updates basebackup at the
 slave, this is because if you loose a lot of streamed wal files it is
 impossible for the slave to catch up and have consistent data.


Yeah, this is kind of a cop-out, because in a switchover you'd be able
to guarantee no loss of WAL (streamed or file based). The issue is more
just that the built in replication system isn't very mature yet. It's
being worked on, and switchover is something on the list, but it's not
an option yet.

Robert Treat
play: xzilla.net
work: omniti.com
hiring: l42.org/Lg



Re: [GENERAL] PGError: ERROR: missing FROM-clause entry for table

2010-11-25 Thread Robert Treat
On Thu, Nov 25, 2010 at 9:21 PM, James B. Byrne byrn...@harte-lyne.cawrote:

 I am getting this error:

 PGError: ERROR:  missing FROM-clause entry for table ca_customs_entry
 LINE 1: ..._entries.is_cadex_transmitted = 'f') ORDER BY
 ca_customs...

 The code is generated by a Ruby-on-Rails-3.0.1 ActiveRecord model:

 SELECT ca_customs_shipments.* FROM ca_customs_shipments INNER
 JOIN ca_customs_entries ON
 ca_customs_entries.ca_customs_shipment_id =
 ca_customs_shipments.id WHERE
 (ca_customs_entries.is_cadex_transmitted = 'f') ORDER BY
 ca_customs_entry.is_across_transmitted,
 ca_customs_entry.is_across_rejected,
 ca_customs_entry.is_across_accepted,
 ca_customs_entry.is_cadex_released LIMIT 5 OFFSET 0


Looks to me like the problem is you are trying to ORDER BY columns in
ca_customs_entry, but there is no such table for that (don't confuse it
with ca_customs_entries).  You need to either set a matching alias, or fix
the table name qualifier in those order by columns.
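
Concretely, fixing the qualifiers to match the joined table name gives something like this (a sketch using the table and column names from the original query):

```sql
SELECT ca_customs_shipments.*
  FROM ca_customs_shipments
  JOIN ca_customs_entries
    ON ca_customs_entries.ca_customs_shipment_id = ca_customs_shipments.id
 WHERE ca_customs_entries.is_cadex_transmitted = 'f'
 ORDER BY ca_customs_entries.is_across_transmitted,
          ca_customs_entries.is_across_rejected,
          ca_customs_entries.is_across_accepted,
          ca_customs_entries.is_cadex_released
 LIMIT 5 OFFSET 0;
```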


Robert Treat
play: xzilla.net
work: l42.org/lg


Re: [GENERAL] Why facebook used mysql ?

2010-11-10 Thread Robert Treat
On Tue, Nov 9, 2010 at 1:36 PM, Sandeep Srinivasa s...@clearsenses.comwrote:


 On Tue, Nov 9, 2010 at 11:46 PM, David Boreham david_l...@boreham.orgwrote:


 Hmm...typically multi-core scaling issues are in the area of memory
 contention and cache coherence (and therefore are for the most part not
 dependent on the OS and its scheduler).


 If it is independent of the OS, then how does one go about tuning it.

 Consider this - I get a 12 core server on which I want multiple webserver
 instances + DB. Can one create CPU pools (say core 1,2,3 for webservers,
 4,5,6,7 for DB, etc.) ?

 I know about taskset, but should one be using it ?


You can do this in some systems (we've done it on solaris systems for
example), but realize that for any of the high scale websites, they run
dedicated machines for database and web services; and that's essentially a
mandatory requirement just for purposes of having visibility into what is
affecting your server performance at scale.

It might also be worth mentioning that Facebook doesn't actually run MySQL
like you'd get from Oracle; they have their own custom patch set that is
tuned specifically for their servers (based on their OS modifications as
well). Probably the closest equivalent would be Percona's XtraDB table
engine, and I have seen some benchmarks that would certainly show comparable
performance at 32 cores if not slightly better, but of course it will be
somewhat workload dependent. It's mostly irrelevant though to internet
oriented companies, very few are looking for 32+ core systems as a solution
to their problems.


Robert Treat
play: http://www.xzilla.net
work: http://www.omniti.com/is/hiring


Re: [GENERAL] REINDEX requirement?

2010-11-10 Thread Robert Treat
On Tue, Nov 9, 2010 at 1:51 PM, Igor Neyman iney...@perceptron.com wrote:

  -Original Message-
  From: AI Rumman [mailto:rumman...@gmail.com]
  Sent: Tuesday, November 09, 2010 3:26 AM
  To: pgsql-general General
  Subject: REINDEX requirement?
 
  How do I know that index require REINDEX?
 
 

 Look at the results of pgstatindex(...) function for specific index.
 It's part of pgstattupple contrib module - read it up in the docs.


If you are looking for a poor man's tool, we have a script that will output
numbers on table/index bloat. It's not entirely accurate (patches welcome),
but usually good enough to highlight the problems. See
http://labs.omniti.com/labs/pgtreats/log/trunk/tools/pg_bloat_report.pl
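
The pgstatindex call Igor mentions looks like this (a sketch; the index name is a placeholder, and the pgstattuple contrib module must be installed first):

```sql
SELECT avg_leaf_density, leaf_fragmentation
  FROM pgstatindex('public.some_index_name');
-- low avg_leaf_density and/or high leaf_fragmentation are the usual
-- signs that a REINDEX would reclaim space
```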


Robert Treat
play: http://www.xzilla.net
work: http://www.omniti.com/is/hiring


[GENERAL] Looking for PostgreSQL Folks in New Orleans area

2010-11-09 Thread Robert Treat
Howdy folks,

We're looking for some PostgreSQL users / advocates in the New Orleans area
for some community outreach activities, like PGDays and User Groups. If you
are in that area and interested in helping, or know who to talk to, please
drop me a line, thanks!

Robert Treat
play: http://www.xzilla.net
work: http://www.omniti.com


[GENERAL] Postgres officially accepted in to 2010 Google Summer of Code program

2010-03-19 Thread Robert Treat
Howdy folks, 

I'm very happy to announce that the Postgres project has been selected 
to 
participate in this years Google Summer of Code program.  Over the next couple 
weeks we'll be looking to solidify our mentor base; if you work on Postgres 
and would be willing to mentor a student, please send me a note so we can get 
you signed up.  If you are a student and you're interested in working on 
Postgres, now is the time to get your proposal together. Student applications 
will open up on March 29th, so we'd like to have our mentors in place for 
review, and hopefully had students discussing with the Postgres devs their 
proposals as much as needed. If anyone has any questions, feel free to email 
me, or track me down on irc.  

Handy links for Postgres GSoC:

Our ideas page for GSoC: 
http://www.postgresql.org/developer/summerofcode

Our loose attempt at organization: 
http://wiki.postgresql.org/wiki/GSoC_2010

Our Postgres page on the GSoC site: 
http://socghop.appspot.com/gsoc/org/show/google/gsoc2010/postgresql

Users Guide to GSoC: 
http://socghop.appspot.com/document/show/gsoc_program/google/gsoc2010/userguide


Thanks everyone, I'm looking forward to another interesting year with GSoC, 
and hoping you'll join in. 

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



[GENERAL] Volunteers needed to help staff the PostgreSQL Booth at LISA 2009

2009-10-09 Thread Robert Treat
LISA 2009 (the Large Installation Systems Administration Conference) is coming 
up next month (November 1-6th).  

We're looking for a few folks in Baltimore/DC area who are attending the 
conference or in the area who can spare a few hours to staff the Postgres 
booth. We need people all day Wednesday (November 4th, Noon - 7PM) and half 
day Thursday (November 5th, 10AM-2PM). 

You get a Postgres T-shirt for your trouble, and the opportunity to talk with 
lots of people interested in getting started with Postgres. 

Please reply to this email if you've got some time. Thanks!

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-09 Thread Robert Treat
On Wednesday 08 April 2009 18:25:25 Ron Mayer wrote:
 Robert Treat wrote:
  You can be sure that discussion of this topic in this forum will soon be
  visited by religious zealots, but the short answer is nulls are bad,
  mmkay. A slightly longer answer would be that, as a general rule,
  attributes of your relations that only apply to 1% of the rows are better
  represented as a one

 To fulfill your prophecy of zealotry, I've got a number of tables
 with columns that are mostly null that I can't think of that nice a
 way of refactoring.  I'd love ideas to improve the design, though.

 One example's an address table. Most addresses have a few fields
 that are typically present (building number, city, state, etc).
 Others, as described in various government's address standards,
 are fields that are typically absent.  For example in US addressing
 rules, the Urbanization Name line:
 http://www.usps.com/ncsc/addressstds/addressformats.htm
 MRS MARIA SUAREZ  Name
 URB LAS GLADIOLAS Urbanization name
 150 CALLE A   House no. and st. name
 SAN JUAN PR 00926-3232City, state, and ZIP+4
 Similarly sparse columns in my address tables are,
 titles, division/department Names and mailstop codes.
 (described here: http://pe.usps.gov/text/pub28/pub28c3_011.htm)

 While I realize I could stick in some string (empty string, or
 some other magic string like urbanization name doesn't apply to
 this address) into a table, it sure is convenient to put nulls
 in those columns.

 I'm quite curious what you'd suggest a well-designed address table
 would look like without nulls.

The decision here would depend on your particular sect of the anti-null 
religion, but you have a couple of choices:

1) Break these fields out into one or more tables, containing entries only for 
those address that have the additional information. Ideally you might be able 
to do something like extended_address_info where all of these fields could 
be kept, all of them being non-null. I suspect you can't do the ideal, so 
you'd end up with a bunch of tables. 

This would be used by the "normalization trumps nullification" sect.

2) Given that all of these columns have an authoritative source of what should 
be allowed, you could use the magic string approach without requiring too 
much magic, and these columns could even be a foreign key into a table 
containing the authoritative options. 

This could be justified by the "all nulls are bad" sect, but might also be used 
by a null-using crowd who take a strict approach to nulls meaning "unknown 
value", since here it isn't that the value is unknown; there isn't a valid 
value for these columns. (Adding the magic string to your FK table creates a 
valid reference value for those entries that would otherwise not match)
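
A minimal sketch of option 2 (table and column names invented for illustration):

```sql
CREATE TABLE urbanization_names (
    urbanization_name text PRIMARY KEY
);
-- the "magic" row that stands in for "does not apply"
INSERT INTO urbanization_names VALUES ('N/A');

CREATE TABLE addresses (
    address_id        serial PRIMARY KEY,
    street_line       text NOT NULL,
    city              text NOT NULL,
    urbanization_name text NOT NULL DEFAULT 'N/A'
        REFERENCES urbanization_names
);
```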

Personally, if you force me into a well-designed address table *without* 
nulls  decision, I would take this latter approach. HTH

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Robert Treat
On Wednesday 08 April 2009 11:56:35 Ian Mayo wrote:
 Cheers Tom,

 On Wed, Apr 8, 2009 at 4:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Ian Mayo ianm...@tesco.net writes:
  [snip]
 
  No.  You'd basically be manually reinventing the TOAST mechanism;
  or the large object mechanism, if you choose to store the blob
  as a large object rather than a plain bytea field.  Either way,
  it won't physically be in the same table as the main row data.

 fine, that keeps the design simpler


Maybe I've been reading too much Pascal again lately, but if only 1% of your 
rows are going to have data in this column, personally, I'd put it in a 
separate table. 

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Robert Treat
On Wednesday 08 April 2009 15:30:28 Ian Mayo wrote:
 On Wed, Apr 8, 2009 at 8:13 PM, Robert Treat

 xzi...@users.sourceforge.net wrote:
  Maybe I've been reading too much Pascal again lately, but if only 1% of
  your rows are going to have data in this column, personally, I'd put it
  in a separate table.

 thanks for that Robert - it does match my (completely groundless)
 first impression.

 In the nature of debate, would you mind passing on the pascal-related
 reasons why you'd put the data in another table?


You can be sure that discussion of this topic in this forum will soon be 
visited by religious zealots, but the short answer is nulls are bad, mmkay.  
A slightly longer answer would be that, as a general rule, attributes of your 
relations that only apply to 1% of the rows are better represented as a one 
to N relationship using a second table. For a longer answer, see
http://www.databasedesign-resource.com/null-values-in-a-database.html
or http://www.dbazine.com/ofinterest/oi-articles/pascal27
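
In table form, that general rule looks something like this (names invented for illustration; only the 1% of rows that actually have the bulky attribute get a row in the side table):

```sql
CREATE TABLE documents (
    document_id serial PRIMARY KEY,
    title       text NOT NULL
);

-- one-to-at-most-one side table for the rare, bulky attribute
CREATE TABLE document_attachments (
    document_id int PRIMARY KEY REFERENCES documents,
    attachment  bytea NOT NULL
);
```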

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] PostgreSQL on Webmin

2009-03-16 Thread Robert Treat
On Monday 16 March 2009 17:55:00 Kostadin Solakov wrote:
 John R Pierce wrote:

 thats odd, as a domain socket should be slightly -faster- than a tcp/ip
 socket.   now, since you say 'previous host' I could wonder if other
 configuration items are impacting this, such as buffer sizes in
 postgresql.conf, relative speed of disk controllers, etc.   or perhaps
 this new database hasn't been analyzed since it was populated, or its
 indexes need rebuilding, or something similar...

   I think that is the issue. Previous server was old machine and the
 settings in postgresql.conf were the default ones.
   The new machine is Xeon quad with 8GB ram and I already made some
 changes in postgresql.conf, but still no result.
   I followed the instructions on
 http://www.powerpostgresql.com/PerfList/ and made the following changes:

   shared_buffers = 5
   work_mem = 512000
   checkpoint_segments = 32
   effective_cache_size = 20

   Also I ran REINDEX on the DB and VACUUM VERBOSE ANALYZE on each
 table, but still the results are the same as before.
   What are the settings you recommend for this server?
   The application that is going to use it has a very demanding back
 end, it parses very large XML files (20+) and saves the data in the DB.


Unless you're actually running 8.2, that information is a bit out of date. 
There's a better write up at 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

Once you go through that and restart, if it's still slow, can you paste 
explain analyze from the two different servers?
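
As a rough illustration of the kind of starting points that wiki page suggests for an 8GB machine (values are illustrative only, not a recommendation for this specific workload):

```
shared_buffers = 2GB           # ~25% of RAM is a common starting point
effective_cache_size = 6GB     # estimate of memory available for caching
work_mem = 32MB                # allocated per sort/hash, so keep it modest
checkpoint_segments = 32
```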

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] audit table

2009-02-16 Thread Robert Treat
On Thursday 12 February 2009 22:13:05 Craig Ringer wrote:
 Sim Zacks wrote:
  I want a trigger on every table that inserts the old row into an audit
  table (for updates and deletes). If the audit table was per table, then
  I could easily have a field of type that table and insert old into it.
 
  Is there any way that I could accomplish this functionality with any
  other type, so I could input any record into it?

 You want a single audit table that looks like this:

 CREATE TABLE audit (
id SERIAL PRIMARY KEY,
table_changed regclass,
changed_by VARCHAR,
changed_when TIMESTAMP WITH TIME ZONE,
oldrow ANY_ROW_TYPE
 );

 ie you want a field that can dynamically contain anything?

 AFAIK that's not possible unless you want to store a textual
 representation of the row. I'm not sure of an easy way to do it even
 then, and of course you can't read it out again as a real row.

 What you might want to look at doing is using table inheritance. Your
 master audit table looks like this:

 CREATE TABLE audit (
id SERIAL PRIMARY KEY,
table_changed regclass,
changed_by VARCHAR,
changed_when TIMESTAMP WITH TIME ZONE
 );

 and then you have child audit tables for each audited table, each of
 which looks like this:

 CREATE TABLE audit_tablename (
 old_row tablename
 ) INHERITS (audit);


http://pgfoundry.org/projects/tablelog/

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Database schema data synchronizer software for PostgreSQL?

2009-01-20 Thread Robert Treat
On Tuesday 20 January 2009 10:44:06 David Fetter wrote:
 On Tue, Jan 20, 2009 at 03:03:33PM +0100, Csaba Együd wrote:
  Hi,
  I'd like to ask your suggestions about a reliable admin software
  which is able to compare two dabases and generate a schema
  synchrinizer script.

 There is no such thing, and there is no prospect of there ever being
 such a thing, because the database does not contain enough information
 to create this automatically.  The problem exists at the
 organizational level, and needs to be solved there.


While I would agree that these tools can't solve organizational problems, they 
do exist:

http://pgdiff.sourceforge.net/
http://apgdiff.sourceforge.net/
http://www.dbsolo.com/
http://sqlmanager.net/en/products/postgresql/dbcomparer

there are others too... 

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Is this on the to-do list?

2009-01-20 Thread Robert Treat
On Monday 19 January 2009 18:13:51 Bruce Momjian wrote:
 Thomas Kellerer wrote:
  A B wrote on 18.01.2009 22:43:
   From the docs: 
   http://www.postgresql.org/docs/8.3/interactive/sql-update.html
  
   According to the standard, the column-list syntax should allow a list
   of columns to be assigned from a single row-valued expression, such as
   a sub-select:
   UPDATE accounts SET (contact_last_name, contact_first_name) =
   (SELECT last_name, first_name FROM salesmen
WHERE salesmen.id = accounts.sales_id);
    This is not currently implemented -- the source must be a list of
   independent expressions.
  
   Is this feature going into postgresql any day soon? :-)
 
  It's on the TODO list:
 
  http://wiki.postgresql.org/wiki/Todo#UPDATE

 Also, I don't know if anyone working on this item for 8.4 or 8.5.


Probably because there is a work-around...

UPDATE accounts SET contact_last_name = s.last_name, contact_first_name = 
s.first_name FROM salesmen s WHERE s.id = accounts.sales_id;

Which is great if you just want to get this done, but sucks if you wanted the 
specific syntax from above. 
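
For clarity, here are the two forms side by side (table and column names taken 
from the example above); the first is the standard syntax the TODO item covers, 
the second is the workaround:

```sql
-- Standard (SQL-spec) syntax; not implemented at the time of this thread:
-- UPDATE accounts SET (contact_last_name, contact_first_name) =
--     (SELECT last_name, first_name FROM salesmen
--       WHERE salesmen.id = accounts.sales_id);

-- Workaround using PostgreSQL's UPDATE ... FROM:
UPDATE accounts
   SET contact_last_name  = s.last_name,
       contact_first_name = s.first_name
  FROM salesmen s
 WHERE s.id = accounts.sales_id;
```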

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Solution for tranaction independent logging in same database?

2009-01-02 Thread Robert Treat
On Friday 02 January 2009 03:53:58 Gerhard Heift wrote:
 On Thu, Jan 01, 2009 at 02:41:08PM +0100, Gerhard Heift wrote:
  Hello,
 
  I want to log with triggers or in functions, and these logs should be
  independent of the transaction. Besides, I want to have the information
  about which actions were committed and which were not.
 
  So my idea was to log into the same database with dblink, return the
  primary keys and add them into a commit table.
 
  But my problem is that I do not know how to write the rule properly.
 

We created a similar project to this which is in the pgsoltools repo; 
http://labs.omniti.com/trac/pgsoltools/browser/trunk/autonomous_logging_tool

This was originally created to mimic logging done in long-running Oracle 
PL/SQL functions using autonomous commits, but it should work within any trigger 
function on the postgres side as well (or at least give you a good starting 
point to adapt it).  HTH

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Tool to converter plsql in pgplsql

2008-12-12 Thread Robert Treat
On Friday 12 December 2008 04:16:19 Dave Page wrote:
 On Fri, Dec 12, 2008 at 7:12 AM, Albe Laurenz laurenz.a...@wien.gv.at 
wrote:
  paulo matadr wrote:
  Do you know of a tool that automatically converts PL/SQL to PL/pgSQL?
  Does such a tool exist?
 
  EnterpriseDB claim that they can do something like this,
  but I don't believe that there is any tool which can do
  more than assist you.

 We don't have a converter, though we do have experience in this area
 of course. Our Advanced Server product has direct support for pl/sql
 however so you can run your code unmodified.


*In theory* :-)  There are still a number of shortcomings, so depending on how 
large and/or complicated your systems are, it may or may not work for you, 
but it's certainly worth a look if you're planning a migration.

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Ubuntu for servers (was TurnKey PostgreSQL)

2008-12-09 Thread Robert Treat
On Tuesday 09 December 2008 19:43:02 Liraz Siri wrote:
 Greg has a good point. Ubuntu is a bit of a moving target. In contrast,
 Debian has a much slower release cycle than Ubuntu and is thus
 considered by many people to be preferable for production server
 applications.


Another option for folks is to switch to another operating system that's a bit 
more stable *cough*solaris*cough*bsd*cough* 

:-)

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Favorite Tom Lane quotes

2008-12-04 Thread Robert Treat
On Monday 01 December 2008 22:09:08 Scott Marlowe wrote:
 On Mon, Dec 1, 2008 at 7:49 PM, Grzegorz Jaśkiewicz [EMAIL PROTECTED] 
wrote:
  which reminds me, of my favourite recent quote:
  Think I'll go fix this while I'm watching the football game ...

 We really need a favorite Tom Lane quotes thread.  Mine is (roughly):


http://archives.postgresql.org/pgsql-hackers/2006-04/msg00288.php
I remember after reading this post wondering whether Tom uses caffeinated 
soap... 

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Job scheduling in Postgre

2008-12-04 Thread Robert Treat
On Tuesday 02 December 2008 07:11:02 A. Kretschmer wrote:
 On Tue, 02.12.2008 at 16:45:16 +0500, IPS wrote the following:
  I have certain jobs to be executed automatically at a given interval of
  time in the PostgreSQL database. Is there any utility/feature available
  in Postgres to do so?

 No, use the scheduler from the OS, CRON for example (UNIX).


There is a database-level scheduler called (iirc) pgAgent, which comes bundled 
with pgAdmin. I think it's so well hidden because it comes as part of a 
tool which is only used by a small subset of the community. I had hopes that 
it might follow autovacuum's path and get moved into a contrib module and 
possibly integrated into the backend some day, but I haven't seen much push 
in that direction. 

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Switch off PITR

2008-12-04 Thread Robert Treat
On Wednesday 03 December 2008 14:22:28 Joshua D. Drake wrote:
 On Wed, 2008-12-03 at 13:16 -0500, Bill Moran wrote:
  In response to Joey K. [EMAIL PROTECTED]:
   How do I turn off PITR in the mean time? I commented archive_command
   and issued a pg_ctl reload and postgres is *still* archiving logs to
   the backup server.
 
  Pretty sure you're going to need a full restart -- reload won't cause
  that parameter to be re-evaluated.

 You can change archive_command to something like /bin/true and reload.
 However you will have to do a full base backup to get postgresql doing
 log shipping again.


You can probably avoid this by having your archive_command put the xlogs 
somewhere local; then, once you restart, set the archive_command to push 
back to your backup server and move the missing logs over manually.  (This is 
more or less fragile depending on exactly how you've set things up, but it 
should be doable.)
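
As a sketch of that idea (the paths and hostname here are made up):
park the WAL segments locally first, then switch back and copy the parked
files over by hand so the archive has no gaps:

```
# postgresql.conf -- while the backup server is unreachable, park WAL locally
# (illustrative path; make sure the directory exists and is writable)
archive_command = 'cp %p /var/lib/pgsql/wal_park/%f'

# after restarting with the real command, e.g.
#   archive_command = 'scp %p backup:/archive/%f'
# manually copy the segments from /var/lib/pgsql/wal_park to the backup server
```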

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Limit on number of databases in a Cluster ?

2008-12-04 Thread Robert Treat
On Wednesday 03 December 2008 23:49:00 Scott Marlowe wrote:
 On Wed, Dec 3, 2008 at 11:05 AM, Josh Harrison [EMAIL PROTECTED] wrote:
  On Wed, Dec 3, 2008 at 11:51 AM, Scott Marlowe [EMAIL PROTECTED]
 
  wrote:
  On Wed, Dec 3, 2008 at 8:43 AM, Josh Harrison [EMAIL PROTECTED] wrote:
   Hi,
  
   1. Is there a limit on the number of databases that can be in a single
   postgres cluster?
 
  No.  I'm sure there's a practical limit into the thousands where
  things start to get slower.
 
   2. Is there any performance impacts associated with having too many
   databases in a cluster?
 
  Define too many.  I've run a couple hundred before without it being a
  problem.
 
   3. Is there a good magical number for this limit ?
 
  Only the one that your testing tells you there is.  Got a rough guess
  of how many you want to run?  How busy they'll be?  that kind of
  thing.
 
  About 10-15 ?

 That's hardly any really.  At that point it's more about whether or
 not your server can support all the users / access going on at once.
 15 or 1 db in the cluster, if you've got 200 users hitting it hard
 you'll need a big server.  OTOH, 100 dbs in a cluster with a dozen or
 fewer average users is just fine.

Right. This becomes most important when you tune postgresql.conf parameters, 
which apply cluster-wide and so need to be calculated across all databases. 
The fsm settings are a good example (tracking pages across all databases), 
but things like work_mem also need to account for all connections to all 
databases when you think about how high you can set them. Don't forget that 
some of these settings (like work_mem) can be set per database using the ALTER 
DATABASE command; just be careful, because the support for backing up those 
changes is spotty at best. 
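
For example (the database name and value here are illustrative):

```sql
-- raise work_mem for one database only; takes effect for new sessions
ALTER DATABASE reporting SET work_mem = '32MB';

-- note: a plain pg_dump of the database does not capture this setting;
-- verify that your backup procedure (e.g. pg_dumpall globals) preserves it
```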

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Robert Treat
On Monday 17 November 2008 17:02:54 Blazej wrote:
 Of course you must delete the schema before shutting down PostgreSQL and
 the OS. I don't know how to resolve the error when the schema was not
 deleted; I have no time to think about it. Maybe anybody knows how to
 restore the db when the in-memory schema was damaged?


Based on some similar, uh, experiences I've run across, I'd think the easiest 
approach would be to keep a script around with TRUNCATE commands for all your 
tables; then, when you restart, you run that script, which will fix your schema 
for you. This assumes you're keeping the default tablespace on hdd; if you lose 
the system catalogs, the right answer is initdb.
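
One way to build such a truncate script, assuming the in-memory tables live in 
a dedicated schema (the schema name here is illustrative):

```sql
-- emit one TRUNCATE statement per table in the in-memory schema;
-- save the output to a file and run it after each restart
SELECT 'TRUNCATE TABLE ' || quote_ident(schemaname) || '.'
                         || quote_ident(tablename) || ';'
  FROM pg_tables
 WHERE schemaname = 'memschema';
```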

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] speed up restore from dump

2008-11-01 Thread Robert Treat
On Friday 31 October 2008 08:07:08 Sam Mason wrote:
 On Thu, Oct 30, 2008 at 02:28:38PM -0700, Alan Hodgson wrote:
  On Thursday 30 October 2008, Joao Ferreira
  [EMAIL PROTECTED]
 
  wrote:
   well, see for yourself... (360 RAM, 524 SWAP) that's what it is...
   it's supposed to be somewhat of an embedded product...
 
  Clearly your hardware is your speed limitation. If you're swapping at
  all, anything running on the machine is going to be slow.

 The vmstat output only showed the odd block going in and out; but
 performance is only really going to suffer when it's thrashing.  If the
 swap in number stays in the double digits for a reasonable amount of
 time then you should probably look at what's causing it.  Giving memory
 back to the system to use for caching the file system can be good, lots
 of shared memory can also be good.


Well, I think he needs to cut back on work_mem, but he might want to give a 
little more to wal_buffers. 

 Building indexes takes time and IO bandwidth, maybe you could look at
 building less of them?  I'd be tempted to pull the import script apart
 into its constituent parts, i.e. the initial data load, and then all the
 constraints/index builds separately.  Then run through executing them by
 hand and see what you can change to make things more efficient.


It would be good to know where and when his bottlenecks are... i.e., I could see 
him being I/O-, memory-, or CPU-bottlenecked depending on where he is in the 
restore process.  

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] [HACKERS] Hot Standby utility and administrator functions

2008-10-27 Thread Robert Treat
On Monday 20 October 2008 05:25:29 Simon Riggs wrote:
 I'm looking to implement the following functions for Hot Standby, to
 allow those with administrative tools or management applications to have
 more control during recovery. Please let me know if other functions are
 required.

 What else do we need?


Is it possible to give the master/slave knowledge about each other?

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.net




Re: [GENERAL] [HACKERS] Hot Standby utility and administrator functions

2008-10-27 Thread Robert Treat
On Monday 27 October 2008 12:12:18 Simon Riggs wrote:
 On Mon, 2008-10-27 at 11:42 -0400, Robert Treat wrote:
  On Monday 20 October 2008 05:25:29 Simon Riggs wrote:
   I'm looking to implement the following functions for Hot Standby, to
   allow those with administrative tools or management applications to
   have more control during recovery. Please let me know if other
   functions are required.
  
   What else do we need?
 
  Is it possible to give the master/slave knowledge about each other?

 Yes, but for what reason?


I was thinking that admin tools that show hot standby information might also 
want to show the corresponding slave information (from the point of view of 
the master).  It might also allow tools to not have to be configured for all 
servers... i.e., connect to one and look up the other.  

 The project I'm working on is Hot Standby, not streaming replication.
 That will link things together better than they are now, so I'd probably
 rather not prejudge/duplicate that.

It's possible this type of information isn't appropriate for our Hot Standby 
implementation, but it is somewhat common in asynchronous and/or master/slave 
systems. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] How to free disk space

2008-10-21 Thread Robert Treat
On Tuesday 21 October 2008 09:00:30 postgres Emanuel CALVO FRANCO wrote:
 After running VACUUM, you must run reindexdb to shrink the indexes.


This is probably overkill, as you won't need to do this for a lot of tables in 
your database, and the locking issues are probably unhelpful. 

 You can pg_resetxlog too, but you need restart server to do that.


No No No!!! You should never ever ever run pg_resetxlog on a production 
machine!! I'm not sure where you got this idea, but it is a bad one to be 
sure!

 2008/10/21 Ruben Blanco [EMAIL PROTECTED]:
  Hi:
 
  My database is growing fast, taking too much disk space. How can I free
  disk space without performing a VACUUM FULL? It locks the database for
  several hours, and that is not a solution.
 
  I guess a backup-restore would do the work but, isn't there a better way
  to do this without shutting down postgres?
 
  Thanks in advance.

 --
   Emanuel Calvo Franco
 Syscope Postgresql DBA
   BaPUG Member

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Drop database / database in use question

2008-10-17 Thread Robert Treat
On Friday 17 October 2008 11:55:19 Dan Armbrust wrote:
 It would seem that way.  But if you have ever tried programming with
 the constraints of an InstallAnywhere installer, you would know why :)


If you are the only user, force a restart into single-user mode, drop the 
database, and restart normally. 

-- 
Robert Treat
http://www.omniti.com
Database: Scalability: Consulting 



Re: [GENERAL] Column level triggers

2008-10-15 Thread Robert Treat
On Wednesday 15 October 2008 04:19:59 Laurent Wandrebeck wrote:
 2008/10/15 Scott Marlowe [EMAIL PROTECTED]:
  You'll probably have to ask that in -hackers.  I'm guessing it's one
  of those things that if one wrote a sufficiently large check one could
  find a hacker to implement it.  But I can't imagine it being a weekend
  project, and if it's not already in 8.4 beta it wouldn't make it to
  8.4, but you'd have to shoot for 8.5.


Actually, the final commitfest for 8.4 isn't until November 1st, so if you 
did have something you wanted to get into 8.4, you have 2 weeks to make it 
into the last commitfest; after that you're probably looking at 8.5. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] how to remove the duplicate records from a table

2008-10-10 Thread Robert Treat
On Tuesday 07 October 2008 05:48:01 Albe Laurenz wrote:
 Yi Zhao wrote:
  I have a table that contains some duplicate records, and this table was
  created without OIDs, for example:
   id | temp_id
  +-
   10 |   1
   10 |   1
   10 |   1
   20 |   4
   20 |   4
   30 |   5
   30 |   5
  I want the duplicate records removed with only one kept, so
  the result is:
  10 1
  20 4
  30 5
 
  I know create a temp table will resolve this problem, but I don't want
  this way:)
 
  Can someone tell me a simple method?

 Don't know if you'd call that simple, but if the table is
 called t, you could do

 DELETE FROM t t1 USING t t2
 WHERE t1.id = t2.id AND t1.temp_id = t2.temp_id AND t1.ctid > t2.ctid;


Note that one problem the DELETE FROM approaches have, which the temp-table 
solutions don't, is that you can end up with a lot of dead tuples if there were 
a lot of duplicates... so if you can afford the locks, it's not a bad idea to 
do begin; lock table t1 in access exclusive mode; create temp table x as 
select ... from t1; truncate t1; insert into t1 select * from x; create 
unique index ui1 on t1(...); commit;  This way your now-unique table will 
be nice and compacted, and won't get any more duplicate rows.  
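
Spelled out against the example table (the unique columns here are just the 
ones from the example; adjust as needed):

```sql
BEGIN;
LOCK TABLE t1 IN ACCESS EXCLUSIVE MODE;
-- keep exactly one copy of each (id, temp_id) pair
CREATE TEMP TABLE x AS SELECT DISTINCT id, temp_id FROM t1;
TRUNCATE t1;
INSERT INTO t1 SELECT * FROM x;
-- prevent duplicates from coming back
CREATE UNIQUE INDEX ui1 ON t1 (id, temp_id);
COMMIT;
```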

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Slony vs Longiste

2008-09-24 Thread Robert Treat
On Wednesday 24 September 2008 12:34:17 Jason Long wrote:
 Richard Huxton wrote:
  Jason Long wrote:
  I need to set up master vs slave replication.
 
  My use case is quite simple.  I need to back up a small but fairly
  complex(30 MB data, 175 tables) DB remotely over T1 and be able to
  switch to that if the main server fails.  The switch can even be a
  script run manually.
 
  Can someone either comment in as much detail as possible or point me to
  a comparison of Slony vs Longiste.  Or some other option I have not
  heard of?
 
  Three questions you need to ask yourself.
  1. How heavily updated is the database?
  2. How often do you change the database's schema?
  3. Are there other databases in the installation?
 
  If #1 is very heavy then you'll want to do some testing with any
  solution you use.
 
  If #2 is a lot then you'll want to consider WAL shipping as mentioned
  below. Slony can handle schema changes, but you'll need to process them
  through its own script. I'm afraid I can't comment on Londiste.
 
  If you just want a backup and the answer to #3 is no, look at WAL
  shipping (see the various archive_xxx config settings in the manual and
  google a bit).
 
  From what I read, Londiste is easy to set up, while I got a quote for
  Slony setup for 5-10k.
 
  Unless your requirements are strange, that seems a little high, even
  assuming USD as a currency. Of course, if you want support and
  maintenance that will tend to make things mount.

 The database has 10-20 concurrent users so updates are not very heavy.

 The schema changes very frequently.

 There are not other databases in the installation.

 This quote included initial setup, failure testing, and scripts that
 were to automate setup and manage the installation.  It did not include
 support and maintenance.

Are you planning on hiring someone to do it, or are you going to do it 
yourself? The price of the solution is completely orthogonal to which is the 
better fit technically. 

In your case, since you do a lot of DDL changes, I'd go with Londiste over 
Slony if I had to pick from those two. However, given the requirements you 
laid out, PITR is probably your best option (this is what Richard alluded 
to), and certainly the one I would recommend you try first. 

-- 
Robert Treat
http://www.omniti.com/
Database: Scalability: Consulting



Re: [GENERAL] 8.3.3 stability ?

2008-09-18 Thread Robert Treat
On Thursday 18 September 2008 08:42:07 Gauthier, Dave wrote:
 Great News.  I just requested IS to get the 8.3.3 version.
 One more question...Is there some sort of migration that I have to
 do for existing DB's?  Is it as drastic as a DB unload/load?  Or
 something simpler?


Upgrading from 8.2.x to 8.3.x will require a dump/restore (or similar 
mechanisms), and you'll also want to do a fair amount of testing of your 
application code against 8.3 to make sure it doesn't have any issues. Be 
aware that 8.3.4 will probably be released next week, so you'll really want to 
move to that version when you do the actual upgrade. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Fastest way to restore a database

2008-09-13 Thread Robert Treat
On Friday 12 September 2008 15:55:46 Tom Lane wrote:
 Scott Ribe [EMAIL PROTECTED] writes:
  The worry expressed upthread about the transaction being too large is
  unfounded, btw.  Unlike some other DBs, PG doesn't have a finite-size
  undo log.
 
  Sure, it won't fail. But would there be some point at which it would
  become slower than multiple transactions? Or is it always faster (or at
  least as fast)?

 I can't think of any reason it would be slower.

 There are certainly issues you could run into with very long
 transactions, like vacuum not being able to remove bloat elsewhere.


Which reminds me (not seeing it mentioned elsewhere): on full restores you will 
probably want to disable autovacuum entirely, as it will compete for 
resources and can lead to locking issues as well. Note that this can sometimes 
apply to more narrow restore scenarios, but it isn't as cut and dried.  (I.e., 
with multiple databases in a cluster, you don't want to disable it for all 
databases, though it'd be nice to disable it for the one you're restoring.)
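
The cluster-wide toggle is a one-line postgresql.conf change (remember to 
revert it and reload once the restore finishes):

```
# postgresql.conf -- during the full restore only; revert and reload after
autovacuum = off
```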

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] pg_restore parameters

2008-09-13 Thread Robert Treat
On Friday 12 September 2008 14:23:52 Kevin Duffy wrote:
 Hello:

 I am moving to a new production server and am testing my backup and
 restore procedures.

 Given a backup created with the follow command

 C:\C:\progFiles\PostgreSQL\8.2\bin\pg_dump -Fc -b -C -o -f
 E:\backupPostgres\benchxx_c20080912.backup -U postgres benchxx_c

 What is the best way to do a restore of the above backup?  I tinkered
 with a couple of scenarios.

 The one that gave a clean restore was

 a)   drop the database

 b)   restore using the following:  pg_restore -C -d template1  -U
 postgres  E:\data\postgres\ benchxx_c20080912.backup

 Is this the correct way to do a restore?


Well, you need to add the -Fc flag on the restore at a minimum, but otherwise 
TIAS.  

Also, the use of -o is kind of a warning sign of possible bad schema design; 
you shouldn't be using OIDs for anything. Are you sure you need that flag? If 
you do, you might want to think about factoring that out of your design; if 
not, then verify you need the -b flag too. 

As a final thought, if you're already going through the pain of a dump/restore, 
I'd suggest looking at upgrading to 8.3 during the process. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Restore filesystem backup

2008-09-13 Thread Robert Treat
On Saturday 13 September 2008 09:07:23 Patrik Strömstedt wrote:
 Hi,

 I have a big problem.

 The backup (done nightly with pg_dump) at one of our customers' sites is
 broken (well, it's overwritten and is of no use anymore). What is left is a
 filesystem backup that includes the postgresql directories.

 I'm trying to restore one of the tables from this filesystem backup
 (employee_pass (salaries..)), that has been deleted on the live system.

 How (if possible) can I use this filesystem backup to restore the database
 into a working setup (off-site), from where I can retrieve data from the
 one table (employee_pass).

 This is on Windows 2000 Server, the Postgres version is 8.01 (I know, it's
 old...)


Honestly, you have a mess on your hands on a number of different levels. I'd be 
tempted to just try copying the table's files directly between the two 
directories if you can match up the files on disk correctly. Otherwise you 
might be forced to try to get some filesystem-level tools going, but I'm not 
sure how feasible that is on Windows, especially on such an old version.  
Good luck. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] about partitioning

2008-09-13 Thread Robert Treat
On Thursday 11 September 2008 07:47:00 Joao Ferreira gmail wrote:
 Hello all,

 my application is coming to a point on which 'partitioning' seems to be
 the solution for many problems:

 - query speed up
 - data elimination speed up

 I'd like to get a feeling for it by talking to people who use
 partitioning, in general..

 - good, bad,

good :-)

 - hard to manage, easy to manage,

I think the upfront costs for managing a partitioning setup are higher with 
postgres than other systems, but there is nothing that you shouldn't be able 
to automate in a cron script (at which point management becomes easy), plus 
postgres gives you some interesting flexibility that is harder to find in 
other setups. 

 - processing over-head during INSERT/UPDATE,

you can set up inserts to have relatively little overhead, but it requires more 
management/maintenance work up front. Updates within a partition also have 
relatively little extra overhead, especially if you put in a little 
application logic to figure out how to work on a partition directly. Updates 
where you are changing the partition key value are always more problematic 
though.  

 - stability/compatibility of pg_dump and restore operations,

no real issues here as long as you're on a recent enough version to do wildcard 
table matching for individual tables. 

 - how many partitions would be reasonable for read _and_ write  access
 optimal speed;


again, this depends on exactly how you're working on the data. For example, we 
have tables with over a thousand partitions on them; in those scenarios all 
data is written into a single partition (with a new partition created daily), 
and the query patterns are really straightforward... last month gets a lot of 
queries, the last three months not so much, last year barely any, and beyond 
that is pretty much just archive info. That said, we have other systems where 
that wouldn't work at all (for example, a static number of partitions, all of 
which are queried actively).  
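
A minimal sketch of the daily-partition pattern described above (the table and 
column names are invented for illustration):

```sql
-- parent table; a cron job would create tomorrow's child each day
CREATE TABLE events (id bigint, created date, payload text);

CREATE TABLE events_20081201 (
    CHECK (created >= DATE '2008-12-01' AND created < DATE '2008-12-02')
) INHERITS (events);

-- with a little application logic, write straight to the day's partition
INSERT INTO events_20081201 VALUES (1, DATE '2008-12-01', 'some event');
```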

For some more info, I've given at least one presentation on the topic, which 
seems to be missing from the omniti site, but I've uploaded it to 
slideshare... 
http://www.slideshare.net/xzilla/postgresql-partitioning-pgcon-2007-presentation

HTH. 

-- 
Robert Treat
http://www.omniti.com
Database: Scalability: Consulting:



Re: [GENERAL] Fastest way to restore a database

2008-09-12 Thread Robert Treat
On Friday 12 September 2008 14:32:07 Greg Smith wrote:
 On Fri, 12 Sep 2008, William Garrison wrote:
  Is there a definitive list of things to do?

 That section of the documention is pretty good:
 http://www.postgresql.org/docs/current/static/populate.html

 The main thing it's missing is a discussion of how to cut down on disk
 commit overhead by either usinc async commit or turning fsync off.  If
 you've got a good caching controller that may not be needed though.

 The other large chunk of information it doesn't really go into is what
 server tuning you could do to improve general performance, which obviously
 would then help with loading as well.
 http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server goes over
 much of that.

  * Turn off full_page_writes
  Don’t write the WAL archives in a safe way. But we don’t need WAL
  archives during a restore. Future versions of postgres will let you turn
  off WAL archives entirely

 Ideally you'd be using COPY such that the table was just created or
 truncated before loading, which (if archive_mode is off) keeps them from
 being WAL logged, as described in 14.4.7.  If you do that and vastly
 increase checkpoint_segments, full_page_writes has minimal impact.

  * Increase the checkpoint_segments parameter (the default is 3 – so...
  maybe 10?)

 64-256 is the usual range you'll see people using for bulk loading.


Don't forget to bump up checkpoint_timeout along with that... actually, I 
blogged a couple of times on this topic:

http://people.planetpostgresql.org/xzilla/index.php?/archives/133-Getting-faster-database-restores-on-postgresql-8.1.html
http://people.planetpostgresql.org/xzilla/index.php?/archives/223-Measuring-database-restore-times.html

A little old, but might be helpful. 
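
Pulling the suggestions in this thread together, a postgresql.conf fragment for 
the duration of a bulk load might look like this (values illustrative; tune to 
your hardware and revert afterwards):

```
# postgresql.conf -- bulk-load settings (illustrative values)
checkpoint_segments  = 64       # default is 3; 64-256 typical for loading
checkpoint_timeout   = 15min    # bump alongside checkpoint_segments
maintenance_work_mem = 512MB    # speeds up index builds after COPY
```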

-- 
Robert Treat
http://www.omniti.com
Database: Scalability: Consulting:



Re: [GENERAL] Install Postgres on a SAN volume?

2008-09-09 Thread Robert Treat
On Tuesday 09 September 2008 04:37:09 Magnus Hagander wrote:
 Greg Smith wrote:
  On Tue, 9 Sep 2008, Magnus Hagander wrote:
  As long as your SAN guarantees an atomic snapshot of all your data
  (which every SAN I've ever heard of guarantees if you're on a single
  volume - entry level SANs often don't have the functionality to do
  multi-volume atomic snapshots, though), you don't need to set up PITR
  for simple backups
 
  It's all those ifs in there that leave me still recommending it.  It's
  certainly possible to get a consistant snapshot with the right hardware
  and setup.  What concerns me about recommending that without a long list
  of caveats is the kinds of corruption you'd get if all those conditions
  aren't perfect will of course not ever happen during testing.  Murphy
  says that it will happen only when you find yourself really needing that
  snapshot to work one day.

 Well, I agree one should be careful, but I don't see the risk if you
 just change all those ifs into a single one, which is if all your data
 *and* WAL is on the same SAN LUN.

 (heck, you don't need hardware to do it, you can do software snapshot
 just fine - as long as you keep all your stuff on the same mountpoint
 there as well)


That's pretty key, but there can be advantages to doing it with the PITR 
tools, and I think in most cases it would be hard to argue that approach 
isn't safer.  

As a counter-example to Theo's ZFS-based post, I posted a Linux/LVM script 
that can work as the basis of a simple snapshot backup tool, available at 
http://people.planetpostgresql.org/xzilla/index.php?/archives/344-ossdb-snapshot,-lvm-database-snapshot-tool.html

And yes, I prefer working on the ZFS-based one :-)

-- 
Robert Treat
http://www.omniti.com
Database: Scalability: Consulting:



Re: [GENERAL] PostgreSQL TPC-H test result?

2008-09-09 Thread Robert Treat
On Tuesday 09 September 2008 10:06:01 Amber wrote:
 From: Andrew Sullivan [EMAIL PROTECTED]
 Sent: Tuesday, September 09, 2008 8:39 PM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] PostgreSQL TPC-H test result?

  On Tue, Sep 09, 2008 at 07:59:49PM +0800, Amber wrote:
  I read something from
  http://monetdb.cwi.nl/projects/monetdb/SQL/Benchmark/TPCH/index.html
 
  Given that the point of that study is to prove something about
  performance, one should be leery of any claims based on an out of the
  box comparison.  Particularly since the box their own product comes
  out of is compiled from CVS checkout.  Their argument seems to be
  that people can learn how to drive CVS and to compile software under
  active development, but can't read the manual that comes with Postgres
  (and a release of Postgres well over a year old, at that).
 
  I didn't get any further in reading the claims, because it's obviously
  nothing more than a marketing effort using the principle that deriding
  everyone else will make them look better.  Whether they have a good
  product is another question entirely.
 
  Yes, we don't care about the performance results, but we do care 
  about the 
   point that PostgreSQL can't give the correct results of TPC-H queries.

Given that the point of those benchmarks is to make other systems look bad, I 
think you have to take them with a grain of salt. Since we don't know what 
the errors/results were, and no further information is given, we are left to 
wonder whether this is a problem with the software or with the tester. The 
site would have us believe the former, but I lean toward the latter... case 
in point, a quick Google search turned up this link: 
http://www.it.iitb.ac.in/~chetanv/personal/acads/db/report_html/node10.html. 
It isn't terribly informative, but it does indicate one thing: someone else 
was able to run query #6 correctly, while the above site claims it returns an 
error. Now, when I look at query #6 from that site, I notice it uses the 
following syntax:

interval '1' year.  

When I saw that, it jumped out at me as something that could be an issue, 
and it is:

pagila=# select now() - interval '1' year, now() - interval '1 year';
           ?column?            |           ?column?
-------------------------------+-------------------------------
 2008-09-09 11:28:46.938209-04 | 2007-09-09 11:28:46.938209-04
(1 row)

Now, I'm not sure whether the issue is that MonetDB supports the first syntax 
and so, when they ran their test on Postgres, this query produced wrong 
results, but that seems possible. One could wonder whether the first syntax 
is SQL-compliant, but it doesn't really matter: the TPC-H rules allow queries 
to be modified to accommodate syntax variations between databases, and I'm 
pretty sure I could make subtle changes to break other databases as well. 
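To illustrate, the predicate in question can be rewritten in Postgres's 
native interval syntax. This is only a sketch of the relevant Q6 fragment 
(standard TPC-H column names), not the full benchmark query:

```sql
-- Form the benchmark kit apparently uses (parsed differently by 8.x Postgres,
-- as the psql example above shows):
--   l_shipdate < date '1994-01-01' + interval '1' year
-- Postgres-native form of the same predicate:
SELECT sum(l_extendedprice * l_discount) AS revenue
  FROM lineitem
 WHERE l_shipdate >= date '1994-01-01'
   AND l_shipdate <  date '1994-01-01' + interval '1 year';
```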

Incidentally, I poked Mark Wong, who used to work at the OSDL (big Linux 
kernel hacking shop), and he noted he has successfully run the TPC-H tests 
on Postgres before. 

In the end, I can't speak to what the issues are with MonetDB, Postgres, and 
their TPC-H benchmarks, but personally I don't think they are worth worrying 
about. 

-- 
Robert Treat
http://www.omniti.com
Database: Scalability: Consulting:



Re: [GENERAL] Oracle and Postgresql

2008-09-04 Thread Robert Treat
On Wednesday 03 September 2008 09:17:54 Asko Oja wrote:
 On Wed, Sep 3, 2008 at 5:56 AM, Robert Treat

 [EMAIL PROTECTED]wrote:
  On Tuesday 02 September 2008 17:21:12 Asko Oja wrote:
   On Tue, Sep 2, 2008 at 2:09 AM, Michael Nolan [EMAIL PROTECTED] wrote:
Oracle handles connecting to multiple databases (even on
 
  multiple/remote
 
computers) fairly seamlessly, PG does not (yet.)
  
   Stuff we do with plProxy on PostgreSQL is in some respects more
   advanced than anything Oracle has to offer :) We have hundreds of
   databases in
 
  quite
 
   complex network of remote calls and replication.
 
  Yes, but it is also far more complex to install, configure, and use,
  compared
  to something simple like oracle's dblink, which comes pre-installed, is
  simple to set-up, and has a much more straight-forward syntax for use in
  day
  to day query work.

 We are working on these matters and hopefully get some of them solved in
 8.4

 :)

 Configure and use part is NO more complex than Oracle and has several use
 cases for which neither of dblinks is suitable.
 Or are you claiming that calling functions is not straight forward and
 seamless in PostgreSQL.

It is not as simple as Oracle's database link syntax. Setting up a connection 
involves a couple of SQL-looking commands, and once you have set up a 
connection to a remote database, you can reference a table with something 
like select * from [EMAIL PROTECTED]. There's no way a function-oriented 
solution can match that, imho. (BTW, if you want to see more, Lewis has a 
pretty good write-up: 
http://it.toolbox.com/blogs/oracle-guide/database-links-a-definition-in-plain-english-7023)
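
For comparison, here is roughly what the two styles look like side by side; 
the table, column, and connection names are made up for illustration:

```sql
-- Oracle: after CREATE DATABASE LINK remotedb ..., a remote table
-- reads just like a local one:
--   SELECT * FROM orders@remotedb;

-- Postgres contrib/dblink: function-oriented, with an explicit
-- connection string and result column list:
SELECT *
  FROM dblink('host=remotehost dbname=sales',
              'SELECT order_id, total FROM orders')
       AS t(order_id integer, total numeric);
```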
 
-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Oracle and Postgresql

2008-09-02 Thread Robert Treat
On Tuesday 02 September 2008 17:21:12 Asko Oja wrote:
 On Tue, Sep 2, 2008 at 2:09 AM, Michael Nolan [EMAIL PROTECTED] wrote:
  Oracle handles connecting to multiple databases (even on multiple/remote
  computers) fairly seamlessly, PG does not (yet.)

 Stuff we do with plProxy on PostgreSQL is in some respects more advanced
 than anything Oracle has to offer :) We have hundreds of databases in quite
 complex network of remote calls and replication.


Yes, but it is also far more complex to install, configure, and use, compared 
to something simple like oracle's dblink, which comes pre-installed, is 
simple to set-up, and has a much more straight-forward syntax for use in day 
to day query work. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] New to postgres -' how to' tips needed

2008-08-21 Thread Robert Treat
On Thursday 21 August 2008 15:16:29 Markova, Nina wrote:
 Hi,


 I'm completely new to postgres. My main job will be to install and
 configure it properly and I'm not sure how much I can rely on the default
 values.

 Are there any good articles howto  install and configure postgres? I have
 found the online documentation but need something simple.

 Any suggestions comments are welcome.


To get better help, you'll need to ask far more specific questions. For 
example, some might say apt-get install postgres is enough for anybody, but 
that might be completely irrelevant in your environment.  

That said, if you can get it installed (and really, try to get 8.3.3 if you 
can), I'd recommend looking at 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for some 
configuration hints.  

If you have specific questions beyond that, please post them to the list, 
and be sure to include your OS and version information. HTH :-)

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] mailing list/newsgroup disconnect

2008-08-10 Thread Robert Treat
On Tuesday 05 August 2008 03:12:26 Sim Zacks wrote:
 There seems to be a disconnect between the mailing list and the
 newsgroup right now. I received a bunch of replies via email that did
 not show up in the newsgroup. (I did not receive any messages that were
 sent to the mailing list and not to me personally).

 Is there someone I should mention this to or does he already know?


Problems like this should be reported to [EMAIL PROTECTED]. It would 
likely be helpful to include emails with full header information, though the 
folks there can tell you what they need. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] [EMAIL PROTECTED]

2008-08-04 Thread Robert Treat
Hiroshi-san, 

Is this something specific to Windows? If so, should this be considered a bug?

Robert Treat

On Sunday 03 August 2008 18:01:05 Hiroshi Saito wrote:
 Hi.

 Sorry, it was not included in release.
 please see,
 http://winpg.jp/~saito/pg_work/OSSP_win32/

 Regards,
 Hiroshi Saito

 Hi all,
 
 I installed postgresql-8.3.3-1 for win2000 with UUID-OSSP. Following
 the documentation I issued this query:
 
 SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org');
 
 but the DB returns with this error message:
 
 ERROR: function uuid_ns_url() does not exist
 SQL state: 42883
 Hint: No function matches the given name and argument types. You might
 need to add explicit type casts.
 Character: 25
 
 Any hint to use UUID within my database tirggers?
 
 Thank you,
 Laci
 
 



-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Using PostGres general distribution

2008-07-31 Thread Robert Treat
On Thursday 31 July 2008 15:28:14 Mike Gould wrote:
 We currently use SQL Anywhere 9.0.2 as our database in our current product.
  The main reason is the low maintenance that is required and the
 installation is a breeze.  All we need to do is to ship 3 dll's and a db
 and log file. I understand that with PostGres that the installation will
 end up being much more complex, however that doesn't really worry me as
 much as how much administration of the database is needed.

IIRC, SQL Anywhere is designed to work as an embedded database, with 
particular emphasis on zero administrative overhead.  That's not the design 
goal of PostgreSQL, though it can be (and is) used as the underlying piece 
of many products. 

 SQL Anywhere 
 has an event processor built in to make doing database backups while the
 system is online very easy. 

You can do backups of Postgres under normal operations using pg_dump. You'll 
need to craft your own scheduler for this, as Postgres doesn't include one. 
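
A minimal sketch of such a scheduler, assuming cron is available; the 
database name and backup path are hypothetical:

```
# crontab entry: nightly compressed dump at 02:00
# (note % must be escaped as \% inside a crontab line)
0 2 * * * pg_dump -Fc mydb > /backups/mydb_$(date +\%Y\%m\%d).dump
```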

 We also are able to do certain types of 
 maintenance such as selective reorganize of tables, automatically adding
 additional free space at night so that it doesn't affect performance during
 processing hours and many other functions.


Most DML operations in Postgres can be run inside a transaction, and Postgres 
is not in the habit of creating artificial constraints on index/tablespace 
size as some other databases do.  The main key for you will probably be to 
turn on the autovacuum daemon, which will automagically do most of the 
cleanup work you would need to have done regularly. 
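
Turning the daemon on is a small postgresql.conf change; on 8.1/8.2 it also 
needs the row-level stats collector (8.3 enables autovacuum by default):

```
autovacuum = on
# required for autovacuum on 8.1/8.2:
stats_start_collector = on
stats_row_level = on
```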

 If we had 1500 customers running our system with PostGres and we have
 little control over the server hardware, the OS the customer would be
 running the db on, is Postgres the appropriate choice or is this going to
 be a maintenance nightmare?  How self sufficient is Postgres?


I know Postgres is used in many kiosk-type systems, where there is little to 
no ongoing maintenance for those machines, which sounds similar to what you 
might be looking at.  One of the clients my company works with is doing 
something similar, distributing a Postgres-backed application designed to 
run with little/no Postgres maintenance overhead. It's more like a data 
appliance than an embedded app (it deals with hundreds of GB of data), but 
so far it has been very doable.  

Most of the trouble scenarios arise when you have no control over usage 
patterns... i.e., someone has direct access to the database and does massive 
dumps and reloads of data you haven't designed into your original 
expectations; but if you have controlled access to the server, it's likely 
Postgres can work in that scenario. (The other problem spot is server 
upgrades, but you can probably go years on a particular version before that 
becomes really problematic; it just depends on what your application's 
lifecycle looks like.)

-- 
Robert Treat
Database Architect
http://www.omniti.com



Re: [GENERAL] Must be table owner to truncate?

2008-07-30 Thread Robert Treat
On Wednesday 30 July 2008 08:52:26 Ragnar wrote:
 On mið, 2008-07-30 at 07:36 -0400, Kevin Hunter wrote:
  At 3:45p -0400 on Mon, 28 Jul 2008, Said Ramirez wrote:
   According to the documentation,
   http://www.postgresql.org/docs/current/interactive/sql-truncate.html ,
   only the owner can truncate a table. Which means the non-owner must
   either log in/ switch roles as the owner, or they can just run a
   DELETE.
 
  Well that's interesting.  From a security standpoint, what's the
  difference between an unqualified DELETE and a TRUNCATE?

 lack of triggers and RULEs spring to mind.


Just FYI, there is a patch for 8.4 that will add a TRUNCATE privilege. 
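
If it lands as proposed, usage would presumably look like any other grant; 
the table and role names here are made up:

```sql
-- requires the 8.4 TRUNCATE privilege; names are hypothetical
GRANT TRUNCATE ON TABLE audit_log TO batch_user;

-- batch_user can then run:
TRUNCATE TABLE audit_log;
```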

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] [pgsql-advocacy] [pdxpug] Pg booth staffing at OSCON

2008-07-15 Thread Robert Treat
On Tuesday 15 July 2008 10:42:07 Daniel Johnson wrote:
  Well, you can obviously get into OSCAMP and FOSSCoach and the BOFs and
  the other free events.  Not that I'd be promoting such a thing (as an
  OSCON committee member), but if there's *a* specific session you want to
  attend, you can probably persuade one of the several PostgreSQL speakers
  to loan you their badge.

 No offense, but in the years that I have been going to OSCON I've
 never had anywhere near enough money to buy a real pass to OSCON.
 Every year it has been volunteering for a booth, or attending OSCAMP.
 Last year I was able to afford Ubuntu Live through one of the more
 extreme discount codes, but that is it.  There are lots of people who
 participate as much as they can by way of volunteering, and I am one
 of them.  We help make the conference happen so please treat us with
 respect even if we can't afford to pay are way in.


I can't imagine how you could have taken Josh's post to be anything but 
courteous and respectful, but I do encourage you to join us at the BOF, 
where we can settle it once and for all... sumo suits, anyone?  
http://www.maineventweb.com/page/page/2916926.htm

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] limits?

2008-06-26 Thread Robert Treat
On Monday 23 June 2008 15:45:22 Kynn Jones wrote:
 On Mon, Jun 23, 2008 at 2:21 PM, Steve Atkins [EMAIL PROTECTED] wrote:
  In real use you're unlikely to hit any limits, theoretical or practical,
  but if you start to use a silly number of tables and so on you're likely
  to hit performance issues eventually. I'm not sure where that threshold
  would be, but it's higher than thousands.

 Actually, the DB I have in mind would certainly be approaching silly
 territory.  I'm looking at a schema with around 10 thousand tables (or
 views).  Unfortunately, as far as I can tell,
 http://www.postgresql.org/about/ says nothing about maximum number of
 tables.  I suppose I could always find what this limit is the hard way,
 by writing a script that just keeps creating empty tables and see where
 that goes, but I'd prefer not to do something like this...


http://people.planetpostgresql.org/greg/index.php?/archives/37-The-million-table-challenge.html

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Losing data

2008-06-19 Thread Robert Treat
On Thursday 19 June 2008 14:06:38 Garry Saddington wrote:
  In any case, however, if PostgreSQL reported the transaction complete and
  the machine didn't experience any hardware problems (like sudden power or
  disk failure), I would certainly not suspect PostgreSQL as the source of
  the problem.

 What has happened to the reports then? I have used this combination of Zope
 and Postgres for 5 years with no problems like this before and we have
 written one complete set of reports on this server in the past 6 weeks. The
 problem seems to have started last friday, when reports started to go
 missing.

Out of curiosity, what is your vacuum strategy? 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Alias in the HAVING clause

2008-05-14 Thread Robert Treat
On Tuesday 13 May 2008 18:43:25 Tom Lane wrote:
 Scott Marlowe [EMAIL PROTECTED] writes:
  On Tue, May 13, 2008 at 3:43 PM, Nathan Thatcher [EMAIL PROTECTED] 
wrote:
  I am in the middle of switching a bunch of queries over from MySQL to
  PostgreSQL and have hit a little snag. The following query works fine
  in MySQL but raises an error in postgres:
 
  SELECT COUNT(*), id % 3 AS f1 FROM table GROUP BY f1 HAVING f1  0;
 
  I think you're editing your queries to show to us.  There's no way
  that query would run, as you're selecting id and grouping by f1.

 Depressingly enough, it *does* run in mysql.  There are assorted spec
 violations and undefined behaviors involved, but that's more or less
 what you've got to expect with mysql.

 Not that we're entirely pristine ourselves.  We should reject GROUP BY
 f1, since per spec that alias isn't in scope in GROUP BY either.  But
 a long time ago we decided that GROUP BY should act as much as possible
 like ORDER BY, and I doubt we want to change it now.


Yeah, I am surprised to see the alias work in the GROUP BY (and I'm pretty 
sure there are cases where it fails). That said, I think ease-of-use 
arguments would trump spec compliance for its own sake, though I understand 
there are technical problems the spec is trying to keep you from getting 
into...  but I have to wonder: if we have established f1 by the time we 
evaluate the GROUP BY, shouldn't we also be able to determine f1 at HAVING 
time, and therefore allow the alias in HAVING in this instance?  
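
In the meantime, the portable fix is simply to repeat the expression, or to 
push it down into a subquery so the alias becomes a real column; using t for 
the table name from the original post:

```sql
-- Repeat the expression instead of the alias:
SELECT COUNT(*), id % 3 AS f1
  FROM t
 GROUP BY id % 3
HAVING id % 3 <> 0;

-- Or compute it once in a subquery, where f1 is then an ordinary column:
SELECT COUNT(*), f1
  FROM (SELECT id % 3 AS f1 FROM t) AS sub
 GROUP BY f1
HAVING f1 <> 0;
```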

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] pg_standby / WAL archive-restore through system restarts

2008-05-13 Thread Robert Treat
On Tuesday 13 May 2008 12:48:38 Guillaume Lelarge wrote:
 Alvaro Herrera a écrit :
  someone wrote:
  Can I shutdown Server B (backup/recovery postmaster) simply by
  killing the postmaster and restart it back in recovery mode to
  continue re-syncing where it left off?  Or does stopping Server B
  while in recovery mode require any manual re-sync steps before it can
  resume recovery?
 
  I think you'll need to redo the whole process : restore the full data
  backup, create the recovery.conf file, etc.
 
  No, you don't.  The server can continue replaying files.

 Oops, sorry about this. I thought it would have a hard time to go back
 to recovery mode. That's great to know. Thanks.



There is a caveat here, in that you need to keep around n xlogs, where n is 
determined by the last restartpoint processed on the slave.  If you are 
deleting xlogs as soon as they are processed, any shutdown will likely force 
you to start the whole process over again. Note that pg_standby and 8.3 
provide some pretty convenient tools to manage this. 
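
Specifically, 8.3's pg_standby takes a %r placeholder that names the WAL 
file containing the last restartpoint, so it can trim only the WAL that is 
genuinely no longer needed. A recovery.conf sketch (archive path and trigger 
file are made up):

```
# recovery.conf on the standby; /wal_archive is a hypothetical path
restore_command = 'pg_standby -d -t /tmp/pg_standby.trigger /wal_archive %f %p %r'
```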

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] PG -v- MySQL

2008-05-13 Thread Robert Treat
On Tuesday 13 May 2008 12:09:28 Kevin Hunter wrote:
 * The documentation on the website for Postgres is by far the better of
 the two DBs.  MySQL's is decent and very useful, but Postgres'
 documentation just plain rocks.  In fact, the Postgres documentation is
 one of the very best examples of documentation for any project with
 which I've had to deal, OpenSource or otherwise.  Head and shoulders.


I've always thought that the biggest problem with the MySQL documentation is 
that it has a very hard time organizing all the little caveats that come into 
play between the different storage engines. Sometimes an item is documented 
in a storage-engine-specific area rather than on the page for a specific 
command (so it gets overlooked); other times I find that the command pages 
themselves are too cluttered with storage-specific issues (making it hard to 
find information relevant to your environment).  It's an understandably 
difficult issue to work around, since every storage engine you use means 
you're basically learning the intricacies of a separate database, so it 
doesn't surprise me that things end up a little schizophrenic at times. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Is this possible in a trigger?

2008-05-06 Thread Robert Treat
On Tuesday 06 May 2008 20:10:50 Klint Gore wrote:
 Fernando wrote:
  I want to keep a history of changes on a field in a table.  This will
  be the case in multiple tables.
 
  Can I create a trigger that loops the OLD and NEW values and compares
  the values and if they are different creates a change string as follows:
 
  e.g;
 
  FOR EACH field IN NEW
  IF field.value  OLD.field.name THEN
 changes := changes
 
  || field.name
  || ' was: '
  || OLD.field.value
  || ' now is: '
  || field.value
  || '\n\r';
 
  END IF
  END FOR;
 
  Your help is really appreciated.

 You can't in plpgsql.  It doesn't have the equivalent of a walkable
 fields collection.  Its possible in some other procedure languages (I've
 seen it done in C).


I did it once by setting up the function to accept the tablename and ctid of 
the row involved, and then grabbing the info from the system tables. It's 
certainly easier to do it in PL/Perl, though. 
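
A sketch of the PL/Perl approach, since PL/Perl triggers expose the old and 
new rows as hashes in $_TD; the function, table, and trigger names here are 
made up, and this just raises a NOTICE rather than writing to an audit table:

```sql
-- Generic change-logging trigger: walks the columns of the NEW row
-- and reports any that differ from the OLD row.
CREATE OR REPLACE FUNCTION log_changes() RETURNS trigger AS $$
    my $changes = '';
    while (my ($col, $new) = each %{ $_TD->{new} }) {
        my $old = $_TD->{old}{$col};
        if ((defined $old ? $old : '') ne (defined $new ? $new : '')) {
            $changes .= "$col was: " . (defined $old ? $old : 'NULL')
                      . " now is: " . (defined $new ? $new : 'NULL') . "\n";
        }
    }
    elog(NOTICE, $changes) if $changes ne '';
    return;   # leave the row unmodified
$$ LANGUAGE plperl;

CREATE TRIGGER my_table_audit AFTER UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE log_changes();
```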

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] How to modify ENUM datatypes?

2008-05-01 Thread Robert Treat
On Thursday 01 May 2008 01:30, Greg Smith wrote:
 On Wed, 30 Apr 2008, Robert Treat wrote:
  Whenever anyone posts a problem on 7.3, the first thing people do now
  days is jump up and down waving thier arms about while exclaiming how
  quickly they should upgrade. While I am certain there are even older
  versions of postgres still running in production out there, I'd have to
  say that the core developers for this project do not release software
  with the expectation that you will use if for more than 5 years.

 You could easily make a case that 7.3 wasn't quite mature enough overall
 to be useful for 5 years.  There's little reason to keep pumping support
 effort into something with unfixable flaws.  I know when I was using 7.4
 heavily, I never felt like that was something I could keep going for that
 long; the VACUUM issues in particular really stuck out as something I
 wouldn't be likely to handle on future hardware having larger databases.

 8.1, on the other hand, is the first release I thought you could base a
 long-term effort on, and 8.2 and 8.3 have moved further in that direction.
 8.1 has been out for 2.5 years now, and it seems like it's got plenty of
 useful left in it still (except on Windows).  The improvements in 8.2 and
 8.3 are significant but not hugely important unless you're suffering
 performance issues.

 Compare with 7.3, which came out at the end of 2002.  By 2.5 years after
 that, the project was well into 8.0, which was clearly a huge leap.
 PITR, tablespaces, whole new buffer strategy, these are really fundamental
 and compelling rather than the more incremental improvements coming out
 nowadays.


This all sounds nice, but I don't see any movement from the project to 
increase community commitment to 5 years for any release, so I think it's all 
moot. 

 (Obligatory Oracle comparison:  for customers with standard support
 levels, Oracle 8.1 was EOL'd after slightly more than 4 years.  It wasn't
 until V9 that they pushed that to 5 years)


And even that isn't full support. IIRC, Oracle application certification can 
only be done within the first 3 years of a release's life. I think there are 
other scenarios under 5 years as well. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] How to modify ENUM datatypes?

2008-05-01 Thread Robert Treat
On Wednesday 30 April 2008 11:00, Craig Ringer wrote:
 Robert Treat wrote:
  If one were to have built something on postgresql 5 years ago, they would
  have had to do it on 7.3.  Whenever anyone posts a problem on 7.3, the
  first thing people do now days is jump up and down waving thier arms
  about while exclaiming how quickly they should upgrade.

 [snip]

   I'd have to
   say that the core developers for this project do not release software
   with the expectation that you will use if for more than 5 years.

snip
 That says nothing about the people out there still using 7.3 and similar
 without problems, running well within its capabilities and happy with
 what it's doing. I doubt many people would advise them to upgrade - at
 least not in a hurry and not with any jumping and hand-waving.

snip
 My impression from using PostgreSQL is that people using old versions
 are taken seriously. Data corruption, crash and security bug fixes get
 applied to very old versions. For example, 7.3.21 was released on  Jan
 2008, and includes several fixes:

 http://www.postgresql.org/docs/current/static/release-7-3-21.html


From those very release notes: "This is expected to be the last PostgreSQL 
release in the 7.3.X series. Users are encouraged to update to a newer 
release branch soon."

If you are on any version of 7.3, the official response is that you need to 
upgrade to a newer major version regardless of your problems.  You're also 
overlooking data-loss-level bugs that can bite people even if they aren't 
currently suffering from any issues. 

And again, if you do the math, any install before 2003-11-17 would have been 
on 7.3, which got less than 5 years of support from install time.  Or, 
looking forward, I'm not expecting 7.4 will be supported beyond 2010 (there 
have already been calls to stop supporting it for some time), which is what 
would be required if we really have an expectation of support for more than 
5 years. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] How to modify ENUM datatypes?

2008-05-01 Thread Robert Treat
On Thursday 01 May 2008 13:40, Tom Lane wrote:
 Robert Treat [EMAIL PROTECTED] writes:
  And again, if you do the math, any install before 2003-11-17 would have
  been on 7.3, which is less than 5 years.

 I'm not sure how you're doing the math, but my copy of the release notes
 dates 7.3 as 2002-11-27 and 7.3.21 as 2008-01-07, which makes it five
 years plus that we provided bug-fix releases for 7.3.


The whole thing started with "if I were to have installed Postgres 5 years 
ago," which would be 2003-05-01; in that case one would not have gotten 5 
years of support from that system.  Essentially that statement is true of 
any install up to the 7.4 release. 

  Or, looking forward, I'm not expecting
  7.4 will be supported beyond 2010 (there have already been calls to stop
  supporting it for some time) which is what would be required if we really
  have an expectation of support for more than 5 years.

 7.4 was released 2003-11-17, so I think that it will very likely get
 obsoleted at the end of 2008.


If that's the case, it'd be nice to get an official statement of that now. :-)

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to modify ENUM datatypes?

2008-04-30 Thread Robert Treat
On Monday 28 April 2008 10:28, Andrew Sullivan wrote:
 On Sat, Apr 26, 2008 at 08:33:28PM -0400, Robert Treat wrote:
  enum types custom ordering. It also showcases the idea of data
  definitions that should never change, but that do changes every half
  dozen years or so. Now you can argue that since it is expected that the
  ratings might change in some way every few years that an enum type is not
  a good choice for this, but I feel like some type of counter-argument is
  that this is probably longer than one would expect thier database
  software to last. :-)

 I think that if you are building software on the premise that it's
 only going to last five years, you oughta have a look around on the
 Internet again.  Or think about why banks spent the money they did a
 few years back poring over ancient code making sure that two-digit
 year representations weren't in use.


If one were to have built something on PostgreSQL 5 years ago, they would 
have had to do it on 7.3.  Whenever anyone posts a problem on 7.3, the first 
thing people do nowadays is jump up and down waving their arms about while 
exclaiming how quickly they should upgrade. While I am certain there are even 
older versions of Postgres still running in production out there, I'd have to 
say that the core developers of this project do not release software with 
the expectation that you will use it for more than 5 years. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] How to modify ENUM datatypes?

2008-04-29 Thread Robert Treat
On Monday 28 April 2008 17:35, Jeff Davis wrote:
 On Sat, 2008-04-26 at 20:33 -0400, Robert Treat wrote:
  I think one of the best examples of this is the movie rating system
  (which I blogged about at
   http://people.planetpostgresql.org/xzilla/index.php?/archives/320-PostgreSQL-8.3-Features-Enum-Datatype.html )
 
  It's a good example of setting pre-defined values that really can
  leverage the enum types custom ordering. It also showcases the idea of
  data definitions that should never change, but that do changes every
  half dozen years or so. Now you can argue that since it is expected that
  the ratings might change in some way every few years that an enum type is
  not a good choice for this, but I feel like some type of counter-argument
  is that this is probably longer than one would expect their database
  software to last. :-)

 Let's say you have ratings A, B, and D for 5 years, and then you add
 rating C between B and D.

 If you have a constant stream of movies that must be reviewed, then the
 addition of a new rating will necessarily take some fraction of the
 movies away from at least one of the old ratings. In that case, is an
 old B really equal to a new B?

 Similar concerns apply to other changes in ENUMs, and for that matter,
 they apply to the FK design, as well.

 I would say the *actual* rating is the combination of the rating name,
 and the version of the standards under which it was rated.


*You* would say that, but typically movie ratings are not adjusted when a new 
rating comes out.  For good examples of this, go back and look at 70's era 
movies (cowboy movies, war movies, etc...) that are G rated, but have a lot 
of people being shot/killed on-screen, something which would give you an 
automatic PG rating today.  (There are similar issues with PG/R movies in the 
80's, typically focused on violence and drug use, before the PG-13 rating 
came out).

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] How to modify ENUM datatypes?

2008-04-26 Thread Robert Treat
On Friday 25 April 2008 14:56, Merlin Moncure wrote:
 On Thu, Apr 24, 2008 at 3:01 PM, Tino Wildenhain [EMAIL PROTECTED] wrote:
  Merlin Moncure wrote:
   I think you're being a little too hard on enums here.  I was actually
   in the anti-enum camp until it was demonstrated to me (and in my own
   testing) that using enum for natural ordering vs. fielding the
   ordering of the type out to a join can be a huge win in such cases
   where it is important.  Relational theory is all well and good, but in
   practical terms things like record size, index size, and query
   performance are important.
 
   Uhm. Sorry what? Can you demonstrate this particular use?
   When I first saw discussion about enumns I kinda hoped they
   will be implemented as kind of macro to really map to a table.
   But here you go. I'm still looking for a good example to
   demonstrate the usefulness of enums (same for arrays for that
   matter)

 You must not be aware that enums are naturally ordered to make that
 statement.  Suppose your application needs to order a large table by
 a,b,c where b is an 'enum' type of data.  With an enum, the order
 is inlined into the key order; otherwise it's out of line, meaning
 your key is larger (enum is 4 bytes, varchar is guaranteed to be
 larger), and you need to join out to get the ordering position, use a
 functional index, or cache it in the main table.


I think one of the best examples of this is the movie rating system (which I 
blogged about at 
http://people.planetpostgresql.org/xzilla/index.php?/archives/320-PostgreSQL-8.3-Features-Enum-Datatype.html
)

It's a good example of setting pre-defined values that really can leverage the 
enum types custom ordering. It also showcases the idea of data definitions 
that should never change, but that do changes every half dozen years or so. 
Now you can argue that since it is expected that the ratings might change in 
some way every few years that an enum type is not a good choice for this, but 
I feel like some type of counter-argument is that this is probably longer 
than one would expect their database software to last. :-) 
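For list readers who haven't tried this, the ratings case boils down to
something like the following sketch (names invented here rather than taken
from the blog post; enum values sort in declaration order, not
alphabetically):

```sql
-- Hypothetical sketch of a movie-rating enum (8.3+):
CREATE TYPE mpaa_rating AS ENUM ('G', 'PG', 'PG-13', 'R', 'NC-17');

CREATE TABLE films (
    title  text NOT NULL,
    rating mpaa_rating NOT NULL
);

-- ORDER BY and comparisons use the enum's natural order,
-- i.e. G < PG < PG-13 < R < NC-17, with no join out to a lookup table.
SELECT title FROM films WHERE rating <= 'PG-13' ORDER BY rating;
```

The ordering lives in the type itself, which is exactly why changing the set
of values later is the painful part being discussed in this thread.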

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] How to modify ENUM datatypes?

2008-04-23 Thread Robert Treat
On Wednesday 23 April 2008 14:10, Karsten Hilbert wrote:
 On Wed, Apr 23, 2008 at 12:38:48PM +0200, Andreas 'ads' Scherbaum wrote:
  Yes. You should/can use ENUM for something like 'gender':
  male, female, unknown. You don't need to add other values ever (yeah, i
  skipped some special cases).

 I was gonna say ! :-)

 Add

  hermaphrodite
  transgender with female phenotype
  transgender with male phenotype

 and you should be set from current medical science's point
 of view ;-)


The standard is unknown, male, female, and n/a. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Column order

2008-04-23 Thread Robert Treat
On Wednesday 23 April 2008 21:33, Vyacheslav Kalinin wrote:
 Hello,

 It is often convenient to have the columns of a table in a certain order
 (as shown by psql or most GUI database explorers; it also affects the
 behavior of INSERTs without columns specified), so that the most
 significant columns come first, semantically close columns are grouped,
 etc., while the columns might be added to the table in a different order
 during development. So, the question is: is it acceptable to play with
 pg_attribute's attnum and set it to the needed value, or is recreating
 the table the only way? On a related note: must the columns be numbered
 from 1 up with a step of 1?


If you do this on a table with data in it you will almost certainly corrupt 
your data.  On an empty table I'd just expect things to break. If you are 
really interested in being able to re-order columns, search the archives for 
a patch we saw about a year or so ago that implemented storage-level column 
ordering. The discussion that followed laid out much of what would also be 
needed for logical-level column ordering. Work out those two bits and you'll 
soon have a patch for doing this the right way. 
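For anyone landing here from the archives, the recreate-the-table route looks
roughly like this (table and column names are hypothetical; remember that
indexes, constraints, views, and grants need recreating too):

```sql
-- Hypothetical sketch: reorder columns by rebuilding the table.
BEGIN;
CREATE TABLE t_new (
    id      integer,   -- columns declared in the desired order
    name    text,
    created date
);
INSERT INTO t_new (id, name, created)
    SELECT id, name, created FROM t;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
COMMIT;
```

Slow for big tables, but it is the only way that keeps the catalogs sane.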

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] pgcrypto and dblink

2008-04-14 Thread Robert Treat
On Thursday 10 April 2008 16:49, Roberts, Jon wrote:
   I am moving from Windows to Solaris and I need pgcrypto and dblink.
   Where are these?  I don't see anything in the configure that suggests
   it is even an option.
 
  They're not handled by 'configure'.  They are in the 'contrib'
  directory in the source tree, and you install them by first installing
  PG itself, then go into the module directory, e.g. 'contrib/pgcrypto',
  and running 'make'.

 Thanks so much!

 Wouldn't it make sense to add a section to this page that describes the
 contrib process?
 http://www.postgresql.org/docs/8.3/static/install-post.html

 I had thought all of the installation options were set using configure.
 A post installation step is fine but I think it needs to be documented
 as such.


There are instructions on how to install them at 
http://www.postgresql.org/docs/8.3/interactive/contrib.html, but you're right 
that they don't seem to be mentioned anywhere in the install section.  I'm 
not sure where it should go, but perhaps make it 15.6.1 and bump the other 
items down a notch. 
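For the archives, the contrib build boils down to something like this (the
source-tree path here is illustrative, and it assumes you have already run
configure and installed PG itself):

```
cd postgresql-8.3.x/contrib/pgcrypto
make
make install     # then the same again in contrib/dblink
```

After that, run the module's install script (e.g. share/contrib/pgcrypto.sql
under your install prefix) in each database that should use it.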

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Re: Survey: renaming/removing script binaries (createdb, createuser...)

2008-03-27 Thread Robert Treat
On Wednesday 26 March 2008 16:03, hubert depesz lubaczewski wrote:
 On Wed, Mar 26, 2008 at 10:02:45AM -0700, Steve Atkins wrote:
  What's the psql equivalent of the standard use case of vacuumdb -a?
  (If you don't know the answer, for both unix and windows, you don't get
  to vote for removing vacuumdb).

 linux:
 psql -qAt -c "select E'\\connect ' || datname || E'\nvacuum;' from
 pg_database where datallowconn" | psql

 windows:
 psql -qAt -c "select E'\\connect ' || datname || E'\nvacuum;' from
 pg_database where datallowconn" | psql

 that's not actually complicated (i'm not saying it's nice, as it isn't).


I have to think that a better solution for someone whose needs are met by the 
above is to just enable autovacuum. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Postgresql partitioning

2008-03-22 Thread Robert Treat
On Saturday 22 March 2008 09:39, Reece Hart wrote:
 On Fri, 2008-03-21 at 23:00 -0400, Ram Ravichandran wrote:
  I assume that primary key
  uniqueness is not tested across tables. Right?

 That's correct.  It's on the TODOs:
 Inheritance
   * Allow inherited tables to inherit indexes, UNIQUE constraints,
 and primary/foreign keys
 (at http://www.postgresql.org/docs/faqs.TODO.html )


 I wonder whether you might be able to achieve the benefits of
 partitioning and the simplicity of a single-table updates by using a
 view with an update rule. This would allow you to embed the logic for
 moving rows between partitions when the partition criterion changes into
 the database. I've not done this myself, so I'm, um, not speaking from
 experience.


Actually you can add an update rule to the parent table itself, rewriting the 
update into a set of insert/delete statements (or call a function to manage 
it, which is probably better with a larger number of partitions). 
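A minimal sketch of that rule-based approach (the schema is hypothetical and
untested; the function-based variant replaces the rule action with a single
function call):

```sql
-- Hypothetical: parent table "measurements" partitioned by logdate.
-- An UPDATE that changes the partition key is rewritten into a DELETE
-- plus an INSERT, letting the normal insert redirection (rule or
-- trigger on the parent) place the row in the correct child.
CREATE RULE measurements_move AS ON UPDATE TO measurements
    WHERE NEW.logdate <> OLD.logdate
    DO INSTEAD (
        DELETE FROM measurements WHERE id = OLD.id;
        INSERT INTO measurements VALUES (NEW.*);
    );
```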
-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Postgres development

2008-03-22 Thread Robert Treat
On Wednesday 19 March 2008 23:51, Brent Wood wrote:
 Hi all,

 I'm interested in finding what would be involved in enhancing Postgres to
 allow queries run in one database in a cluster to access & join with
 tables in other databases in the cluster, ie: cross-database join support.

 This would be very useful, & depending on cost, I may be able to arrange
 for funds to cover this development.


I'd suggest tracking down Neil Conway, and maybe David Fetter.  I know Neil 
has been playing with something similar for 8.4, and David has been pestering 
him about it pretty steadily. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Trigger to run @ connection time?

2008-03-16 Thread Robert Treat
On Friday 14 March 2008 11:36, Marko Kreen wrote:
 On 3/14/08, Erik Jones [EMAIL PROTECTED] wrote:
   On Mar 14, 2008, at 7:17 AM, Marko Kreen wrote:
To put it to core Postgres, it needs to be conceptually sane
first, without needing ugly workarounds to avoid it bringing
whole db down.
   
I can see ATM only few ways:
   
- Applies only to non-superusers.
   
- Error from CONNECT trigger does not affect superuser.
   
- Applies to database + role.  Role could be also group of users.
   
    So you always have a way to fix things, without hex-editing in the
    data dir...
 
  Another option:
 
   Does not fire at all in single-user mode.  This would be covered by
   Applies to non-superusers if that were there but, by itself, the
   triggers would still fire for normal superuser connections.

 Seems a bit too harsh - you may have other DBs that work fine;
 why should those suffer?


There are other failure scenarios for a single db that require single-user 
mode (think corrupted indexes), so I'm not sure that is too high a price to 
be paid, though a lesser barrier would be better.

If we decide that an on-connect trigger involves the combination of a 
database and a role, you can generally escape from the failure scenario by 
having either a different role, or a different database, with the ability to 
do alter database disable on connect triggers. Whether this is a direct 
alter database or set at the GUC level, either makes it pretty hard to lock 
yourself out completely, and single-user mode can be the fallback for that 
if needed. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] postgresql book - practical or something newer?

2008-02-19 Thread Robert Treat
On Monday 04 February 2008 10:48, vincent wrote:
  Christopher Browne wrote:
 
  Personally I'm surprised that the last couple responses seem to center
  around not being able to make much money off of it. I agree that it
  would require some time investment, but so did building PG in the first
  place. Countless people have already sacrificed hours upon hours of
  their time with no return on their investment except pride in their work
  and a better overall product for everybody to use. I'm not a talented
  enough programmer to contribute to the code, but in this way I can do
  something to give back to the pg community.
 
  --
  Tom Hart

 +1

 It seems there's a stalemate: apparently PgSQL needs to be more popular
 before authors want to write for it, and the public doesn't want to commit
 to a database that has only a handful of books available.


Just to clarify, the market needs to expand to get publishers on board, not 
authors. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] end of life for pg versions...

2008-02-12 Thread Robert Treat
On Tuesday 12 February 2008 15:48, Ivan Sergio Borgonovo wrote:
 On Tue, 12 Feb 2008 16:15:23 -0300

 Alvaro Herrera [EMAIL PROTECTED] wrote:
  Ivan Sergio Borgonovo wrote:
   Is it just vaporware... maybe... but still there are pros and
   cons of having a bland schedule for EOL and new releases.
 
  We do have a schedule:
  http://developer.postgresql.org/index.php/PostgreSQL_8.4_Development_Plan

 woops maybe what's missing is a clear link on the main site[1] + EOL.

 thanks

 [1] http://www.postgresql.org/ I'd place it in the support menu...

Actually I think we should be pointing people to 
http://www.postgresql.org/developer/roadmap.

Of course we would still need to add an EOL page... I think one could make a 
strong argument for a static url for EOL info now that windows is EOL for 
< 8.2. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] TSearch2 Migration Guide from 8.2 to 8.3

2008-02-12 Thread Robert Treat
On Tuesday 12 February 2008 10:26, Tom Lane wrote:
 Richard Huxton [EMAIL PROTECTED] writes:
  Oliver Weichhold wrote:
  Is there something like a Migration Guide from 8.2 to 8.3 for tsearch2
  users?
 
  Hmm - there was a blog posting recently that linked to a load of
  migration stuff...

 There's always RTFM:
 http://www.postgresql.org/docs/8.3/static/textsearch-migration.html

 (Note that the blog you linked to is a bit out of date, as it
 predates the creation of the tsearch2 compatibility module.)


Note another possible solution for mediawiki users is to do a fresh install 
using latest svn, which has native 8.3 fts support for postgres, and then do 
an xml dump/import of the wiki contents. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Is PG a moving target?

2008-02-11 Thread Robert Treat
On Monday 11 February 2008 14:49, Jeff Davis wrote:
 On Mon, 2008-02-11 at 09:09 +0100, Peter Eisentraut wrote:
  Ken Johanson wrote:
   Is there anything now, or in the works, for compatibility emulation?
   For example to setup my session to act like 8.2 and allow less-strict
   typing.
 
  The best way to ensure 8.2 compatibility is to use 8.2.  But as casts are
  user definable, you can add back any casts you want.  Just don't add
  dozens of implicit casts and then come back here wondering why your
  application is behaving strangely. :)

 As I understand it, it's tricky (or impossible) to get the 8.2 behavior
 back just by adding/modifying casts.

 If not, couldn't we just publish those casts so people can be backwards
 compatible if they want?


that was the idea behind castcompat, which didn't get far out of the gate 
before several examples cropped up showing how backwards-compatible casting 
would break new 8.3 system expectations. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] [pgsql-advocacy] PostgreSQL Certification

2008-02-04 Thread Robert Treat
On Monday 04 February 2008 09:52, Selena Deckelmann wrote:
 On Feb 4, 2008 4:27 AM, Dave Page [EMAIL PROTECTED] wrote:
  Even a new domain seems odd to me - if this is to be official, then
  surely it should be under postgresql.org.

 Having a separate TLD actually increases the visibility of the effort
 from a search engine perspective.

 We can learn a lesson from Perl advocacy - it is still possible to
 render projects invisible to the outside world through excessive
 consolidation.  A search for perl blogs still does not put
 use.perl.org in the top results.


hmm, i'd have thought you would have wanted planet.perl.org anyway (though 
that doesn't show up either)

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?

2008-02-02 Thread Robert Treat
On Thursday 31 January 2008 07:08, [EMAIL PROTECTED] wrote:
 [Following up on my own message.]


  Also, let us know your WAL tuning parameters like commit_delay, fsync.

 I haven't done any tuning as of yet. I'm running with the default settings
 produced by initdb.


Don't even bother trying to tune ZFS until after you've tuned Postgres; 
otherwise you're wasting your time. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] postgresql book - practical or something newer?

2008-01-31 Thread Robert Treat
On Wednesday 30 January 2008 02:54, Ow Mun Heng wrote:
 On Tue, 2008-01-29 at 19:16 +, Dave Page wrote:
  On Jan 29, 2008 6:16 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
   I try to be reasonable (no laughing people :)).
 
  Oh it's hard, so very, very hard!

 But seriously, I've ranted on this some time ago( and you can tell that
 I'm about to start again)

 rant
 One of the worst aspects of PG is the documentation, or the lack of it in
 terms of a traditional publishing house. The manual is fine and all, but in
 most cases what I find it lacks is actually examples: both examples to show
 what a particular field/query means, and examples showing exactly how a
 particular problem can be solved.

 When I played with both MSSQL and MySQL, I had loads of books (and I
 bought a bit of it too, didn't bother subscribing to safari, it just
 ain't a book!) to be used as reference and what not.

 In PG, all there is, is the manual, a book by Robert Treat, the Book
 from Joshua, 1 or 2 other books authored by someone I can't remember etc
 and that's about it.

 Then I would have to go hunt(via google) for any bit of blog/
 presentation slides from a meetup/talk etc for ways to find out how to
 do a particular thing. (Thanks Bruce M, Thanks Robert T - excellent
 partitioning talk!, Thanks PgCon!) and pore over those.

 Other than that, it's more or less, Bang you head here and send email
 to the list and hope someone answers

 I hang on to my O'Reilly SQL Hacks book tightly as it gives me
 examples on how to solve a problem and even how other DBs solve it.

 I wish there was a book like MySQL Cookbook (which I have a copy)
 /rant

Just so you know, I approached O'Reilly about writing a PostgreSQL Cookbook, 
and they turned it down. They did offer me some other titles, but those don't 
seem to have gone anywhere. 

I have thought of going the self-publishing route, but the reason against it 
is the same reason you don't see a lot of book publishers working on PG 
books: the sales just aren't that strong. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] WARNINGs after starting backup server created with PITR

2008-01-19 Thread Robert Treat
On Friday 18 January 2008 18:04, Erik Jones wrote:
 For our primary, er, main, onsite standby server that's also what we
 do.  But, this was a co-location to co-location transfer so there was
 no NFS mount; it was a direct rsync to the server at the other
 co-location.  For WAL files, I've already decided to write a WALShipper
 utility that will handle shipping WALs to multiple standbys with
 verification, but for the base backup, this is distressing.  We do
 have the option to do the base backup to a portable USB drive and
 then carry it to the second co-lo for now.  But, pretty soon we're
 going to be surpassing the available limits in portable drive
 capacity unless we invest in tape drives.


Are you guys running ZFS yet? If so, its snapshot/cloning capabilities might 
be the way to go. 
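To sketch what that could look like (pool/dataset/host names are made up and
this is untested; check the pg_start_backup semantics against your version):

```
psql -c "SELECT pg_start_backup('zfs-base');"
zfs snapshot tank/pgdata@base
psql -c "SELECT pg_stop_backup();"
# ship the snapshot to the standby instead of rsync or USB drives:
zfs send tank/pgdata@base | ssh standby-host zfs receive tank/pgdata-standby
```

Because the snapshot is atomic and cheap, the window between start_backup and
stop_backup stays short regardless of data size.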

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Robert Treat
On Friday 11 January 2008 13:44, Josh Harrison wrote:
 On Jan 11, 2008 1:22 PM, Erik Jones [EMAIL PROTECTED] wrote:
  On Jan 11, 2008, at 12:14 PM, Scott Marlowe wrote:
   On Jan 11, 2008 12:02 PM, Josh Harrison [EMAIL PROTECTED] wrote:
   Hi
    We have an Oracle production database with some terabytes of data. We
    wanted to migrate that to PostgreSQL (right now... a test database and
    not a production database).
   What are the good options to do that?
   Please advise me on where to look for more information on this topic
  
   You have two steps to work on.  The first is the DDL, to create
   equivalent tables in pgsql as in oracle, the second is to migrate over
   your data.
  
 I had done this with the test database. For DDL generation I used xml/xsl
 and for data migration I used JDBC. I can get the DDL generated fine; with
 JDBC the data migration is a bit slow.
 My question is about the data migration. I'm not sure how to try this with
 an online Oracle database. We are required to run both Postgres and Oracle
 databases simultaneously for a couple of months (at least till we decide
 whether we are going to shut down Oracle for good !!!). Since the Oracle
 database is a production database, it will have updates/inserts during
 this time. How do you manage that?


About a year ago we converted one of our clients multi-TB ODS systems built in 
Oracle over to PostgreSQL. There's a case study about it you can get from the 
Sun folks at  
http://www.sun.com/third-party/srsc/resources/postgresql/postgre_success_dwp.pdf

Now, due to the size of the project, we had to run both the Oracle and 
Postgres systems in parallel for several months. We kept the data up to date 
using a slew of custom code, designed to replicate data from either the ODS 
system or the OLTP system, depending on various technical and business 
factors.  My guess is that in your case, you'd want a mix of replicating data 
from the current Oracle database and your application, as best possible. 

Figuring out how you go about replicating the data is certainly easier if 
you've been through it before, but I don't think it is anything too magical; 
we went through a number of different ideas and ended up using multiple 
methods depending on the data involved.  HTH. 
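As a rough illustration of the home-brew replication glue described above,
one common shape is a trigger-fed queue table on the source side that an
external job drains into the other database. Everything below is a
hypothetical sketch (table, column, and trigger names invented), not the
code from the case study:

```sql
-- Hypothetical change-capture queue; an external script reads these rows,
-- applies them to the other database, then deletes what it has shipped.
CREATE TABLE replication_queue (
    id        serial PRIMARY KEY,
    tablename text NOT NULL,
    op        char(1) NOT NULL,   -- 'I', 'U', or 'D'
    pk_value  text NOT NULL,
    queued_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION capture_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO replication_queue (tablename, op, pk_value)
            VALUES (TG_TABLE_NAME, 'D', OLD.id::text);
        RETURN OLD;
    END IF;
    INSERT INTO replication_queue (tablename, op, pk_value)
        VALUES (TG_TABLE_NAME, substr(TG_OP, 1, 1), NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_capture
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE capture_change();
```

The same idea works in the Oracle-to-Postgres direction with an Oracle
trigger and queue; the drain script is where the per-table mapping lives.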

-- 
Robert Treat
Database Architect
http://www.omniti.com



Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-15 Thread Robert Treat
On Tuesday 15 January 2008 21:00, Greg Smith wrote:
 On Wed, 16 Jan 2008, Ivan Sergio Borgonovo wrote:
  Furthermore I think that developing in such a MySQL-centric way
  will make it MUCH harder to support any other DB, not only PostgreSQL,
  and freedom of choice is very important to me.

 Having helped out a bit getting Postnuke working better with PostgreSQL, I
 can tell you that didn't go far until the developers really embraced using
 ADOdb and were targeting 2 engines at once (MS SQL was the other one they
 really worked on).

 The only work I've seen for Drupal with similar focus all involves the PDO
 library, as alluded to in the post you mentioned:

 http://drupal.org/node/134580
 http://edin.no-ip.com/html/?q=code_siren_unofficial_drupal_6_x_database_dri
ver_supporting

 The problem with PDO is that it requires PHP5, which means it will be
 years until it's on enough shared hosts etc. that the mainstream Drupal
 version can require it.


There's been a big move in the php community to push people towards php5 (one 
of which was EOL of php4), which has started to pay off.  I'd guess that if 
they wanted to, they could switch to PDO with Drupal 7 and not hurt 
themselves too much. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



  1   2   3   4   >