another Windows admin running all his services
under the administrator context.

I needed the PostgreSQL setup to prove to a customer
that a working setup could be built using PGSQL.  It lived
on my system for a couple of days in total, so cooking
up a perfectly secure configuration was hardly worth it; in
fact, it was a major waste of my time.

The machine in question has a firewall, so external
connections to the service would never occur.

PostgreSQL telling me how to run my system security-wise
is infinitely annoying.  It feels like being locked in a
cage, which is always an insulting way to treat a user.

I just tried the same approach with another popular
open source database.  It also refuses to start by default,
but the user isn't locked out.  It kindly says "you
probably shouldn't be doing this, but if you really want
to, you can run as root with --user=root".

Anyway, I just enabled the guest account on the machine
in question and started a command prompt under those
credentials.  Things wouldn't work until the guest
account was granted sign-on privileges (?), so it has
those now.  I've since forgotten to disable them again,
so arguably being forced by PostgreSQL to run as another
user has actually caused *worse* security on this machine
than would otherwise be the case :-).
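
For reference, the rough sequence was something like the
following; the exact commands and the data directory path are
from memory, so treat this as an approximation rather than a
recipe:

    net user guest /active:yes
    runas /user:Guest cmd.exe

and then, in the new prompt running under the guest account:

    initdb -D C:\pgsql\data
    pg_ctl -D C:\pgsql\data start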

Oh well, enough ranting about that already.
Everything worked very smoothly once I had wasted a couple
of hours getting it to start.

Thanks for all the feedback regarding "pg_ctl register"
etc. and thanks for the discussion!
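
For anyone finding this thread in the archives: the
service-registration route people pointed me at looks roughly
like this (service name, account, password and data directory
are placeholder values, so check the pg_ctl documentation for
your version):

    pg_ctl register -N PostgreSQL -U postgres -P secret -D C:\pgsql\data
    net start PostgreSQL

which installs and starts PostgreSQL as a Windows service
running under a non-administrator account.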


The point I was trying to make is that recovery is never
a cookbook process --- it's never twice the same problem.
(If it were, we could and should be doing something about
the underlying problem.)  This makes it difficult to
provide either polished tools or polished documentation.

I've seen a ton of wedged databases.  Missing a tuple here
or there after recovery, because the process applied by the
recovery tool is imperfect, has never been a problem for any
company I've been at.  Minimal time to recover has always
been far more critical, and the money paid for the actual
recovery in terms of paychecks matters a great deal too.

Simply put, a tool with just a single button named "recover
all the data that you can" is by far the best solution in so
many cases.  Minimal fuss, minimal downtime, minimal money
spent on recovery.  And often there's a good chance that
any missing data can be entered back into the system manually.

The only time I've ever heard of experts being brought in
to fix a database problem was when IBM's DB2 system crashed at
a major bank in Scandinavia.  But that's banking data, so it's
an entirely different story from everyday use by any other kind
of corporation.  That's 0.01% of the market; it's really not
that interesting if you ask me.

OK, to make a long rant short: convincing a company to use a
database system is much easier when that system has a one-click
recovery application.  It's the same reason people like to keep
their data on filesystems such as NTFS and FAT32: they know
that when things do go wrong, they can launch Tiramisu or
Easy Recovery Pro or whatever and just tell it "recover as much
as you can onto this other disk".  People sleep better if they
know there's a backup plan, in particular one that doesn't
require two months of downtime while their DBA is learning the
innards of the database system's file structure and posting to a
mailing list *hoping* that someone with the required expertise
can and wants to talk to him (while he's at his most stressed).
