[ADMIN] Upgrading to 8.2, changes in user/group management scripts

2007-01-19 Thread Peter Koczan

I'm upgrading our Postgres installation from 7.4 to 8.2 (I know, I know, but
we've been busy). The biggest thing to change will be our user/group
management scripts, as they directly modify pg_shadow and pg_group. Beyond
being potentially stupid (but hey, I didn't write it), this just won't work
anymore, for a few reasons, which I will get to in a bit.

What we used to do (all in a transaction block so it's nice and atomic):
For users: Delete pg_shadow, read in logins/uids from /etc/passwd and create
users, making sure the Unix uid matches up with the postgres sysid.
For groups: For each group, drop the group, read in the new group from
either another database table or some other mechanism.

Now, having the Unix uids match up with the Postgres sysids isn't such a big
deal since we'll be using Kerberos, which can presumably match up based on
login name. It was nice to have them match up, but it's probably not
necessary. However, the above mechanisms won't work for the following
reasons:

1. pg_shadow and pg_group are now views, and you can't update views.
2a. Simply dropping a user is insufficient if the user owns anything in the
database cluster.
2b. Deleting a user by removing the row from pg_authid works, but then the
object owner is unknown and the object disappears unless you know where to
look for it in the system tables. The objects are effectively orphaned.
3. There seems to be no way to recover the old sysid, as the "WITH SYSID
uid" clause in CREATE USER/CREATE ROLE is now just noise, and trying to
insert it directly into the system table proved fruitless. If you can't
recover the old sysid, it leads to the orphaned objects problem described in
2b.

So, I'm wondering how I can do something roughly equivalent to this. I've
been considering using some sort of "diff"-like mechanism, but it'd be nice
to have something simpler. Any help would be greatly appreciated.
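One way to handle the removal side on 8.2 without orphaning objects is the
new REASSIGN OWNED/DROP OWNED machinery. A minimal sketch, assuming 8.2 and
hypothetical role names:

```sql
-- In each database that contains objects owned by the departing user:
BEGIN;
REASSIGN OWNED BY departing_user TO archive_owner; -- hand objects to a holding role
DROP OWNED BY departing_user;                      -- drop any remaining privileges
COMMIT;

-- Once no database references the role, it can be dropped cluster-wide:
DROP ROLE departing_user;
```

REASSIGN OWNED and DROP OWNED only affect the database they are run in, so a
removal script would loop over all databases first.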

Peter


Re: [ADMIN] Upgrading to 8.2, changes in user/group management scripts

2007-01-22 Thread Peter Koczan

The main thing I'm worried about is the orphaned objects problem. It's not
adding users so much as removing them that concerns me (I work at a
university and we remove inactive/non-enrolled users). Those cases would
likely require a lot more in-depth intervention by me and other staff.

On 1/21/07, Jim C. Nasby <[EMAIL PROTECTED]> wrote:


On Fri, Jan 19, 2007 at 03:45:18PM -0600, Peter Koczan wrote:
> I'm upgrading our Postgres installation from 7.4 to 8.2 (I know, I know, but
> we've been busy). The biggest thing to change will be our user/group
> management scripts, as they directly modify pg_shadow and pg_group. Beyond
> being potentially stupid (but hey, I didn't write it), this just won't work
> anymore, for a few reasons, which I will get to in a bit.
>
> What we used to do (all in a transaction block so it's nice and atomic):
> For users: Delete pg_shadow, read in logins/uids from /etc/passwd and create
> users, making sure the Unix uid matches up with the postgres sysid.
> For groups: For each group, drop the group, read in the new group from
> either another database table or some other mechanism.
>
> Now, having the Unix uids match up with the Postgres sysids isn't such a big
> deal since we'll be using Kerberos, which can presumably match up based on
> login name. It was nice to have them match up, but it's probably not
> necessary. However, the above mechanisms won't work for the following
> reasons:
>
> 1. pg_shadow and pg_group are now views, and you can't update views.
> 2a. Simply dropping a user is insufficient if the user owns anything in the
> database cluster.
> 2b. Deleting a user by removing the row from pg_authid works, but then the
> object owner is unknown and the object disappears unless you know where to
> look for it in the system tables. The objects are effectively orphaned.
> 3. There seems to be no way to recover the old sysid, as the "WITH SYSID
> uid" clause in CREATE USER/CREATE ROLE is now just noise, and trying to
> insert it directly into the system table proved fruitless. If you can't
> recover the old sysid, it leads to the orphaned objects problem described
> in 2b.
>
> So, I'm wondering how I can do something roughly equivalent to this. I've
> been considering using some sort of "diff"-like mechanism, but it'd be nice
> to have something simpler. Any help would be greatly appreciated.

You already stated there's no reason to have the ID's match, so why
bother? Just check each user/group name to see if it already exists and
add it if it doesn't.
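Jim's check-then-create approach could look roughly like this on 8.2 (there
is no CREATE ROLE IF NOT EXISTS, so the existence check happens in the
driving script; the role name is illustrative):

```sql
-- Does the role already exist?
SELECT 1 FROM pg_roles WHERE rolname = 'someuser';

-- If that returned no rows, create it:
CREATE ROLE someuser LOGIN;
```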
--
Jim Nasby [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



[ADMIN] pg_dumpall problems

2007-01-29 Thread Peter Koczan

Hi all,

Thanks for your help thus far. I almost have 8.2 ready (hooray no more
ancient 7.4). I do have one more problem which is likely the last thing
before pushing out live support. pg_dumpall isn't working.

Right now, we run pg_dumpall as a non-superuser (let's call it backup), to
whom we give read access to the appropriate system tables (pg_shadow in 7.4,
pg_authid in 8.2). When I do, I get this error...
pg_dumpall: query failed: ERROR:  permission denied for relation pg_authid
pg_dumpall: query was: SELECT rolname, rolsuper, rolinherit, rolcreaterole,
rolcreatedb, rolcatupdate, rolcanlogin, rolconnlimit, rolpassword,
rolvaliduntil, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment
FROM pg_authid ORDER BY 1

The weird thing is, whenever I run this exact same command from psql as the
backup user, it works.

template1=> select current_role;
 current_user
--------------
 backup
(1 row)
template1=> \z pg_catalog.pg_authid;
             Access privileges for database "template1"
   Schema   |   Name    | Type  |              Access privileges
------------+-----------+-------+----------------------------------------------
 pg_catalog | pg_authid | table | {postgres=arwdxt/postgres,backup=r/postgres}

template1=> SELECT rolname, rolsuper, rolinherit, rolcreaterole,
rolcreatedb, rolcatupdate, rolcanlogin, rolconnlimit, rolpassword,
rolvaliduntil, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment
FROM pg_authid ORDER BY 1;
(There's data...glorious, glorious data).

The easy solution is to make backup a database superuser, but that's a
change I'd not like to make right now since I don't understand all the
security implications. What's the deal with this error, and is there any
nicer solution? Thanks much in advance.

Peter


Re: [ADMIN] pg_dumpall problems

2007-01-30 Thread Peter Koczan

I should be a little more specific. The way we currently do backups is that
this backup user dumps user/group data with pg_dumpall (i.e. the
--globals-only flag), and we pg_dump individual databases.

We grant this backup user read access to all non-system
tables/views/etc. (and usage on non-system schemas) in user databases so it
can read and dump the data. We also grant it read access on select system
tables to make pg_dumpall work. We do this for two reasons: first, we like
to follow the generally good security practice of minimum access, and
second (as a result of the first), we don't want to tickle a bug in our
backup system that magically corrupts our data (which is possible if this
user has more than read access).
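The grants described above might be scripted roughly like this (a sketch
with illustrative names; 8.2 has no bulk per-schema GRANT, so each table
needs its own statement):

```sql
CREATE ROLE backup LOGIN;                  -- the read-only backup role
GRANT USAGE ON SCHEMA public TO backup;    -- allow it into each non-system schema
GRANT SELECT ON some_user_table TO backup; -- repeat per table/view
```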

So, is there any remedy to my problem (see below) short of granting
superuser access? Is this a bug (which I would then report on the
appropriate channels)?

As for Tom's suggestion, there's no way to specify the database in
pg_dumpall, only the server, and the same bug occurs if I run as the user on
the same server and cluster with the same major version.

Peter

On 1/29/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"Peter Koczan" <[EMAIL PROTECTED]> writes:
> Right now, we run pg_dumpall as a non-superuser,

[ raised eyebrow... ]  That's never been a recommended procedure.

> The weird thing is, whenever I run this exact same command from psql as the
> backup user, it works.

Maybe you're not trying it in the same database pg_dumpall is?

regards, tom lane



Original post for all those to see:
-
Thanks for your help thus far. I almost have 8.2 ready (hooray no more
ancient 7.4). I do have one more problem which is likely the last thing
before pushing out live support. pg_dumpall isn't working.

Right now, we run pg_dumpall as a non-superuser (let's call it backup), to
whom we give read access to the appropriate system tables (pg_shadow in 7.4,
pg_authid in 8.2). When I do, I get this error...
pg_dumpall: query failed: ERROR:  permission denied for relation pg_authid
pg_dumpall: query was: SELECT rolname, rolsuper, rolinherit, rolcreaterole,
rolcreatedb, rolcatupdate, rolcanlogin, rolconnlimit, rolpassword,
rolvaliduntil, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment
FROM pg_authid ORDER BY 1

The weird thing is, whenever I run this exact same command from psql as the
backup user, it works.

template1=> select current_role;
 current_user
--------------
 backup
(1 row)
template1=> \z pg_catalog.pg_authid;
             Access privileges for database "template1"
   Schema   |   Name    | Type  |              Access privileges
------------+-----------+-------+----------------------------------------------
 pg_catalog | pg_authid | table | {postgres=arwdxt/postgres,backup=r/postgres}

template1=> SELECT rolname, rolsuper, rolinherit, rolcreaterole,
rolcreatedb, rolcatupdate, rolcanlogin, rolconnlimit, rolpassword,
rolvaliduntil, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment
FROM pg_authid ORDER BY 1;
(There's data...glorious, glorious data).

The easy solution is to make backup a database superuser, but that's a
change I'd not like to make right now since I don't understand all the
security implications. What's the deal with this error, and is there any
nicer solution? Thanks much in advance.


Re: [ADMIN] pg_dumpall problems

2007-01-31 Thread Peter Koczan

On 1/30/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"Peter Koczan" <[EMAIL PROTECTED]> writes:
> So, is there any remedy to my problem (see below) short of granting
> superuser access? Is this a bug (which I would then report on the
> appropriate channels)?

It's not a bug.

> As for Tom's suggestion, there's no way to specify the database in
> pg_dumpall, only the server, and the same bug occurs if I run as the user
> on the same server and cluster with the same major version.

You still haven't responded to my query: did you try it in the same
database that pg_dumpall is connecting to?  My guess is that you have
munged the permissions on pg_shadow or pg_authid without understanding
that that will only take effect in the one database you do it in.
pg_dumpall is connecting to either "postgres" or "template1" depending
on version; what's the permissions situation in that database?

regards, tom lane



I misunderstood your question. I apologize. Granting access on "postgres"
worked like a charm. Thank you.
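For the archives, the fix amounts to issuing the grant while connected to
the database pg_dumpall itself connects to, since catalog ACLs are
per-database. A sketch using the role name from this thread:

```sql
-- Run while connected to the "postgres" database (8.2) or "template1"
-- (older versions); a grant made in any other database doesn't apply here.
GRANT SELECT ON pg_authid TO backup;
```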

Peter


Re: [ADMIN] Question on Fragmentations

2007-02-08 Thread Peter Koczan

Moiz Kothari wrote:

Hi All,

What are the reasons for data getting fragmented in Postgres? Is there 
any page that explains the different scenarios of data getting 
fragmented?


Regards,
Moiz Kothari
I guess there are two types of fragmentation in play with postgres: 
internal database fragmentation and external filesystem fragmentation.


Internal fragmentation is caused by holes in disk blocks when records 
are deleted, or when a record is *just* a little too big to fit in the 
remaining empty space in a particular disk block. This can be fixed (or 
at least reduced quite a bit) by using VACUUM/VACUUM ANALYZE (especially 
with the FULL option) or the vacuumdb command (and its myriad options); 
regular vacuuming is a very helpful, oft-suggested strategy on these 
lists, for reasons beyond just defragmentation.
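From psql, the commands mentioned above look like this (table name is
illustrative):

```sql
VACUUM ANALYZE some_table;      -- reclaim dead space within pages, refresh stats
VACUUM FULL ANALYZE some_table; -- also compact the table; takes an exclusive lock
```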


External fragmentation occurs in postgres for the same reason it occurs 
in other types of files. Lots of appends and generally continual file 
growth mean that the filesystem is less likely to find adjacent disk 
blocks (which is the source of fragmentation). Even though Unix file 
systems generally do a good job of preventing fragmentation, some 
workloads are just not good. For instance, large mail spools (where the 
policy is one file per spool instead of one file per message) are prone 
to fragmentation.


Case in point: I use xfs as the filesystem running under postgres, and 
after a few days the "major" database clusters showed ~90% fragmentation 
on their respective partitions (about a 10-to-1 ratio of file fragments 
to files). After running a defragmenter, the fragmentation went under 5% 
(under 1.2 fragments per file), and performance generally doubled. 
Other, more application-specific databases, which still get hit and 
appended quite often, sit at no more than 3%-5% fragmentation.


I poked around and sadly couldn't find any references to fragmentation 
in the official docs, but I found a couple of references to the SQL 
command CLUSTER in other archived lists. This may help fix 
fragmentation, though I haven't tried it myself to know whether it works 
(or what the implications are for performance in general).
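For reference, CLUSTER rewrites the table in the order of a chosen index,
which also lays the data out contiguously; the 8.2 syntax (with
illustrative names) is:

```sql
CLUSTER some_table_pkey ON some_table; -- exclusive lock; table is rewritten
```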


I've also read accounts of people with *HUGE* databases with almost no 
fragmentation, so yet another possibility in the fight against 
fragmentation may be to tweak your filesystem or postgres configs. If 
anyone has any specific parameters to try or experience doing this, I 
would consider it more than welcome, because I do not have that knowledge.


Sorry for the relative verbosity.

Peter

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [ADMIN] Question on Fragmentations

2007-02-09 Thread Peter Koczan

Michael Monnerie wrote:

On Freitag, 9. Februar 2007 04:08 Peter Koczan wrote:
  

Case in point, I use xfs as the filesystem running under postgres,
and after a few days the "major" database clusters showed ~90%
fragmentation on their respective partitions (which is about a 10 to
1 ratio of file fragments to files). After running a defragmenter



Does xfs have such stats, and defragmenter included? It could be a good 
idea for me to use that, then. Currently I use reiserfs.


mfg zmi
  
xfs does have built-in, xfs-approved utilities for stats and 
defragmenting.


xfs_db gives stats (for fragmentation use xfs_db -c frag -r /dev/XXX). 
This works even if the filesystem is mounted and active, but I believe 
that old stats are cached until said filesystem is remounted or until 
some stat collection process runs.


xfs_fsr is the defragmenter (simply use xfs_fsr /dev/XXX). It's safe to 
run this on an active filesystem/database partition because it abandons 
its work on a file if that file changes mid-defragmentation. So, for 
full defragmentation, you'll want to run it either offline/unmounted or 
during idle times.


Peter





[ADMIN] converting from Sybase to Postgres

2007-03-19 Thread Peter Koczan

Hi everyone,

I'm looking to convert a lot of data from Sybase to Postgres. As this will
be a painful process, I was wondering if there are any utilities and
programs to ease the conversion process. Specifically, I'm looking for help
in 3 areas.

1. Recreation of the schema, especially converting triggers, data types, and
functions.
2. Converting SQL scripts and cron jobs where there are incompatibilities
(e.g. Sybase's isnull(...) vs. Postgres's coalesce(...)).
3. There is an ancient GUI that we use to interface to sybase (it's based on
APT, I believe). The nice thing is it can run external commands, and
interfaces somewhat nicely with Kerberos. Is there any piece of middleware
(preferably GUI-based, though web-based may do) for postgres that can do
this?
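As a tiny example of the kind of rewrite item 2 describes (table and column
names are hypothetical):

```sql
-- Sybase:   SELECT isnull(middle_name, '(none)') FROM people
-- Postgres equivalent:
SELECT coalesce(middle_name, '(none)') FROM people;
```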

Of course I can spend hours tracking down cron jobs, apps, and scripts,
painstakingly recreating triggers and functions, and write my own GTK-based
GUI, but if there's anything that can help me, please let me know.

Peter


[ADMIN] Postgres Perl Module and Kerberos problem on Solaris

2007-03-23 Thread Peter Koczan

Hello,

I understand that this is probably outside the realm of Postgres itself, so
if no one has a really good definite answer, that's fine. Still, I thought
I'd fish for information and see if anyone else has run into this problem.
I've already made a bug report to the DBD::Pg folks.

The problem is that the Perl Module (DBD::Pg) runs into a broken assert on
Solaris (5.8 for anyone wondering). Message:

Assertion failed: (&(&_m->os)->n)->initialized ==
K5_MUTEX_DEBUG_INITIALIZED, file
../../../krb5-1.5.1/src/util/support/threads.c,
line 388
/local.gazoo/convertdata[4]: 5487 Abort(coredump)

I only get this message on Solaris, not on Linux, and only for perl scripts,
not for the included programs, like vacuumdb. Even weirder, the assertion
failure only seems to happen at the end of the script. All the database
actions work fine, and then (probably on disconnection or script exit) the
assertion trips.

I'm running Postgres 8.2 (the server is running under CentOS 4.4 Linux),
DBD::Pg 1.49, and Kerberos5 1.5.1 for anyone who's interested.

In the meantime, I've recompiled the module against postgres libraries
linked against old kerberos libraries. It's ugly, but it works.

If anyone has any info or cleaner fixes, it would be greatly appreciated. I
do realize that I may be barking up the wrong tree, so if you don't have any
info, that's OK too.

Peter


Re: [ADMIN] URGENT TABLE PG_SHADOW CORRUTEP

2007-03-25 Thread Peter Koczan

I saw something along these lines recently when I was running stuff from
postgres 7.4.7 (we recently upgraded to 8.2.3). Basically, a unique index
somehow fails and allows duplicate rows...but I don't know why.

Deleting doesn't work because postgres only looks for the record shown by
the index (though this may have been fixed in more recent releases). At
least this is the behaviour I observed.

The way I fixed it was to drop the unique index, remove the offending rows,
reinsert only one of the offending rows (so things would once again be
unique), and remake the index. It worked.
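The repair described above, in rough outline (names are illustrative, and
doing this to a system catalog is risky; test on a copy first):

```sql
BEGIN;
DROP INDEX users_login_key;                            -- drop the failed unique index
DELETE FROM users WHERE login = 'dupeduser';           -- removes every duplicate copy
INSERT INTO users (login, fullname)
    VALUES ('dupeduser', 'Restored User');             -- reinsert one good row
CREATE UNIQUE INDEX users_login_key ON users (login);  -- remake the index
COMMIT;
```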

There are two things that concern me:
1. Trying to do this on a system table is likely ill-advised at best, and
dangerous at worst.
2. If the table only has 3 rows in it and it got corrupted, there's
something really wrong. My table in question was a user table that had
almost a million rows.

On 3/25/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"Daniel Ricardo Medina" <[EMAIL PROTECTED]> writes:
> I have try to run this command in single user mod
>
> select * from pg_shadow;
> delete from pg_shadow;
> CREATE USER postgres WITH CREATEDB CREATEUSER PASSWORD '***' SYSID 1;
> CREATE USER userradar PASSWORD '***'

> but the result is the same.

What do you see if you look in pg_shadow right after doing the DELETE?
What PG version is this?

regards, tom lane




[ADMIN] Postgres data on a shared file system

2007-04-19 Thread Peter Koczan

Just for funsies, today I decided to try and serve postgres data out of AFS
(just a tablespace, not an entire database cluster). For those of you not in
the know, AFS is a distributed, networked file system. It's like NFS, but
with differences in structure, permissions, quotas, administration, and many
other things that are beyond the scope of this list (you can find details
yourself if you're so inclined). I wanted to see if there were any potential
benefits, how it compared in terms of performance to a standard local-disk
cluster, and what hacks had to be put into place for things to work.

Does anyone else have experience trying to serve clusters or data from
shared file systems? I'd like to see how your experiences compare to mine.

- AFS uses Kerberos for authentication, and you need to have all the
Kerberos bits and tickets set up for the postgres system user in order to
have proper authentication for AFS. If you don't, you'd have to give
system:anyuser (the AFS equivalent of global access) write access to the
appropriate directories, which is bad for many reasons.
- As a result of the previous point, you can't use pg_ctl to start the
server. pg_ctl doesn't pass Kerberos credentials. I had to start the server
using the postgres binary and precede it with a ticket passing mechanism
(mine is called runauth).
- AFS was about 8% slower in both reading and writing data (both the local
disk and the AFS file server had 80 GB, 7200 RPM disks in RAID 1, and the
logs were on a separate disk on the local file-system, so this is a fairly
scientific metric).
- You can't take advantage of the shared file system because you can't share
tablespaces among clusters or servers. You'd either get an error for trying
to initialize a non-empty directory, or if you hack it to try and fool the
clusters you could wind up with data corruption and race conditions since
the clusters don't play nice.
- You can't take advantage of AFS replication (where you can put read-only
copies of files in one tree and read-write copies in another tree) since
trying to point a cluster at a read-only tablespace requires you to write to
that tree, which you can't do since it's read-only. You're better off using
something else for database replication.

All in all, while the prospect of using existing shared disk space was
intriguing, I couldn't take advantage of any features that would make it
truly worthwhile, there was a non-trivial performance hit, and it required a
few small hacks to get working.

But what about all of you, what are your thoughts and experiences with
trying to serve databases out of shared file systems?

Peter


Re: [ADMIN] VACUUM FULL ANALYSE hanging

2007-05-02 Thread Peter Koczan
I've noticed in my own experiments and experiences with VACUUM FULL that 
it tries to reindex all the indexes to compress them. While a good idea, 
this unfortunately takes a *long* time.


You should check two things. First, the server CPU usage should be high 
(~100% if on a single core). Second, check the contents of the pg_locks 
view. It should hold exclusive locks on all the indexes (though it's 
ordered by oid, so you might have to check pg_class or something else to 
get the actual table/index name). If it's truly hanging/deadlocking, the 
locks won't be granted, and the CPU usage will be low.
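A diagnostic query along those lines, resolving the pg_locks OIDs to names
(a sketch for 8.2):

```sql
SELECT c.relname, l.mode, l.granted, l.pid
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
ORDER BY c.relname;
-- If VACUUM FULL is making progress, its AccessExclusiveLock rows
-- should show granted = true.
```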


In my experience, doing a dump/restore was far faster, but this method 
creates downtime (e.g. a 10 GB database took 2 hours to restore, while 
reindexing/vacuuming full was still on the same table after 4 hours).


If anyone can shed some light onto why reindexing/vacuuming full takes 
so long, I'd like to know.


Peter

Gabriele Bartolini wrote:

Hi guys,
 
   I am having problems with freeing disk space after a massive delete 
operation on a table that had approximately 80 million records. I 
ran the following command, after setting the vacuum memory to 
approximately a gigabyte:
 
SET vacuum_mem TO 1024000

VACUUM FULL ANALYSE VERBOSE oltp.requests
 
Here is what I get:
 
There were 34221203 unused item pointers.

Total free space (including removable row versions) is 8969616624 bytes.
1129827 pages are or will become empty, including 0 at the end of the 
table.
1337307 pages containing 8964065020 free bytes are potential move 
destinations.

CPU 16.03s/9.12u sec elapsed 219.47 sec.
INFO:  index "requests_pkey" now contains 20075362 row versions in 
327419 pages

DETAIL:  8211835 index row versions were removed.
217782 index pages have been deleted, 217782 are currently reusable.
CPU 5.38s/12.53u sec elapsed 100.80 sec.
INFO:  index "idx_oltp_requests_access_date" now contains 20075362 row 
versions in 491725 pages

DETAIL:  8211835 index row versions were removed.
426501 index pages have been deleted, 426501 are currently reusable.
CPU 14.96s/13.47u sec elapsed 200.91 sec.
INFO:  index "idx_oltp_requests_access_time" now contains 20075362 row 
versions in 343915 pages

DETAIL:  8211835 index row versions were removed.
213612 index pages have been deleted, 213612 are currently reusable.
CPU 6.32s/14.03u sec elapsed 111.24 sec.
INFO:  index "idx_oltp_requests_referer" now contains 20075362 row 
versions in 470822 pages

DETAIL:  8211835 index row versions were removed.
376873 index pages have been deleted, 376873 are currently reusable.
CPU 18.85s/17.18u sec elapsed 265.25 sec.
INFO:  index "idx_oltp_requests_session" now contains 20075362 row 
versions in 611141 pages

DETAIL:  8211835 index row versions were removed.
478827 index pages have been deleted, 478827 are currently reusable.
CPU 16.83s/14.33u sec elapsed 258.47 sec.
INFO:  index "idx_oltp_requests_status_code" now contains 20075362 row 
versions in 690337 pages

DETAIL:  8211835 index row versions were removed.
600953 index pages have been deleted, 600953 are currently reusable.
CPU 34.37s/24.44u sec elapsed 297.21 sec.
INFO:  index "idx_oltp_requests_url" now contains 20075362 row 
versions in 336075 pages

DETAIL:  8211835 index row versions were removed.
73821 index pages have been deleted, 73821 are currently reusable.
CPU 17.06s/28.14u sec elapsed 319.16 sec.
   But here, the command simply hangs.
 
   The table description is:
 
htminer=> \d oltp.requests

Tabella "oltp.requests"
  Colonna   |Tipo |Modificatori
+-+
 id_request | integer | not null
 id_session | integer |
 access_time| timestamp(0) with time zone | not null
 request_method | numeric(2,0)| not null default 1
 http_version_major | numeric(1,0)| not null default 1
 http_version_minor | numeric(1,0)| not null default 1
 status_code| numeric(3,0)| not null
 bytes  | integer |
 time_taken | numeric(3,0)|
 id_url | integer | not null
 id_referer | integer |
 content_language   | character(2)|
 dwell_time | smallint| not null default 1
 request_type   | numeric(1,0)|
Indici:
"requests_pkey" PRIMARY KEY, btree (id_request), tablespace 
"htminer_oltp"
"idx_oltp_requests_access_date" btree (date_trunc('day'::text, 
timezone('UTC'::text, access_time))), tablespace "htminer_oltp"
"idx_oltp_requests_access_time" btree (access_time), tablespace 
"htminer_oltp"
"idx_oltp_requests_referer" btree (id_referer), tablespace 
"htminer_oltp"
"idx_oltp_requests_session" btree (id_session, status_code), 
tablespace "htminer_oltp"
"idx

Re: [ADMIN] upgrade 8.0.3 -> 8.2.4

2007-05-10 Thread Peter Koczan

A few months ago, I upgraded postgres from 7.4 to 8.2. There were a few
gotchas, but we don't keep a whole lot of data so even the biggest problems
were, on the whole, minor.

- The cidr data type became more strict, and a few tables in our network
database would not restore until this was fixed.
- One of the primary keys broke and couldn't be recreated. This was more the
fault of poor admins (before my time) and internal fragmentation than of
postgres 7.4. I had to fix the underlying table before it would restore.
- There were a few permissions issues (new acls + an inconsistent previous
policy = fun).
- The system databases went from SQL_ASCII encoding to UTF8 encoding. I had
to explicitly create the database during the restore, or else the database
would have the wrong encoding.
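The encoding workaround was along these lines (database name and encoding
are illustrative; a non-default encoding requires template0):

```sql
CREATE DATABASE netdb WITH TEMPLATE template0 ENCODING 'SQL_ASCII';
-- ...then restore the 7.4 dump into netdb instead of letting the
-- restore create it with the cluster default (UTF8).
```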

I doubt that you will run into these exact problems, but my point is that
there are inevitably some gotchas. If it's not prohibitively time-consuming,
I'd recommend a full dump/restore of your database(s) from 8.0 to 8.2 so you
can catch many of these gotchas before going live. Also, you might want to
create test versions of your apps and try them against the 8.2 server.

On the whole, I've found postgres to be very good at maintaining backwards
compatibility of interfaces and sql, so I estimate that most queries and db
apps should "just work" with 8.2.

Peter

On 5/10/07, Steve Holdoway <[EMAIL PROTECTED]> wrote:


Are there any gotchas? I've got the opportunity to move to another
database server for this application and would like to take the opportunity
to upgrade at the same time.

tia,

Steve




Re: [ADMIN] upgrade 8.0.3 -> 8.2.4

2007-05-10 Thread Peter Koczan
I almost forgot one of the biggest gotchas. Remember that users and 
groups were conflated into "roles" in postgres 8.1. Beyond that 
conceptual change, there were some other issues, namely that you can no 
longer manually specify user and group sysids. I had to change our user 
and group management scripts to properly reflect this.


Peter

Peter Koczan wrote:
A few months ago, I upgraded postgres from 7.4 to 8.2. There were a 
few gotchas, but we don't keep a whole lot of data so even the biggest 
problems were, on the whole, minor.


- The cidr data type became more strict, and a few tables in our 
network database would not restore until this was fixed.
- One of the primary keys broke and couldn't get created. This was 
more the fault of poor admins (before my time) and internal 
fragmentation than postgres 7.4. I had to fix the underlying table 
before it would restore.
- There were a few permissions issues (new acls + an inconsistent 
previous policy = fun).
- The system databases went from SQL_ASCII encoding to UTF8 encoding. 
I had to explicitly create the database during the restore, or else 
the database would have the wrong encoding.


I doubt that you will run into these exact problems, but my point is 
that there are inevitably some gotchas. If it's not prohibitively 
time-consuming, I'd recommend a full dump/restore of your database(s) 
from 8.0 to 8.2 so you can catch many of these gotchas before going 
live. Also, you might want to create test versions of your apps and 
try them against the 8.2 server.


On the whole, I've found postgres to be very good at maintaining 
backwards compatibility of interfaces and sql, so I estimate that most 
queries and db apps should "just work" with 8.2.


Peter

On 5/10/07, *Steve Holdoway* <[EMAIL PROTECTED] 
<mailto:[EMAIL PROTECTED]>> wrote:


Are there any gotchas? I've got the opportunity to move to another
database server for this application and would like to take the
opportunity to upgrade at the same time.

tia,

Steve








[ADMIN] trouble restarting a server

2007-05-21 Thread Peter Koczan

Hi all, I'm having some trouble restarting a long-running server. Basically,
pg_ctl can't kill any processes waiting on a notify interrupt. Here's what
happened after trying a "stop -m fast"

[EMAIL PROTECTED] ~]# ps axvw | grep post
3606 ?S  3:30 23  3260 41575 6644  0.1 /usr/bin/postgres -D
/postgres/db-8.2
4293 ?Ss 0:17  0  3260 40551 27004  0.6 postgres: writer
process
4294 ?Ss 0:00  0  3260  9363 1744  0.0 postgres: stats
collector process
14508 ?Ss 0:01  0  3260 41195 29356  0.7 postgres: me csdb
zim(48570) notify interrupt
20701 ?Ss 0:01  0  3260 41127 29384  0.7 postgres: others
csdb chef(45882) notify interrupt
31179 ?Ss 0:01  0  3260 41127 29428  0.7 postgres: others
csdb yfandes(33113) notify interrupt
7342 ?Ss 0:03  0  3260 41723 30584  0.7 postgres: others
csdb reed(54823) notify interrupt
7343 ?Ss 0:02  0  3260 41127 29396  0.7 postgres: others
csdb reed(54824) notify interrupt
7404 ?Ss 0:02  0  3260 41127 29392  0.7 postgres: others
csdb reed(54830) notify interrupt
29537 ?Ss 0:02  0  3260 41127 29404  0.7 postgres: others
csdb stupid(60028) notify interrupt
26671 pts/0S+ 0:00  119  7976 1700  0.0 su postgres -c
/s/postgresql-8.2/bin/pg_ctl stop -m fast -D /scratch.1/postgres/csdb-8.2 -l
/dev/null
26672 pts/0S+ 0:00  1   283  5556 1180  0.0 -csh -c
/usr/bin/pg_ctl stop -m fast -D /scratch.1/postgres/csdb- 8.2 -l /dev/null
26673 pts/0S+ 0:00  023  6916 1440  0.0 /usr/bin/pg_ctl stop
-m fast -D /postgres/db-8.2 -l /dev/null
26713 ttyS0S+ 0:00  071  5016  644  0.0 grep post

Any idea how postgres gets into this state, or how it can be resolved short
of a system reboot or a "stop -m immediate"? Thanks much.

Peter


Re: [ADMIN] trouble restarting a server

2007-05-22 Thread Peter Koczan

The release is 8.2.4. I haven't been able to reproduce the condition yet,
but I will send along stack traces as soon as I can. I have this strange
feeling that it's only going to happen when I find a reason to make a
restart-worthy config change.

Peter

On 5/21/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"Peter Koczan" <[EMAIL PROTECTED]> writes:
> [ lots of processes stuck in "notify interrupt" code ]

That's weird.  If it's still in that state, or if you can reproduce it,
could you attach to a few of those processes with gdb and get stack
traces?

Looking at the async.c code, an obvious candidate is that that routine
tries to take ExclusiveLock on pg_listener --- so if something had
managed to exit without releasing a lock on that table, hangups could be
expected.  But if that were the case, you'd think the process status
lines would include "waiting".  My guess is they're blocked on something
lower-level than a table lock, but without a stack trace it's hard to
guess what.

Which PG release is this exactly?

regards, tom lane



Re: [ADMIN] trouble restarting a server

2007-05-31 Thread Peter Koczan

It finally reoccurred. Here's what I got from attaching to those processes
with gdb. I attached using the postmaster binary; let me know if I should
use something else.

vero(su): ps axvw | grep notify
24556 ?Ss 0:03  0  3265 41262 29672  0.7 postgres: jerel
csdb chef(36275) notify interrupt
2889 ?Ss 0:04  0  3265 41270 29688  0.7 postgres: ela csdb
newton(32777) notify interrupt
2943 ?Ss 0:04  0  3265 41270 29684  0.7 postgres: stefan
csdb stupid(32788) notify interrupt
5866 ?Ss 0:04  0  3265 41270 29680  0.7 postgres: petska
csdb brian(32786) notify interrupt
27850 ?Ss 0:03  0  3265 41270 29768  0.7 postgres: dparter
csdb yfandes(35456) notify interrupt
18582 ?Ss 0:03  0  3265 41270 29732  0.7 postgres: timc csdb
tornado(47047) notify interrupt
 449 ?Ss 0:02  0  3265 41270 29764  0.7 postgres: archer
csdb spoon(33141) notify interrupt
12731 pts/0S+ 0:00  071  3828  664  0.0 grep notify

vero(su): gdb /s/postgresql/bin/postmaster 24556
[-  Begin lots of messages -]
GNU gdb Red Hat Linux (6.3.0.0-1.132.EL4rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"...Using host libthread_db
library "/lib/tls/libthread_db.so.1".

Attaching to program: /afs/cs.wisc.edu/s/postgresql-8.2.4/@sys/bin/postmaster,
process 5866
Reading symbols from /lib/libpam.so.0...done.
Loaded symbols for /lib/libpam.so.0
Reading symbols from /afs/cs.wisc.edu/s/openssl-0.9.8d
/@sys/lib/libssl.so.0.9.8d...done.
Loaded symbols for /s/openssl-0.9.8d/lib/libssl.so.0.9.8d
Reading symbols from /afs/cs.wisc.edu/s/openssl-0.9.8d
/@sys/lib/libcrypto.so.0.9.8d...done.
Loaded symbols for /s/openssl-0.9.8d/lib/libcrypto.so.0.9.8d
Reading symbols from
/afs/cs.wisc.edu/s/krb5-1.5.1/@sys/lib/libkrb5.so.3...done.Loaded
symbols for /s/krb5-1.5.1/lib/libkrb5.so.3
Reading symbols from /lib/libcrypt.so.1...done.
Loaded symbols for /lib/libcrypt.so.1
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/tls/libm.so.6...done.
Loaded symbols for /lib/tls/libm.so.6
Reading symbols from /lib/tls/libc.so.6...done.
Loaded symbols for /lib/tls/libc.so.6
Reading symbols from /afs/cs.wisc.edu/s/krb5-1.5.1
/@sys/lib/libcom_err.so.3...done.
Loaded symbols for /s/krb5-1.5.1/lib/libcom_err.so.3
Reading symbols from /lib/libaudit.so.0...done.
Loaded symbols for /lib/libaudit.so.0
Reading symbols from /afs/cs.wisc.edu/s/krb5-1.5.1
/i386_cent40/lib/libk5crypto.so.3...done.
Loaded symbols for /s/krb5-1.5.1/i386_cent40/lib/libk5crypto.so.3
Reading symbols from /afs/cs.wisc.edu/s/krb5-1.5.1
/i386_cent40/lib/libkrb5support.so.0...done.
Loaded symbols for /s/krb5-1.5.1/i386_cent40/lib/libkrb5support.so.0
Reading symbols from /lib/libresolv.so.2...done.
Loaded symbols for /lib/libresolv.so.2
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
Reading symbols from /lib/libnss_dns.so.2...done.
Loaded symbols for /lib/libnss_dns.so.2
0x007ef7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
[- End lots of messages -]

(gdb) bt
#0  0x007ef7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
#1  0x008c60f3 in __write_nocancel () from /lib/tls/libc.so.6
#2  0x0064d734 in sock_write () from /s/openssl-0.9.8d
/lib/libcrypto.so.0.9.8d
#3  0x0008 in ?? ()
#4  0x09ede1a2 in ?? ()
#5  0x0038 in ?? ()
#6  0x09ee50c0 in ?? ()
#7  0x006e06c8 in ?? () from /s/openssl-0.9.8d/lib/libcrypto.so.0.9.8d
#8  0x09ed4148 in ?? ()
#9  0x in ?? ()

All the other processes are the same except for addresses in #4, #6, and #8,
but they're all within a few MB of each other (they're probably asynchronous
interrupts).

Let me know if you need more info.

Peter

On 5/22/07, Peter Koczan <[EMAIL PROTECTED]> wrote:


The release is 8.2.4. I haven't been able to reproduce the condition yet,
but I will send along stack traces as soon as I can. I have this strange
feeling that it's only going to happen when I find a reason to make a
restart-worthy config change.

Peter

On 5/21/07, Tom Lane < [EMAIL PROTECTED]> wrote:
>
> "Peter Koczan" <[EMAIL PROTECTED]> writes:
> > [ lots of processes stuck in "notify interrupt" code ]
>
> That's weird.  If it's still in that state, or if you can reproduce it,
> could you attach to a few of those processes with gdb and get stack
> traces?
>
> Looking at the async.c code, an obvious candida

Re: [ADMIN] Help with database change

2007-06-05 Thread Peter Koczan

Chris Hoover wrote:
I am doing some research into partitioning my postgres database.  
While doing this, I am trying to take the opportunity to improve the 
over all database.design.  One of the things I am thinking of 
implementing would be the use of nullif/coalesce.  Our application 
tends to send a lot of strings that only contain spaces.  While I know 
in the long run, the application needs to be fixed, I was thinking 
about using nullif and coalesce so the database can just store nulls 
in the database and save the space of storing "empty" strings as an 
interim fix.


My question is, how expensive is nullif and coalesce?  Can I expect to 
see a noticeable impact on performance if I do this?


Thanks,

Chris
My gut says it's not very expensive, unless the expressions you wrap in 
nullif or coalesce already use costly regular expressions or slow 
functions.


However, you can always run tests yourself. Just generate a bunch of 
rows with dummy data and put them in a test database. There it's easy 
enough to benchmark nullif and coalesce on your own terms and see how 
much overhead they generate.
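Such a test can be as simple as timing the expression over dummy rows. A sketch (table name and row counts are hypothetical; generate_series is available on 8.x):

```sql
-- A million rows, a third of them space-only strings.
CREATE TEMP TABLE bench AS
    SELECT CASE WHEN i % 3 = 0 THEN '   ' ELSE 'value' END AS s
    FROM generate_series(1, 1000000) AS g(i);

-- Compare the raw scan against the wrapped expression.
EXPLAIN ANALYZE SELECT s FROM bench;
EXPLAIN ANALYZE SELECT nullif(trim(s), '') FROM bench;
```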


Peter



Re: [ADMIN] troubleshooting "idle in transaction"

2007-06-06 Thread Peter Koczan
Check the pg_locks system view in the pg_catalog schema. It will tell 
you a wealth of information.
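For instance, something along these lines (column names per the 8.0-era catalogs, where pg_stat_activity exposes procpid and current_query) shows who holds or waits on which lock:

```sql
SELECT l.pid, l.relation::regclass AS rel, l.mode, l.granted,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;
-- Ungranted locks (granted = false) sort first; the blocker is usually
-- the "idle in transaction" backend holding a granted lock on the
-- same relation.
```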


Peter

Dan Harris wrote:

Greetings..

I'm running 8.0.12 and the system has been very stable for years now 
with no significant application changes.  I am using 
Apache::Session::Postgres in a web application to store session 
state.  This has really been flawless for us so far, but lately I've 
caught a few occurrences where I will see in GNU top, the following:


 9136 postgres  16   0  546m 9.8m 8080 S0  0.0   0:00.00 1 
postgres: postgres sessions harvard(49197) idle in transaction
10892 postgres  16   0  546m 9180 7356 S0  0.0   0:00.01 3 
postgres: postgres sessions harvard(49649) SELECT waiting
12174 postgres  16   0  546m 9172 7348 S0  0.0   0:00.00 3 
postgres: postgres sessions harvard(51158) SELECT waiting
12175 postgres  16   0  546m 9152 7328 S0  0.0   0:00.01 1 
postgres: postgres sessions harvard(51159) SELECT waiting
12176 postgres  16   0  546m 9112 7288 S0  0.0   0:00.01 1 
postgres: postgres sessions harvard(51160) SELECT waiting


I can connect to the database fine and select from it when this 
occurs, but I'm guessing that the owner of that particular session row 
is refreshing their browser and seeing it 'hang', causing the lock 
jam.  I know this could potentially be a problem with Apache::Session 
logic, but that module has not been updated for as long as I can 
remember, so I'm wondering if this could be a database issue somehow?


Previously, I have just killed the process that's idle in transaction, 
then things clean up..  However, this doesn't feel very clean.


Can anyone recommend a good process for learning why exactly that 
transaction is not completing?  Or, is there a postgresql.conf setting 
that can automatically kill these errant locks?


-Dan



[ADMIN] postgres benchmarking

2007-06-26 Thread Peter Koczan

Hello,

I checked in the docs and forums and couldn't find exactly what I wanted, so
I figured I'd ask around here. Are there any good programs (specifically
portable ones) for doing fairly intense benchmarking of postgres?

Mostly, I'm interested in getting performance numbers from different
hardware configurations (32-bit vs. 64-bit, different disk systems). I'm
building some new servers and I'd like to see if it's really worth it to
splurge a little for faster disks.
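For reference, the pgbench contrib module is one portable option; a minimal run (scale factor and client counts are arbitrary, database name hypothetical):

```shell
# Initialize a test database at scale factor 100, then run
# 10 concurrent clients for 10,000 transactions each.
pgbench -i -s 100 benchdb
pgbench -c 10 -t 10000 benchdb
```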

Thanks,
Peter


Re: [ADMIN] Load DB - Conversion from 8.1.4 to 8.2.4

2007-06-26 Thread Peter Koczan

Tommy Gildseth wrote:

Campbell, Lance wrote:


I will be converting from PostgreSQL 8.1.4 to 8.2.4. Is it OK to do 
the following command in order to load the new database after install?


pg_dump -h host1 -p 1234 dbname | psql -h host1 -p 56789 dbname


Remember that port 1234 is PostgreSQL 8.1.4 and port 5678 is 
PostgreSQL 8.2.4.




Should be fine, as long as you ensure that the pg_dump you use is the 
one from your 8.2.4 installation, and not the 8.1.4 version.




Two quick addendums:

- pg_dump doesn't dump large objects. You should use the following if 
you use large objects.


pg_dump -h host1 -p 1234 -Fc dbname | pg_restore -h host1 -p 56789 -d dbname


- There might be some other issues with the upgrade. I've run into 
permissions issues, data types that were checked more strictly, and a 
couple broken primary keys in my days as an admin. I'm not saying you'll 
run into these, but if you can, do a dry run of a dump/restore so you 
can solve the problems up front.


Peter



[ADMIN] local kerberos authentication?

2007-07-09 Thread Peter Koczan

Hi all,

I would like to, but can't seem to get Kerberos working for local 
connections.


Here are the last few lines of my pg_hba.conf

# TYPE  DATABASE  USER  IP-ADDRESS     IP-MASK        METHOD
local   all       all                                 krb5
hostssl all       all   128.105.0.0    255.255.0.0    krb5
hostssl all       all   198.133.224.0  255.255.255.0  krb5

Here's what trying to connect got me (first locally, then via the 
network and SSL).


[EMAIL PROTECTED](1)] ~ $ psql postgres
psql: FATAL:  missing or erroneous pg_hba.conf file
HINT:  See server log for details.
[EMAIL PROTECTED](2)] ~ $ psql -h mitchell -p 5432 postgres
psql: FATAL:  missing or erroneous pg_hba.conf file
HINT:  See server log for details.

And here are the last few lines of the server log.

Jul  9 19:58:16 mitchell postgres[10730]: [2-1] LOG:  connection 
received: host=mitchell.cs.wisc.edu port=53829
Jul  9 19:58:16 mitchell postgres[10730]: [3-1] LOG:  missing field in 
file "/scratch.1/postgres/testing-8.2/pg_hba.conf" at end of line 69
Jul  9 19:58:16 mitchell postgres[10730]: [4-1] FATAL:  missing or 
erroneous pg_hba.conf file
Jul  9 19:58:16 mitchell postgres[10730]: [4-2] HINT:  See server log 
for details.
Jul  9 19:58:16 mitchell postgres[10731]: [2-1] LOG:  connection 
received: host=mitchell.cs.wisc.edu port=53830
Jul  9 19:58:16 mitchell postgres[10731]: [3-1] LOG:  missing field in 
file "/scratch.1/postgres/testing-8.2/pg_hba.conf" at end of line 69
Jul  9 19:58:16 mitchell postgres[10731]: [4-1] FATAL:  missing or 
erroneous pg_hba.conf file
Jul  9 19:58:16 mitchell postgres[10731]: [4-2] HINT:  See server log 
for details.


If I change the method to trust, it works, so it looks like krb5 isn't 
supported for local connections, at least not on the surface. I'd also 
like to get away from trust authentication because of the wonderful 
security problems it entails.
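If krb5 really can't be used over Unix-domain sockets, one stopgap I've been considering (an assumption, not a confirmed fix; it presumes OS usernames match database user names) is ident for local connections only:

```
# TYPE  DATABASE  USER  METHOD
local   all       all   ident sameuser
hostssl all       all   128.105.0.0   255.255.0.0   krb5
```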


Has anyone done this? Is this even possible? It's not a huge deal if it 
can't be done, but I'd like to know.


Peter

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] "_" in a serach pattern

2007-07-21 Thread Peter Koczan

Hi, Jessica,
1. How do I get rid of the nonstandard warning, but still using the 
index search?

You have two options.

- Turn off the warnings in the postgresql.conf file. Use this with 
caution (or don't use it at all) as it does pose a potential threat for 
SQL injections if other options aren't properly set. Read up at 
http://www.postgresql.org/docs/8.2/static/runtime-config-compatible.html 
for more detail.
- Use escape-string formatting. This is the best practice since it is 
standards-conforming and more secure. You can do it, for instance, as

   select name from table where name like 'A!_B%' escape '!';
You can escape with most characters, and it's mostly a matter of 
personal preference.
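For example, with a hypothetical table t:

```sql
-- '!' is declared as the escape character, so !_ matches a literal
-- underscore instead of the any-single-character wildcard.
SELECT name FROM t WHERE name LIKE 'A!_B%' ESCAPE '!';
-- Equivalently, E'' strings (8.0 and later) make the backslash escape
-- explicit and warning-free.
SELECT name FROM t WHERE name LIKE E'A\\_B%';
```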


2. How do I search with a wild card % in the middle of the pattern? 
Would varchar(80) or char(80) make a difference about the wild card 
search (% in the middle)?
I think that postgres is seeing the trailing whitespace on the end of 
the char type and not accounting for it in the search. In this case, 
varchar or text types would make a difference since they don't pad with 
whitespace (unless you force it in). If you plan on converting the data 
type, remember to trim the extraneous whitespace first. However, I would 
switch to varchar/text so it saves some space and saves you these 
headaches, unless there's an absolute need for fixed-length char fields. 
I use text almost exclusively for string data since it's 
arbitrary-length; I almost never have to worry about overflow or later 
administration.


If you don't want to convert data, you can use the rtrim() function 
(i.e. "select rtrim(name) from ...").


Peter







Re: [ADMIN] "_" in a serach pattern

2007-07-23 Thread Peter Koczan
The only difference is that varchar can have a length limit, text is 
always unlimited length. There's no significant performance difference 
between either. For most intents and purposes, they're equal.


varchar is better than text in that limits are already built in. You can 
always impose limits on text types using rules and domains, but at a 
slight performance hit and having to create and maintain your own rules 
and domains. If you need limits, you might consider using varchar over 
text as everything is there.


I use text since I don't want to worry about overflow. Personally, 
specifying a varchar(500) column to be used as a "big string" field 
makes little sense to me. But, if you need a limit for a display or a 
program, you probably should use varchar. Besides, if you ever need to 
convert data types, any of the postgres 8.* releases make it easy.
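The domain approach mentioned above can be sketched like this (names hypothetical):

```sql
-- A text type with a built-in 500-character cap.
CREATE DOMAIN text500 AS text CHECK (length(VALUE) <= 500);
CREATE TABLE notes (body text500);
-- Inserting a longer string now fails the domain's CHECK constraint.
```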


Peter

P.S. You might want to file a bug report about your initial problem. 
According to 
http://www.postgresql.org/docs/8.2/static/datatype-character.html


Values of type character are physically padded with spaces to the 
specified width n, and are stored and displayed that way. However, the 
padding spaces are treated as semantically insignificant. Trailing 
spaces are disregarded when comparing two values of type character, and 
they will be removed when converting a character value to one of the 
other string types. Note that trailing spaces *are* semantically 
significant in character varying and text values.


It should have ignored the spaces in the char field and didn't.

Peter

Jessica Richard wrote:

Peter,

thanks a lot for your reply.

Could you please tell me more

What is the difference between varchar and text? what is the benefit 
of each one? And is text always better than varchar, when it comes 
to a string column...


thanks

*/Peter Koczan <[EMAIL PROTECTED]>/* wrote:

Hi, Jessica,
> 1. How do I get rid of the nonstandard warning, but still using the
> index search?
You have two options.

- Turn off the warnings in the postgresql.conf file. Use this with
caution (or don't use it at all) as it does pose a potential
threat for
SQL injections if other options aren't properly set. Read up at
http://www.postgresql.org/docs/8.2/static/runtime-config-compatible.html

for more detail.
- Use escape-string formatting. This is the best practice since it is
standards-conforming and more secure. You can do it, for instance, as
select name from table where name like 'A!_B%' escape '!';
You can escape with most characters, and it's mostly a matter of
personal preference.
>
> 2. How do I search with a wild card % in the middle of the pattern?
> Would varchar(80) or char(80) make a difference about the wild card
> search (% in the middle)?
I think that postgres is seeing the trailing whitespace on the end of
the char type and not accounting for it in the search. In this case,
varchar or text types would make a difference since they don't do
whitespace padding (unless you force it in). You should remember
that if
you plan on converting the data type, trim the extraneous whitespace
first. However, I would switch to varchar/text so it saves some space
and saves you these headaches, unless there's an absolute need for
fixed-length char fields. I use text almost exclusively for string
data
since it's arbitrary-length, I almost never have to worry about
overflow
or later administration.

If you don't want to convert data, you can use the rtrim() function
(i.e. "select rtrim(name) from ...").

Peter









Re: [ADMIN] "_" in a serach pattern

2007-07-24 Thread Peter Koczan
Most database systems can figure out the size (in bytes) of a column 
rather quickly from offsets within a record. I doubt that postgres is 
any different. The only difference I can think of is that it might be 
slower to filter out trailing spaces of a char column before comparing, 
though the actual string length may be cached somewhere. Semantically, 
it should be no different. I haven't perused the source code enough to 
know for sure.


Any of the developers that patrol this list can probably answer your 
questions better, especially the question of matching or not matching 
strings of different length.


Peter

Ben Kim wrote:
The only difference is that varchar can have a length limit, text is 
always unlimited length. There's no significant performance 
difference between either. For most intents and purposes, they're equal.


Forgive me for diversion, but out of curiosity, would it make some 
difference whether strings of known length are compared or strings of 
unknown length are compared, like in join condition "using(address)"?


Or, is the actual length stored in a separate field and read first so 
strings of different lengths wouldn't even need to be matched?


Is there some reference on this subject other than the source code?


Thanks.

Ben K.
Developer
http://benix.tamu.edu






[ADMIN] stracing a connection

2007-08-01 Thread Peter Koczan
Hi all,

I'm running into a few problems with postgres connections, specifically
notify/listening connections. What program(s) should I attach to the
connection using gdb or strace in order to actually get some useful data?

i.e. I'll be using something like "strace -p [pid] [command]", what should I
use for [command]?

Thanks,
Peter


Re: [ADMIN] stracing a connection

2007-08-01 Thread Peter Koczan

Thank you Alvaro, it worked.

I got confused when I did it before since gdb couldn't figure out half 
of the elements in the backtrace, and strace kept saying "I can't 
attach", which was only because gdb was already attached to that same 
process.


Peter

Alvaro Herrera wrote:

Peter Koczan escribió:
  

Hi all,

I'm running into a few problems with postgres connections, specifically
notify/listening connections. What program(s) should I attach to the
connection using gdb or strace in order to actually get some useful data?

i.e. I'll be using something like "strace -p [pid] [command]", what should I
use for [command]?



strace -p pid
gdb -p pid

You don't need to specify the command at all.

  





Re: [ADMIN] Configure pg_hba.conf

2007-08-15 Thread Peter Koczan
A few notes. postgres looks through pg_hba.conf until it finds the
*first* entry that matches and tries to authenticate and connect using
that method. There's no fallback, so order of entries is important.

If you just want to have any IP connect using one particular
authentication method, put something like this as the last line in
your pg_hba.conf:
host    all    all    0.0.0.0   0.0.0.0   md5

Keep in mind that *anyone* from *any computer* can try to connect to
your database server.  (i.e. caveat emptor)

If you want to do something different for other subnets or restrict
access to specific users, you should put those lines in *before* that
so that postgres will see them.

Check here for more details:
http://www.postgresql.org/docs/8.2/interactive/client-authentication.html
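As a sketch of the ordering rule (addresses illustrative): a specific entry only takes effect if it appears before the catch-all, since the first match wins:

```
# TYPE  DATABASE  USER  IP-ADDRESS  IP-MASK          METHOD
host    all       all   10.1.2.0    255.255.255.0    reject
host    all       all   0.0.0.0     0.0.0.0          md5
```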

Peter

On 8/15/07, Alexander B. <[EMAIL PROTECTED]> wrote:
> Hi people,
>
> I would like to configure pg_hba for any IP.
> I have several network mask,  some times 192..., or 10, or 190
> ..., ... ...
>
> How can I configure for any IP?
>
> Thanks
> Alexander
>
>



Re: [ADMIN] XML output ?

2007-09-01 Thread Peter Koczan
> Is it possible to generate the output of SQL statements in PostgreSQL to XML
> output?
>
> If anybody has came across with this or worked on it? Or anybody has any
> Documentation for that?

Yes, you can either install the xml2 contrib module for current
versions, or if you can hold out until 8.3, there will be functions
for xml output built into postgres.

Check out http://developer.postgresql.org/pgdocs/postgres/functions-xml.html
and http://developer.postgresql.org/pgdocs/postgres/datatype-xml.html
for the upcoming 8.3 stuff. I couldn't find anything on xml2 quickly,
but I'm sure a few minutes of searching will help you find what you
need.
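For a taste of the 8.3 functions (table name hypothetical; signatures per the dev docs linked above):

```sql
-- Whole query result as a single XML document.
SELECT query_to_xml('SELECT * FROM employees', true, false, '');
-- Or build elements by hand.
SELECT xmlelement(name emp, name) FROM employees;
```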

Peter



Re: [ADMIN] geometry poligons performance and index

2007-09-04 Thread Peter Koczan
>  The results are ok, but the query is too slow. I tried to apply a GIST and
>  gtree index but performance did not improve.
>  By explaining the query I see that the where condition gets the query slow.
>  Is there a more simple way to check if two poligons have a not null
>  intersection (they overlap)?
>
>  Is there a kind of index I can use ?

If you stored the polygon geometries in the same table, you could use
a functional index (give the CREATE INDEX statement a function to
evaluate rather than a column). Check the CREATE INDEX page for
caveats. Beware that the index would grow with the square of the
number of rows if you did it like this.
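A sketch of the single-table variant, assuming a polygon column with a GiST-indexable operator class (table and column names hypothetical):

```sql
CREATE INDEX polys_geom_idx ON polys USING gist (geom);

-- && is the overlap operator; with the GiST index in place the planner
-- can use an index scan for the inner side of the self-join.
SELECT a.id, b.id
FROM polys a JOIN polys b ON a.geom && b.geom
WHERE a.id < b.id;
```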

Peter



[ADMIN] "any" to functions and function aliases

2007-09-08 Thread Peter Koczan
Hi all,

I have a question regarding functions. I'm in the process of porting
old but necessary applications to postgres, and was wondering about a
couple things regarding portability of functions.

- Below I have an error message from pg_restore on an old database
dump. Apparently this function worked properly in 7.4, but doesn't
work on 8.2. The problem is that you can't specify the arbitrary data
type "any" for a return value. Any ideas what's up? Note that there
are isnull functions with specific return types, but this may not
capture all types we need to.

--
pg_restore: [archiver (db)] Error from TOC entry 1156; 0 18175
FUNCTION isnull("any", "any") postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  plpgsql
functions cannot return type "any"
Command was: CREATE FUNCTION "isnull"("any", "any") RETURNS "any"
AS '
declare
test alias for $1;
alt alias for $2;
begin

if nullval...
--

- This function does the same thing as coalesce, is there any nice way
to make isnull an alias for coalesce without having to create a lot of
different functions for this purpose?

I know we could rewrite our stuff to use coalesce instead of isnull,
but if there's a way to rewrite less code on the client side, it would
be better.
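One possible workaround (an untested sketch): the polymorphic pseudo-type anyelement is accepted as a return type where "any" is not, so a thin SQL wrapper over coalesce might carry the old call sites:

```sql
CREATE FUNCTION "isnull"(anyelement, anyelement) RETURNS anyelement
    AS 'SELECT coalesce($1, $2)' LANGUAGE sql;

SELECT "isnull"(NULL::int, 42);  -- 42
```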

Thanks,
Peter



Re: [ADMIN] Postgresql takes more time to update

2007-10-06 Thread Peter Koczan
>
> We are using "psql 7.4.2" version of Postgresql, these days all the
> transactions on the database are taking long time to execute. We are
> planning to do "ANALYZE" command on the database. Could you please advice
> us, how much time it takes and what are the conditions we need to keep on an
> eye.
>

As an alternative to Scott's suggestion (upgrading to the newest 7.4), you
could update your postgresql installation to 8.2, or if you can wait a few
months, 8.3. There are *huge* performance gains (I recently made a similar
switch and everything is blazing fast). Please note that this will require a
dump/restore of the data and more involved testing, so only do it if you can
devote the time, money, and energy.

As far as analyze goes, you should be running ANALYZE VERBOSE, or better
yet, VACUUM ANALYZE VERBOSE (see
http://www.postgresql.org/docs/7.4/interactive/sql-vacuum.html) so you can
interpret the output. The vacuum also helps manage disk space, and this
isn't a big performance hit because it doesn't require exclusive locks
(though a VACUUM FULL would, again, read the docs). In fact, you should be
doing this regularly, daily if possible.

For me, I have a 30 GB database cluster, and vacuum/analyze takes about 3
minutes, though YMMV. You want to look for output regarding FSM pages and
relations and adjust as necessary (otherwise you're running into index
bloat).
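The relevant knobs live in postgresql.conf (values below are illustrative, not recommendations):

```
# Raise these if VACUUM VERBOSE reports more pages/relations than the
# free space map can track.
max_fsm_pages = 200000
max_fsm_relations = 1000
```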

> Can you please tell us whether we had any other commands are available on
> postgresql to increase the performance of the database and database tools
> available for Postgresql on Solaris sparc machine?
>
I think Scott covered all of this. Alternatively, you could look to
upgrading your hardware (multi-core x86 hardware is very nice), but without
knowing your needs, usage, or budget, I can't make that determination.

Hope this helps.

Peter


Re: [ADMIN] Postgresql takes more time to update

2007-10-08 Thread Peter Koczan
On 10/7/07, Suresh Gupta VG <[EMAIL PROTECTED]> wrote:
>
>  Hi Peter,
>
>
>
> Thanks for your reply and to your colleague Scott. Can you pls explain
> below sentence marked in red.
>
>
>
> -  - -- -
>
> As an alternative to Scott's suggestion (upgrading to the newest 7.4), you
> could update your postgresql installation to 8.2, or if you can wait a few
> months, 8.3. There are *huge* performance gains (I recently made a similar
> switch and everything is blazing fast). Please note that this will require
> a dump/restore of the data and more involved testing, so only do it if you
> can devote the time, money, and energy.
>
> -  -  - --
>
>
>
> Is 8.2 version is not free downloadable? What type of testing is required?
> Pls advice us.
>

Sorry about being ambiguous; 8.2 is still free, but it does have quite a few
changes from 7.4, so it will take time to update your configuration,
recompile/reinstall postgres, dump/restore your data, and test your client
applications. This will take time for the IT staff to do (and therefore
money). This is what I meant by "devoting money".

Specifically, when I upgraded, I ran into these problems:
- A primary key broke and I had to fix it before ultimately migrating
to 8.2.
- The cidr data type is more strictly checked, I had to fix a couple rows
before migrating.
- Permissions and ownership underwent slight changes.
- User and groups were conflated into roles, which necessitated a change in
my user/group management scripts.

I tested these thoroughly before making the migration final. I found most of
these problems from a simple dump/restore. If you can, dump and restore your
databases to a test server (insofar as you can) and you should be able to
fix most migration issues.

The last thing you'll want to do is test your more critical client
applications. Postgres is very good about maintaining backwards
compatibility of SQL, so most things should "just work." Still, test.

Peter


[ADMIN] separating data and logs

2007-10-12 Thread Peter Koczan
Hi all,

I'm making a lot of changes for my eventual migration to postgres 8.3,
and I was considering changing how data and logs are separated for a
few of my servers.

Currently, the OS and log data are on one disk system, and the actual
data cluster (including configs) are on the other disk system.
Currently, after creating the database cluster, I copy the pg_xlog
directory to the OS system and symlink it from the actual cluster.
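The relocation itself is just a move and a symlink, done with the server stopped. A sketch on scratch directories standing in for the real paths:

```shell
# Stand-ins for the data cluster and the fast-disk target
# (in my setup, /scratch.1/... and /scratch/... respectively).
DATA=$(mktemp -d)
FAST=$(mktemp -d)
mkdir "$DATA/pg_xlog"          # as created by initdb

# Move the WAL directory to the other disk system, then link it back.
mv "$DATA/pg_xlog" "$FAST/pg_xlog"
ln -s "$FAST/pg_xlog" "$DATA/pg_xlog"
readlink "$DATA/pg_xlog"
```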

So, I'm wondering...

- Are there any best practices, or better practices, than symlinking
this (possible config options)?

- How have other people have this set up, or recommend setting this up
(e.g. also symlinking pg_clog or other things as well)?

I searched through the archives and found a few threads regarding
this, but not quite what I am looking for.

Thanks,
Peter

P.S. For those of you wondering exactly how I have things set up,
here's a representative sample from my test server.

[EMAIL PROTECTED] testing-8.2]# pwd
/scratch.1/postgres/testing-8.2

[EMAIL PROTECTED] testing-8.2]# ls -l
total 48
drwx------ 8 postgres postgres    83 Oct  8 16:57 base
drwx------ 2 postgres postgres  4096 Oct 12 05:07 global
drwx------ 2 postgres postgres    94 Oct  9 14:28 pg_clog
-rw------- 1 postgres postgres  3841 Aug 28 14:16 pg_hba.conf
-rw------- 1 postgres postgres  1460 Aug  8 14:06 pg_ident.conf
drwx------ 4 postgres postgres    34 Aug  8 14:06 pg_multixact
drwx------ 2 postgres postgres    17 Oct 12 00:15 pg_subtrans
drwx------ 2 postgres postgres     6 Aug  8 14:06 pg_tblspc
drwx------ 2 postgres postgres     6 Aug  8 14:06 pg_twophase
-rw------- 1 postgres postgres     4 Aug  8 14:06 PG_VERSION
lrwxrwxrwx 1 root     root        37 Oct  8 16:18 pg_xlog ->
/scratch/postgres/testing-8.2/pg_xlog
-rwxr-xr-x 1 postgres postgres 15212 Oct 11 15:05 postgresql.conf
-rw------- 1 postgres postgres    96 Oct 11 15:55 postmaster.opts
-rw------- 1 postgres postgres    57 Oct 11 15:55 postmaster.pid
-rw-r--r-- 1 postgres postgres  1805 Aug  8 14:06 server.crt
-rw------- 1 postgres postgres  1675 Aug  8 14:06 server.key

[EMAIL PROTECTED] testing-8.2]# df
Filesystem   1K-blocks      Used Available Use% Mounted on
/dev/sda1      1019208    396464    570136  42% /
tmpfs           517128         0    517128   0% /dev/shm
/dev/sda8     58191096    164436  58026660   1% /scratch
/dev/sda6      1041984       560   1041424   1% /tmp
/dev/sda2     10472128   5939032   4533096  57% /usr
/dev/sda3      4061572    172092   3679832   5% /var
/dev/sda7      1019208    300372    666228  32% /var/vice/cache
/dev/sdb1     78110004  18864924  59245080  25% /scratch.1
AFS                900         0       900   0% /afs

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] convert char to varchar

2007-10-18 Thread Peter Koczan
> Is there any other way to clear trailing spaces when I restore the table?

If you're running 8.x, you can do this in place:

ALTER TABLE c1 ALTER COLUMN name varchar(20) USING rtrim(name), ALTER
COLUMN date varchar(20) USING rtrim(date);

This could take a long time if the table is large or has a lot of indexes on it.

Peter



Re: [ADMIN] convert char to varchar

2007-10-18 Thread Peter Koczan
On 10/18/07, Peter Koczan <[EMAIL PROTECTED]> wrote:
> > Is there any other way to clear trailing spaces when I restore the table?
>
> If you're running 8.x, you can do this in place:
>
> ALTER TABLE c1 ALTER COLUMN name varchar(20) USING rtrim(name), ALTER
> COLUMN date varchar(20) USING rtrim(date);
>
> This could take a long time if the table is large or has a lot of indexes on 
> it.

Oops. The command should be:

ALTER TABLE c1 ALTER COLUMN name TYPE varchar(20) USING rtrim(name),
ALTER COLUMN date TYPE varchar(20) USING rtrim(date);

I forgot the TYPE keyword...it's important.
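For instance, a quick sanity check on a throwaway table (hypothetical, just to show the effect of the USING clause) could look like:

```sql
-- Hypothetical demo table; char(20) pads values with trailing blanks.
CREATE TABLE c1 (name char(20), date char(20));
INSERT INTO c1 VALUES ('alice', '2007-10-18');

-- Convert in place, stripping the padding as part of the rewrite.
ALTER TABLE c1 ALTER COLUMN name TYPE varchar(20) USING rtrim(name),
               ALTER COLUMN date TYPE varchar(20) USING rtrim(date);

SELECT length(name) FROM c1;   -- 5, not 20: the padding is gone
```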

Peter



Re: [ADMIN] Postgresql takes more time to update

2007-10-23 Thread Peter Koczan
On 10/23/07, Suresh Gupta VG <[EMAIL PROTECTED]> wrote:
>
>  Hi Peter,
>
>
>
> 1)   We are using "psql 7.4.2" version of Postgresql, need to create a
> new schema similar to the current schema with all the objects as in the
> current schema. Do we have any command to support this operation?
>

If you upgrade to the newest release in the 7.4 branch, you should just have
to stop the database server and start the new version of postgres. If you
are upgrading to an 8.x version (8.0, 8.1, or 8.2), you'll have to
dump/restore using the method Scott mentioned.

2)   We need to shift all the data between 2 different databases in 2
> different servers. What is the best way to go either backup or copy command?
>

Again, it depends on if you're going with the newest 7.4 release or a new
8.x release. For the newest 7.4, you can simply rsync or do a straight copy
of the file system. If you're going with 8.x, you'll need to dump/restore
using the method Scott mentioned.

3)   Any equivalent command to export and import commands in Oracle/SQL.
>

I'm afraid I don't know enough about Oracle to answer that question.

Peter


Re: [ADMIN] way to turn off epochs in log_filename

2007-10-26 Thread Peter Koczan
> The docs state that if there are no %-escapes in log_filename, then it will
> add the epoch onto the end of the log filename. Is there any way to turn off
> this behavior and just use the filename specified in log_filename? I'd like
> have all the log data written to a file like postgresql.log and let
> logrotate do the rotation.

Correct me if I'm wrong, but I think you're trying to redirect stderr.
Why not just use syslog? That's what I use and it works very nicely.

Here are the relevant bits from postgresql.conf (that aren't related
to log formatting and changed from the default).
log_destination = 'syslog'
syslog_facility = 'LOCAL1'

Just make sure you pipe the syslog facility to postgresql.log, and
then you can use logrotate.
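For completeness, the glue on the syslog side might look something like this (paths, facility, and rotation policy are just examples for a Linux box running sysklogd and logrotate; adjust for your system):

```
# /etc/syslog.conf -- send postgres' facility to its own file
local1.*    -/var/log/postgresql.log

# /etc/logrotate.d/postgresql -- rotate it weekly, keep 8 old copies
/var/log/postgresql.log {
    weekly
    rotate 8
    compress
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
```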

Peter



Re: [ADMIN] Postgresql takes more time to update

2007-10-29 Thread Peter Koczan
> Here I need to know from you
>
> a)   Is there any better way to do this other than the above?
>
That's more complicated than you need to make it. What I do is something
like this (in a shell):
pg_dumpall -h host1 -p 5432 | psql -h host2 -p 5432 template1

and then delete the data as necessary via psql afterward.

You may need to tweak that command for your authentication scheme,
substitute the port the server is actually listening on for 5432, and
specify the proper hosts.

> b)   Do I need to stop all the applications which are accessing this
> live Production database? (Should I need to take an outage of my complete
> system)
>
For testing, no. pg_dump takes a snapshot of the database and doesn't add
a whole lot of overhead (at least not when I do it). You may notice some
slowdown, but nothing that should seriously disrupt your applications.

When you actually want to switch to the new version, probably. You'll have
to figure out some way to make sure there are no updates on the database
before dump/restoring if you're concerned about making sure data is
consistent. There will be downtime in this case. You can either disable
outside connections via pg_hba.conf or move the server to a different port.
You should plan it out for when the time comes.
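As an example of the pg_hba.conf route (hypothetical entries; 7.4/8.x syntax with CIDR masks), something like this would lock out everyone but a local admin during the cutover:

```
# TYPE  DATABASE  USER      CIDR-ADDRESS   METHOD
local   all       postgres                 trust
host    all       all       0.0.0.0/0      reject
```

Remember to reload or restart the server after editing pg_hba.conf, and to restore the original rules once the restore is done.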

> c)   Are there any core area where I need to take care of these
> activities?
>
I don't understand the question.

> d)   I had 10 lacs of records on "pgsql 7.4.2". We are using "SunOS
> 5.9" version of Solaris machine. How much time it can take to take a dump
> of all the data.
>
That depends greatly on your hardware and your database design. For me,
dump/restoring 10 GB of data (which becomes 50 GB once all the indexes are
created) to decent but modest hardware takes about 4 hours.

> Please advice me to carry on this process successfully and safely with out
> any conflicts. Thanks in advance.
>
Test, figure out how to do things, plan, manage any possible downtime.
That's the process in a nutshell.


Re: [ADMIN] Postgresql pg_dumpall

2007-11-03 Thread Peter Koczan
Hi Suresh, you should Cc: pgsql-admin@postgresql.org in case I don't know
the answer or someone else can help you sooner.

I am using on prod server with psql version : psql 7.4.2
>
> In another test server the version of psql is : Portions Copyright (c)
> 1996-2003, (could not get version)
>
You should figure out what version you're running. You can run psql
--version in a terminal to figure that out. Make sure you're running the
latest revision of whatever major version you choose (8.2.5, for instance).

> I could run this command on test server but couldn't run on prod server.
> May I have some help? Where commercelink is my database name. We use to
> enter into database like this "psql commercelink".
>
pg_dumpall dumps an entire database cluster, not just a database. Either
omit the database name or use pg_dump instead of pg_dumpall.

Peter


Re: [ADMIN] psql schemas

2007-11-05 Thread Peter Koczan
>
> I had a file with name "dump_live" which contains all the dump using
> "pg_dumpall > dump_live". Now, I need to restore it to another schema
> "schema2". How can I restore the data from that file to specified empty
> schema2. (This schema is created with out any objects). Pls advice me how to
> handle.
>
You dumped the entire database cluster...data, schema definitions,
user/group data, etc. You're going to have to specify options for dumping
and restoring data to do what you wish. This is well-documented in the list
archives, the website, man pages, and many, many other places. Please read
them.


Re: [ADMIN] Postgresql pg_dumpall

2007-11-05 Thread Peter Koczan
>
> *CC mailing* : I got the bounced mail as in attachment from you when I
> CCed to "pgsql-admin@postgresql.org" So, I am not doing so.
>
Like the message says, you should contact the powers that be to figure out
what happened. Please always email the mailing list, as I am *not* your
personal help desk.

> *Version* : I used psql --help, it does not work, So, I used psql -V, it
> gave me the result as follows
>
So psql and the server are different versions...that's concerning and
confusing.

> *Updation to new version *: Now we are in implementation of new module
> with tight schedule for our application. Once we complete that we will
> concentrate on this. In the mean time, can you please provide us how to
> update to latest version. We are using "Solaris 9 9/04 s9s_u7wos_09 SPARC"
> and current version of database on this server is Psql 7.4.2
>
 http://www.postgresql.org/docs/8.2/static/install-upgrading.html

> *Pg_dumpall *: pg_dumpall is working without database name on psql 7.4.2.
>


Re: [ADMIN] SHMMAX problem - how much is needed?

2007-11-13 Thread Peter Koczan
> Minimum shared mem:  140731400 bytes = 135.0 MB
> Maximum shared mem:  268435456 bytes = 256.0 MB
>
> kern.ipc.shmmin="140731400"
> kern.ipc.shmmax="268435456"
>
> Nov 13 20:46:19 neptunus postgres[669]: [1-2] DETAIL:  Failed system
> call was shmget(key=5432001, size=21725184, 03600).

The shared memory request, according to the error message, was for ~20
MB, far less than the 135 MB minimum limit you put in the kernel.
You'll need to lower the limit or raise your request size, probably by
adjusting shared_buffers, as I think you're only allocating about 2 MB
to shared_buffers. You'll want to raise that on a production machine.
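For illustration, the arithmetic behind the failure can be sketched like this (the numbers are copied from the quoted error message and sysctls; shmmin is the *minimum* allowed segment size, so setting it to ~135 MB makes any smaller request fail):

```python
# Sanity-check a shmget() request against the kernel limits quoted above.
SHMMIN = 140731400   # kern.ipc.shmmin as configured (~135 MB)
SHMMAX = 268435456   # kern.ipc.shmmax as configured (256 MB)
REQUEST = 21725184   # size= from the failed shmget() call

def request_allowed(size, shmmin=SHMMIN, shmmax=SHMMAX):
    """A shmget() of `size` bytes only succeeds within [shmmin, shmmax]."""
    return shmmin <= size <= shmmax

print(request_allowed(REQUEST))          # False: ~20.7 MB is below the minimum
print(round(REQUEST / 1024 / 1024, 1))   # 20.7
```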



[ADMIN] Converting from SQL_ASCII to UTF8

2007-11-29 Thread Peter Koczan
Hi all,

I'd like to move my database encoding from SQL_ASCII to UTF8, mostly
because "No encoding conversion will be done when the setting is
SQL_ASCII. Thus, this setting is not so much a declaration that a
specific encoding is in use, as a declaration of ignorance about the
encoding." (from
http://www.postgresql.org/docs/current/static/multibyte.html) I saw a
few threads on the list regarding this before, for instance this one
(http://archives.postgresql.org/pgsql-admin/2004-01/msg00225.php) but
there's a specific issue that I'm having that wasn't addressed.

I have some UTF-8 data in my databases, and it's causing dump/restore
to fail. Specifically, I'm seeing messages like:
pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence
for encoding "UTF8": 0xe14c65
HINT:  This error can also happen if the byte sequence does not match
the encoding expected by the server, which is controlled by
"client_encoding".
CONTEXT:  COPY applicants, line 282

Which happens even if I specify "-E UTF8" in the pg_dump command.

Here's the weirder part. If I just update the encoding by hand in
pg_database (as cautiously suggested by Tom Lane in the aforementioned
thread), it works. I doubt this will work in the general case, and I'd
like to at least offer this option for other people's databases.

I also tried using GNU recode (version 3.6) as suggested in similar
threads, but I got errors in both the plain and custom pg_dump
formats.

$ recode ascii..utf8 man.sql
recode: man.sql failed: Invalid input in step `ANSI_X3.4-1968..UTF-8'
$ recode ..utf8 man.sql
recode: man.sql failed: Invalid input in step `CHAR..UTF-8'
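One possible workaround, assuming the stray bytes are actually Latin-1 (0xE1 is "á" there, which would explain the 0xe14c65 = 0xE1 "Le" sequence in the COPY error), is to re-encode the dump file before restoring. A minimal sketch:

```python
# Hypothetical fix-up: decode the dump as Latin-1 (which accepts any byte
# value) and re-encode as UTF-8. Only correct if the stray bytes really
# are Latin-1; other encodings would need a different source charset.
def latin1_to_utf8(raw: bytes) -> bytes:
    return raw.decode("latin-1").encode("utf-8")

# The failing byte sequence from the COPY error: 0xe1 0x4c 0x65
sample = b"\xe1Le"
print(latin1_to_utf8(sample))   # b'\xc3\xa1Le' -- now valid UTF-8
```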

Any ideas?

Peter



[ADMIN] Legacy foreign keys

2007-12-10 Thread Peter Koczan
Hi all,

I am having a problem with what appear to be legacy foreign keys in
some of my databases (probably from pre-7.2 days). They won't restore
from 8.2 to 8.3, because the '' field in the called function isn't
unique when I try to restore more than one of these legacy keys.

Here's an example:
=> \d sushi.atom
...table definition, rules, etc
Triggers:
"RI_ConstraintTrigger_23125842" AFTER INSERT OR UPDATE ON
sushi.atom FROM sushi.atom_type_values NOT DEFERRABLE INITIALLY
IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_check_ins"('', 'atom', 'atom_type_values',
'UNSPECIFIED', 'type', 'atom_type')
"RI_ConstraintTrigger_23125843" AFTER INSERT OR UPDATE ON
sushi.atom FROM sushi.state_values NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins"('',
'atom', 'state_values', 'UNSPECIFIED', 'state', 'state')

While I could fix this directly, I figure this is a good opportunity to
run ALTER TABLE and get these keys into a modern form.

Does anyone know how to translate these to the modern form (including
proper CASCADE and RESTRICT values) so I can drop the triggers and add
the proper foreign keys?
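In case it helps, my best guess at the modern equivalent of the first trigger above (mapping the RI_FKey_check_ins arguments as name, FK table, PK table, match type, FK column, PK column; 'UNSPECIFIED' presumably corresponding to the default MATCH behavior) would be something like:

```sql
-- Guessed translation of RI_ConstraintTrigger_23125842. The default
-- actions are ON UPDATE NO ACTION / ON DELETE NO ACTION; if the
-- companion upd/del triggers specify CASCADE or RESTRICT, the ADD
-- FOREIGN KEY would need matching clauses.
BEGIN;
DROP TRIGGER "RI_ConstraintTrigger_23125842" ON sushi.atom;
ALTER TABLE sushi.atom
    ADD FOREIGN KEY (type) REFERENCES sushi.atom_type_values (atom_type);
COMMIT;
```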

Thanks much.
Peter



Re: [ADMIN] Legacy foreign keys

2007-12-10 Thread Peter Koczan
On Dec 10, 2007 1:50 PM, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Peter Koczan escribió:
> > Hi all,
> >
> > I am having a problem with what appear to be legacy foreign keys in
> > some of my databases (probably pre-7.2 days). They won't restore from
> > 8.2 to 8.3, because the '' field in the called function isn't
> > unique when I try to restore more than one of these legacy keys
>
> 8.3's pg_dump contains smarts to dump these things as normal FKs.  Try
> it.

Oh, that explains the problems...I was using 8.2's pg_dump. I'll try
it out. Thanks.

Peter



Re: [ADMIN] Legacy foreign keys

2007-12-11 Thread Peter Koczan
> No, pg_dump isn't involved --- the new smarts are inside the server,
> in CREATE CONSTRAINT TRIGGER.  It sounds like Peter has a case that is
> confusing that new code.  Could we see a complete example?

Sure, here's the command:
$ /s/postgresql-8.2.5/bin/pg_dump -h sensei -p 5432 -C -Fc sushi |
/s/postgresql-8.3-beta/bin/pg_restore -h mitchell -p 5434 -C -d
postgres

Here are the errors:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1775; 2620 23125843
TRIGGER RI_ConstraintTrigger_23125843 postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  trigger
"" for relation "atom" already exists
Command was: CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON atom
FROM state_values
NOT DEFERRABLE INITIALLY ...
pg_restore: [archiver (db)] Error from TOC entry 1783; 2620 23125845
TRIGGER RI_ConstraintTrigger_23125845 postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  trigger
"" for relation "results" already exists
Command was: CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON results
FROM result_values
NOT DEFERRABLE INITIA...
pg_restore: [archiver (db)] Error from TOC entry 1784; 2620 23125846
TRIGGER RI_ConstraintTrigger_23125846 postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  trigger
"" for relation "results" already exists
Command was: CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON results
FROM on_disk_values
NOT DEFERRABLE INITI...
pg_restore: [archiver (db)] Error from TOC entry 1780; 2620 23125848
TRIGGER RI_ConstraintTrigger_23125848 postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  trigger
"" for relation "offline" already exists
Command was: CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON offline
FROM offline_disposition_values
NOT DEFE...
pg_restore: [archiver (db)] Error from TOC entry 1786; 2620 23125852
TRIGGER RI_ConstraintTrigger_23125852 postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  trigger
"" for relation "wspolicy" already exists
Command was: CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON wspolicy
FROM atom_dump_policy_values
NOT DEFERR...
pg_restore: [archiver (db)] Error from TOC entry 1787; 2620 23125853
TRIGGER RI_ConstraintTrigger_23125853 postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  trigger
"" for relation "wspolicy" already exists
Command was: CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON wspolicy
FROM atom_type_values
NOT DEFERRABLE IN...
WARNING: errors ignored on restore: 6

Table definition in 8.2 server:
sushi=> \d sushi.wspolicy
   Table "sushi.wspolicy"
 Column |  Type  | Modifiers
++---
 os | text   | not null
 dump_policy| text   | not null
 atom_type  | text   | not null
 file_system| text   | not null
 est_epoch_size | bigint |
Indexes:
"su_wspolicy_pkey" PRIMARY KEY, btree (os, dump_policy, atom_type,
file_system)
Triggers:
"RI_ConstraintTrigger_23125851" AFTER INSERT OR UPDATE ON
sushi.wspolicy FROM sushi.atom_os_policy_values NOT DEFERRABLE
INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_check_ins"('', 'wspolicy', 'atom_os_policy_values',
'UNSPECIFIED', 'os', 'os')
"RI_ConstraintTrigger_23125852" AFTER INSERT OR UPDATE ON
sushi.wspolicy FROM sushi.atom_dump_policy_values NOT DEFERRABLE
INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_check_ins"('', 'wspolicy',
'atom_dump_policy_values', 'UNSPECIFIED', 'dump_policy',
'dump_policy')
"RI_ConstraintTrigger_23125853" AFTER INSERT OR UPDATE ON
sushi.wspolicy FROM sushi.atom_type_values NOT DEFERRABLE INITIALLY
IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_check_ins"('', 'wspolicy', 'atom_type_values',
'UNSPECIFIED', 'atom_type', 'atom_type')


Table definition in 8.3 beta server:
sushi=> \d sushi.wspolicy
   Table "sushi.wspolicy"
 Column |  Type  | Modifiers
++---
 os | text   | not null
 dump_policy| text   | not null
 atom_type  | text   | not null
 file_system| text   | not null
 est_epoch_size | bigint |
Indexes:
"su_wspolicy_pkey" PRIMARY KEY, btree (os, dump_policy, atom_type,
file_system)
Triggers:
"" AFTER INSERT OR UPDATE ON sushi.wspolicy FROM
sushi.atom_os_policy_values NOT DEFERRABLE INITIALLY IMMEDIATE FOR
EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins"('',
'wspolicy', 'atom_os_policy_values', 'UNSPECIFIED', 'os', 'os')

It's still trying to create the foreign keys as triggers. And the
trigger changed its name to "", which ultimately caused the
problem, since all the other triggers wanted to be renamed to
"" as well.

This happened for both the 8.3 and 8.2 pg_dump.

Peter

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Legacy foreign keys

2007-12-12 Thread Peter Koczan
On Dec 11, 2007 9:06 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Peter Koczan" <[EMAIL PROTECTED]> writes:
> >> No, pg_dump isn't involved --- the new smarts are inside the server,
> >> in CREATE CONSTRAINT TRIGGER.  It sounds like Peter has a case that is
> >> confusing that new code.  Could we see a complete example?
>
> > Sure, here's the command:
>
> [ squint... ]  Which 8.3beta are you testing, exactly?  This was dealt
> with in beta3.
>
> If it is beta3 or later, please send the output of "pg_dump -s -t wspolicy"
> from the 8.2 database.

*lightbulb* beta2. I haven't had much time to dedicate to testing new
betas yet (I was just trying to get a test 8.3 server up and running),
but I will do so. Thanks.

Peter



[ADMIN] best practices for separating data and logs

2008-01-02 Thread Peter Koczan
Hi all,

I'm planning a lot of changes for migrating to PostgreSQL 8.3, among
them being a better way of separating data and logs (transaction logs,
that is).

Currently, the OS and log data are on one disk system, and the data
(including configs) are on the other disk system. After creating the
database cluster, I copy the pg_xlog directory to the OS system and
symlink it from the database.
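As a concrete sketch of that setup (stand-in paths under /tmp; on a real cluster you would stop the server first and operate on the real data directory), the move-and-symlink dance is just:

```shell
# Demonstrate the move-and-symlink approach with stand-in directories.
rm -rf /tmp/pgxlog_demo
DATA=/tmp/pgxlog_demo/cluster    # stands in for the data directory
WAL=/tmp/pgxlog_demo/os_disk     # stands in for the other disk system
mkdir -p "$DATA/pg_xlog" "$WAL"

# Move the WAL directory to the other filesystem and link it back.
mv "$DATA/pg_xlog" "$WAL/pg_xlog"
ln -s "$WAL/pg_xlog" "$DATA/pg_xlog"

readlink "$DATA/pg_xlog"    # /tmp/pgxlog_demo/os_disk/pg_xlog
```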

So, I'm wondering...

- Are there any best practices, or better practices, than symlinking?

- How do other people have this set up, or recommend setting this up
(e.g. also moving pg_clog or other things as well)?

I searched through the archives and found a few threads regarding
separating data and WAL, but nothing regarding best practices or
specifics.

Thanks,
Peter

P.S. Here are exact details on how I have things set up.

[EMAIL PROTECTED] testing-8.2]# pwd
/scratch.1/postgres/testing-8.2

[EMAIL PROTECTED] testing-8.2]# ls -l
total 48
drwx------ 8 postgres postgres    83 Oct  8 16:57 base
drwx------ 2 postgres postgres  4096 Oct 12 05:07 global
drwx------ 2 postgres postgres    94 Oct  9 14:28 pg_clog
-rw------- 1 postgres postgres  3841 Aug 28 14:16 pg_hba.conf
-rw------- 1 postgres postgres  1460 Aug  8 14:06 pg_ident.conf
drwx------ 4 postgres postgres    34 Aug  8 14:06 pg_multixact
drwx------ 2 postgres postgres    17 Oct 12 00:15 pg_subtrans
drwx------ 2 postgres postgres     6 Aug  8 14:06 pg_tblspc
drwx------ 2 postgres postgres     6 Aug  8 14:06 pg_twophase
-rw------- 1 postgres postgres     4 Aug  8 14:06 PG_VERSION
lrwxrwxrwx 1 root     root        37 Oct  8 16:18 pg_xlog ->
/scratch/postgres/testing-8.2/pg_xlog
-rwxr-xr-x 1 postgres postgres 15212 Oct 11 15:05 postgresql.conf
-rw------- 1 postgres postgres    96 Oct 11 15:55 postmaster.opts
-rw------- 1 postgres postgres    57 Oct 11 15:55 postmaster.pid
-rw-r--r-- 1 postgres postgres  1805 Aug  8 14:06 server.crt
-rw------- 1 postgres postgres  1675 Aug  8 14:06 server.key

[EMAIL PROTECTED] testing-8.2]# df
Filesystem   1K-blocks      Used Available Use% Mounted on
/dev/sda1      1019208    434264    532336  45% /
/dev/sda8     56394768    348344  53135448   1% /scratch
/dev/sda6      1019208     34664    931936   4% /tmp
/dev/sda3      4061572    202080   3649844   6% /var
/dev/sda7      1019208    701972    264628  73% /var/vice/cache
/dev/sda2     10154020   5616776   4013124  59% /usr
tmpfs           451556         0    451556   0% /dev/shm
/dev/sdb1     78110004   9589792  68520212  13% /scratch.1
AFS                900         0       900   0% /afs



Re: [ADMIN] data transfer/migrate from win to linux

2008-01-09 Thread Peter Koczan
>  After looking for a way to transfer PostgreSQL/PostGIS data from windowsXP
> to linux (Ubuntu 7.10), I did not find it.
>  Please, does anyone know an easy way or free tool for it.

Are pg_dump/pg_restore somehow insufficient for your needs?
(http://www.postgresql.org/docs/current/interactive/backup.html)

You'll have to dump your database to a file, transfer said file to
your linux server, and then restore the database (I don't think you
can directly pipe the output, but I could be wrong). You'll also have
to make sure that PostGIS is installed on the linux side, but this
should work modulo any major version discrepancies.

If you run into an error while doing this, let us know.

Peter



[ADMIN] monitoring free space map usage without VACUUM

2008-01-14 Thread Peter Koczan
Hi all,

I'm looking into adding things to the way I monitor pgsql
installations. One of them was free space map stuff. Is there a way to
get free space information (especially used free space) without having
to screen-scrape the output of VACUUM VERBOSE? I know you can get the
allocated fsm pages from "show max_fsm_pages", but I'd like something
along the lines of "show needed_fsm_pages". Is there anything that can
do this? I don't care if the data is a little stale; I'd just like to
have a rough picture of free space map settings and usage.
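For what it's worth, the screen-scrape I'd like to avoid looks roughly like this (the wording of the summary lines is from memory of 8.x VACUUM VERBOSE and may differ between releases; the numbers are made up):

```python
import re

# Hypothetical sample of the cluster-wide summary printed at the end of
# VACUUM VERBOSE; exact wording varies by release.
SAMPLE = """INFO:  free space map contains 482 pages in 96 relations
DETAIL:  A total of 1184 page slots are in use (including overhead).
1184 page slots are required to track all free space.
Current limits are:  20000 page slots, 1000 relations, using 182 kB."""

def fsm_usage(text):
    """Scrape (page slots in use, max_fsm_pages limit) from VACUUM output."""
    used = int(re.search(r"(\d+) page slots are in use", text).group(1))
    limit = int(re.search(r"limits are:\s*(\d+) page slots", text).group(1))
    return used, limit

print(fsm_usage(SAMPLE))   # (1184, 20000)
```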

Peter



Re: [ADMIN] Legacy foreign keys

2008-01-31 Thread Peter Koczan
> > [ squint... ]  Which 8.3beta are you testing, exactly?  This was dealt
> > with in beta3.
> >
> > If it is beta3 or later, please send the output of "pg_dump -s -t wspolicy"
> > from the 8.2 database.
>
> *lightbulb* beta2. I haven't had much time to dedicate to testing new
> betas yet (I was just trying to get a test 8.3 server up and running),
> but I will do so. Thanks.

I finally found more time to test this more extensively. I'm still
running into an issue with this, although it's different this time.
There are no errors printed to the terminal, but neither the foreign
key nor the trigger gets created.

8.2 schema for the wspolicy table:
sushi=> \d sushi.wspolicy
   Table "sushi.wspolicy"
 Column |  Type  | Modifiers
++---
 os | text   | not null
 dump_policy| text   | not null
 atom_type  | text   | not null
 file_system| text   | not null
 est_epoch_size | bigint |
Indexes:
"su_wspolicy_pkey" PRIMARY KEY, btree (os, dump_policy, atom_type,
file_system)
Triggers:
"RI_ConstraintTrigger_60045810" AFTER INSERT OR UPDATE ON
sushi.wspolicy FROM sushi.atom_os_policy_values NOT DEFERRABLE
INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_check_ins"('', 'wspolicy', 'atom_os_policy_values',
'UNSPECIFIED', 'os', 'os')
"RI_ConstraintTrigger_60045811" AFTER INSERT OR UPDATE ON
sushi.wspolicy FROM sushi.atom_dump_policy_values NOT DEFERRABLE
INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_check_ins"('', 'wspolicy',
'atom_dump_policy_values', 'UNSPECIFIED', 'dump_policy',
'dump_policy')
"RI_ConstraintTrigger_60045812" AFTER INSERT OR UPDATE ON
sushi.wspolicy FROM sushi.atom_type_values NOT DEFERRABLE INITIALLY
IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
"RI_FKey_check_ins"('', 'wspolicy', 'atom_type_values',
'UNSPECIFIED', 'atom_type', 'atom_type')

8.3 schema:
sushi=> \d sushi.wspolicy
   Table "sushi.wspolicy"
 Column |  Type  | Modifiers
++---
 os | text   | not null
 dump_policy| text   | not null
 atom_type  | text   | not null
 file_system| text   | not null
 est_epoch_size | bigint |
Indexes:
"su_wspolicy_pkey" PRIMARY KEY, btree (os, dump_policy, atom_type,
file_system)

And the here's the output of pg_dump that was requested:
$ /s/postgresql/bin/pg_dump -h sensei -p 5432 sushi -s -t sushi.wspolicy
--
-- PostgreSQL database dump
--

SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;

SET search_path = sushi, pg_catalog;

SET default_tablespace = '';

SET default_with_oids = true;

--
-- Name: wspolicy; Type: TABLE; Schema: sushi; Owner: postgres; Tablespace:
--

CREATE TABLE wspolicy (
os text NOT NULL,
dump_policy text NOT NULL,
atom_type text NOT NULL,
file_system text NOT NULL,
est_epoch_size bigint
);


ALTER TABLE sushi.wspolicy OWNER TO postgres;

--
-- Name: su_wspolicy_pkey; Type: CONSTRAINT; Schema: sushi; Owner:
postgres; Tablespace:
--

ALTER TABLE ONLY wspolicy
ADD CONSTRAINT su_wspolicy_pkey PRIMARY KEY (os, dump_policy,
atom_type, file_system);


--
-- Name: RI_ConstraintTrigger_60045810; Type: TRIGGER; Schema: sushi;
Owner: postgres
--

CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON wspolicy
FROM atom_os_policy_values
NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW
EXECUTE PROCEDURE "RI_FKey_check_ins"('', 'wspolicy',
'atom_os_policy_values', 'UNSPECIFIED', 'os', 'os');


--
-- Name: RI_ConstraintTrigger_60045811; Type: TRIGGER; Schema: sushi;
Owner: postgres
--

CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON wspolicy
FROM atom_dump_policy_values
NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW
EXECUTE PROCEDURE "RI_FKey_check_ins"('', 'wspolicy',
'atom_dump_policy_values', 'UNSPECIFIED', 'dump_policy',
'dump_policy');


--
-- Name: RI_ConstraintTrigger_60045812; Type: TRIGGER; Schema: sushi;
Owner: postgres
--

CREATE CONSTRAINT TRIGGER ""
AFTER INSERT OR UPDATE ON wspolicy
FROM atom_type_values
NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW
EXECUTE PROCEDURE "RI_FKey_check_ins"('', 'wspolicy',
'atom_type_values', 'UNSPECIFIED', 'atom_type', 'atom_type');


--
-- Name: wspolicy; Type: ACL; Schema: sushi; Owner: postgres
--

REVOKE ALL ON TABLE wspolicy FROM PUBLIC;
REVOKE ALL ON TABLE wspolicy FROM postgres;
GRANT ALL ON TABLE wspolicy TO postgres;
GRANT SELECT ON TABLE wspolicy TO sushi;
GRANT ALL ON TABLE wspolicy TO staff;


--
-- PostgreSQL database dump complete
--

I did see these errors in the postgres syslog pertaining to wspolicy's
foreign keys. It's trying, but it's not working for some reason.

Jan 31 17:21:19 mitchell postgres[3152]: [42-1] NOTICE:  ignoring
incomplete trigger group for constraint "" FOREIGN KEY
wspolicy(os) REFERENCES
Jan 31 17:21:19 mitchell postgres[3152]: [42-2]  a

Re: [ADMIN] Legacy foreign keys

2008-02-01 Thread Peter Koczan
> > I finally found more time to test this more extensively. i'm still
> > running into an issue with this, although it's different this time.
> > There are no errors printed to the terminal, but neither the foreign
> > key nor the trigger get made.
>
> It looks like you are trying to restore from a selective dump (-t
> switch) that includes only one of the two tables involved in the FK
> constraint.  The backend logic for converting triggers to proper FK
> constraints can only work if it sees both tables (and their triggers)
> created during a single session.

I see what you're saying, but the weird thing is that this happened
even when I did a full dump/restore of that database. I'll try out a
few different things (like restoring the tables, then restoring the
triggers). Hopefully I'll be able to suss out the issue or at least
find a workaround.

Peter



Re: [ADMIN] Legacy foreign keys

2008-02-01 Thread Peter Koczan
> I see what you're saying, but the weird thing is that this happened
> even when I did a full dump/restore of that database. I'll try out a
> few different things (like restoring the tables, then restoring the
> triggers). Hopefully I'll be able to suss out the issue or at least
> find a workaround.

I have a theory as to what's happening. It can find both tables. The
new smarts in the 8.3 server seem to require a "complete" foreign key,
which, I'm guessing, requires additional triggers defining RESTRICT or
CASCADE behavior on UPDATEs and DELETEs.

I think the equivalent command would be:
"ALTER TABLE wspolicy ADD CONSTRAINT "" FOREIGN KEY
(dump_policy) REFERENCES atom_dump_policy_values (dump_policy);"

I restored the entire schema, and only the schema, to the 8.3 server.
Then I tried to create the trigger, and it didn't work.

$ /s/postgresql-8.3/bin/pg_dump -h sensei -p 5432 -C --schema-only
sushi | /s/postgresql-8.3/bin/psql -h mitchell -p 5434 postgres
(lots of commands, no errors).

$ /s/postgresql-8.3/bin/psql -h mitchell -p 5434 postgres
sushi=# SET search_path TO sushi;
SET

sushi=# \d wspolicy
   Table "sushi.wspolicy"
 Column |  Type  | Modifiers
++---
 os | text   | not null
 dump_policy| text   | not null
 atom_type  | text   | not null
 file_system| text   | not null
 est_epoch_size | bigint |
Indexes:
"su_wspolicy_pkey" PRIMARY KEY, btree (os, dump_policy, atom_type,
file_system)

sushi=# \d atom_os_policy_values
Table "sushi.atom_os_policy_values"
 Column | Type | Modifiers
+--+---
 os | text | not null
Indexes:
"su_atom_os_policy_values_pkey" PRIMARY KEY, btree (os)

sushi=# CREATE CONSTRAINT TRIGGER ""
   AFTER INSERT OR UPDATE ON wspolicy
   FROM atom_os_policy_values
   NOT DEFERRABLE INITIALLY IMMEDIATE
   FOR EACH ROW
   EXECUTE PROCEDURE "RI_FKey_check_ins"('', 'wspolicy',
'atom_os_policy_values', 'UNSPECIFIED', 'os', 'os');
NOTICE:  ignoring incomplete trigger group for constraint ""
FOREIGN KEY wspolicy(os) REFERENCES atom_os_policy_values(os)
DETAIL:  Found referencing table's trigger.
CREATE TRIGGER

Since there are no definitions for what happens on UPDATEs or DELETEs
(and possibly something else is missing), it's considered an
"incomplete trigger group" for the foreign key, as the NOTICE says.

The other problem is that other legacy FK triggers on that same table
have the same value for the first field, "", which will cause
name conflicts.

I think I've figured it out enough that I could translate the keys
manually if I need to. It's not a big deal to me if handling this case
isn't included in the 8.3 server, since these are relatively few and
far between, but I figured you should know in case other people run
into this problem.

One more thing. Is there any way to quickly find all the old-style
FKs? I tried looking in pg_trigger but it appears that even new,
legitimate foreign keys have triggers driving them.

Peter



Re: [ADMIN] Legacy foreign keys

2008-02-02 Thread Peter Koczan
> Yeah, it's expecting to see a set of three related triggers.  For
> instance, if I do this in a 7.0 database:
> ...
> If your original database doesn't have all three triggers, then you had
> a problem already --- the FK constraint wasn't being enforced properly.

Ah, these were from before I became the main database admin. It was
probably some previous admin or programmer who thought he could be
clever and drop the other FK-related triggers.

> In 8.3, you could look for the pg_trigger entries with zero tgconstraint
> values.  However, prior versions don't have that column; you'd have to
> look instead for entries that aren't linked to a pg_constraint entry
> by a pg_depend entry.
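
That 8.3 check can be sketched as a catalog query. This is only a
sketch: `tgconstraint` is the column Tom mentions (8.3 and later), and
the `RI_FKey` filter is my own addition to skip ordinary user triggers,
since those also have a zero `tgconstraint`.

```sql
-- Old-style FK triggers have no linked pg_constraint row, so
-- tgconstraint is 0; filtering on the built-in RI functions
-- excludes regular user-defined triggers.
SELECT tgrelid::regclass AS rel, tgname, tgfoid::regproc AS func
FROM pg_trigger
WHERE tgconstraint = 0
  AND tgfoid::regproc::text LIKE 'RI_FKey%';
```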

Thanks again, I should have enough info to fix this now.

Peter



Re: [ADMIN] SSL question

2008-02-03 Thread Peter Koczan
On Feb 1, 2008 9:08 AM, Bronson, Allan B. (Mission Systems)
<[EMAIL PROTECTED]> wrote:
> I am trying to create a SSl connection to PostGres using JDBC and am not
> sure if it is working.
>
> I read that you must have openssl on the client machine for the connection
> to take place.  I have turned ssl = on in the config file, created a cert
> and key file and placed them in the correct directory, created a truststore
> and add the ?ssl=true to the url for the client.  When I do all this I
> connect and get  data back from the db.  My confusion is when I read that I
> need openssl on the client machine to connect, since I can connect without
> openssl installed.  I do notice that if my truststore does not have the cert
> placed in it, I get an error saying missing certificate ( something like
> that ), does that mean ssl is working for the client and server?
>
> I am using Postgres 8.2 and have read that I may need to compile the
> Postgres src code with openssl for it to be enabled.  Is this correct or is
> SSL already compiled into the src code?
>
> Thanks to anyone who can provide information or point me in the right
> direction.  I am facing a tight deadline and don't have a ton of extra time
> to research this more.

I think you do need to compile PostgreSQL with SSL support (the
--with-openssl configure option). If you're getting a vendor-made
package (a .rpm or .deb, for instance), you should do a little
checking to see whether it was built with SSL support.

As far as JDBC goes, check out
http://jdbc.postgresql.org/documentation/82/ssl.html (though it sounds
like you already have). I'm trying to get SSL support working with
JDBC as well, and I've found that guide very good.

Don't forget to edit your server's pg_hba.conf to allow or force
ssl-based connections, as well.
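
Something like the following, for instance (the network range and the
md5 auth method here are placeholders for your own settings):

```
# hostssl entries match only SSL connections; plain "host" allows both.
hostssl  all  all  192.168.1.0/24  md5
```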

You should also ask your question on the JDBC list. They focus
exclusively on that and will be best able to help you.

Hope this helps.

Peter



Re: [ADMIN] Multiple postgresql serices on same windows machine..

2008-02-08 Thread Peter Koczan
On Feb 7, 2008 11:27 PM, Vishal Arora <[EMAIL PROTECTED]> wrote:
> You can have more than one instance of PostgreSQL server running on the same
> Windows machine as long as you have different DataDir for each of them. you
> can have initdb process indicating different datadir.

And you need each instance (or cluster, as it's usually called in
PostgreSQL lingo) to listen on a different port. Be careful when doing
this because multiple clusters don't play nice with resources like
shared memory.

Anoo, with the usage scenario you suggested, couldn't you just create
some roles in the database cluster and create new databases? For
example, you could:

- Create the database "app_a_db" and the role "app_a" (with a properly
encrypted password) for Application A. Application A connects to the
server as user "app_a" and does its work in "app_a_db".
- Create the database "app_b_db" and the role "app_b" (with a properly
encrypted password) for Application B. Application B connects to the
server as user "app_b" and does its work in "app_b_db".

app_a doesn't know the password of app_b, and vice-versa, and neither
knows the Windows system passwords for postgres or ServiceAccount.
Since they're working in different databases, they won't interfere
with each other's data. And, you only have to maintain one
cluster/server instead of two.
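
A minimal sketch of that setup (the passwords are placeholders, and
ENCRYPTED is the point of the parenthetical above):

```sql
-- One role and one database per application; each application
-- connects as its own role and stays in its own database.
CREATE ROLE app_a LOGIN ENCRYPTED PASSWORD 'secret-a';
CREATE DATABASE app_a_db OWNER app_a;
CREATE ROLE app_b LOGIN ENCRYPTED PASSWORD 'secret-b';
CREATE DATABASE app_b_db OWNER app_b;
```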

If it turns out that this doesn't meet your needs, then a separate
cluster is the way to go. I'm just giving you another option. Hope
this helps.

Peter



Re: [ADMIN] can't revoke users

2008-02-11 Thread Peter Koczan
On Feb 8, 2008 6:01 PM, Scott Cotton <[EMAIL PROTECTED]> wrote:
> I have a database that I dropped some users on. However, when doing dumps
> and restores, it became clear that these users were still being referenced
> by sysid. So I created new users and assigned them the sysids of the users I
> had dropped.
>
> So now I went to revoke the privileges for the new (temp) users. However,
> some tables I can revoke the users and some I cannot.
> I also tried to drop one of the new users I created and it created the same
> situation as before, where the user still shows up with priveleges, but
> instead of listing a name it lists a sysid.

I've had to deal with stuff like this before.

What exactly are the errors you're getting, and what version of
PostgreSQL are you running?

Peter



Re: [ADMIN] postgres rpms for solaris and suse

2008-02-11 Thread Peter Koczan
> > Is there some magic rpm source I am missing? Or should I just install
> > from source?
>
> Honestly, on a db only server it's not an entirely crazy idea.  About
> 2/3 of all pgsql servers I've set up have been source builds for
> various reasons.  Just create a conf.local file that's a sh script
> with all your options and put that script into source repo, and if you
> need to build a new server you just need that and a .tar.gz file.
> Dead simple.
>
> If you want multiple versions just use --prefix to put each in a
> different directory and set a lib path for each env before starting.

Where I work, I compile source and install binaries and libraries and
such into a networked file system (AFS in this case). When I upgrade,
all I have to do is compile once for each platform that I use and tell
the database server and clients that the new binaries are located at
/s/postgresql-8.2.6, for instance. Through a little hand-waving, smart
use of symlinks, and some AFS magic, everything "just works". It's a
lot easier than installing things everywhere.
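
For what it's worth, the per-version build boils down to something
like this (the prefix path and configure options here are examples
from my setup, not a recommendation):

```shell
# Build and install one version under its own prefix;
# repeat with a different --prefix for each version.
./configure --prefix=/s/postgresql-8.2.6 --with-openssl
make
make install
```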

Of course, you might not have access to a networked FS for installing
software. You could, in theory, compile PostgreSQL for each platform
you want, and then package it using a preferred format, and distribute
it that way. Sadly, I don't have experience doing this so I don't know
all the caveats involved.

Note that the databases themselves are stored locally on the database
servers, and not in AFS. Networked file systems aren't good for
databases, for both performance and reliability reasons, as has been
discussed before on these mailing lists.

Hope this helps.

Peter



Re: [ADMIN] Database Loading

2008-02-19 Thread Peter Koczan
On Feb 19, 2008 3:04 PM, Carol Walter <[EMAIL PROTECTED]> wrote:
> I went to the page.  It said the OS is Linux.  Is there anything for
> Solaris. I've got Solaris 10.

It looks like it's just some C code, a shell file, and some sql
scripts. It should compile and run fine on any PostgreSQL-supported
platform (the operative word being "should"). They probably just
listed Linux as the OS as it's likely the primary OS on which they
wrote and tested it.

Peter



Re: [ADMIN] PL/pgSQL memory consumption?

2008-02-23 Thread Peter Koczan
On Fri, Feb 22, 2008 at 8:54 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Dawid Kuroczko" <[EMAIL PROTECTED]> writes:
>  > I have recently found a nice utility called memstat (which shows how
>  > much of private
>  > (not shared)) memory each process uses.  What kind of surprised me was
>  > the amount
>  > of memory that is used by PL/pgSQL (one more reason not to provide it
>  > by default? ;-)).
>
>  Why should we put any credence whatsoever in these numbers?
>  I rather doubt that "memstat" knows anything about our palloc
>  mechanism, so I don't see how it could possibly give reliable
>  answers about how much memory one portion or another of Postgres
>  is using.
>
>  Having said that, it would be interesting to know exactly what it
>  *is* measuring.

Correct me if I'm wrong, as well, but I believe that Linux (and
probably other modern Unices) does code-sharing, meaning that separate
processes referring to the same code/libraries will refer to the same
copy in physical memory.

So, even though each process is seeing 40 MB of libpgsql, there's
likely only 1 copy in physical memory. So, the total memory
consumption in the system from this library is 40 MB, not
40*num_processes MB.

Peter



Re: [ADMIN] Newly converted to 8.2.5 and getting this error

2008-03-10 Thread Peter Koczan
On Mon, Mar 10, 2008 at 9:41 AM, Mark Steben <[EMAIL PROTECTED]> wrote:
>   We have an application that cuts and pastes from Word and is getting this
> error now that we have converted to 8.2.5.
>
>   The characters that are erroring are imbedded in the document. We would
> prefer not having to remove them
>
> ...
>
> Our client_encoding is obviously set at UTF8.  Is there a client_encoding
> value that is less restrictive?

http://www.postgresql.org/docs/8.2/static/multibyte.html

SQL_ASCII is less restrictive in that it will do no character
conversion or checking. It comes with many caveats (read the above
page). I think it also matters what the database on the server side is
expecting. I don't think you can insert invalid Unicode into a UTF8
database, even if your client encoding is SQL_ASCII. It might try its
best to convert the data, but don't quote me on that.

If you really want to insert everything and don't care if it's valid
UTF8 or not, create the database with the SQL_ASCII encoding. If you'd
like there to be some semblance of checking, you'll have to think of
some other solution.
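
If you go the SQL_ASCII route, the database has to be created that way
up front. A sketch (the database name is made up; TEMPLATE template0
is needed when the cluster's default template uses another encoding):

```sql
-- SQL_ASCII does no encoding validation or conversion;
-- bytes are stored exactly as the client sends them.
CREATE DATABASE rawdb ENCODING 'SQL_ASCII' TEMPLATE template0;
```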

Peter

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] it refuses to go down...

2008-03-25 Thread Peter Koczan
>  > did you try pg_ctl -m immediate stop   ???
>
>  I just did, and it worked.
>
>$ pg_ctl stop -m immediate
waiting for server to shut down.... done
>server stopped

I'd be careful about shutting down using "immediate" mode. It forces
the database into recovery mode.

Your problem could be that one or two connections are in a weird state
and even "fast" stopping can't kill them. Next time you have to
restart the server, you should check on the status of connections and
see if any are in a weird state. I had to deal with this recently
where the status was "notify interrupt" and I couldn't even stop fast.
I had to change some application code, but not much.

Just run "ps ax | grep post" (or whatever options you like to give ps
to show all processes) to filter out postgres processes. A connection
entry will look something like this:

[pid] ?Ss  0:00 postgres: [user] [database] [client] [status]

Peter



Re: [ADMIN] it refuses to go down...

2008-03-27 Thread Peter Koczan
>  > I had to deal with this recently where the
>  > status was "notify interrupt"
>
>  Is this a response from "pg_ctl status" command?

I'm referring to the last field of "ps ax". For instance, this line...

>13841 ?Rs   234:32 postgres: gadb canon 127.0.0.1(44837) SELECT

would have "notify interrupt" instead of "SELECT". That's the status
I'm referring to.

Peter



Re: [ADMIN] Moving a tablespace

2008-04-02 Thread Peter Koczan
On Wed, Apr 2, 2008 at 2:05 PM, Hyatt, Gordon
<[EMAIL PROTECTED]> wrote:
> I want to move an existing tablespace on an FC8 box.  I read (at
>  http://www.postgresql.org/docs/8.2/static/manage-ag-tablespaces.html)
>  that one can simply stop the server, update the symbolic link in
>  $PGDATA/pg_tblspc, start the server, then update the pg_tablespace
>  catalog to point to the new location.
>
>  Is this all there is to moving a tablespace?  It seems too easy.
>
>  The tablespace I want to move is only around 1GB in size so dropping and
>  reloading during the next maintenance cycle is not that much of an
>  issue; although, I'd prefer not to.

Couldn't you also create a new tablespace elsewhere, and use the SET
TABLESPACE subcommand of ALTER TABLE
(http://www.postgresql.org/docs/8.2/static/sql-altertable.html) to
migrate all the appropriate tables?

Granted, it's not exactly the same as moving the tablespace, but
there's less manual updating of system catalogs and potentially less
downtime. I don't have a whole lot of experience with tablespaces so I
may be missing something, but that's something you might want to try
out.
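
Roughly like this, that is (the tablespace name, table name, and path
here are all made up):

```sql
-- Create the new tablespace at the new location, then move each
-- table (and index) that lives in the old tablespace.
CREATE TABLESPACE new_space LOCATION '/new/disk/pgdata';
ALTER TABLE some_table SET TABLESPACE new_space;
ALTER INDEX some_table_pkey SET TABLESPACE new_space;
```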

Peter



[ADMIN] PG 8.3 and kerberos failures

2008-04-17 Thread Peter Koczan
Hi all,

I just upgraded one of my servers and I'm having a bit of trouble
getting some of the kerberos authentication bits working.
Specifically, any Kerberos instance run out of a v5srvtab doesn't work
so well. Using stashed tickets or normal principals worked fine.
Gritty details follow.

Peter

Here are details from the specific v5srvtab's...
[EMAIL PROTECTED] postgres]# klist -k -t /etc/v5srvtab.wsbackup
Keytab name: FILE:/etc/v5srvtab.wsbackup
KVNO Timestamp Principal
---- ----------------- ------------------------------------------
  13 12/20/07 15:56:11 wsbackup/[EMAIL PROTECTED]

[EMAIL PROTECTED] postgres]# klist -k -t /etc/v5srvtab
Keytab name: FILE:/etc/v5srvtab
KVNO Timestamp Principal
---- ----------------- ------------------------------------------
  13 12/20/07 15:56:11 host/[EMAIL PROTECTED]
  13 12/20/07 15:56:11 rcmd/[EMAIL PROTECTED]
  13 12/20/07 15:56:11 telnet/[EMAIL PROTECTED]
  13 12/20/07 15:56:11 ftp/[EMAIL PROTECTED]
  13 12/20/07 15:56:11 pop/[EMAIL PROTECTED]
  13 12/20/07 15:56:11 wsbackup/[EMAIL PROTECTED]
  12 12/20/07 15:56:11 auth/[EMAIL PROTECTED]
   8 12/20/07 15:56:11 postgres/[EMAIL PROTECTED]

Here's the error from the syslog...
Apr 17 11:18:39 sensei postgres[4486]: [3-1] LOG:  connection
received: host=mitchell.cs.wisc.edu port=56925
Apr 17 11:18:39 sensei postgres[4486]: [4-1] LOG:  unexpected Kerberos
user name received from client (received "wsbackup", expected
"wsbackup/mitchell.cs.wisc.edu")
Apr 17 11:18:39 sensei postgres[4486]: [5-1] FATAL:  Kerberos 5
authentication failed for user "wsbackup"
Apr 17 11:18:39 sensei postgres[4488]: [3-1] LOG:  connection
received: host=mitchell.cs.wisc.edu port=56926
Apr 17 11:18:39 sensei postgres[4488]: [4-1] FATAL:  no pg_hba.conf
entry for host "128.105.207.19", user "wsbackup", database "sushi",
SSL off

Here's the relevant bit from my pg_hba.conf. For compatibility, I'm
using krb5 instead of gss until everything is upgraded from 8.2. If
using gss authentication will work, please let me know.
hostssl all all 128.105.0.0/16 krb5
hostssl all all 198.133.224.0/24   krb5

And from postgresql.conf...
# - Security & Authentication -

#authentication_timeout = 1min  # 1s-600s
ssl = on# (change requires restart)
#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH'  # Allowed SSL ciphers
# (change requires restart)
#password_encryption = on
krb_realm = 'CS.WISC.EDU'
#db_user_namespace = off

# Kerberos and GSSAPI
krb_server_keyfile = '/etc/v5srvtab.postgres'   # (change requires restart)
#krb_srvname = 'postgres'   # (change requires restart, kerberos only)
#krb_server_hostname = ''   # empty string matches any keytab entry
# (change requires restart, kerberos only)
#krb_caseins_users = off# (change requires restart)



Re: [ADMIN] PG 8.3 and kerberos failures

2008-04-18 Thread Peter Koczan
On Thu, Apr 17, 2008 at 11:40 AM, Peter Koczan <[EMAIL PROTECTED]> wrote:
> Hi all,
>
>  I just upgraded one of my servers and I'm having a bit of trouble
>  getting some of the kerberos authentication bits working.
>  Specifically, any Kerberos instance run out of a v5srvtab doesn't work
>  so well. Using stashed tickets or normal principals worked fine.
>  Gritty details follow.
>
>  Peter
>
>  Here are details from the specific v5srvtab's...
>  [EMAIL PROTECTED] postgres]# klist -k -t /etc/v5srvtab.wsbackup
>  Keytab name: FILE:/etc/v5srvtab.wsbackup
>  KVNO Timestamp Principal
>  ---- ----------------- ------------------------------------------
>   13 12/20/07 15:56:11 wsbackup/[EMAIL PROTECTED]

Here's what happens when I do this (it's on a different machine but
it's the same mechanism).

[EMAIL PROTECTED] ~ $ su - wsbackup
ator(1)% kinit -f -k -t /etc/v5srvtab.wsbackup -l 1d
wsbackup/[EMAIL PROTECTED]
ator(2)% klist
Ticket cache: FILE:/var/adm/krb5/tmp/tkt/krb5cc_28528
Default principal: wsbackup/[EMAIL PROTECTED]

Valid starting ExpiresService principal
04/18/08 12:25:00  04/19/08 12:25:00  krbtgt/[EMAIL PROTECTED]


Kerberos 4 ticket cache: /tmp/tkt28528
klist: You have no tickets cached
ator(4)% /s/postgresql-8.2/bin/psql -h sensei -p 5432 postgres
Connecting to 8.2 works...

ator(5)% /s/postgresql-8.3/bin/psql -h sensei -p 5432 postgres
Connecting to 8.2 via 8.3 binaries works...

ator(6)% /s/postgresql-8.3/bin/psql -h sensei -p 49173 postgres
psql: FATAL:  no pg_hba.conf entry for host "128.105.162.36", user
"wsbackup", database "postgres", SSL off

And then it fails as above...
Apr 18 12:20:41 sensei postgres[4486]: [3-1] LOG:  connection
received: host=ator.cs.wisc.edu port=56925
Apr 18 12:20:41 sensei postgres[4486]: [4-1] LOG:  unexpected Kerberos
user name received from client (received "wsbackup", expected
"wsbackup/ator.cs.wisc.edu")
Apr 18 12:20:41 sensei postgres[4486]: [5-1] FATAL:  Kerberos 5
authentication failed for user "wsbackup"
Apr 18 12:20:41 sensei postgres[4488]: [3-1] LOG:  connection
received: host=ator.cs.wisc.edu port=56926
Apr 18 12:20:41 sensei postgres[4488]: [4-1] FATAL:  no pg_hba.conf
entry for host "128.105.162.36", user "wsbackup", database "postgres",
SSL off

And this is what syslog shows when I try GSSAPI authentication.
Apr 18 12:34:40 sensei postgres[25885]: [3-1] LOG:  connection
received: host=ator.cs.wisc.edu port=41148
Apr 18 12:34:40 sensei postgres[25885]: [4-1] FATAL:  GSSAPI
authentication failed for user "wsbackup"
Apr 18 12:34:40 sensei postgres[25886]: [3-1] LOG:  connection
received: host=ator.cs.wisc.edu port=41149
Apr 18 12:34:40 sensei postgres[25886]: [4-1] FATAL:  no pg_hba.conf
entry for host "128.105.162.36", user "wsbackup", database "postgres",
SSL off

Is this something I'm just going to have to find a way to work around
or should I file a bug report?

Peter



Re: [ADMIN] PG 8.3 and kerberos failures

2008-04-22 Thread Peter Koczan
On Fri, Apr 18, 2008 at 12:43 PM, Peter Koczan <[EMAIL PROTECTED]> wrote:
> On Thu, Apr 17, 2008 at 11:40 AM, Peter Koczan <[EMAIL PROTECTED]> wrote:
>  > Hi all,
>  >
>  >  I just upgraded one of my servers and I'm having a bit of trouble
>  >  getting some of the kerberos authentication bits working.
>  >  Specifically, any Kerberos instance run out of a v5srvtab doesn't work
>  >  so well. Using stashed tickets or normal principals worked fine.
>  >  Gritty details follow.
>  >
>  >  Peter
>  >
>  >  Here are details from the specific v5srvtab's...
>  >  [EMAIL PROTECTED] postgres]# klist -k -t /etc/v5srvtab.wsbackup
>  >  Keytab name: FILE:/etc/v5srvtab.wsbackup
>  >  KVNO Timestamp Principal
>  >  ---- ----------------- ------------------------------------------
>  >   13 12/20/07 15:56:11 wsbackup/[EMAIL PROTECTED]
>
>  Here's what happens when I do this (it's on a different machine but
>  it's the same mechanism).
>
>  [EMAIL PROTECTED] ~ $ su - wsbackup
>  ator(1)% kinit -f -k -t /etc/v5srvtab.wsbackup -l 1d
>  wsbackup/[EMAIL PROTECTED]
>  ator(2)% klist
>  Ticket cache: FILE:/var/adm/krb5/tmp/tkt/krb5cc_28528
>  Default principal: wsbackup/[EMAIL PROTECTED]
>
>  Valid starting ExpiresService principal
>  04/18/08 12:25:00  04/19/08 12:25:00  krbtgt/[EMAIL PROTECTED]
>
>
>  Kerberos 4 ticket cache: /tmp/tkt28528
>  klist: You have no tickets cached

One more thing to note: I said before that stashed tickets and login
principals "just work." Here's something that might be relevant...

[EMAIL PROTECTED] koczan $ klist
Ticket cache: FILE:/var/adm/krb5/tmp/tkt/krb5cc_3258_ZtKJNK
Default principal: [EMAIL PROTECTED]
...

[EMAIL PROTECTED] ~]# export 
KRB5CCNAME=/var/adm/krb5/tmp/stash/krb5cc_2.stash
[EMAIL PROTECTED] ~]# klist
Ticket cache: FILE:/var/adm/krb5/tmp/stash/krb5cc_2.stash
Default principal: [EMAIL PROTECTED]
...

They don't contain hostname data in the default principal like the
keytab principal does, and yet they both connect fine. There could be
something to this, but I don't know what, or how to take advantage of
it.

Peter



[ADMIN] Bus error in postgres 8.3

2008-04-28 Thread Peter Koczan
Hi all,

I'm looking to push 8.3 out this week, but I'm running into a
particularly nasty bus error. I'm not sure what's causing it as it
appears to be transient (or at least somewhat random), but I do know
that it bites on connection time and takes down the entire server with
it.

I'm going to try to force a crash on my test server to see if I can
find out anything more. Does anyone know how to get useful debugging
info at the time of the crash? I don't think I have things set to dump
core anywhere, but that and/or stack traces would be nice (especially
for child processes).

If anyone has any idea what's going on and could clue me in, that
would be excellent.

Peter

P.S. Here's the syslog transcript of the crash.

Apr 28 23:02:02 mitchell postgres[3442]: [3-1] LOG:  connection
received: host=mitchell.cs.wisc.edu port=37588
Apr 28 23:02:02 mitchell postgres[3442]: [4-1] LOG:  connection
authorized: user=postgres database=template1
Apr 28 23:02:02 mitchell postgres[3442]: [5-1] LOG:  disconnection:
session time: 0:00:00.239 user=postgres database=template1
host=mitchell.cs.wisc.edu port=37588
Apr 28 23:02:02 mitchell postgres[3444]: [3-1] LOG:  connection
received: host=mitchell.cs.wisc.edu port=37589
Apr 28 23:02:02 mitchell postgres[3444]: [4-1] LOG:  connection
authorized: user=postgres database=564testdb
Apr 28 23:02:02 mitchell postgres[461]: [3-1] LOG:  server process
(PID 3444) was terminated by signal 7: Bus error
Apr 28 23:02:02 mitchell postgres[461]: [4-1] LOG:  terminating any
other active server processes
Apr 28 23:02:02 mitchell postgres[461]: [5-1] LOG:  all server
processes terminated; reinitializing
Apr 28 23:02:02 mitchell postgres[3447]: [6-1] LOG:  connection
received: host=mitchell.cs.wisc.edu port=37590
Apr 28 23:02:02 mitchell postgres[3447]: [7-1] FATAL:  the database
system is in recovery mode
Apr 28 23:02:02 mitchell postgres[3448]: [6-1] LOG:  connection
received: host=mitchell.cs.wisc.edu port=37591
Apr 28 23:02:02 mitchell postgres[3448]: [7-1] FATAL:  the database
system is in recovery mode
Apr 28 23:02:23 mitchell postgres[461]: [6-1] LOG:  startup process
(PID 3446) was terminated by signal 7: Bus error
Apr 28 23:02:23 mitchell postgres[461]: [7-1] LOG:  aborting startup
due to startup process failure
Apr 28 23:22:15 mitchell postgres[3702]: [1-1] LOG:  could not load
root certificate file "root.crt": No such file or directory
Apr 28 23:22:15 mitchell postgres[3702]: [1-2] DETAIL:  Will not
verify client certificates.
Apr 28 23:22:15 mitchell postgres[3703]: [2-1] LOG:  database system
was interrupted; last known up at 2008-04-28 07:02:32 CDT
Apr 28 23:22:15 mitchell postgres[3703]: [3-1] LOG:  database system
was not properly shut down; automatic recovery in progress
Apr 28 23:22:15 mitchell postgres[3703]: [4-1] LOG:  record with zero
length at 6/915320A0



Re: [ADMIN] Bus error in postgres 8.3

2008-04-30 Thread Peter Koczan
On Tue, Apr 29, 2008 at 1:01 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
>  Make sure the postmaster is started under ulimit -c unlimited.
>  On a depressingly large fraction of modern platforms, daemons are
>  started with ulimit -c 0 by default :-(.  Try putting "ulimit -c unlimited"
>  into your PG init script and restarting.

> On Tue, Apr 29, 2008 at 10:35 AM, Peter Koczan <[EMAIL PROTECTED]> wrote:
>  Yeah, a stack trace and if possible, a self contained test case to
>  reproduce the bug would help. If you are using a custom build, then
>  using a debug build would help a lot too.

So far, this problem hasn't reoccurred since I first saw it, no matter
how much I pummel my test server.

I was testing some patches about the time that this happened
(including one in the libpq backend), so there may have been an ABI
mismatch which led to the bus error.

In any case, I've set the postmaster to have ulimit -c unlimited in
case this problem reoccurs. I'll also look into using a debug build in
case this happens again. But until then, I'm confident enough to keep
pushing out 8.3.
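
Concretely, the init-script change is just this (a sketch; where
exactly it goes depends on how your init script is laid out, and
$PGDATA is assumed to be set):

```shell
# Allow core dumps before the postmaster starts, so a crash
# leaves a core file we can get a stack trace from.
ulimit -c unlimited
pg_ctl start -D "$PGDATA"
```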

Thanks again.
Peter



[ADMIN] old, dead connections?

2008-05-19 Thread Peter Koczan
Hi all,

I'm looking to upgrade a server to postgres 8.3, but there's one
connection that's being problematic. It continually says it's
UPDATE'ing, but it's been at it for weeks now, and while the database
is big, it shouldn't take weeks.

Even weirder is that it looks like the client-side connection died a
long time ago.

[EMAIL PROTECTED] ~ $ ps axvw | grep post
  330 pts/0S+ 0:00  075  3820   648  0.0 grep post
14321 ?S  0:00  9  3274 1056081 20428  0.6
/s/postgresql-8.2.6/bin/postgres -D /scratch.1/postgres/fsstudy-8.2
14323 ?Ss19:31 16  3274 1055873 1036904 30.7 postgres:
writer process
14324 ?Ss 0:00  0  3274  8257  1644  0.0 postgres:
stats collector process
20459 pts/8S+ 0:01  3   227  5480  3444  0.1
/s/postgresql/bin/psql -h haddock -p 5432 fsstudy
20460 ?Ss   2295:5127  3274 1057221 1045304 31.0 postgres:
nitina fsstudy haddock.cs.wisc.edu(50957) idle
32247 ?Rs   17051:221  3274 1056653 1043964 30.9 postgres:
nitina fsstudy haddock.cs.wisc.edu(41114) UPDATE

[EMAIL PROTECTED] ~ $ netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address   Foreign Address
 State
tcp0  0 haddock.cs.wisc.:postgresql
haddock.cs.wisc.edu:50957   ESTABLISHED
tcp0208 haddock.cs.wisc.edu:ssh ator.cs.wisc.edu:46929
 ESTABLISHED
tcp0  0 haddock.cs.wisc.edu:50957
haddock.cs.wisc.:postgresql ESTABLISHED
udp0  0 localhost:33544 localhost:33544
 ESTABLISHED
...

I'm fairly certain that the connections noted in netstat refer to the
idle connection.

So, I don't want to just kill the connection right now unless I'm
certain it can be done safely. However, I'd like to deal with it as
it's a potential impediment to the upgrade and it's taking up lots of
CPU time on the machine.

Any advice, or any idea of what's going on?

Peter



Re: [ADMIN] Advice on running two database clusters on one server machine

2008-06-19 Thread Peter Koczan
On Sun, Jun 15, 2008 at 12:11 PM, Andreas Philipp
<[EMAIL PROTECTED]> wrote:
> Hi all,
>
> We are implementing a hospital information system and a human
> resources/payroll processing system on two identical dedicated servers with
> two Xeon Quad Core processors and 32 GB RAM each, both servers being attached
> via FC to a SAN, and both applications running on PostgreSQL 8.3 / CentOS 51.
>
> We are wondering about the advisability to distribute the databases between
> the two server machines, both machines acting as active production systems
> for one application each, and as warm standby servers for the other, using
> WAL shipping to a second database cluster running on another port on each of
> the two server machines.
>
> What would be the performance cost of doing so, rather than running all
> databases on one database cluster on one machine, and using the second
> machine as a warm standby server for all databases of the two applications?
>
> What other considerations should we take into account? We have no prior
> experience with PostgreSQL administration, having run our previous systems on
> Windows Servers and MS SQL Server.
>
> Thanks to all for your input!

I've experimented a bit with this. Probably the biggest thing to keep
in mind is that different clusters don't play nice with resources,
especially shared memory. You're ostensibly cutting your available
memory in half by running two clusters on one machine.

Other things to keep in mind...they can't use the same user and group
data (so roles and passwords may be different and it's a bit of work
to keep them sync'd up if you would want that). You also have to
maintain two different sets of configs, have data located at two
different places, and listen to data on two different ports. It's
about double the basic administration.

I would recommend one cluster per machine for production machines, if
it works well for you, that is.

Peter

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Database size in Postgresql

2008-06-24 Thread Peter Koczan
On Thu, Jun 19, 2008 at 7:18 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Kunal Gupta wrote:
>>
>> Hi ,
>>
>> I have a question: I am supposed to create a database with a limited size.
>> Is there any way I can set a limit on my database in PostgreSQL?
>
> Not reasonably no.
>
>> - Since in postgresql (or may be other also), when a database is created,
>> a path to a directory is given in which all data is stored.
>> - If I restrict the size of this directory then my database size will be
>> automatically controlled.
>
> And you might possibly corrupt data when you hit the barrier.

You could always regularly poll the database for the size and generate
a notice or email for you to deal with if it's above a certain
threshold.

postgres=> select pg_database_size('test') as dbsize;
  dbsize
-----------
 609975524

Or, if you need to do this in bulk,

postgres=> select datname as database, pg_database_size(datname) as
size from pg_database order by database;

I'm not sure how quickly the size statistics get updated (possibly
only after a VACUUM), but this works as a reactive policy. There's not
a whole lot you can do to enforce proactive quotas in Postgres, and as
Joshua mentioned, hitting a hard filesystem limit risks corrupting
data.
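A rough sketch of what that polling could look like from cron (the 1 GB
limit, database name, and mail command are all placeholders; this
assumes the cron user can run psql without a password):

```shell
# Reactive size check, suitable for a nightly cron job. Everything
# below is illustrative: adjust the limit, database, and notification.
LIMIT=$((1024 * 1024 * 1024))   # 1 GB in bytes

# Pure comparison helper, separated out so the logic is easy to test.
over_limit() { [ "$1" -gt "$2" ]; }

if command -v psql >/dev/null 2>&1; then
    # -A (unaligned) and -t (tuples only) make psql print just the number.
    size=$(psql -At -c "select pg_database_size('test')" postgres)
    if over_limit "$size" "$LIMIT"; then
        echo "database 'test' is $size bytes" | mail -s "size alert" root
    fi
fi
```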

Peter



Re: [ADMIN] Major upgrade advice

2008-06-28 Thread Peter Koczan
On Wed, Jun 18, 2008 at 2:00 PM, Roberto Garcia
<[EMAIL PROTECTED]> wrote:
> Just to mention one issue we had here:
>
> In 8.1 we did this to retrieve all data from a specific date:
> SELECT * FROM xxx
> WHERE  LIKE '2008-05-20%'
>
> In 8.3 we had to change to:
> SELECT * FROM xxx
> WHERE  >= CAST('2008-05-20' as timestamp) AND
>  < CAST('2008-05-21' as timestamp)

Also, don't forget that bareword numbers don't automatically cast to
text anymore.

For instance, assuming col is a text type (char, varchar, text), the query

select * from table where col = 1000

will throw an error in 8.3, whereas it worked under pre-8.3
releases. You can use either of the following to make it work:

select * from table where col = 1000::text
select * from table where col = '1000'

I've been bitten by that bug a few times.

Peter



[ADMIN] Postgres 8.3.3, GSSAPI, and Windows XP

2008-09-10 Thread Peter Koczan
Hello,

I'm trying to offer a Windows XP client for Postgres 8.3.3 to my
users. I have it built and installed and everything is working except
for GSSAPI. I'm getting the following error:

C:\temp>S:\postgresql-8.3.3\bin\psql -h mitchell -p 5432 postgres
psql: SSPI continuation error: No credentials are available in the
security package
 (8009030e)

But authenticating against a server using krb5 authentication works
perfectly fine.
C:\temp>S:\postgresql-8.3.3\bin\psql -h vero -p 5432 postgres
Welcome to psql 8.3.3, the PostgreSQL interactive terminal.
...

Which is promising, but I'm really looking to go to GSSAPI since
native krb5 is deprecated.

I made sure that a copy of gssapi32.dll is in the same directory as
psql, and it's still giving me the same errors. Any ideas what is
going on or what I can do to fix it?

Gritty details:
PostgreSQL 8.3.3
Windows XP SP2 (32-bit)
Kerberos for Windows 3.2.2
Server is running on Red Hat Enterprise Linux 5.2, Kerberos 1.6.2

Thanks much,
Peter



Re: [ADMIN] Postgres 8.3.3, GSSAPI, and Windows XP

2008-09-11 Thread Peter Koczan
On Wed, Sep 10, 2008 at 6:50 PM, Stephen Frost <[EMAIL PROTECTED]> wrote:
> Peter,
>
> * Peter Koczan ([EMAIL PROTECTED]) wrote:
>> Gritty details:
>> PostgreSQL 8.3.3
>> Windows XP SP2 (32-bit)
>> Kerberos for Windows 3.2.2
>> Server is running on Red Hat Enterprise Linux 5.2, Kerberos 1.6.2
>
> When using GSSAPI and KfW, you need to set a couple of environment
> variables for libpq to pick up on which indicate it should use GSSAPI
> rather than SSPI (standard Windows Kerberos/AD stuff):
>
> PGGSSLIB = gssapi
>
> and you'll probably also want:
>
> PGKRBSRVNAME = postgres
>
> Just set those through your computer settings, probably system wide on
> the systems in question.
>
> It defaults to upper-case POSTGRES on Windows, but since you're
> running KfW, I'm guessing you're using just regular MIT Kerberos and
> a 'postgres/[EMAIL PROTECTED]' principal.

That worked perfectly. Thanks.

Is there a way to specify gssapi as the default to use at build time,
so no environment variables would have to be set? It's fine if the
answer is no since setting one ENV variable isn't a huge deal, but
it'd be nice if it could "just work".
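For anyone finding this in the archives, the settings in question are
just the following (sh syntax shown for brevity; on Windows you'd set
them persistently via setx or the System control panel instead of
export):

```shell
# Tell libpq to use GSSAPI (via KfW's gssapi32.dll) instead of SSPI.
export PGGSSLIB=gssapi
# Match the lower-case service principal used on the MIT KDC.
export PGKRBSRVNAME=postgres
```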

Peter



[ADMIN] SSL and md5 password?

2008-10-29 Thread Peter Koczan
Hi all,

I'm looking to add an md5-based user to a postgres server, and I can't
seem to get psql to connect using SSL. Normally I use Kerberos to
connect, and that works flawlessly with SSL. I'd much prefer to use
SSL connections, so I'm wondering if there's something I'm doing
wrong.

Here's what happens with a Kerberos connection...works just fine,
connects using SSL:

$ /s/postgresql/bin/psql -h dumbo.cs.wisc.edu -p 5432 koczan_test
Welcome to psql 8.3.4, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)

koczan_test=>

And an md5 connection...

$ /s/postgresql/bin/psql -h dumbo.cs.wisc.edu -p 5432 -U test_md5 koczan_test
Password for user test_md5:
psql: FATAL:  no pg_hba.conf entry for host "128.105.162.36", user
"test_md5", database "koczan_test", SSL off

Here's my pg_hba.conf

# TYPE  DATABASE  USER      CIDR-ADDRESS        METHOD

# this part disables remote "postgres" superuser connections
hostssl all       postgres  127.0.0.1/32        krb5
hostssl all       postgres  128.105.2.15/32     krb5
hostssl all       postgres  128.105.207.19/32   krb5
hostssl all       postgres  128.105.0.0/16      reject
hostssl all       postgres  198.133.224.0/24    reject

# Testing out md5 users
hostssl all       test_md5  128.105.0.0/16      md5
hostssl all       test_md5  198.133.224.0/24    md5

# this part enables non-superuser connections
hostssl all       nobody    128.105.0.0/16      trust
hostssl all       nobody    198.133.224.0/24    trust
hostssl all       all       128.105.0.0/16      krb5
hostssl all       all       198.133.224.0/24    krb5

And "ssl = on" in my postgresql.conf.

Am I missing something, or am I just SOL when using SSL?

Thanks,
Peter



Re: [ADMIN] Recovering disk space...

2008-10-29 Thread Peter Koczan
On Fri, Oct 24, 2008 at 8:43 AM, Carol Walter <[EMAIL PROTECTED]> wrote:
> I'm running on Solaris 10 and postgreSQL 8.2.10.
>
> Thanks,
> Carol
>
> On Oct 24, 2008, at 7:47 AM, Helio Campos Mello de Andrade wrote:
>
>> On Thu, Oct 23, 2008 at 4:12 PM, Carol Walter <[EMAIL PROTECTED]> wrote:
>>>
>>> Greetings,
>>>
>>> I'm moving files in preparation of loading them into their appropriate
>>> databases.  I just ran out of space on my system before I got all the
>>> files
>>> moved.  I seem to remember reading some place that there is a command
>>> that
>>> will go clean up after a successful postgres installation.  I want to get
>>> rid of all the set up files that I no longer need.  Is there such a
>>> command
>>> or is this just wishful thinking.
>>>
>>> Carol

Are you referring to VACUUM? That only clears out and marks free space
within the database.

If it's external files, then I don't really know. Which files do you
need to "clean up"?

Peter



Re: [ADMIN] SSL and md5 password?

2008-10-29 Thread Peter Koczan
On Wed, Oct 29, 2008 at 1:49 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> According to this, you're trying to connect with SSL off, right?  I
> don't think you've got any matching lines for that in your
> pg_hba.conf.  But I could be reading that wrong.

Well, psql was trying to connect with SSL off even though postgres was
built with SSL support, the server has SSL turned on, and psql
connects with SSL by default for Kerberos connections. What confused
me was that, for some reason, md5 authentication apparently wasn't
using SSL (or at least that's what the error suggested), and the only
thing I had changed was the user and auth method.

On Wed, Oct 29, 2008 at 9:11 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> libpq tries to connect with SSL on and then falls back to SSL off.  If
>> the latter fails, the error message from the first attempt has been
>> overwritten, and only the second one is logged.
>> This has been fixed by Magnus on CVS HEAD.
>
> Right.  So in the meantime, what Peter should do is try the problem case
> after
> export PGSSLMODE=require
> so that he can see the error message that's relevant to his desired use.

After exporting that environment variable, it works. Thanks.

Peter



Re: [ADMIN] SSL and md5 password?

2008-10-30 Thread Peter Koczan
On Wed, Oct 29, 2008 at 10:53 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Peter Koczan" <[EMAIL PROTECTED]> writes:
>> On Wed, Oct 29, 2008 at 9:11 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>>> Right.  So in the meantime, what Peter should do is try the problem case
>>> after
>>> export PGSSLMODE=require
>>> so that he can see the error message that's relevant to his desired use.
>
>> After exporting that environment variable, it works. Thanks.
>
> Huh?  That makes no sense at all ... unless you had it set to "disable"
> before.

Oh, I was just mistyping the password and had to reset it. I was just
getting confused by the "SSL off" error message.

Now I feel like a fool (*turns away in shame*), though I'm glad we got
that cleared up.

Thanks again,
Peter



Re: [ADMIN] backing and restoring whole database server - how does this script look

2009-04-10 Thread Peter Koczan
On Fri, Apr 10, 2009 at 4:34 AM, Kevin Bailey
 wrote:
> We're trying to get a script to backup and restore a whole database server.
>
> So far we're looking at something like
> ...
> which is obviously a rough-cut - and the main problem we have is that a
> client has set up a DB where the owner names do not match up with the DB
> names.

You should consider adding a few different flags. Here's what I do to
dump/restore a specific database.

pg_dump -h [oldhost] -p 5432 -C -Fc [database] | \
    pg_restore -h [newhost] -p 5432 -C -d template1

The -C flag in pg_dump adds a command to create the database in the
dumpfile (and connect to it immediately afterward). -Fc dumps the
output in a custom, compressed format, but that output isn't readable
by psql.

Then, the -C flag in pg_restore says to read and execute that create
command we dumped, but we have to give it a different database to
initially connect to.

Of course, depending on how you have your network and pg_hba.conf set
up, you might not be able to use a direct pipe. If you can't, then
you'll have to add the steps of dumping the output to a file and
transferring said file before restoring. No big deal, I'm sure you can
interpolate for what you need.
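A sketch of that file-based version (host names, database name, and the
/tmp staging path are all placeholders):

```shell
# Dump to a file, copy it over, and restore, for when a direct pipe
# between the hosts isn't possible. All names below are illustrative.
dump_and_restore() {
    db=$1; oldhost=$2; newhost=$3
    pg_dump -h "$oldhost" -p 5432 -C -Fc "$db" > "$db.dump"
    scp "$db.dump" "$newhost:/tmp/$db.dump"
    ssh "$newhost" "pg_restore -p 5432 -C -d template1 /tmp/$db.dump"
}
# e.g. dump_and_restore koczan_test oldserver newserver
```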

> Just to make thigs more interesting, we have to do this for a version 8.1
> server and an older 7.4 server which has blobs in it.

If I remember correctly, pg_dumpall only allows text format dumps, and
blobs need to be dumped in non-text formats. This might have changed
since 7.4, but I couldn't find any source either way.

Hope this helps.

Peter



[ADMIN] ident authentication over tcp

2009-12-09 Thread Peter Koczan
Hi all,

I'm having some trouble configuring ident authentication for TCP
connections. I can get it to work fine for local connections, but I
can't get it to work over TCP.

Local connection:

$ psql postgres
...
postgres=>

Server Log:
Dec  9 11:06:59 mitchell postgres[27482]: [4-1] LOG:  connection
received: host=[local]
Dec  9 11:06:59 mitchell postgres[27482]: [5-1] LOG:  connection
authorized: user=koczan database=postgres
Dec  9 11:07:03 mitchell postgres[27482]: [6-1] LOG:  disconnection:
session time: 0:00:04.370 user=koczan database=postgres host=[local]

Remote connection to localhost:
$ psql -h localhost postgres
psql: FATAL:  no pg_hba.conf entry for host "127.0.0.1", user
"koczan", database "postgres", SSL off

Server log:
Dec  9 11:07:06 mitchell postgres[27494]: [4-1] LOG:  connection
received: host=localhost port=41434
Dec  9 11:07:06 mitchell postgres[27494]: [5-1] LOG:  could not
connect to Ident server at address "127.0.0.1", port 113: Connection
refused
Dec  9 11:07:06 mitchell postgres[27494]: [6-1] FATAL:  Ident
authentication failed for user "koczan"
Dec  9 11:07:06 mitchell postgres[27495]: [4-1] LOG:  connection
received: host=localhost port=41435
Dec  9 11:07:06 mitchell postgres[27495]: [5-1] FATAL:  no pg_hba.conf
entry for host "127.0.0.1", user "koczan", database "postgres", SSL
off

My pg_hba.conf file looks like this:
# TYPE  DATABASEUSERCIDR-ADDRESS   METHOD

local   all koczan  ident sameuser
hostssl all koczan  127.0.0.1/32   ident sameuser

I'm running Red Hat Enterprise 5.4, and I can't find any place where
an ident server is actually running on this host.

How is it working for local connections, and what might I be able to
do to get it to work for localhost TCP connections?

Thanks,
Peter



Re: [ADMIN] ident authentication over tcp

2009-12-09 Thread Peter Koczan
On Wed, Dec 9, 2009 at 1:25 PM,   wrote:
> This looks like you have no ident server running...
>
> I guess you must install it first in recent distros, as there is not much
> use of ident anymore. So you have to search the packages for your OS.

I found the packages. I'm still confused why local ident
authentication works even without a running ident server, though. Does
anyone know why that is?

Thanks,
Peter



Re: [ADMIN] ident authentication over tcp

2009-12-10 Thread Peter Koczan
On Wed, Dec 9, 2009 at 3:29 PM, Adam Tauno Williams
 wrote:
> On Wed, 2009-12-09 at 15:18 -0600, Peter Koczan wrote:
>> I found the packages. I'm still confused why local ident
>> authentication works even without a running ident server, though. Does
>> anyone know why that is?
>
> Does "local" really use ident?  I always assumed local-ident was just
> aliased to an NSS call.
>
> Seriously, ident is dead, obsolete, and insecure tech.  You are best
> just not using it.

I was just considering it as one possibility for a limited purpose,
and after reading up I'm going to try something else.

> If by "local ident" you mean "when connecting thru unix-domain sockets" then
> read here:
>
> http://www.postgresql.org/docs/current/static/auth-methods.html#AEN28519

Yep, that's what I meant. Thanks for the info.

Thanks again,
Peter



[ADMIN] Is regular vacuuming with autovacuum needed?

2010-08-16 Thread Peter Koczan
Hi all,

I have a database server that is experiencing some lock contention
and deadlock. It's infrequent, maybe once every two months, but
time-consuming to deal with.

The issue was that a routine VACUUM process (vacuumdb -az, called
nightly via cron) was locking a table and wasn't completing. This
server is also running autovacuum. This wasn't the source of the
deadlock, but I'm wondering if regular vacuuming is necessary or even
desirable with autovacuum running. Is there any reason for me to not
disable the vacuum cron job and just tweak autovacuum parameters (or
even just to leave the defaults)?

Details:
PostgreSQL version 8.4.4
Default vacuum/autovacuum parameters in postgresql.conf:
#vacuum_cost_delay = 0ms
#vacuum_cost_page_hit = 1
#vacuum_cost_page_miss = 10
#vacuum_cost_page_dirty = 20
#vacuum_cost_limit = 200
...
#autovacuum = on
#log_autovacuum_min_duration = -1
#autovacuum_max_workers = 3
#autovacuum_naptime = 1min
#autovacuum_vacuum_threshold = 50
#autovacuum_analyze_threshold = 50
#autovacuum_vacuum_scale_factor = 0.2
#autovacuum_analyze_scale_factor = 0.1
#autovacuum_freeze_max_age = 200000000
#autovacuum_vacuum_cost_delay = 20ms
#autovacuum_vacuum_cost_limit = -1
...
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000

Cheers,
Peter



Re: [ADMIN] Is regular vacuuming with autovacuum needed?

2010-08-16 Thread Peter Koczan
On Mon, Aug 16, 2010 at 1:34 PM, Scott Marlowe  wrote:
> If autovac is properly configured, very few, if any, PostgreSQL
> databases need routine vacuuming jobs.  However, other than sleep
> states making it run slower, autovacuum is no different than a regular
> old vacuum.  Are you sure this wasn't a vacuum full, which is almost
> never a desired operation to be regularly scheduled?

I'm sure it wasn't a full vacuum. I almost never do those and when I
do, I have to schedule downtime.

I think another process got hung up somewhere and couldn't release its
lock on the table in question, and there were several other processes
waiting. It's possible that it was just a symptom of a larger problem
at the time. I didn't have time to do a thorough analysis (and the
problem state is lost now), and what was cause vs. effect is probably
immaterial at this point.

Peter



Re: [ADMIN] Is regular vacuuming with autovacuum needed?

2010-08-16 Thread Peter Koczan
On Mon, Aug 16, 2010 at 3:01 PM, Tom Lane  wrote:
> Greg Smith  writes:
>> Tom Lane wrote:
>>> On versions where autovacuum is on by default, I would certainly
>>> recommend trying to use only autovacuum.  cron-driven vacuum still
>>> has some uses but they are corner cases.
>
>> Corner cases implies something a bit more rare than I'd consider the
>> case here.
>
> Well, it certainly has some uses, but I still think it's good advice to
> first see if autovac alone will keep you happy.
>
>> The other alternative here is to just tune autovacuum so it runs really
>> slowly, so it won't kill responsiveness during any peak period.  While
>> in theory that's the right thing to do, this is much harder to get
>> working well than what I just described.
>
> But you really have to do that *anyway*, if you're not going to turn
> autovac off.
>
> I think the case where you want to combine cron-driven vacuum with
> autovac is where, having made sure autovac is dialed down enough to not
> present performance issues, you find that it can't keep up with the
> required vacuuming.  Then you need to do some not-hobbled vacuuming
> during your maintenance windows.  Eventually probably autovac will have
> some understanding of maintenance windows built-in, but it doesn't yet.

For this application (and most of my databases), I'm fairly certain
that autovacuum will work fine on its own. I'm going to disable the
cron-vacuuming and try running with autovacuum alone.
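In case it helps anyone searching the archives later, the knobs I'll
be watching if autovacuum needs dialing down are roughly these (values
here are illustrative starting points, not recommendations):

```
# postgresql.conf (8.4) - controls autovacuum's I/O impact
autovacuum = on
autovacuum_vacuum_cost_delay = 20ms   # raise to make autovacuum run slower
autovacuum_vacuum_cost_limit = -1     # -1 means: fall back to vacuum_cost_limit
autovacuum_vacuum_scale_factor = 0.2  # lower to vacuum big tables sooner
```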

Thanks for the help,
Peter



[ADMIN] access log for 1 table

2011-09-12 Thread Peter Koczan
Is there any way to set an access log for a specific table?

What I'd like to do is have a record of all access for one table. It
would be nice to do this by raising a notice and having it show up in
syslog. I could use triggers, but I would need to log for SELECT
queries as well. Rules look like they are not the best way to go
about this, either.

Are there any ideas?

Thanks,
Peter
