Re: [HACKERS] Proposal: Trigonometric functions in degrees

2015-10-26 Thread Bear Giles

Stupid question - is sin() a call-through to the math coprocessor? It
probably only matters when doing a series of calculations (where the extra
guard bits can matter) and not when doing a simple one-time lookup, but it
might be something to consider with regard to setting a precedent.

Bear


[HACKERS] what would tar file FDW look like?

2015-08-17 Thread Bear Giles
I'm starting to work on a tar FDW as a proxy for a much more specific FDW.
(It's the 'faster to build two and toss the first away' approach - tar lets
me get the FDW stuff nailed down before attacking the more complex
container.) It could also be useful in its own right, or as the basis for a
zip file FDW.

I have figured out that in one mode the FDW mapping would take the
name of the tarball as an option and produce a relation that has all of the
metadata for the contained files - filename, size, owner, timestamp, etc. I
can use the same approach I used for the /etc/passwd FDW for that.

(BTW the current version is at https://github.com/beargiles/passwd-fdw.
It's skimpy on automated tests until I can figure out how to handle the
user mapping but it works.)

The problem is the second mode where I pull a single file out of the FDW.
I've identified three approaches so far:

1. A FDW mapping specific to each file. It would take the name of the
tarfile and the embedded file. Cleanest in some ways but it would be a real
pain if you're reading a tarball dynamically.

2. A user-defined function that takes the name of the tarball and file and
returns a blob. This is the traditional approach but why bother with a FDW
then? It also brings up access control issues since it requires disclosure
of the tarball name to the user. A FDW could hide that.

3. A user-defined function that takes a tar FDW and the name of a file and
returns a blob. I think this is the best approach but I don't know if I can
specify a FDW as a parameter or how to access it.

I've skimmed the existing list of FDWs but didn't find anything that can
serve as a model. The foreign DB wrappers are closest but, again, they aren't
designed for dynamic use where you want to do something with every file in
an archive / table in a foreign DB.

Is there an obvious approach? Or is it simply a bad match for FDW and
should it be two standard UDFs?  (One returns the metadata, the second
returns the specific file.)

Thanks,

Bear


Re: [HACKERS] what would tar file FDW look like?

2015-08-17 Thread Bear Giles
I've written readers for both from scratch. Tar isn't that bad since it's
blocked - you read the header, skip forward N blocks, continue. The hardest
part is setting up the decompression libraries if you want to support
tar.gz or tar.bz2 files.

Zip files are more complex. You have (iirc) 5 control blocks - start of
archive, start of file, end of file, start of index, end of archive, and
the information in the control block is pretty limited. That's not a huge
burden since there's support for extensions for things like the unix file
metadata. One complication is that you need to support compression from the
start.

Zip files support two types of encryption. There's a really weak version
that almost nobody supports and a much stronger modern version that's
subject to license restrictions.  (Some people use the weak version on
embedded systems because of legal requirements to /do something/, no matter
how lame.)

There are third-party libraries, of course, but that introduces
dependencies. Both formats are simple enough to write from scratch.

I guess my bigger question is if there's an interest in either or both for
real use. I'm doing this as an exercise but am willing to contrib the
code if there's a general interest in it.

(BTW the more complex object I'm working on is the .p12 keystore for
digital certificates and private keys. We have everything we need in the
openssl library so there's no additional third-party dependencies. I have a
minimal FDW for the digital certificate itself and am now working on a way
to access keys stored in a standard format on the filesystem instead of in
the database itself. A natural fit is a specialized archive FDW. Unlike tar
and zip it will have two payloads, the digital certificate and the
(optionally encrypted) private key. It has searchable metadata, e.g.,
finding all records with a specific subject.)

Bear

On Mon, Aug 17, 2015 at 8:29 AM, Greg Stark st...@mit.edu wrote:

 On Mon, Aug 17, 2015 at 3:14 PM, Bear Giles bgi...@coyotesong.com wrote:
  I'm starting to work on a tar FDW as a proxy for a much more specific FDW.
  (It's the 'faster to build two and toss the first away' approach - tar lets
  me get the FDW stuff nailed down before attacking the more complex
  container.) It could also be useful in its own right, or as the basis for a
  zip file FDW.

 Hm. tar may be a bad fit where zip may be much easier. Tar has no
 index or table of contents. You have to scan the entire file to find
 all the members. IIRC Zip does have a table of contents at the end of
 the file.

 The most efficient way to process a tar file is to describe exactly
 what you want to happen with each member and then process it linearly
 from start to end (or until you've found the members you're looking
 for). Trying to return meta info and then go looking for individual
 members will be quite slow and have a large startup cost.


 --
 greg



[HACKERS] FDW question - how to identify columns to populate in response?

2015-08-14 Thread Bear Giles
Hi, I'm working on a FDW for the unix/linux user database - think
/etc/passwd and /etc/group although I'm actually using system calls that
could be quietly redirected to LDAP or other backends. It's easy to create
the FDW and a table associated with it, something like

CREATE TABLE passwd (
   name text,
   passwd text,
   uid int,
   ...

The problem is the user could decide to reorder or remove columns so I
can't make the assumption that values[0] is always going to be the username.

I have a solution that requires looking at the rel, extracting the atts,
and then doing a loop where I check the attname against all possible values
for each column. Anything that doesn't match is set to null. This isn't too
bad here but it would be a pain if there are many columns.

Is there a cleaner way? I've looked at a number of other FDW
implementations but they are generally mapping columns to columns (so it's
a short bit of lookup code inside the loop), not copying data provided by a
system call.

Thanks,

Bear


Re: SSL/TLS support (Was: Re: [HACKERS] 7.3.1 stamped)

2002-12-28 Thread Bear Giles
Bruce Momjian wrote:

Marc G. Fournier wrote:


My suggestion would be to eventually phase out ssl2 in favor of ssl3 or
tls.  And, as we are phasing it out, make it an opt-in thing, where the
dba has to turn on ssl2 KNOWING he is turning on a flawed protocol.


That was sort of my point --- if we allow SSLv2 in the server, are we
open to any security problems?  Maybe not.  I just don't know.


There are some weaknesses in SSLv2 that were fixed in SSLv3, but 
it takes a knowledgeable attacker to exploit them.  Anyone who is 
seriously concerned can easily change the startup code in both 
client and server and migrate to TLSv1.  We kept the current 
approach solely for backward compatibility.

Bear


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] First cut at SSL documentation

2002-06-13 Thread Bear Giles

 Sorry, there is a newer version.  I will use that one.

You may want to hold off on that - I've been busy lately and haven't had
a chance to revisit the documentation or change some of the literal constants
to numeric constants, but it's been on my to do list.

The latter didn't affect the other patches since I planned on doing a
latter-day patch anyway, but the documentation may need some big changes
to emphasize the rule: use SSH tunnels if you just want to prevent
eavesdropping, use SSL directly if you need to firmly establish the
identity of the server or clients.

(And sorry about responding via the lists, but your mail server doesn't
like to talk to cable modem users.)

Bear




[HACKERS] Security policy

2002-05-23 Thread Bear Giles

It occurs to me that part of the problem with wasted and incomplete
efforts can be fixed with a clear security policy.  The part that
I'm interested in is provided below, in a very truncated form.


Secure Communications Channels
------------------------------

Secure communications channels can be provided with Kerberos, GSS-API,
and SSL, plus Unix sockets for local communications.  The goals of the
secure communications channel are:

* Confidentiality

  Confidentiality means that the data is kept secret from all third
  parties.

- Perfect Forward Secrecy (PFS)

  Perfect Forward Secrecy is the logical endpoint of confidentiality.
  It is a form of confidentiality where the data remains secret even
  if the static private keys used by the server (and client) are
  exposed.

* Message Integrity

  Message integrity means that the message received is identical to
  the message sent.  It is not possible for third parties to add, 
  delete, or modify data midstream.

* Endpoint Authentication

  Endpoint Authentication means that the identity of the other party
  can be firmly established.

- Mutual Authentication

  Mutual Authentication is endpoint authentication by both parties.

- Strong Authentication

  Strong Authentication means that the party has authenticated themselves
  with at least two of the following: something they know (e.g., password),
  something they have (e.g., private key, smart card), or something they
  are (e.g., biometrics).

A mechanism to map channel-level authentication (Kerberos principal
name, SSL distinguished name) to PostgreSQL userids is desirable,
but not required.

Initial support for all new protocols shall always include, at a 
minimum, all features present in the sample client and server provided
with the respective toolkit.  Any omissions must be clearly documented
and justified.

The development team shall maintain a matrix cross-referencing each
protocol and the goals satisfied.  Any omissions from normal practice
for each protocol shall be clearly documented and provided to users.

                        | L-SSL | L-KRB | SSL | GSS-API | SASL | Unix
------------------------+-------+-------+-----+---------+------+------
Confidentiality         |   Y   |   N   |  Y  |    Y    |   Y  |   Y
PFS                     |   N   |   N   |  Y  |    ?    |   ?  |   Y
Message Integrity       |   N   |   N   |  Y  |    Y    |   Y  |   Y
Authentication (server) |  N(1) |  ?(2) |  Y  |    Y    |   Y  |   Y
Authentication (mutual) |   N   |  ?(2) |  Y  |    Y    |   Y  |   Y
------------------------+-------+-------+-----+---------+------+------

  L-SSL     legacy SSL
  L-KRB     legacy Kerberos 4 and 5
  SSL       current SSL patches
  GSS-API   GSS-API (Kerberos 5 reimplementation)
  SASL      SASL with appropriate plug-ins
  Unix      Unix sockets

(1) a server certificate is required, but it is not verified by the
client.

(2) PostgreSQL provides some level of authentication via Kerberos 4 and
Kerberos 5, but it may not have been properly implemented.


As I mentioned in an earlier post on -patches, I'm not sure that the
current Kerberos implementation is what people think it is.  I may
develop a GSS-API patch for evaluation purposes, but it will almost
certainly require a different port.

Bear




[HACKERS] Security policy

2002-05-22 Thread Bear Giles

I sent this earlier, but accidentally sent it from the wrong account
and it's been sitting in the pending spool all day.

Since writing it, I've sketched in server-side GSS-API and SASL
support for my prior patches.  The objective isn't to immediately
support either, but to ensure that future support can be added with
minimal effort.



It occurs to me that part of the problem with wasted and incomplete
efforts can be fixed with a clear security policy.  The part that
I'm interested in is provided below, in a very truncated form.


Secure Communications Channels
------------------------------

Secure communications channels can be provided with Kerberos, GSS-API,
and SSL, plus Unix sockets for local communications.  The goals of the
secure communications channel are:

* Confidentiality

  Confidentiality means that the data is kept secret from all third
  parties.

- Perfect Forward Secrecy (PFS)

  Perfect Forward Secrecy is the logical endpoint of confidentiality.
  It is a form of confidentiality where the data remains secret even
  if the static private keys used by the server (and client) are
  exposed.

* Message Integrity

  Message integrity means that the message received is identical to
  the message sent.  It is not possible for third parties to add, 
  delete, or modify data midstream.

* Endpoint Authentication

  Endpoint Authentication means that the identity of the other party
  can be firmly established.

- Mutual Authentication

  Mutual Authentication is endpoint authentication by both parties.

- Strong Authentication

  Strong Authentication means that the party has authenticated themselves
  with at least two of the following: something they know (e.g., password),
  something they have (e.g., private key, smart card), or something they
  are (e.g., biometrics).

A mechanism to map channel-level authentication (Kerberos principal
name, SSL distinguished name) to PostgreSQL userids is desirable,
but not required.

Initial support for all new protocols shall always include, at a 
minimum, all features present in the sample client and server provided
with the respective toolkit.  Any omissions must be clearly documented
and justified.

The development team shall maintain a matrix cross-referencing each
protocol and the goals satisfied.  Any omissions from normal practice
for each protocol shall be clearly documented and provided to users.

                        | L-SSL | L-KRB | SSL | GSS-API | SASL | Unix
------------------------+-------+-------+-----+---------+------+------
Confidentiality         |   Y   |   N   |  Y  |    Y    |   Y  |   Y
PFS                     |   N   |   N   |  Y  |    ?    |   ?  |   Y
Message Integrity       |   N   |   N   |  Y  |    Y    |   Y  |   Y
Authentication (server) |  N(1) |  ?(2) |  Y  |    Y    |   Y  |   Y
Authentication (mutual) |   N   |  ?(2) |  Y  |    Y    |   Y  |   Y
------------------------+-------+-------+-----+---------+------+------

  L-SSL     legacy SSL
  L-KRB     legacy Kerberos 4 and 5
  SSL       current SSL patches
  GSS-API   GSS-API (Kerberos 5 reimplementation)
  SASL      SASL with appropriate plug-ins
  Unix      Unix sockets

(1) a server certificate is required, but it is not verified by the
client.

(2) PostgreSQL provides some level of authentication via Kerberos 4 and
Kerberos 5, but it may not have been properly implemented.


As I mentioned in an earlier post on -patches, I'm not sure that the
current Kerberos implementation is what people think it is.  I may
develop a GSS-API patch for evaluation purposes, but it will almost
certainly require a different port.

Bear




[HACKERS] 2nd cut at SSL documentation

2002-05-21 Thread Bear Giles

A second cut at SSL documentation



SSL Support in PostgreSQL
=========================

Who needs it?
-------------

The sites that require SSL fall into one (or more) of several broad
categories.

*) They have insecure networks. 

   Examples of insecure networks are anyone in a corporate hotel,
   any network with 802.11b wireless access points (WAP) (in 2002,
   this protocol has many well-known security weaknesses and even
   'gold' connections can be broken within 8 hours), or anyone 
   accessing their database over the internet.

   These sites need a Virtual Private Network (VPN), and either
   SSH tunnels or direct SSL connections can be used.

*) They are storing extremely sensitive information.

   An example of extremely sensitive information is logs from
   network intrusion detection systems.  This information *must*
   be fully encrypted between front- and back-end since an attacker
   is presumably sniffing all traffic within the VPN, and if they
   learn that you know what they are doing they may attempt to
   cover their tracks with a quick 'rm -rf /' and 'dropdb'

   In the extreme case, the contents of the database itself may
   be encrypted with either the crypt package (which provides
   symmetrical encryption of the records) or the PKIX package
   (which provides public-key encryption of the records).

*) They are storing information which is considered confidential
   by custom, law or regulation.

   This includes all records held by your doctor, lawyer, accountant,
   etc.  In these cases, the motivation for using encryption is not
   a conscious evaluation of risk, but the fear of liability for 
   'failure to perform due diligence' if encryption is available but
   unused and an attacker gains unauthorized access to the harm of
   others.

*) They have 'road warriors.'

   This includes all sites where people need to have direct access
   to the database (not through a proxy such as a secure web page)
   from changing remote addresses.  Client certificates provide a
   clean way to grant this access without opening up the database
   to the world.

Who does not need it?
-

It's at least as important to know who does not need SSL as it
is to know who does.  Sites that do not need SSL fall into several
broad categories.

*) Access is limited to the Unix socket.

*) Access is limited to a physically secure network.

   Physically secure networks are common in the clusters and
   colocation sites - all database traffic is restricted to dedicated
   NIC cards and hubs, and all servers and cabling are maintained in
   locked cabinets.


Using SSH/OpenSSH as a Virtual Private Network (VPN)


SSH and OpenSSH can be used to construct a Virtual Private Network
(VPN) to provide confidentiality of PostgreSQL communications.  
These tunnels are widely available and fairly well understood, but 
do not provide any application-level authentication information.

To set up an SSH/OpenSSH tunnel, a shell account for each
user should be set up on the database server.  It is acceptable
for the shell program to be bogus (e.g., /bin/false) if the
tunnel is set up in a way that avoids launching a remote shell.

On each client system the $HOME/.ssh/config file should contain
an additional line similar to

 LocalForward  psql.example.com:5432

(replacing psql.example.com with the name of your database server).
By putting this line in the configuration file, instead of specifying
it on the command line, the tunnel will be created whenever a 
connection is made to the remote system.

The psql(1) client (or any client) should be wrapped with a script
that establishes an SSH tunnel when the program is launched:

  #!/bin/sh
  HOST=psql.example.com
  IDENTITY=$HOME/.ssh/identity.psql
  /usr/bin/ssh -1 -i $IDENTITY -n $HOST 'sleep 60' & \
/usr/bin/psql -h $HOST -p  $1

Alternately, the system could run a daemon that establishes and maintains
the tunnel.  This is preferable when multiple users need to establish
similar tunnels to the same remote site.

Unfortunately, there are many potential drawbacks to SSH tunnels:

*) the SSH implementation or protocol may be flawed.  Serious problems
   are discovered about once every 18 to 24 months.

*) the systems may be misconfigured by accident.

*) the database server must provide shell accounts for all users
   needing access.  This can be a chore to maintain, especially if
   all other user access should be denied.

*) neither the front- nor back-end can determine the level of
   encryption provided by the SSH tunnel - or even whether an
   SSH tunnel is in use.  This prevents security-aware clients
   from refusing any connection with unacceptably weak encryption.

*) neither the front- nor back-end can get any authentication
   information pertaining to the SSH tunnel.

Bottom line: if you just need a VPN, SSH tunnels are a good solution.
But if you explicitly need a secure 

[HACKERS] revised SSL patches submitted

2002-05-20 Thread Bear Giles

Another set of SSL patches have been sent to the patches list.
(No idea when they'll get through the system.)  This is a new
baseline set of patches that fix many of the problems identified
earlier and also add a number of security patches.

N.B., some of these changes are visible to the user, but are
common practice for SSL code.  The most notable is a minimal
certificate validation that requires that certs be current
(no more expired certs) and that the cert's common name match
the hostname used with contacting the backend.

This means that a cert containing a common name such as
'eris.example.com' *must* be accessed via

  psql -h eris.example.com ...

not

  psql -h eris ...

A future patch can relax this so that the common name can
resolve to the address returned by getpeername(2).

Client certs are optional, but if they exist they are expected
in the user's home directory, under the .postgresql directory.
Encrypted private keys are not yet supported.

Bear




Re: [HACKERS] SASL, compression?

2002-05-20 Thread Bear Giles

 What are the benefits of SASL+Postgresql compared to Postgresql over plain SSL?
 
SASL is orthogonal to SSL.  SASL is an application-layer library
and can be run over either regular sockets or SSL.  However there
are SASL hooks to tell it that it's running over a secure channel.

The anticipated benefit of SASL is that it would replace all of the
current authentication code with a set of standard plugins.  The 
authority problem would be reduced to a simple text mapping.

(BTW, I didn't make it clear earlier but authentication is figuring
out who the other party is, authority is figuring out what they're
entitled to do.)

PAM is *not* a solution to network authentication since it was never
designed for it.  One well-known nightmare scenario is the Kerberos
PAM modules - they were designed to be used by local users logging
onto a virtual terminal, to eliminate the need to modify login to
acquire Kerberos credentials directly.  But in the wild they've been 
seen used with Apache mod_pam modules to authenticate Kerberos
users.  Since they require the Kerberos principal and password to be 
transmitted across the wire in the clear, they're major security holes
when used this way.

 Coz Postgresql already supports SSL right?

Postgresql minimally supports SSL.  It contains some significant
coding errors, poor initialization, and no support for client
certificates.  My recent patches should go a long way towards 
fixing that.

Bear




Re: [HACKERS] SASL, compression?

2002-05-20 Thread Bear Giles

 I can see the benefit of SASL as a standard in public exposed network 
 services like email servers (SMTP, POP, IMAP), where you can support 
 different email clients which themselves may or may not support SASL and 
 may use different SASL libraries.
 
 But for Postgresql - communications is mainly between internal db clients 
 (which use the pgsql libraries) and postmaster.
 
Remember that the current authentication code requires that each
database specify the method(s) used to access it.  With SASL, it
should be possible to specify generic sensitivities (e.g., 'public,'
'low,' 'high' and 'extreme') and let the systems negotiate any
authentication method that satisfies the properties indicated by
these sensitivities.  Including local authentication methods that
we've never heard of.

 Would the SASL code allow JDBC, Perl DBI+DBD postgresql clients support 
 SASL (and encryption) seamlessly? If it would then that's great. If it's 
 just psql then not so great.

Some clients can allow the user to specify a mechanism, but others
will require the client to autonegotiate the authentication.  Exactly
how we'll handle this is one of the open questions.

 Because replacing current authentication code doesn't seem as obvious a 
 benefit to me. The plugin thing sounds useful tho - modular. But would the 
 simple text mapping for authorisation be as simple when UserX is only 
 supposed to have SELECT access to certain tables?

The authorization question HBA deals with is mapping Kerberos principals
to pgusers.  That level of authorization is handled by the database,
not postmaster.

 Cool. WRT the patch which requires strict matches on server hostnames - are 
 wildcards allowed or is there an option for the client to ignore/loosen 
 things a bit?

A lot of CAs won't sign certs with wildcards.  They aren't
necessary since you can set up the nameserver to provide aliasing.
It's also possible to add an arbitrary number of subjectAltName extensions,
so you could always explicitly name all systems if you wanted.

Adding reverse DNS lookup and support for subjectAltName extensions
is on my todo list, but was a lower priority than getting the big
patch out for feedback.

As for loosening the cert verification checks, I think a better
solution is providing a tool that makes it easy to create good certs.
It's too easy to come up with man-in-the-middle attacks if it's easy
to disable these checks.

As a compromise, I think it may be possible to run the server with
*no* cert.  This option would be used by sites that only want an
encrypted channel, and sites that want authentication will make the
commitment to creating valid certs.

Bear




[HACKERS] First cut at mkcert

2002-05-20 Thread Bear Giles

Attached is the first cut at mkcert.sh, a tool to create PostgreSQL
server certificates.  It also sets up a directory suitable for the
OpenSSL CA tool, something that can be used to sign client certs.

The root cert should be added to the backend SSL cert verification
tools, and copied to user's .postgresql directory so the client
can verify the server cert.  This one root cert can be used for
multiple server certs in addition to all client certs. 

Also, this script sets up DSA keys/certs.  With ephemeral DH keys the
server (and client) keys are only used to sign the ephemeral keys,
so you can use DSA keys.  Without ephemeral keys you would need to
use RSA keys since those keys are used for encryption in addition
to signing.

Some predictable changes:

1) the root key should be encrypted, since it isn't necessary for
   the system to boot.  (Extreme case: the root key should be
   kept off the hard disk, perhaps in a smart cart.)

2) the 'openssl.conf' file could be split into 'root.conf' and
   'server.conf' files so the prompts can be a bit more suggestive.
   There should also be a 'client.conf' file for client certs,
   and it should be copied to /etc/postgresql and visible to clients.

   (To avoid the hassles of requiring clients have the OpenSSL
   tools bundled, pgkeygen should be a binary program instead of
   a script.)

3) there should be a sample domain-component config file in addition
   to the geopolitical one.  That gives DNs like

 [EMAIL PROTECTED]

   instead of

 [EMAIL PROTECTED]

Bear


#!/bin/sh -e

# === FIRST DRAFT ===

#
# this script creates the root (CA) certificate and
# server cert for PostgreSQL.  The OpenSSL applications
# must be in the path.
#

if [ $PG_HOME. = . ]
then
  /bin/echo You must define \$PG_HOME before running this program.
  exit 1
fi

#
# generate DSA parameters file used for keys, if one does
# not already exist.
#
if [ ! -s $PG_HOME/dsa1024.pem ]
then
  openssl dsaparam -out $PG_HOME/dsa1024.pem 1024
fi

#
# generate CA directory tree and contents, if it does not already
# exist.
#
if [ ! -d $PG_HOME/CA ]
then
  /bin/mkdir $PG_HOME/CA;
fi
if [ ! -d $PG_HOME/CA/certs ]
then
  /bin/mkdir $PG_HOME/CA/certs
fi
if [ ! -d $PG_HOME/CA/crl ]
then
  /bin/mkdir $PG_HOME/CA/crl
fi
if [ ! -d $PG_HOME/CA/newcerts ]
then
  /bin/mkdir $PG_HOME/CA/newcerts
fi
if [ ! -d $PG_HOME/CA/private ]
then
  /bin/mkdir $PG_HOME/CA/private
  /bin/chmod 0700 $PG_HOME/CA/private
fi
if [ ! -f $PG_HOME/CA/index.txt ]
then
  /usr/bin/touch $PG_HOME/CA/index.txt
fi
if [ ! -f $PG_HOME/CA/serial ]
then
  /bin/echo 01 > $PG_HOME/CA/serial
fi

#
# generate root key, if one does not already exist.
#
if [ ! -s $PG_HOME/CA/private/cakey.pem ]
then
  openssl gendsa -out $PG_HOME/CA/private/cakey.pem $PG_HOME/dsa1024.pem
  /bin/chmod 0700 $PG_HOME/CA/private/cakey.pem
fi

#
# generate self-signed root certificate, if one does not already exist
#
if [ ! -s $PG_HOME/CA/cacert.pem ]
then
  /bin/echo Creating the CA root certificate
  /bin/echo The common name should be something like 'PostgreSQL Root Cert'
  /bin/echo 
  openssl req -new -x509 -out $PG_HOME/CA/cacert.pem \
-key $PG_HOME/CA/private/cakey.pem \
-config $PG_HOME/openssl.conf
  /bin/link -s 
fi

#
# generate server key, if one does not already exist.
#
if [ ! -s $PG_HOME/server.key ]
then
  openssl gendsa -out $PG_HOME/server.key $PG_HOME/dsa1024.pem
  /bin/chmod 0700 $PG_HOME/server.key
fi

#
# generate server certificate, if one does not already exist.
#
if [ ! -s $PG_HOME/server.crt ]
then
  /bin/echo Creating the PostgreSQL server certificate
  /bin/echo The common name must be the fully qualified domain name of server!
  /bin/echo 
  openssl req -new -x509 -out $PG_HOME/server.self \
-key $PG_HOME/server.key \
-config $PG_HOME/openssl.conf
  if [ -f $PG_HOME/server.self ]
  then
openssl ca -out $PG_HOME/server.crt -config $PG_HOME/openssl.conf \
-ss_cert $PG_HOME/server.self
/bin/rm -f $PG_HOME/server.self
  fi
fi


#
# PostgreSQL sample configuration for *root* cert.
# Contrast and compare with server.conf and client.conf.
#

# This definition stops the following lines choking if HOME isn't
# defined.
#HOME   = .
#RANDFILE   = $ENV::HOME/.rnd


[ ca ]
default_ca  = CA_default# The default ca section


[ CA_default ]

dir = $ENV::PG_HOME/CA  # Where everything is kept
certs   = $dir/certs# Where the issued certs are kept
crl_dir = $dir/crl  # Where the issued crl are kept
database= $dir/index.txt# database index file.
new_certs_dir   = $dir/newcerts # default place 

[HACKERS] First cut at SSL documentation

2002-05-20 Thread Bear Giles

Attached is the first cut at some SSL documentation for the
PostgreSQL manual.  It's in plain text, not DocBook, to make
editing easy for the first few revisions.  The documentation
leads the code by a day or so.

Also, I'm still having problems with the patches list - none
of my recent submissions have gotten through, and I haven't
even gotten the confirmation note from when I tried to resubscribe
to that list.  That's why the main SSL patches haven't appeared yet.

Bear


SSL Support
===========

PostgreSQL provides 4 levels of support for SSL.


SSH Tunnels
-----------

The minimal SSL support is provided by SSH tunnels.  The benefits of
SSH tunnels are that they are widely available and fairly well 
understood.

Unfortunately, there are many potential drawbacks to SSH tunnels:

 - the SSH implementation may be flawed,
 - the SSH protocol may be flawed,
 - the installation may be flawed,
 - the database server must provide shell accounts for all users
   (or systems) setting up SSH tunnels.  This makes it very hard
   to ensure that the database server is 'locked down,'
 - it can get very confusing if multiple users on the same system
   connect to a remote database (do you set up a systemwide tunnel?
   individual tunnels?),
 - neither the database nor clients can determine the level of 
   encryption provided by the SSH tunnel, and
 - SSH tunnels don't provide SSL authentication to applications.

The problems with the SSH protocols and implementations should not
be trivialized - both SSH and OpenSSH seem to have had a critical
bug requiring an immediate software upgrade every 18 to 24 months.

Bottom line: if you're concerned about confidentiality, SSH tunnels
are a good first step while you evaluate whether the performance hit
due to the encryption is acceptable.  But in the long run you'll 
probably want to migrate to a true SSL solution.

Setup
.....

To set up an SSH tunnel to a database server, use something like

  $HOME/.ssh/config
  -----------------
+ LocalForward  psql.example.com:5432

  $HOME/bin/psql
  --------------
  #!/bin/sh
  HOST=psql.example.com
  IDENTITY=$HOME/.ssh/identity.psql
/usr/bin/ssh -1 -i $IDENTITY -n $HOST 'sleep 60' & \
/usr/bin/psql -h $HOST -p  $1


ANONYMOUS DH Server
-------------------

The biggest problem with SSH tunnels is the need to maintain those
tunnels.  If the backend can support SSL connections directly,
most of the problems with SSH tunnels will disappear.

ANONYMOUS DH is the most basic SSL implementation.  It does
not require a server certificate, so it can be enabled immediately
once the database has SSL support compiled in.  Once this has been
done the database server does not need to provide shell accounts
for users, and can be fully locked down.

The drawback to ANONYMOUS DH is that the client still has no
way of determining the true identity of the backend.

Setup
.....

To set up ANONYMOUS DH, make sure the database is compiled with 
support for SSL and start postmaster with the -i -l flags.
There is no need to set up a server key.


SERVER-only Authentication
--------------------------

The biggest problem with ANONYMOUS DH servers is that the client
has no way of determining the identity of the server.  Server 
Authentication (with certificates) solves this problem - indeed it
should eliminate all man-in-the-middle attacks.

The main drawback to Server Authentication is that it requires
creating a server certificate, and distributing the server cert
(or a CA cert) to all clients.  Fortunately it is straightforward
to compile a certificate into the client library so that the net
cost to clients is no more than for ANONYMOUS DH SSL.

A more subtle drawback to Server Authentication is that it gives
the server no information about the clients connecting to it.
This can result in confusing instructions for road warriors
who need to access the database remotely.  (Remember that the
problem isn't granting access to the road warriors, it's letting
them in while blocking the 6-billion odd people who shouldn't
have remote access to your database.)

Setup
.....

To set up Server Authentication, run the mkcert.sh script provided
with PostgreSQL to generate the server cert and private key.
Then either copy the root.crt file to each user's $HOME/.postgresql
directory, or recompile the database with [[instructions to be determined]].


MUTUAL Authentication
---------------------

The biggest drawback to Server-only Authentication is the poor
support for road warriors.  Mutual authentication neatly
solves this problem since clients are permitted (or required) to
provide their own certificate to establish a connection.

Setup
.....

Set up the server as described above, then have each user run
pgkeygen(1) to generate their own client certificate and private
key.  The client certificates should be mailed to the DBA, who
signs them and associates a PostgreSQL user with each of them.
The signed certs should then be stored in the
Re: [HACKERS] pq_eof() broken with SSL

2002-05-19 Thread Bear Giles

> > a better fix is to explicitly create a new abstraction layer.
>
> Well, this is supposed to be an abstraction already. ;-)

The new abstraction layer would localize SSL vs. plain sockets, and
possibly SASL as well.

The SSL issues I've identified to date are:

critical

 - no check for SSL_get_error() after reads and writes. (*)

 - code assumes zero bytes read or write indicates an error.
   This isn't necessarily the case with SSL because of control
   messages.

severe

 - pq_eof() fails on SSL.  Since this is localized to the password
   handling code, I don't consider this error critical since the
   system can reliably work provided known problematic conditions
   are avoided.

 - both front- and back-end should call SSL_shutdown() immediately
   prior to closing connection. (1/2 *)

 - private keys should be regular files with mode 0600 or 0400. (*)
   They should be owned by the running process.

 - backend should use ephemeral DH keys.

 - encrypted private keys should be supported.

important

 - client cert handling. (*)

 - makecert(?), a tool to generate PostgreSQL server certs.
   It is comparable in function to the Apache mod_ssl script of
   the same name, and should be run when installing the database
   if SSL is enabled.

 - pgkeygen(?), a tool to generate client certificates.  It is
   comparable to ssh-keygen for SSH.

 - client and server should migrate to TLS.

 - connections should expire after a period of inactivity.

 - clients should provide identification of remote system to
   user. (*)

 - clients should verify that the server cert identifies the
   server.  (server common name should resolve to IP address
   of server.)

 - DSA keys should work.

ongoing

 - change protocol to use 'STARTTLS' type negotiation, instead
   of current approach.

 - SASL?

 - using client certs for authentication

unknown

 - I'm not sure select() is guaranteed to work with SSL.

(*) have had patches submitted, but may be superseded by subsequent
patches.


Unfortunately, I'm not sure that this list is complete - I'm still
doing research.  The patches I already submitted are fairly
straightforward - OpenSSL contains sample clients and servers that demonstrate
good techniques.  Right now I'm cross-checking the code with my
_SSL and TLS_ book to make sure there aren't other holes, and that
takes time.

I hadn't planned on doing any of this, but I got caught up in it while
setting up snort to log to PostgreSQL via an encrypted channel.  As 
an aside, this is a good example of a case where an SSH tunnel is 
inadequate!

So to answer the question I clipped, I'm looking at it but I don't
want to do a half-assed solution.  But as the scope of the solution
expands, it becomes more important to have consensus that something
needs to be done and this is the right solution.  So right now I'm
not ready to make any commitments.

Bear

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] SASL, compression?

2002-05-18 Thread Bear Giles

I've been looking at the authentication and networking code and
would like to float a trial balloon.

1) add SASL.  This is a new standards-track protocol that is often
   described as PAM for network authentication.  PostgreSQL could
   remove *all* protocol-specific authentication code and use
   standard plug-in libraries instead.

   (It's worth noting that SSL/TLS operates at a lower level than
   SASL.  This has some interesting consequences, see below.)

   After the black-box authentication finishes, the postmaster will
   have up to two pieces of information: the peer's client cert (SSL)
   and a string containing the Kerberos principal, user name verified
   with password/one-time-password/CRAM, etc.

   PostgreSQL authentication would be reduced to specifying which
   authentication methods are acceptable for each database, then
   mapping that authenticated user string and/or cert to a pguser.

2) add ZLIB compression.

The last point needs a bit of explanation.  With SASL, the buffers
may be modified due to the authentication protocol selected, so the
low-level routines in pqcomm.c and fe-connect.c must be modified.
But since this is happening anyway, it would be easy to wrap
sasl_encode with ZLIB compression and sasl_decode with ZLIB decompression,
with pq_flush() (and client's equivalent) doing a sync flush of
the compression buffer.

You obviously don't need compression on the Unix socket or a fast
network connection, but if you're on a T1 or slower the reduced
transmission time should more than offset the time spent in 
compression/decompression.
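For what it's worth, the sync-flush idea can be sketched with Python's zlib bindings (an illustration only - the real patch would call the C zlib API from pqcomm.c): a Z_SYNC_FLUSH at each pq_flush() boundary makes every message immediately decodable by the peer while keeping the shared compression history across messages.

```python
import zlib

# One compressor/decompressor pair models the long-lived connection.
comp = zlib.compressobj()
decomp = zlib.decompressobj()

def send(msg: bytes) -> bytes:
    # Compress and sync-flush so the peer can decode this message
    # immediately, without waiting for the stream to close.
    return comp.compress(msg) + comp.flush(zlib.Z_SYNC_FLUSH)

wire1 = send(b"SELECT name, cert FROM certs WHERE name = 'bear';")
wire2 = send(b"SELECT name, cert FROM certs WHERE name = 'bear';")

# Each flushed chunk decompresses on its own; the stream stays open.
assert decomp.decompress(wire1) == b"SELECT name, cert FROM certs WHERE name = 'bear';"
assert decomp.decompress(wire2) == b"SELECT name, cert FROM certs WHERE name = 'bear';"
# Repetitive traffic compresses well because the history is shared.
assert len(wire2) < len(wire1)
```

The sync flush costs a few bytes per message, which is the price of not buffering across pq_flush() boundaries.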

Drawbacks

The biggest drawback, at least initially, is that the initial
exchange will need to be totally rewritten.  One possibility
could use something like this:

  S: 220 example.com PostgreSQL 8.1
  C: HELO client.com
  S: 250-example.com
  S: 250-AUTH ANONYMOUS KERBEROS4       (list of authentication methods)
  S: 250-STARTTLS                       (server accepts SSL/TLS)
  S: 250-COMPRESSION                    (compress datastream)
  S: 250 HELP
  C: STARTTLS pq.virtual.com            (allows virtual domains)
     (SSL/TLS negotiation occurs *here*)
  S: 250-pq.virtual.com
  S: 250-AUTH ANONYMOUS PLAIN KERBEROS4 (note extra method)
  S: 250-COMPRESSION
  S: 250-...                            (some extra functions only available
                                         with TLS/SSL sessions)
  S: 250 HELP
  C: AUTH PLAIN user password           (use simple username/password)
  S: 220 OK
  C: COMPRESSION ON
  S: 220 OK
  C: OPEN database
  S: 220 OK

and then the system drops back to the existing data exchange
format.  Or it could look like something entirely different - the
most important thing is that the server needs to provide a list
of authentication methods, the client chooses one, and it either
succeeds or the client can retry.  However a protocol something
like this has the strong advantage of being well-tested in the 
existing protocols.
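A client-side parser for this style of multiline reply is only a few lines. The sketch below (Python, message texts hypothetical) shows the convention the exchange above relies on: '250-' marks a continuation line, and a space after the code ('250 HELP') marks the final line of the reply.

```python
def parse_reply(lines):
    """Collect one multiline server reply.

    Lines like '250-AUTH ...' continue the reply; a line whose code is
    followed by a space ('250 HELP') terminates it.  Returns the numeric
    code and the list of payload strings.
    """
    code = None
    payloads = []
    for line in lines:
        code = int(line[:3])
        payloads.append(line[4:])
        if line[3:4] != "-":      # space (or nothing) => final line
            break
    return code, payloads

reply = [
    "250-example.com",
    "250-AUTH ANONYMOUS KERBEROS4",
    "250-STARTTLS",
    "250-COMPRESSION",
    "250 HELP",
]
code, caps = parse_reply(reply)
assert code == 250
assert "STARTTLS" in caps
assert caps[-1] == "HELP"
```

This is exactly why the SMTP-style framing is attractive: the server can grow its capability list without breaking old clients.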

Bear




[HACKERS] pq_eof() broken with SSL

2002-05-18 Thread Bear Giles

I came across another bug in the SSL code.  backend/libpq/pqcomm.c:pq_eof()
calls recv() to read a single byte of data to check for EOF.  The
character is then stuffed into the read buffer.

This will not work with SSL.  Besides the data being encrypted, you
could end up reading a byte from an SSL control message instead of a
data message, or messing up counts.  Fortunately this procedure only
seems to be called in some password code - if you use 'trust' or 'ident'
then the SSL should work fine.

The quick fix is to add another USE_SSL block, a better fix is to
explicitly create a new abstraction layer.
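A rough shape for such a layer, sketched in Python with toy transport objects (names hypothetical; the real thing would be C in pqcomm.c): callers ask one interface about EOF, and only the SSL transport knows that a record carrying zero application bytes is not end-of-stream.

```python
class PlainTransport:
    """Models a raw socket: peeking one byte is a safe EOF test."""
    def __init__(self, data: bytes):
        self._buf = data
    def read(self, n: int) -> bytes:
        chunk, self._buf = self._buf[:n], self._buf[n:]
        return chunk
    def eof(self) -> bool:
        return self._buf == b""        # models recv(..., MSG_PEEK) == 0

class SSLTransport:
    """Models SSL: a record may carry no application data (control
    messages), so EOF must come from the SSL library's own state,
    never be inferred from a zero-byte read."""
    def __init__(self, records):
        self._records = list(records)  # b'' entries = control-only records
    def read(self, n: int) -> bytes:
        return self._records.pop(0)[:n] if self._records else b""
    def eof(self) -> bool:
        return not self._records       # models asking the SSL layer

def pq_eof(transport) -> bool:
    """Callers see one interface; each transport answers correctly."""
    return transport.eof()

ssl = SSLTransport([b"", b"Q"])   # control record, then real data
assert pq_eof(ssl) is False       # not EOF, even though next read is b""
assert ssl.read(1) == b""         # control record: zero application bytes
assert ssl.read(1) == b"Q"        # ...but data follows
assert pq_eof(PlainTransport(b"")) is True
```

The point is that the buggy pq_eof() bakes the raw-socket assumption (zero bytes read means gone) into code shared by both transports.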

Bear




Re: [HACKERS] SASL, compression?

2002-05-18 Thread Bear Giles

> I'm not that clueful about SASL -- would this mean that we could get
> rid of the PostgreSQL code that does SSL connections, plus MD5, crypt,
> ident, etc. based authentication, and instead just use the SASL stuff?

We would still need the ability to map user identities to pgusers for
those methods where the client can't specify an arbitrary user name
(e.g., Kerberos and GSSAPI), but strictly speaking that's an authorization
problem, not an authentication problem, and it can be handled entirely
within the backend.
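That mapping step could be as simple as a lookup keyed on the authentication method plus the externally verified name. A Python sketch, with entirely made-up identities, of what the backend would consult after the black-box authentication succeeds:

```python
# Hypothetical map from (auth method, verified external identity)
# to a pguser.  With Kerberos/GSSAPI the external name is fixed by
# the credential itself, so authorization reduces to this lookup.
PG_USER_MAP = {
    ("kerberos", "bgiles@EXAMPLE.COM"): "bear",
    ("ssl-cert", "/C=US/ST=CO/CN=Bear Giles"): "bear",
    ("password", "postgres"): "postgres",
}

def authorize(method: str, external_name: str) -> str:
    """Return the pguser for an authenticated identity, or refuse."""
    try:
        return PG_USER_MAP[(method, external_name)]
    except KeyError:
        raise PermissionError("no pguser for %s:%s" % (method, external_name))

assert authorize("kerberos", "bgiles@EXAMPLE.COM") == "bear"
```

Note the separation: SASL (or SSL) proves who the peer is; this table decides what database identity that maps to.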

> [W]ould SSL/TLS support need to co-exist with SASL?

Yes.  SASL effectively works at the application layer.  It's now common
practice for one of the application commands to be STARTTLS (perhaps by
another name) that both sides use as a signal to negotiate a TLS/SSL
session.

The benefit of this approach is that it's easily migrated to Unix
sockets, IPv6, etc.

> > 2) add ZLIB compression.
>
> This was discussed before, and the conclusion was that compression
> is of fairly limited utility, and can be accomplished by using
> ssh -- so it's not worth the bloat. But there were some dissenting
> opinions at the time, so this might merit further discussion...

I agree, it wasn't worth the effort with the existing code.  But
if we rewrite the lowest level routines then the amount of bloat can
be minimized. 

Bear




Re: [HACKERS] SASL, compression?

2002-05-18 Thread Bear Giles

> Bear Giles <[EMAIL PROTECTED]> writes:
> > 1) add SASL.  This is a new standards-track protocol that is often
> >    described as PAM for network authentication.
>
> To me, "new standards-track protocol" translates as "pie in the sky".
> When will there be tested, portable, BSD-license libraries that we
> could *actually* use?

http://asg.web.cmu.edu/sasl/sasl-implementations.html

> Unless you can make a credible case that using SASL would
> allow us to rip out PAM, Kerberos, MD5, etc *now* (not in a few releases
> when everyone's switched to SASL), I think this will end up just being
> another one :-(.

http://asg.web.cmu.edu/sasl/sasl-projects.html

If it's being used in Sendmail, Cyrus IMAP and OpenLDAP, with preliminary
work (sponsored by Carnegie Mellon University) in supporting it for CVS
and LPRng and possibly SSH I think it's safe to say it's beyond vaporware
at this point.

The only reason I was waving my hands a bit is that I'm not sure if
SASL 2.x is considered production-ready yet.  We could support SASL 1.x,
but if 2.x is coming out within 6-12 months then it may make more sense
to target 2.x instead of releasing 1.x today, then switching to 2.x in 
the next release.

If there's a consensus that we should proceed, I would also be the 
first to argue that we should contact CMU for assistance in the 
conversion.  Hopefully they have enough experience with their cyrus
package that we can really put this issue to bed.  (Meanwhile PostgreSQL
would get more free advertising as another major project using their
SASL library.)

> (It doesn't help any that PAM support was sold to us just one release
> cycle back on the same grounds that it'd be the last authentication
> method we'd need to add.  I'm more than a tad wary now...)

Umm... I don't know what to say.  This is a common misunderstanding of
PAM (and one reason I *really* hate those PAM Kerberos modules) but people
keep repeating it.  PAM was only designed for local use, but people keep
trying to use it for network authentication even though we security
freaks keep pointing out that using some of those modules on a network
will leave your system wide open.  In contrast SASL was designed from the
start to work over an untrusted network.

This isn't to say that PAM support is totally useless - it may be a
clean way to handle the ongoing Kerberos principal -> pguser issue, but
it's a nonstarter for authentication purposes unless you know you're
on the Unix socket.

> > 2) add ZLIB compression.
>
> Why do people keep wanting to reinvent SSH tunneling?

One good reason is that secure sites will prohibit them.  SSH tunnels
require that clients have shell accounts on the remote system, and
on a dedicated database server you may have no accounts other than the
sysadmins who administer the box.

I'm aware of the various tricks you can do - setting the shell to 
/bin/false, requiring RSA authentication and setting the no-tty flag
in the 'authorized_keys' file, etc., but at the end of the day there are 
still extra shell accounts on that system.

SSH tunnels are a good stopgap measure while you add true TLS/SSL
support, but they can't be considered a replacement for that support.

Bear




Re: [HACKERS] Kerberos principal to dbuser mapping

2002-05-16 Thread Bear Giles

> > > Is there any existing way of making queries from
> > > postmaster (other than setting up a client
> > > connection from it)?
> >
> > There is no existing way, and none will be added in
> > the future either.
> > There are good system-reliability reasons for
> > keeping the postmaster
> > away from the database.
>
> Ok, but it seems wasteful to build primitive database
> functionality in parallel to the real database.

This issue affects mutual SSL authentication and PKIX in 
addition to Kerberos.  See a followup post for details.
Bottom line: we should identify and document a canonical
solution.

P.S., in the case of PKIX, there's a well-defined interface
and there's no conceptual problem with maintaining the database
via the regular client interface.  Bootstrapping the system may
be another matter.

Bear




Re: [HACKERS] help with bison

2002-04-10 Thread Bear Giles

> In fact, my grammar currently has an obscene
> 20 shift/reduce and 4 reduce/reduce conflicts!

A shift/reduce conflict, IIRC, usually indicates a situation where
the grammar is unambiguous but may be inefficient.  Eliminating them
is nice, but not critical.

A R/R conflict, in contrast, is a point where the grammar is ambiguous
and you *must* fix it.

> (Unfortunately, bison isn't very
> helpful: it doesn't provide line-numbers when it warns me about
> the # of conflicts).

Turn on the verbose flag (-v|--verbose).  Near the top it should
list the S/R and R/R states.  You can then examine the states and
rules and see exactly where the problem is.

Cutting to the chase, the R/R problems are due to TEMP and
TEMPORARY being both unreserved keywords and part of
OptTempTableName.  If you comment them out in 'unreserved keywords'
the R/R error goes away but this may introduce errors elsewhere.

What is probably better is to move those tokens someplace else
where there's some context:

  into_clause : INTO OptTempTableName
| /* EMPTY */
;

needs to be replaced with something like

  into_options : /* EMPTY */
| TEMPORARY
| TEMP
;

  into_clause : INTO into_options OptTempTableName
| /* EMPTY */
;

with the corresponding modifiers removed from the OptTempTableName rule.

Unfortunately, when I quickly tested this, the number of S/R conflicts
ballooned to 378.

As an aside, is there any reason to treat TEMP and TEMPORARY as two
separate identifiers?  It's common practice to have synonyms mapped
to a single identifier in the lexical analyzer, and the grammar itself
looks like it could benefit from some helper rules such as:

  temporary
: TEMPORARY  { $$ = 1; }
| TEMP   { $$ = 1; }
|{ $$ = 0; }
;

  scope
: GLOBAL { $$ = 1; }
| LOCAL  { $$ = 2; }
|{ $$ = 0; }
;

  something : scope temporary somethingelse { ... }
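The synonym mapping is easy to picture outside of the real lexer. A toy tokenizer (Python, illustration only - in the backend this is the keyword lookup table consulted by the flex scanner) that collapses both spellings before the grammar ever sees them:

```python
# Keyword table maps every spelling to a single token, so the grammar
# only ever deals with one TEMPORARY token.
KEYWORDS = {
    "TEMP": "TEMPORARY",
    "TEMPORARY": "TEMPORARY",
    "GLOBAL": "GLOBAL",
    "LOCAL": "LOCAL",
    "TABLE": "TABLE",
}

def tokenize(sql: str):
    """Return keyword tokens, or ('IDENT', word) for everything else."""
    return [KEYWORDS.get(w.upper(), ("IDENT", w)) for w in sql.split()]

# Both spellings produce an identical token stream for the parser,
# so TEMP never needs to appear in the grammar at all.
assert tokenize("create temp table t") == tokenize("create temporary table t")
```

With the collapse done in the lexer, the grammar sheds one alternative per rule that currently mentions both keywords.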

Bear




[HACKERS] inserting user defined types through a rule?

2002-04-01 Thread Bear Giles

I recently discovered a problem inserting a user-defined type when 
going through a rule.  I'm not sure if it's a -hackers or -users question,
but since it involves the interaction of a user-defined type and rules
I thought it inevitable that I would end up here anyway.

The object in question is my X.509 type.  For compelling reasons
beyond the scope of this discussion, I need to define a table as:

create table certs (
namevarchar(20),
certx509,

-- fields used with CRL lookups
serial_number   hugeint not null
  constraint c1 check (serial_number = serial_number(cert)),
issuer  principal not null
  constraint c2 check (issuer = issuer(cert)),
subject principal not null unique
  constraint c3 check (subject = subject(cert)),

...
);

where the constraints guarantee that the cached attributes accurately
reflect the contents of the cert (but these fields can be indexed and
searched).  In practice it's impossible to get those fields right in
a query so I also defined:

  create view cert_insert as select name, cert from certs;

  create rule certi as on insert to cert_insert do instead
  insert into certs (name, cert, serial_number, subject, issuer,...
  )
  values (new.name, new.cert,
  serial_number(new.cert), subject(new.cert), issuer(new.cert),...
  );

The problem is that I can insert literal text:

  create table t ( cert x509 );
  insert into t values ('-----BEGIN CERTIFICATE-----...');

but when I try the same with cert_insert it's clear that new.cert 
isn't getting initialized properly.  (It works fine when the cert is
already in the database.)  Trying to explicitly cast the literal to x509
as part of the query doesn't help - it seems that the rule just rewrites
the query and the cast is getting lost.

Workarounds don't seem to be viable.  I can't use a trigger on a temporary
table since there doesn't seem to be a clean way to trigger a rule from
one.  (I need to get parameters from the trigger to the SQL function to
the rule, and SQL functions don't seem to be able to take parameters --
or it's undocumented if they can take something like $1, $2, etc.)  I can't
use a rule on the temporary table since it appears a rule still looks
at the original parameters, not the temp table.

Any ideas?  Is this something addressed in 7.2?  (I'm trying to stick
with the oldest useable version to avoid forcing DB upgrades.)  Or is
this a genuine hole in the user type/rules/triggers model?

Bear




Re: [HACKERS] inserting user defined types through a rule?

2002-04-01 Thread Bear Giles

I'm using 7.1.3 currently, but am building and installing 7.2.1 tonight
to see if this fixes the problem.

I don't know the standard types and functions well enough to be able to
whip out a test case, but I think I do have an idea on what the problem
is.  If I'm right, the problem is triggered by any rule with a function 
that operates on one of the parameters.  If the parameter is already the
type then the rule succeeds.  If the parameter needs to be cast (e.g.,
because it's a literal value) then the rule fails.

E.g., if there is a function like

   function strlen(varchar) returns int4 ...

try

   create table test (s varchar(20), len int4);

   create view test_view as select s from test;

   create rule test_rule as on insert to test_view
     do instead insert into test (s, len) values (new.s, strlen(new.s));

then

   insert into test_view values ('crash-n-burn!');

will fail.

Taken even further, you could probably use

   create rule test_rule2 as on insert to test_view
     do instead insert into test2 values (strlen(new.s));

The earlier example is just an updateable view with the tweak that
some of hidden underlying fields are also updated.  Strictly speaking
this breaks 3NF, but with the consistency checks it's a useful way of 
caching derived values while ensuring that they can't get out of sync
with the objects they cache.

Bear

P.S., it just occurred to me that rules can allow multiple statements.
Maybe the workaround is

   create rule...
 do instead (
   insert into temporary table;
   insert into final table from temporary table using functions;
   clear temporary table
   );




Re: [HACKERS] XML

2002-03-20 Thread Bear Giles

> You have to write xmlGetUser() to take in the userid and return the
> xml required for it.  I see no advantage to generating the xml in the
> db rather than in the servlet.

As a counterexample, my PKIX extensions defined an XML datatype
that could be used to generate XML instead of the standard format.
E.g.,

  select cert as xml from certs where ...;

But this was an exceptional case - for many of the objects the
standard format is a base-64 encoded ASN.1 string, but with
XML I need to extract the fields *and* still include the object
as a base-64 encoded ASN.1 string.  It was *much* easier to just 
generate it in the backend than to do it at the db-xml level.

More generally, I think we need to keep in mind an important 
distinction here.  Most of the time XML represents the contents
of an entire tuple, and each field corresponds to an attribute.
In these cases an external db-xml layer makes the most sense.

But with extensions it's common to have complex objects in a
single attribute, and there may be a standard way to represent
the object in XML.  (E.g., all of my XML conversions are extracted
from the proposed Signature schema at W3C.)  In these cases
it may make more sense for the extension to provide its own
XML mechanisms, but it would be nice if there were a standard 
way of handling this.

My suggestion was mentioned above.  Just make xml a standard
data type.  It can be transparently converted to/from text, but
you can define functions that return xml (or accept xml) and
use casts to specify when you want XML instead of the normal
format for that attribute.
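The tuple-level vs. attribute-level distinction can be illustrated with a toy converter (Python's xml.etree; element names loosely modelled on the W3C Signature drafts, otherwise hypothetical): a generic db-to-XML layer emits one element per column, while a type that knows its own schema emits richer structure for a single column.

```python
import xml.etree.ElementTree as ET

def tuple_to_xml(table: str, row: dict) -> str:
    """Generic db->xml layer: one child element per attribute."""
    elem = ET.Element(table)
    for col, val in row.items():
        ET.SubElement(elem, col).text = str(val)
    return ET.tostring(elem, encoding="unicode")

def cert_to_xml(serial: int, subject: str, der_b64: str) -> str:
    """Type-specific conversion: the extracted fields *and* the raw
    base-64 ASN.1 blob, as the X.509 extension does in the backend."""
    elem = ET.Element("X509Certificate")
    ET.SubElement(elem, "SerialNumber").text = str(serial)
    ET.SubElement(elem, "Subject").text = subject
    ET.SubElement(elem, "DEREncoded").text = der_b64
    return ET.tostring(elem, encoding="unicode")

row_xml = tuple_to_xml("certs", {"name": "bear", "serial_number": 42})
assert "<name>bear</name>" in row_xml

cert_xml = cert_to_xml(42, "CN=Bear Giles", "MIIB...")
assert "<SerialNumber>42</SerialNumber>" in cert_xml
```

Only the second converter can be written generically by a db-xml layer if the extension exposes its structure - which is the argument for a standard hook.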

