[HACKERS] Out of town these long weekends...

2001-01-13 Thread Vadim Mikheev






Re: [HACKERS] CRCs

2001-01-13 Thread Nathan Myers

On Fri, Jan 12, 2001 at 11:30:30PM -0500, Tom Lane wrote:
> >> AFAICS, disk-block CRCs do not guard against mishaps involving intended
> >> writes.  They will help guard against data corruption that might creep
> >> in due to outside factors, however.
> 
> > Right.  
> 
> Given that we seem to have agreed on that, I withdraw my complaint about
> disk-block-CRC not being in there for 7.1.  I think we are still a ways
> away from the point where externally-induced corruption is a major share
> of our failure rate ;-).  7.2 or so will be time enough to add this
> feature, and I'd really rather not force another initdb for 7.1.

More to the point, putting CRCs on data blocks might have unintended
consequences for dump or vacuum processes.  7.1 is a monumental 
accomplishment even without corruption detection, and the sooner
the world has it, the better.

Nathan Myers
[EMAIL PROTECTED]



Re: [HACKERS] CRCs

2001-01-13 Thread Nathan Myers

On Fri, Jan 12, 2001 at 04:38:37PM -0800, Mikheev, Vadim wrote:
> Example.
> 1. Tuple was inserted into index.
> 2. Looking for free buffer bufmgr decides to write index block.
> 3. Following WAL core rule bufmgr first calls XLogFlush() to write
>and fsync log record related to index tuple insertion.
> 4. *Believing* that log record is on disk now (after successful fsync)
>bufmgr writes index block.
> 
> If log record was not really flushed on disk in 3. but on-disk image of
> index block was updated in 4. and system crashed after this then after
> restart recovery you'll have unlawful index tuple pointing to where?
> Who knows! No guarantee that corresponding heap tuple was flushed on
> disk.
> 
> Isn't database corrupted now?

Note, I haven't read the WAL code, so much of what I've said is based 
on what I know is and isn't possible with logging, rather than on 
Vadim's actual choices.  I know it's *possible* to implement a logging 
database which can maintain consistency without need for strict write 
ordering; but without strict write ordering, it is not possible to 
guarantee durable transactions.  That is, after a power outage, such 
a database may be guaranteed to recover uncorrupted, but some number 
(>= 0) of the last few acknowledged/committed transactions may be lost.

Vadim's implementation assumes strict write ordering, so that (e.g.) 
with IDE disks a corrupt database is possible in the event of a power 
outage.  (Database and OS crashes don't count; those don't keep the
blocks from finding their way from the drive's on-board buffers to the
platters.)  This is
no criticism; it is more efficient to assume strict write ordering, 
and a database that can lose (the last few) committed transactions 
has limited value.
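
To make the ordering rule concrete, here is a minimal standalone sketch of
the "flush the log before the data page" step Vadim describes above.  All
of the names (LogSequenceNumber, DataPage, flush_wal_to, write_page) are
hypothetical illustrations, not PostgreSQL's actual xlog/bufmgr code:

/*
 * Hypothetical sketch of the write-ahead ordering rule; not the real
 * PostgreSQL API.  The only point is the order of operations in write_page().
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t LogSequenceNumber;

typedef struct DataPage
{
    LogSequenceNumber page_lsn;     /* LSN of the last WAL record touching this page */
    char              bytes[8192];
} DataPage;

static LogSequenceNumber wal_flushed_up_to = 0;

/* Pretend to write and fsync the log through 'target'. */
static void
flush_wal_to(LogSequenceNumber target)
{
    if (wal_flushed_up_to < target)
        wal_flushed_up_to = target; /* real code would write + fsync the log here */
}

/* Core rule: a data page must never reach disk before its log record does. */
static void
write_page(const DataPage *page)
{
    flush_wal_to(page->page_lsn);   /* step 3 in Vadim's example */
    /* ... only now is it safe to issue the data-block write (step 4) ... */
    printf("page written; WAL flushed through %llu\n",
           (unsigned long long) wal_flushed_up_to);
}

int
main(void)
{
    DataPage page = { .page_lsn = 42 };

    write_page(&page);
    return 0;
}

If the drive reorders or drops the log write that flush_wal_to() believes it
completed (the IDE write-cache case), the rule is silently violated, which is
exactly the failure mode under discussion.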

To achieve disk write-order independence is probably not a worthwhile 
goal, but for systems that cannot provide strict write ordering (e.g., 
most PCs) it would be helpful to be able to detect that the database 
has become corrupted.  In Vadim's example above, if the index were to
contain not only the heap blocks' numbers, but also their CRCs, then 
the corruption could be detected when the index is used.  When the 
block is read in, its CRC is checked, and when it is referenced via 
the index, the two CRC values are simply compared and the corruption
is revealed. 
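
As a rough standalone sketch of that check, with a hypothetical index-entry
layout (this is not the on-disk format of any real PostgreSQL index):

/*
 * Hypothetical sketch: each index entry remembers the CRC the heap block
 * had when the entry was made; on use, the block's CRC is recomputed and
 * compared, so an out-of-step heap block is detected instead of trusted.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <stdio.h>

#define HEAP_BLOCK_SIZE 8192

typedef struct IndexEntry
{
    uint32_t heap_block;        /* which heap block the entry points at */
    uint32_t heap_block_crc;    /* CRC of that block when the entry was made */
} IndexEntry;

/* Plain bitwise CRC-32 (IEEE polynomial); real code would be table-driven. */
static uint32_t
crc32_of(const unsigned char *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++)
    {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

static bool
index_entry_still_valid(const IndexEntry *entry,
                        const unsigned char block[HEAP_BLOCK_SIZE])
{
    return crc32_of(block, HEAP_BLOCK_SIZE) == entry->heap_block_crc;
}

int
main(void)
{
    unsigned char block[HEAP_BLOCK_SIZE] = {0};
    IndexEntry entry = { .heap_block = 7,
                         .heap_block_crc = crc32_of(block, HEAP_BLOCK_SIZE) };

    block[100] ^= 0x01;         /* simulate a heap block older than the index entry */
    printf("index entry still valid? %s\n",
           index_entry_still_valid(&entry, block) ? "yes" : "no");
    return 0;
}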

On a machine that does provide strict write ordering, the CRCs in the 
index might be unnecessary overhead, but they also provide cross-checks
to help detect corruption introduced by bugs and whatnot.

Or maybe I don't know what I'm talking about.  

Nathan Myers
[EMAIL PROTECTED]



[HACKERS] (forw) Re: CVS Commit message generator...

2001-01-13 Thread Larry Rosenman

FYI...
----- Forwarded message from Jordan Hubbard <[EMAIL PROTECTED]> -----

From: Jordan Hubbard <[EMAIL PROTECTED]>
Subject: Re: CVS Commit message generator... 
Date: Fri, 12 Jan 2001 19:50:33 -0800
Message-ID: <[EMAIL PROTECTED]>
To: Larry Rosenman <[EMAIL PROTECTED]>

Sure, it's all available from:

ftp://ftp.freebsd.org/pub/FreeBSD/development/FreeBSD-CVS/CVSROOT

Regards,

- Jordan

> Jordan,
>Would it be possible to get a copy of whatever files are necessary
> to have a CVS server generate the commit messages like the FreeBSD
> project commits generate? 
> 
>I'm involved with the PostgreSQL project and our commits generate
> one message per directory, and would much prefer to see them move
> towards the FreeBSD style. 
> 
>Thanks for any help. 
> 
> 
> 
> -- 
> Larry Rosenman http://www.lerctr.org/~ler
> Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749

----- End forwarded message -----

-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



[HACKERS] Re: FWD: bizarre behavior of 'time' data entry

2001-01-13 Thread Thomas Lockhart

> ps shows postgresql running:
> /usr/bin/pg_ctl -D /var/lib/pgsql/data -p /usr/bin/postmaster start
> /usr/bin/postmaster -i
> I can poke a hole in my firewall and let you connect to the database if you
> would like troubleshoot my sytem.  But I'll need some help setting up
> permissions to allow external connections.  Let me know and I'll send my IP
> address to your private email.

I won't have time to do this in the next few weeks, since I'll be
traveling most of the time. You might find another volunteer from this
list, but if not I would suggest the following steps:

1) send us your default rpm compiler and build options. Use

  rpm --showrc > tempfiletomail.txt
  gcc --version >>&! tempfiletomail.txt

2) try building postgresql from sources. See if the problem persists (it
won't).

3) try building the postgresql rpm from sources. The steps are

 a) rpm -ivv postgresql-xxx.src.rpm
 b) cd /usr/src/RPM/SPEC (for Mandrake, RedHat uses /usr/src/RedHat...)
 c) rpm -ba postgresql.spec (verify the spec file name)
 d) cd /usr/src/RPM/RPMS/ixxx
 e) rpm -Uvh --force postgresql*.ixxx.rpm

I'm doing these steps from memory, and you had better save your database
contents somewhere just in case it gets trashed ;)

> Also, let me know what particular build problems might cause this, and I'll
> post to Trustix mail list.

Not sure, since I've never seen this before for this data type :(

 - Thomas



Re: [HACKERS] Re: Beta2 ... ?

2001-01-13 Thread Thomas Lockhart

> > What I am gathering from all this conversation is that there is no
> > repository for packages.

Whoops. There is a repository for packages on ftp.postgresql.org, and
you are welcome to contribute packages there. As Peter points out, we
probably aren't helping folks if we have some independent track of
package development, so we would do better to also coordinate with the
distro package maintainers at the same time. And we would all really
prefer that the packages posted on ftp.postgresql.org be traceable to the
"official" builds of packages elsewhere.

For most folks running a particular OS and distro, there are certain
places they would look for packages, and it would be great if those
usual places have the benefit of your contributions too.

For cases where more coordination is required, such as with the RPM
packaging used for a bunch of distros, having them posted on
ftp.postgresql.org has helped us keep the RPM package itself consistent
with the various packagers. Not sure if you will find the same
coordination problem with your platform.

> Well, in the light of the openpackages.org effort it seems you have just
> signed yourself up to create a BSD-independent package. ;-)  Asking the
> relevant maintainer might be a first step, though.

:)

- Thomas



[HACKERS] Re: Bruce Momjian's interview in LWN.

2001-01-13 Thread Thomas Lockhart

> > Oh, not a problem. You're famous for, er, non-verbosity.
> I am.  Hmm...

*rofl* 

No need to take that as a personal challenge to remove the "non-" from
Lamar's opinion... ;)

- Thomas



[HACKERS] Re: RPMs (was Re: Re: Beta2 ... ?)

2001-01-13 Thread Thomas Lockhart

> It's pretty dramatic to get the 'You don't have permissions to install'
> message from the perl 'make install' when I am performing the build (and
> the make install) as root.  Particularly when 7.0's perl 'make install'
> worked semi-properly.  I say semi-properly because the packing list had
> to be rewritten -- but at least the install did its job to the proper
> build-root'ed location.

Just an fyi...

I have been having trouble with one of the package files disappearing
from the temporary installation area during the rpm build of pg-7.0.3 for
Mandrake on Mandrake-7.2 using a recent source RPM. I haven't tracked
down the problem yet :(

   - Thomas



[HACKERS] Re: FWD: bizarre behavior of 'time' data entry

2001-01-13 Thread Thomas Lockhart

> See attached tmpfile.txt...

This distro uses the same or similar compiler flags as does Mandrake,
*and it is the wrong thing to do*. The gcc folks recommend against
ever using "-O3" with "-ffast-math", but both of these distros do it
anyway.

And you see the results :(

Pick up the .rpmrc I've posted at ftp.postgresql.org for Mandrake (look
somewhere under /pub/binaries/...) and put it in your root account's
home directory. Then try rebuilding from your .src.rpm and see what
happens...

  - Thomas



Re: [HACKERS] pgaccess: russian fonts && SQL window???

2001-01-13 Thread Tom Lane

"Len Morgan" <[EMAIL PROTECTED]> writes:
> I have used Postgres and Tcl/Tk for quite some time and yes, when 8.2 came
> out, I had trouble accessing ANYTHING because of the UTF-8 switch.  My
> solution was to upgrade my pgsql.tcl file with a new one.  I tried it once
> and it worked but other events have prevented me from switching all of my
> code yet.  Pgsql.tcl is a tcl source only interface to postgres (as opposed
> to a .dll or .so).  Whether the changes in there have made it into
> libpgtcl.so/dll I don't know.  I would be happy to forward it somewhere if
> you would like to try it out.

Yes, I'd like to see it.  It'd be even more useful if you also have the
old version, so I can see what was changed to fix the problem.

regards, tom lane



Re: [HACKERS] CRCs

2001-01-13 Thread Tom Lane

[EMAIL PROTECTED] (Nathan Myers) writes:
> To achieve disk write-order independence is probably not a worthwhile 
> goal, but for systems that cannot provide strict write ordering (e.g., 
> most PCs) it would be helpful to be able to detect that the database 
> has become corrupted.  In Vadim's example above, if the index were to
> contain not only the heap blocks' numbers, but also their CRCs, then 
> the corruption could be detected when the index is used.  When the 
> block is read in, its CRC is checked, and when it is referenced via 
> the index, the two CRC values are simply compared and the corruption
> is revealed. 

A row-level CRC might be useful for this, but it would have to be on
the data only (not the tuple commit-status bits).  It'd be totally
impractical with a block CRC, I think.  To do it with a block CRC, every
time you changed *anything* in a heap page, you'd have to find all the
index items for each row on the page and update their copies of the
heap block's CRC.  That could easily turn one disk-write into hundreds,
not to mention the index search costs.  Similarly, a check value that is
affected by tuple status updates would enormously increase the cost of
marking tuples committed or dead.

Instead of a partial row CRC, we could just as well use some other bit
of identifying information, say the row OID.  Given a block CRC on the
heap page, we'll be pretty confident already that the heap page is OK,
we just need to guard against the possibility that it's older than the
index item.  Checking that there is a valid tuple at the slot indicated
by the index item, and that it has the right OID, should be a good
enough (and cheap enough) test.
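
A standalone sketch of that check, using hypothetical stand-ins for the page
and index-entry structures (not the real ItemId/HeapTupleHeader layout):

/*
 * Hypothetical sketch of the "right slot, right OID" test.  The heap page
 * is assumed already verified by its own block CRC; this only guards
 * against the page being older than the index entry that points at it.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SLOTS_PER_PAGE 128

typedef struct HeapSlot
{
    bool     in_use;            /* line pointer actually holds a tuple */
    uint32_t tuple_oid;         /* OID stored in the tuple header */
} HeapSlot;

typedef struct HeapPage
{
    uint16_t num_slots;
    HeapSlot slots[SLOTS_PER_PAGE];
} HeapPage;

typedef struct IndexEntry
{
    uint32_t heap_block;        /* which heap block */
    uint16_t heap_slot;         /* which line-pointer slot within the block */
    uint32_t expected_oid;      /* OID of the row the entry was built for */
} IndexEntry;

static bool
index_entry_matches_heap(const IndexEntry *e, const HeapPage *page)
{
    if (e->heap_slot >= page->num_slots)
        return false;           /* slot never existed on this version of the page */
    if (!page->slots[e->heap_slot].in_use)
        return false;           /* dangling index entry */
    return page->slots[e->heap_slot].tuple_oid == e->expected_oid;
}

int
main(void)
{
    HeapPage   page = { .num_slots = 3 };
    IndexEntry entry = { .heap_block = 7, .heap_slot = 2, .expected_oid = 123456 };

    page.slots[2] = (HeapSlot) { .in_use = true, .tuple_oid = 123456 };
    printf("index entry matches heap? %s\n",
           index_entry_matches_heap(&entry, &page) ? "yes" : "no");
    return 0;
}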

regards, tom lane



[HACKERS] diffs available?

2001-01-13 Thread Konstantinos Agouros

Hi,

Since I have limited bandwidth: are the diffs between the different versions
available, to use with patch instead of always downloading the whole package?

Konstantin
-- 
Dipl-Inf. Konstantin Agouros aka Elwood Blues. Internet: [EMAIL PROTECTED]
Otkerstr. 28, 81547 Muenchen, Germany. Tel +49 89 69370185

"Captain, this ship will not sustain the forming of the cosmos." B'Elana Torres



[HACKERS] Bug in datetime formatting for very large years

2001-01-13 Thread Oliver Elphick

If the year is very large, datetime formatting overflows its limits and
gives very weird results.  Either the formatting needs to be improved
or there should be an upper bound on the year.


bray=# select version();
                              version
-------------------------------------------------------------------
 PostgreSQL 7.1beta1 on i686-pc-linux-gnu, compiled by GCC 2.95.3
(1 row)


bray=#  select 'now'::datetime + '10y'::interval;
      ?column?
---------------------
 102001-01-13 22:128
(1 row)

bray=#  select 'now'::datetime + '100y'::interval;
      ?column?
---------------------
 1002001-01-13 22:32
(1 row)
-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 "Wherefore let him that thinketh he standeth take heed 
  lest he fall."I Corinthians 10:12 





Re: [HACKERS] Re: AW: Re: GiST for 7.1 !!

2001-01-13 Thread selkovjr

I am sorry I wasn't listening -- I might have helped by at least
answering the direct questions and by testing. I have, in fact,
positively tested both my and Oleg's code in today's snapshot on a
number of Linux and FreeBSD systems. I failed on this one:

SunOS typhoon 5.7 Generic_106541-10 sun4u sparc SUNW,Ultra-1

on which configure didn't detect the absence of libz.so

I don't think my applications are affected by Oleg's changes. But I
understand the tension that occurred during the past few days, and even
though I am now satisfied with the agreement you seem to have
achieved, I could hardly have influenced it in any reasonable way. I
am as sympathetic with the need for smooth and solid code control as
I am with promoting great features (or, in this case, just keeping a
feature alive). So, if I had been around at the time I was asked to vote,
I wouldn't have known how. I usually find it difficult to take sides in
"Motherhood vs. Clean Air" debates. It is true that throwing a core
during a regression test does give one a black eye. It is also true
that there are probably hundreds of potential users, ignorant of
GiST, trying to invent surrogate solutions. As far as I am concerned,
I will be satisfied with whatever solution you arrive at. I am pleased
that in this neighborhood, reason prevails over faith.

--Gene



[HACKERS] primary keys

2001-01-13 Thread Felipe Diaz Cardona



Hi.

Does anyone know an SQL statement to find the primary keys in a table?

I'm using PostgreSQL v7.0 (Mandrake 7.2).


Re: [HACKERS] CRCs

2001-01-13 Thread Nathan Myers

On Sat, Jan 13, 2001 at 12:49:34PM -0500, Tom Lane wrote:
> [EMAIL PROTECTED] (Nathan Myers) writes:
> > ... for systems that cannot provide strict write ordering (e.g., 
> > most PCs) it would be helpful to be able to detect that the database 
> > has become corrupted.  In Vadim's example above, if the index were to
> > contain not only the heap blocks' numbers, but also their CRCs, then 
> > the corruption could be detected when the index is used.  ...
> 
> A row-level CRC might be useful for this, but it would have to be on
> the data only (not the tuple commit-status bits).  It'd be totally
> impractical with a block CRC, I think.   ...

I almost wrote about an indirect scheme to share the expected block CRC
value among all the index entries that need it, but thought it would 
distract from the correct approach:

> Instead of a partial row CRC, we could just as well use some other bit
> of identifying information, say the row OID.   ...

Good.  But, wouldn't the TID be more specific?  True, it would be pretty
unlikely for a block to have an old tuple with the right OID in the same
place.  Belt-and-braces says check both :-).  Either way, the check seems 
independent of block CRCs.   Would this check be simple enough to be safe
for 7.1? 

Nathan Myers
[EMAIL PROTECTED]



Re: [HACKERS] CRCs

2001-01-13 Thread Horst Herb

On Sunday 14 January 2001 04:49, Tom Lane wrote:

> A row-level CRC might be useful for this, but it would have to be on
> the data only (not the tuple commit-status bits).  It'd be totally
> impractical with a block CRC, I think.  To do it with a block CRC, every
> time you changed *anything* in a heap page, you'd have to find all the
> index items for each row on the page and update their copies of the
> heap block's CRC.  That could easily turn one disk-write into hundreds,
> not to mention the index search costs.  Similarly, a check value that is
> affected by tuple status updates would enormously increase the cost of
> marking tuples committed or dead.

Ah, finally. Looks like we are moving in circles (or spirals ;-) ). Remember
that some 3-4 months ago I requested help from this list several times
regarding a trigger function that computes a CRC over only the user-defined
attributes? I wrote one in pgtcl, which was slow, and I had trouble with the C
equivalent due to lack of documentation. I still believe this is useful enough
that it should be an option in Postgres and not a user-defined function.

Horst



[HACKERS] Transactions vs speed.

2001-01-13 Thread mlw

I have a question about Postgres:

Take this update:
update table set field = 'X' ;


This is a very expensive operation when the table has millions of rows;
it takes over an hour. If I dump the database, process the data with
perl, and then reload it, the whole thing takes minutes. Most of the time
is spent creating indexes.

I am not asking for a feature; I am just musing.

I have a database update procedure which has to merge our data with that
of more than one third party. It takes 6 hours to run.

Do you guys know of any tricks that would allow postgres to operate really
fast under the assumption that it is operating on tables which are not
otherwise in use?  LOCK does not seem to make much difference.

Any bit of info would be helpful.

-- 
http://www.mohawksoft.com



[HACKERS] Re: RPMs (was Re: Re: Beta2 ... ?)

2001-01-13 Thread Lamar Owen

On Sat, 13 Jan 2001, Thomas Lockhart wrote:
> > It's pretty dramatic to get the 'You don't have permissions to install'
> > message from the perl 'make install' when I am performing the build (and

> I have been having trouble with one of the package files disappearing
> from the temporary installation area during the rpm build of pg-7.0.3 for
> Mandrake on Mandrake-7.2 using a recent source RPM. I haven't tracked
> down the problem yet :(

To say that I am interested in the outcome would be an understatement.

I fixed my perl problems by invoking the Makefile (not the GNUmakefile) for the
second install and setting PREFIX properly then.  Yes, the second install
phase.  I'll see if I can't fix this a little better -- but my goal was to get
a build so that I can start fixing some things.

Most of the stuff I am having to 'fix' is kludgery that Peter's config and
makefile changes are eliminating :-).

Having to regen patches and rebuild from scratch each time I change the
spec makes it time-consuming, but the single DESTDIR addition makes for a
much easier and cleaner build.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [HACKERS] Re: AW: Re: GiST for 7.1 !!

2001-01-13 Thread Tom Lane

[EMAIL PROTECTED] writes:
> I am sorry I wasn't listening -- I may have helped by at least
> answering the direct questions and by testing. I have, in fact,
> positively tested both my and Oleg's code in the today's snapshot on a
> number of linux and FreeBSD systems. I failed on this one:

> SunOS typhoon 5.7 Generic_106541-10 sun4u sparc SUNW,Ultra-1

> on which configure didn't detect the absence of libz.so

Really?  Details please.  It's hard to see how it could have messed
up on that.

regards, tom lane



Re: [HACKERS] Transactions vs speed.

2001-01-13 Thread Alfred Perlstein

* mlw <[EMAIL PROTECTED]> [010113 17:19] wrote:
> I have a question about Postgres:
> 
> Take this update:
>   update table set field = 'X' ;
> 
> 
> This is a very expensive function when the table has millions of rows,
> it takes over an hour. If I dump the database, and process the data with
> perl, then reload the data, it takes minutes. Most of the time is used
> creating indexes.
> 
> I am not asking for a feature, I am just musing. 

Well, you really haven't said whether you've tuned your database at all. The
way postgresql ships by default, it doesn't use a very large shared memory
segment, and all the writing (at least in 7.0.x) is done synchronously.

There's a boatload of email out there that explains various ways to tune
the system.  Here are some of the flags that I use:

-B 32768   # uses over 300megs of shared memory
-o "-F" # tells database not to call fsync on each update

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[HACKERS] Re: Transactions vs speed.

2001-01-13 Thread mlw

Alfred Perlstein wrote:
> 
> * mlw <[EMAIL PROTECTED]> [010113 17:19] wrote:
> > I have a question about Postgres:
> >
> > Take this update:
> >   update table set field = 'X' ;
> >
> >
> > This is a very expensive function when the table has millions of rows,
> > it takes over an hour. If I dump the database, and process the data with
> > perl, then reload the data, it takes minutes. Most of the time is used
> > creating indexes.
> >
> > I am not asking for a feature, I am just musing.
> 
> Well you really haven't said if you've tuned your database at all, the
> way postgresql ships by default it doesn't use a very large shared memory
> segment, also all the writing (at least in 7.0.x) is done syncronously.
> 
> There's a boatload of email out there that explains various ways to tune
> the system.  Here's some of the flags that I use:
> 
> -B 32768   # uses over 300megs of shared memory
> -o "-F" # tells database not to call fsync on each update

I have a good number of buffers (not 32768, but a few), and I have the "-F"
option.


-- 
http://www.mohawksoft.com



[HACKERS] Re: Transactions vs speed.

2001-01-13 Thread Alfred Perlstein

* mlw <[EMAIL PROTECTED]> [010113 19:37] wrote:
> Alfred Perlstein wrote:
> > 
> > * mlw <[EMAIL PROTECTED]> [010113 17:19] wrote:
> > > I have a question about Postgres:
> > >
> > > Take this update:
> > >   update table set field = 'X' ;
> > >
> > >
> > > This is a very expensive function when the table has millions of rows,
> > > it takes over an hour. If I dump the database, and process the data with
> > > perl, then reload the data, it takes minutes. Most of the time is used
> > > creating indexes.
> > >
> > > I am not asking for a feature, I am just musing.
> > 
> > Well you really haven't said if you've tuned your database at all, the
> > way postgresql ships by default it doesn't use a very large shared memory
> > segment, also all the writing (at least in 7.0.x) is done syncronously.
> > 
> > There's a boatload of email out there that explains various ways to tune
> > the system.  Here's some of the flags that I use:
> > 
> > -B 32768   # uses over 300megs of shared memory
> > -o "-F" # tells database not to call fsync on each update
> 
> I have a good number of buffers (Not 32768, but a few), I have the "-F"
> option.

Explain a "good number of buffers" :)

Also, when was the last time you ran vacuum on this database?

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] Transactions vs speed.

2001-01-13 Thread Tom Lane

mlw <[EMAIL PROTECTED]> writes:
> Take this update:
>   update table set field = 'X' ;
> This is a very expensive function when the table has millions of rows,
> it takes over an hour. If I dump the database, and process the data with
> perl, then reload the data, it takes minutes. Most of the time is used
> creating indexes.

Hm.  CREATE INDEX is well known to be faster than incremental building/
updating of indexes, but I didn't think it was *that* much faster.
Exactly what indexes do you have on this table?  Exactly how many
minutes is "minutes", anyway?

You might consider some hack like

drop inessential indexes;
UPDATE;
recreate dropped indexes;

"inessential" being any index that's not UNIQUE (or even the UNIQUE
ones, if you don't mind finding out about uniqueness violations at
the end).

Might be a good idea to do a VACUUM before rebuilding the indexes, too.
It won't save time in this process, but it'll be cheaper to do it then
rather than later.
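
A minimal libpq sketch of that sequence; the connection string and the
table/column/index names here ("mydb", "bigtab", "field", "other_col",
"bigtab_other_idx") are hypothetical and would need to match the real schema:

/*
 * Drop the inessential index, run the bulk UPDATE, VACUUM, then rebuild
 * the index with CREATE INDEX.  Illustrative only.
 */
#include <stdio.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    run(conn, "DROP INDEX bigtab_other_idx");   /* an inessential, non-UNIQUE index */
    run(conn, "UPDATE bigtab SET field = 'X'"); /* the expensive bulk update */
    run(conn, "VACUUM bigtab");                 /* reclaim dead rows before rebuilding */
    run(conn, "CREATE INDEX bigtab_other_idx ON bigtab (other_col)");

    PQfinish(conn);
    return 0;
}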

regards, tom lane

PS: I doubt transactions have anything to do with it.



Re: [HACKERS] CRCs

2001-01-13 Thread Tom Lane

[EMAIL PROTECTED] (Nathan Myers) writes:
>> Instead of a partial row CRC, we could just as well use some other bit
>> of identifying information, say the row OID.   ...

> Good.  But, wouldn't the TID be more specific?

Uh, the TID *is* the pointer from index to heap.  There's no redundancy
that way.

> Would this check be simple enough to be safe for 7.1? 

It'd probably be safe, but adding OIDs to index tuples would force an
initdb, which I'd rather avoid at this stage of the cycle.

regards, tom lane



[HACKERS] RPMS for 7.1beta3 in progress.

2001-01-13 Thread Lamar Owen

Well, I finally got a good build of 7.1beta3 in the RPM build environment. 
Woohoo.

Most regression tests pass -- 10 of 76 fail in serial mode.  I'll be analyzing
the diffs tomorrow afternoon to see what's going on, then will be tidying up
the RPMset for release.  Tidy or no, a release will happen before midday Monday
-- if I am really patient I may upload the RPM's from home, but don't hold your
breath.

The documentation in README.rpm-dist will be needing an overhaul -- so, for the
first beta RPM release that file will be INCORRECT unless I get really
industrious in the afternoon :-).

There are substantial differences -- and that's BEFORE I reorg the packages! 
But, now's a good time to reorg, since whole directories are moving around...

Upgrading from prior releases will be unsupported for this first beta.

More details later.  Time to go to bed...
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11