>>>>> "et" == Erik Trimble <[email protected]> writes:
et> In that case, can I be the first to say "PANIC! RUN FOR THE
et> HILLS!"
Erik, I thought most people already understood that pushing to the
public hg gate had stopped at b147, hence Illumos and OpenIndiana.
It's not that you're wrong, just that you should be in the hills by
now if you started out running.
The S11 Express release, without source and with a new license more
onerous than SXCE's, is dismal news, and the problems on other
projects and the waves of smart people leaving might be even more
dismal for OpenSolaris, since in the past there was a lot of
integration and a lot of forward progress. But what you were
specifically asking about, dates in hg, was already included in the
old bad news AFAIK. And anyway there was never complete source code,
nor source for all new work (drivers), nor source for the stable
branch, which has always been a serious problem.
The good news, to my view, is that Linux may actually be only about
one year behind (and sometimes ahead) on the non-ZFS features in
Solaris. FreeBSD is missing basically all of this (e.g. jails are
really not as thorough as VServer or LXC), but Linux is basically
there already:
* Xen support is better. Oracle is sinking Solaris Xen support in
favour of some old Oracle Xen kit based on Linux, I think? That's
disruptive and annoying for me, because I originally used
OpenSolaris Xen to get some isolation from the churn of Linux Xen,
but it means there's a fully-free-software path that's no more
annoying a transition than what Oracle's offering through
partially-free, uncertain-future tools.
* Infiniband support in Linux was always good. They don't have a
single COMSTAR-like system, which is too bad, but they have SCST for
SRP (non-IP RDMA SCSI, the COMSTAR target people say works with
VMware) and stgt for iSER (the one that works with the Solaris
initiator).
* instead of Crossbow they have RPS and RFS, which give some
performance boost with ordinary network cards, not just with 10gig
ones with flow caches. My understanding is hazy, but I think that
with an ordinary card you still have to take an IPI, yet the wrong
CPU touches hardly any of the packet, so you can still take
advantage of per-core caches kept hot with TCP-flow-specific
structures. I'm not a serious enough developer to know whether
RPS+RFS is more or less thorough than the Crossbow-branded stuff,
but it was committed to mainline at about the same time as Crossbow.
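As a concrete sketch of how that gets switched on (a configuration
fragment, not a recommendation: eth0, the 0xf CPU mask, and the table
sizes below are placeholder values; the sysfs and procfs knobs are the
ones described in the kernel's RPS/RFS scaling documentation):

```shell
# RPS: let CPUs 0-3 (mask 0xf) do protocol processing for rx queue 0
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# RFS: size the global socket-flow table, then give this queue a share
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 4096 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
```

Needs root, and the settings last only until reboot unless repeated
from a boot script.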
* Dreamhost has been selling Linux zones based on VServer for many
years, so there *is* a zones alternative on Linux, and better yet,
unlike the incompletely-delivered and eventually removed lx brand,
on Linux you get Linux zones with Linux packages, and nginx working
with epoll and sendfile (on Solaris, for me, event ports work but
sendfile does not). There's supposedly a total rewrite of VServer in
the works called LXC, so maybe that will be the truly good one. It
may take them longer to get sysadmin tools that match
zonecfg/zoneadm, but the path is set.
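The nginx side of that is just two directives; a minimal fragment
(the directive names are real nginx configuration, everything else
here is illustrative):

```nginx
# nginx on Linux: readiness notification and zero-copy file serving
events {
    use epoll;     # the event mechanism mentioned above
}
http {
    sendfile on;   # hand static files to the kernel via sendfile(2)
}
```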
* LTTng is an attempt at something DTrace-like. It's still
experimental, but it has the same ideas: large libraries of probes,
programs that cannot tell whether they're being traced, and
relatively sophisticated bundled analysis tools.
http://multivax.blogspot.com/2010/11/introduction-to-linux-tracing-toolkit.html
-- LTTng, a Linux DTrace competitor
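As a usage sketch only, assuming the lttng-tools command-line front
end (the session name and event are placeholders, it needs the LTTng
kernel modules and root, and the toolchain was still moving fast at
the time of writing, so treat this as a transcript-shaped sketch
rather than a stable interface):

```
lttng create demo                          # new tracing session
lttng enable-event --kernel sched_switch   # one probe from the library
lttng start
sleep 5                                    # workload of interest runs here
lttng stop
lttng view                                 # print the recorded events
lttng destroy demo
```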
The only thing missing is ZFS, and to me a good replacement for it
looks years away. I'm not excited about ocfs, or about kernel-module
ZFS ports taking advantage of the Linus kmod ``interpretation'' and
the GRUB GPLv3 patent protection.
Instead I'm hoping they skip this stage and style of storage and go
straight to something Lustre-like that supports snapshots. I've got
my eye on ceph, and on Lustre itself of course because of the IB
support. For example, perhaps in the end you will have 64-256MB of
atftpd-provided initramfs which never goes away, where init and sshd
and libc and all the complicated filesystem-related userspace live,
so there is no more problem of running /usr/sbin/zpool off of a
ZFS: you will always be able to administer your system even if
every ``disk'' is hung (or if cluster access is disrupted). And
there will not be a complexity difference between a laptop with
local disks and cluster storage: everything will be the full-on
complicated version.
I feel ZFS doesn't scale small enough for phones, nor big enough for
what people are already doing in data centers, so why not give up on
small completely and spend even more RAM and complexity in the
laptop case? And one of the most interesting appnotes to me about
ZFS is this one relling posted long ago:
http://docs.sun.com/app/docs/doc/820-7821/girgb?a=view
which is an extremely limited analog of what ceph and Lustre do,
where compute and storage nodes do not necessarily need to be
separate, and the storage nodes are interconnected by Ethernet or
IB, not by a dedicated fabric managed by proprietary,
non-introspectable software like SAS.
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
