>>>>> "k" == Khyron  <khyron4...@gmail.com> writes:

     k> FireWire is an Apple technology, so they have a vested
     k> interest in making sure it works well [...]  They could even
     k> have a specific chipset that they exclusively use in their
     k> systems,

yes, you keep repeating yourself, but there are only a few FireWire
host chips, like OHCI and Lynx, and Apple uses the same ones as
everyone else; no magic.  Why repeat such an elaborate fantasy out
loud with no reason to believe it beyond your own imagination?

I also tried to use FireWire on Solaris long ago and had a lot of
problems with it, both with the driver stack in Solaris and with the
embedded software inside a cheaper non-Oxford case (Prolific).  I
think y'all forum users should stick to SAS/SATA for external disks
and avoid FireWire and USB both.

Realize, though, that it is not just the chip driver but the entire
software stack that influences speed and reliability.  Even above
what you normally consider the FireWire stack, above all the
mid-layer and SCSI emulation stuff, Mac OS X for example is rigorous
about handling force-unmounting, both with umount -f and with disks
that go away without warning.  FreeBSD OTOH has major problems with
force-unmounting: it panics or waits forever.  Solaris has problems
too: zpool maintenance commands freeze, access freezes to pools
unrelated to the one whose device went away, and NFS stops serving
anything while any zpool is frozen.

This is a problem even if you don't make a habit of yanking disks,
because it can make diagnosing problems really difficult: what if
your case, like my non-Oxford one, has a firmware bug that makes it
freeze up sometimes?  Or a flaky power supply or a loose cable?  If
the OS does not stay up long enough to report the case detached, and
stay sane enough for you to figure out what makes it reattach
(waiting a while, rebooting the case, jiggling the power connector,
jiggling the data connector), then you will probably never figure out
what's wrong with it.  I didn't, for months.  With the same broken
case on a Mac, I'd have realized almost immediately that it sometimes
detaches itself for no reason and reattaches when I cycle its power
switch, but not when I plug/unplug its data cable and not when I
reboot the Mac, so I'd have known the case had buggy firmware.  With
Solaris I just get these craaaaaazy panic messages.  Once your
exception handling reaches a certain level of crappiness, you cannot
touch anything without everything collapsing.
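
To make concrete what I mean by the force-unmount test (the device
and mount names here are made up; this is just a sketch):

    # Mac OS X: force-detach a mounted external disk; the OS stays up
    umount -f /Volumes/ExtDisk
    # or equivalently:
    diskutil unmountDisk force disk2

    # Solaris: after the same disk vanishes, even unrelated commands
    # can hang; this is the freeze I am complaining about
    zpool status          # may never return while the pool is frozen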

And on Solaris all this freezing/panicking behavior depends a lot on
which disk driver you're using, while on Mac OS X it's, meh,
basically working the same for SATA, USB, FireWire, or NFS client.
You can also mount images with hdiutil over NFS without getting the
weird checksum errors or deadlocks you get with file- or
lofiadm-backed ZFS.  (The globalSAN iSCSI initiator is still a mess,
though, worse than all the other Mac disk drivers and worse than the
Solaris initiator.)
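
For what it's worth, the two setups I'm comparing look roughly like
this (the paths and pool names are invented):

    # Mac OS X: attach a disk image that lives on an NFS mount
    hdiutil attach /Volumes/nfsserver/images/scratch.dmg

    # Solaris: the roughly equivalent file-backed pool, which is
    # where I see the checksum errors and deadlocks
    lofiadm -a /export/images/scratch.img    # prints e.g. /dev/lofi/1
    zpool create scratch /dev/lofi/1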

I do not like the Mac OS much: it's slow, the hardware's overpriced
and fragile, the only people running it inside VMs are using Pirate
Bay copies, and I distrust Apple and strongly disapprove of their
master plan in both intent and practice, like the way they crippled
DTrace, the DisplayPort bullshit, and their terrible developer
relations (nontransparent last-minute API yanking, and ``agreements''
where you even have to agree not to discuss the agreement), and in
general their honing of a talent for manipulating people into
exploitable corners by slowly convincing them it's okay to feel lazy
and entitled.  But yes, they've got some things relevant to
server-side storage working better than Solaris does, like handling
flaky disks sanely and providing source for the stable supported
version of their OS, not just the development version.
