On 2012-04-26 14:47, Ian Collins wrote:
> I don't think it even made it into Solaris 10.

Actually, I see the kernel modules available in Solaris 10, in
several builds of OpenSolaris/SXCE, and in a current illumos build.

$ find /kernel/ /platform/ /usr/platform/ /usr/kernel/ | grep -i cachefs
/kernel/fs/amd64/cachefs
/kernel/fs/cachefs
/platform/i86pc/amd64/archive_cache/kernel/fs/amd64/cachefs
/platform/i86pc/archive_cache/kernel/fs/cachefs

$ uname -a
SunOS summit-blade5 5.11 oi_151a2 i86pc i386 i86pc

> It did have local backing store, but my current desktop has more RAM
> than that Solaris 8 box had disk and my network is 100 times faster, so
> it doesn't really matter any more.

Well, it depends on your working set size. A matter of scale.

If those researchers each dig into their own terabyte of data
("each" seems important here for conflict/sync resolution),
then even on a gigabit-connected workstation it would take
them a couple of hours just to download the dataset from
the server, let alone random-seek around it afterwards.

And you can easily have a local backing store for such
cachefs (or equivalent) today, even on an SSD or a few.

Just my 2c on a possible build of the cluster they wanted,
and perhaps some evolution/revival of cachefs adapted to
today's realities and demands - if it's deemed appropriate
for their task.

MY THEORY based on marketing info: I believe they could
set up a central fileserver with enough data space for
everyone, and each worker would use cachefs+nfs to access
it. Their actual working sets would be stored locally in
the cachefs backing stores on each workstation, so they
would not hammer the network and the fileserver until there
are writes to be replicated back to central storage.
They would have approximately one common share to mount ;)
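On a per-workstation level, that's just the stock cachefs tooling; a
minimal sketch (server name, share path, and the SSD-backed cache
directory are all hypothetical examples):

```shell
# Create a local cachefs backing store on fast storage (run once)
cfsadmin -c /ssd/.cfs_cache

# Mount the central NFS share through cachefs; reads are served from
# the local cache after first access, writes go through to the server
mount -F cachefs -o backfstype=nfs,cachedir=/ssd/.cfs_cache \
    server:/export/data /data
```

An /etc/vfstab entry with the same options would make it permanent, and
cfsadmin -s/-l can be used to tune and inspect the cache afterwards.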

//Jim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss