On 4/20/07, Tim Thomas <[EMAIL PROTECTED]> wrote:
My initial reaction is that the world has got by without file systems
that can do this for a long time...so I don't see the absence of this as
a big deal. On the other hand, it's hard to argue against a feature that
I admit that this is "typically
Torrey McMahon wrote:
> Lyndon Nerenberg wrote:
>
>>> But a tape in a van is a very high bandwidth connection :)
>>
>>
>> Australia used to get its usenet feed on FedExed 9-tracks.
>
>
> But you had to put them in the reader upside down and read them back
> to front.
>
>
Not necessary, upside down
Don't take this as gospel, and someone chime in if I'm off here, but I
just saw an ARC case about this issue
The firmware in the T3 line might already honor the NV_SYNC request. If it
doesn't, then we'll have a conf file where you can set it per array.
Also, I would think the module or conf file w
Lyndon Nerenberg wrote:
But a tape in a van is a very high bandwidth connection :)
Australia used to get its usenet feed on FedExed 9-tracks.
But you had to put them in the reader upside down and read them back to
front.
Spencer,
Summary: I am not sure that v4 would have a significant
advantage over v3 or v2 in all environments. I just believe it can
have a significant advantage (with no or minimal drawbacks),
and one should use it if at all possible to verify
that it is not the bottleneck.
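A minimal sketch of how one might verify the NFS version actually in use
on a Solaris client (the server name and mount point below are only
placeholders, not taken from the original setup):

  # explicitly request an NFSv4 mount
  mount -F nfs -o vers=4 server:/export /mnt

  # report the protocol version negotiated for an existing mount
  nfsstat -m /mnt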
I'm potentially stepping into areas I don't quite know enough about, but
others can jump in if I speak any mistruths :)
More inline...
Georg-W. Koltermann wrote:
Hi,
ok I know zfs-fuse is still incomplete and performance has not been considered,
but still, before I'm going to use it for m
I have a master system w/ a few non-global zones on ZFS filesystems running Sol 10
11/06. I need to build a few systems using the master system as the master
image.
I know I can't flasharchive the whole system. I could just flasharchive the
non-ZFS filesystems and use that to jumpstart the other boxes. Then use zfs send |
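A minimal sketch of the send/receive step the last sentence alludes to
(the pool, dataset, and host names are made up for illustration):

  # snapshot the zone filesystem on the master
  zfs snapshot tank/zones@master

  # stream it to the freshly jumpstarted box and receive it into the pool there
  zfs send tank/zones@master | ssh newhost zfs receive tank/zones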
Hi,
ok, I know zfs-fuse is still incomplete and performance has not been considered,
but still, before I start using it for my /home I wanted a rough estimate.
Another benchmark already asserted that zfs by itself, on Solaris, is a very
fast beast
(http://cmynhier.blogspot.com/2006/05/zfs-ben
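For a very rough estimate of the kind asked for here, a large sequential
write can simply be timed (the path and size are arbitrary examples, and
this measures only one workload):

  # write 1 GiB sequentially and time it
  time dd if=/dev/zero of=/home/zfs-fuse-test bs=1M count=1024

  rm /home/zfs-fuse-test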
On Apr 21, 2007, at 9:46 AM, Andy Lubel wrote:
so what you are saying is that if we were using NFS v4 things
should be dramatically better?
I certainly don't support this assertion (if it was being made).
NFSv4 does have some advantages from the perspective of enabling
more aggressive file
On 21 Apr 2007, at 04:42, Rich Brown wrote:
Hi,
so far, discussing filesystem code via opensolaris
means a certain
"specialization", in the sense that we do have:
zfs-discuss
ufs-discuss
fuse-discuss
Likewise, there are ZFS, NFS and UFS communities
(though I can't quite
figure out if we hav
The controller unit contains all of the cache.
On 4/21/07, Albert Chin
<[EMAIL PROTECTED]> wrote:
On Thu, Mar 22, 2007 at 01:21:04PM -0700, Frank Cusack wrote:
> Does anyone have a 6140 expansion shelf that they can hook directly to
> a host? Just wondering if this configuration works. Previo
See my blog on this topic:
http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
The quick summary is that if there is more than one vdev comprising the
pool, the copies will be spread across multiple vdevs. If there is only
one, then the copies are spread out physically (at least by
The filesystem allows you to keep two or more copies of the data written. What I'm
interested to know is how the placement of the copies is done. Consider a
JBOD pool: having set the filesystem to keep two copies, will the copies be
actively placed on two different volumes, as such allowing to su
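A minimal illustration of the property being discussed (the dataset name
is hypothetical):

  # ask ZFS to keep two copies of every block subsequently written here
  zfs set copies=2 tank/home

  # confirm the setting
  zfs get copies tank/home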
So what you are saying is that if we were using NFS v4, things should be
dramatically better?
Do you think this applies to any NFS v4 client, or only Sun's?
-Original Message-
From: [EMAIL PROTECTED] on behalf of Erblichs
Sent: Sun 4/22/2007 4:50 AM
To: Leon Koll
Cc: zfs-discuss@opensolar
On Sat, Apr 21, 2007 at 09:05:01AM +0200, Selim Daoud wrote:
> isn't there another flag in /etc/system to force zfs not to send flush
> requests to NVRAM?
I think it's zfs_nocacheflush=1, according to Matthew Ahrens in
http://blogs.digitar.com/jjww/?itemid=44.
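For reference, a sketch of how a tunable like that is set in /etc/system
(whether this particular tunable exists depends on the build, so verify
before relying on it):

  * tell ZFS not to issue cache-flush requests to the storage;
  * only appropriate when the array has battery-backed NVRAM
  set zfs:zfs_nocacheflush = 1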
> s.
>
>
> On 4/20/07, Marion Haka
Leon Koll,
As a knowledgeable outsider I can say something.
The benchmark (SFS) page specifies NFSv3/v2 support, so I question
whether you ran NFSv4. I would expect a major change in
performance just from moving to NFS version 4 with ZFS.
The benchmark seems
Hi Lori,
Thanks to you and your team for posting the zfs boot image kit. I was able
to jumpstart a VMWare virtual machine using a Nevada b62 image patched with
your conversion kit and it went very smoothly.
Here is the profile that I used:
# Jumpstart profile for VMWare image w/ two emulated
Roch,
isn't there another flag in /etc/system to force zfs not to send flush
requests to NVRAM?
s.
On 4/20/07, Marion Hakanson <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] said:
> We have been combing the message boards and it looks like there was a lot of
> talk about this interaction of zfs