>Is anyone working to fix it? On some slower servers this is really
>annoying (I know flash would 'fix' it).
Not that I am aware of; it is really annoying on older
hardware.
Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hello Casper,
Thursday, December 14, 2006, 6:40:31 PM, you wrote:
>>Bill Sommerfeld wrote:
>>> Similarly, the bulk of the synchronous I/O done during the import of SMF
>>> manifests early in boot after an install or upgrade is wasted effort.
>>
>>I've done hundreds of installs. Empirically, my
>Bill Sommerfeld wrote:
>> Similarly, the bulk of the synchronous I/O done during the import of SMF
>> manifests early in boot after an install or upgrade is wasted effort.
>
>I've done hundreds of installs. Empirically, my observation is that
>the SMF manifest import scales well with processors
Roch - PAE wrote:
Right on. And you might want to capture this in a blog for
reference. The permalink will be quite useful.
such as:
http://blogs.sun.com/erickustarz/entry/zil_disable
?
We did have a use case for zil synchronicity, which was a
big user-controlled transaction:
turn
Bill Sommerfeld wrote:
Similarly, the bulk of the synchronous I/O done during the import of SMF
manifests early in boot after an install or upgrade is wasted effort.
I've done hundreds of installs. Empirically, my observation is that
the SMF manifest import scales well with processors. In o
Anton B. Rang wrote:
> The ZIL is a necessary part of ZFS. Just because the ZFS file structure will
> be consistent after a system crash even with the ZIL disabled does not mean
> that disabling it is safe!
Is there a list of battery-backed RAID controllers supported by Solaris x86
somewhere? Doe
On Thu, 2006-12-14 at 11:33 +0100, Roch - PAE wrote:
> We did have a use case for zil synchronicity, which was a
> big user-controlled transaction:
>
> turn zil off
> do tons of things to the filesystem.
> big sync
> turn zil back on
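The quoted workflow can be sketched with the per-dataset `sync` property that later OpenZFS releases expose (at the time of this thread it was the system-wide `zil_disable` tunable instead); the dataset name `tank/build` is hypothetical:

```shell
# Sketch of the "big user-controlled transaction" above, assuming an
# OpenZFS system with the sync= dataset property available.
zfs set sync=disabled tank/build    # turn zil off (for this dataset only)
# ... do tons of things to the filesystem ...
sync                                # big sync: flush the dirty data
zfs set sync=standard tank/build    # turn zil back on
```

Scoping the change to one dataset limits the blast radius: other filesystems in the pool keep normal synchronous semantics while the bulk load runs.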
Yep. The bulk of the "heavy lifting" o
Right on. And you might want to capture this in a blog for
reference. The permalink will be quite useful.
We did have a use case for zil synchronicity, which was a
big user-controlled transaction:
turn zil off
do tons of things to the filesystem.
big sync
turn zil
> Also, (Richard can address this better than I) you may want to disable
> the ZIL or have your array ignore the write cache flushes that ZFS issues.
The latter is quite a reasonable thing to do, since the array has
battery-backed cache.
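Having the host stop issuing cache flushes, rather than configuring the array to ignore them, can be sketched as follows; both tunables are real, but whether they are appropriate depends entirely on the cache actually being battery-backed:

```shell
# Solaris: tell ZFS not to send SYNCHRONIZE CACHE to devices
# (takes effect after reboot).
echo "set zfs:zfs_nocacheflush = 1" >> /etc/system

# OpenZFS on Linux: the analogous module parameter, settable live.
# echo 1 > /sys/module/zfs/parameters/zfs_nocacheflush
```

Unlike disabling the ZIL, this keeps synchronous semantics intact as long as the array's cache really is non-volatile.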
The ZIL should almost *never* be disabled. The only r
Kory Wheatley wrote:
The LUNs will be on separate "SPA" controllers, not all on
the same controller, so that's why I thought if we split
our data onto different disks and ZFS Storage Pools we would
get better IO performance. Correct?
The way to think about it is that, in general, for best
performance
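The alternative to splitting data across separate pools per controller is a single pool whose vdevs span both controllers, letting ZFS stripe I/O across all of them. A minimal sketch, with hypothetical Solaris device names (`c1*` on one controller, `c2*` on the other):

```shell
# One pool, vdevs drawn from both controllers; ZFS dynamically
# stripes writes across every top-level vdev in the pool.
zpool create tank c1t0d0 c1t1d0 c2t0d0 c2t1d0

# Watch per-device load to confirm I/O spreads across controllers.
zpool iostat -v tank 5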
The LUNs will be on separate "SPA" controllers, not all on the same controller,
so that's why I thought if we split our data onto different disks and ZFS Storage
Pools we would get better IO performance. Correct?
This message posted from opensolaris.org
> We're looking for pure performance.
>
> What will be contained in the LUNs is Student User
> account files that they will access and Department
> Share files like MS Word documents, Excel files, and
> PDFs. There will be no applications on the ZFS
> Storage pools. Does this help on what
> s
Also there will be no NFS services on this system.
We're looking for pure performance.
What will be contained in the LUNs is Student User account files that they will
access and Department Share files like MS Word documents, Excel files, and PDFs.
There will be no applications on the ZFS Storage pools. Does this help
on what strategy might
Are you looking purely for performance, or for the added reliability that ZFS
can give you?
If the latter, then you would want to configure across multiple LUNs in either
a mirrored or RAID configuration. This does require sacrificing some storage in
exchange for the peace of mind that any “si
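The reliability option described above can be sketched as mirror pairs of LUNs, which gives ZFS a redundant copy it can use to self-heal corruption detected by checksums; the device names are hypothetical:

```shell
# Two mirror vdevs, each pairing LUNs from different controllers,
# so the pool survives the loss (or silent corruption) of any one LUN.
zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0

# Verify the layout and health of both mirror vdevs.
zpool status tank
```

The trade-off is the one stated in the message: half the raw capacity, in exchange for ZFS being able to repair bad blocks from the surviving side of a mirror rather than merely reporting them.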