+--
| On 2010-09-16 18:08:46, Ray Van Dolson wrote:
|
| Best practice in Solaris 10 U8 and older was to use a mirrored ZIL.
|
| With the ability to remove slog devices in Solaris 10 U9, we're
| thinking we may get more ba
Best practice in Solaris 10 U8 and older was to use a mirrored ZIL.
With the ability to remove slog devices in Solaris 10 U9, we're
thinking we may get more bang for our buck to use two slog devices for
improved IOPS performance instead of needing the redundancy so much.
Any thoughts on this?
If
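For reference, the two layouts being weighed look roughly like this (a sketch only; the pool name 'tank' and the cXtYd0 devices are placeholders, not taken from the post):

   # Mirrored slog -- the pre-U9 best practice; survives the loss of one log device:
   zpool add tank log mirror c1t2d0 c1t3d0

   # Two independent slog devices -- writes are spread across both for more IOPS,
   # but there is no redundancy:
   zpool add tank log c1t2d0 c1t3d0

   # With the log-device removal support mentioned above (pool version 19),
   # a slog can be removed again if needed:
   zpool remove tank c1t2d0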
On Wed, 15 Sep 2010, Brandon High wrote:
When using compression, are the on-disk record sizes determined before
or after compression is applied? In other words, if record size is set
to 128k, is that the amount of data fed into the compression engine,
or is the output size trimmed to fit? I thin
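One way to poke at this empirically (a sketch; the dataset and sample file are placeholders) is to compare a file's logical size with the space actually allocated on a compressed dataset:

   zfs create -o recordsize=128k -o compression=on tank/comptest
   cp /var/adm/messages /tank/comptest/sample
   sync
   ls -l /tank/comptest/sample        # logical file size
   du -h /tank/comptest/sample        # blocks actually allocated after compression
   zfs get recordsize,compressratio tank/comptest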
David Dyer-Bennet wrote:
> Sure, if only a single thread is ever writing to the
> disk store at a time.
>
> This situation doesn't exist with any kind of
> enterprise disk appliance,
> though; there are always multiple users doing stuff.
Ok, I'll bite.
Your assertion seems to be that "any kind of
> "dd" == David Dyer-Bennet writes:
dd> Sure, if only a single thread is ever writing to the disk
dd> store at a time.
video warehousing is a reasonable use case that will have small
numbers of sequential readers and writers to large files. virtual
tape library is another obviously
On Thu, Sep 16, 2010 at 08:15:53AM -0700, Rich Teer wrote:
> On Thu, 16 Sep 2010, Erik Ableson wrote:
>
> > OpenSolaris snv129
>
> Hmm, SXCE snv_130 here. Did you have to do any server-side tuning
> (e.g., allowing remote connections), or did it just work out of the
> box? I know that Sendmail
On Thu, Sep 16, 2010 at 8:21 AM, Mike DeMarco wrote:
> What are the ramifications of changing the recordsize of a ZFS filesystem
> that already has data on it?
>
> I want to tune down the recordsize, to a size more in line with the read
> size, to speed up very small reads.
> Can I do this
On Thu, 16 Sep 2010, Erik Ableson wrote:
> OpenSolaris snv129
Hmm, SXCE snv_130 here. Did you have to do any server-side tuning
(e.g., allowing remote connections), or did it just work out of the
box? I know that Sendmail needs some gentle persuasion to accept
remote connections out of the box;
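For what it's worth, the server-side pieces usually come down to something like this (a sketch; the dataset name and network are placeholders):

   zfs set sharenfs=on tank/export           # or e.g. sharenfs=rw=@192.168.1.0/24
   svcs network/nfs/server                   # is the NFS server service online?
   svcadm enable -r network/nfs/server       # enable it (and its dependencies) if not
   share                                     # confirm the dataset shows up as exported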
What are the ramifications of changing the recordsize of a ZFS filesystem that
already has data on it?
I want to tune down the recordsize, to a size more in line with the read size,
to speed up very small reads. Can I do this on a filesystem that has
data already on it, and how does it ef
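For context, recordsize only applies to files written after the property is changed; files already on disk keep their existing block size until they are rewritten. A sketch (dataset and file names are placeholders):

   zfs set recordsize=8k tank/data          # affects newly written files only
   # Existing files keep their old block size until rewritten, e.g.:
   cp /tank/data/table.db /tank/data/table.db.new
   mv /tank/data/table.db.new /tank/data/table.db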
On Wed, September 15, 2010 16:18, Edward Ned Harvey wrote:
> For example, if you start with an empty drive, and you write a large amount
> of data to it, you will have no fragmentation. (At least, no significant
> fragmentation; you may get a little bit based on random factors.) As life
> goe
We have the following setup configured. The drives are running on a couple PAC
PS-5404s. Since these units do not support JBOD, we have configured each
individual drive as a RAID0 and shared out all 48 RAID0's per box. This is
connected to the solaris box through a dual port 4G Emulex fibrechan
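(For illustration only, since the preview cuts off here: with every drive exported as its own single-drive RAID0, the Solaris host just sees 48 ordinary fibre-channel LUNs. Device names and vdev grouping below are hypothetical.)

   format < /dev/null                       # the 48 single-drive LUNs appear as cXtYd0 disks
   zpool create tank \
       raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 \
       raidz2 c4t6d0 c4t7d0 c4t8d0 c4t9d0 c4t10d0 c4t11d0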
On Thu, 16 Sep 2010, erik.ableson wrote:
> And for reference, I have a number of 10.6 clients using NFS for
> sharing Fusion virtual machines, iTunes library, iPhoto libraries etc.
> without any issues.
Excellent; what OS is your NFS server running?
--
Rich Teer, Publisher
Vinylphile Magazine
We downloaded zilstat from
http://www.richardelling.com/Home/scripts-and-programs-1 but we never could get
the script to run. We are not really sure how to debug. :(
./zilstat.ksh
dtrace: invalid probe specifier
#pragma D option quiet
inline int OPT_time = 0;
inline int OPT_txg = 0;
inline
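A couple of things worth checking when DTrace rejects a script with "invalid probe specifier" (a sketch; the fbt probe named below is an assumption about what zilstat uses, not something taken from the post):

   pfexec ./zilstat.ksh                     # DTrace needs elevated privileges
   dtrace -l -m zfs | grep -i zil           # which zil-related probes does this kernel expose?
   dtrace -ln 'fbt::zil_commit:entry'       # verify a probe the script may depend on exists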
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB disks
(Seagate Constellation) and the pool seems sick now. The pool has four
raidz2 vdevs (8+2) where the first set of 10 disks were replaced a few
months ago. I replaced two disks in the second set (c2t0d0, c3t0d0) a
coupl
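The usual replace-and-verify cycle looks something like this (a sketch; the pool name 'tank' is a placeholder, the device names are the ones mentioned above):

   zpool replace tank c2t0d0         # resilver onto the new 2 TB disk in that slot
   zpool status -v tank              # watch resilver progress and per-device errors
   iostat -xne 1                     # look for a device with high service times or errors
   fmdump -e | tail                  # recent fault-management error events
   zpool clear tank                  # reset error counters once the resilver completes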
On 15 Sept. 2010, at 22:04, Mike Mackovitch wrote:
> On Wed, Sep 15, 2010 at 12:08:20PM -0700, Nabil wrote:
>> any resolution to this issue? I'm experiencing the same annoying
>> lockd thing with mac osx 10.6 clients. I am at pool ver 14, fs ver
>> 3. Would somehow going back to the earlier 8/
My understanding of the read cache is that the L2ARC is filled by a thread that
reads buffers out of the ARC. Hence my question:
If primarycache is set to 'metadata', will the L2ARC get to cache user data?
Similarly, what if primarycache is set to 'none'?
Thanks,
--Jackie
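If that understanding is right, then anything primarycache keeps out of the ARC can never be fed into the L2ARC, whatever secondarycache allows. The relevant knobs (dataset name is a placeholder):

   zfs set primarycache=metadata tank/db    # ARC caches only metadata for this dataset
   zfs set secondarycache=all tank/db       # L2ARC eligibility -- but user data that never
                                            #   enters the ARC has nothing to be evicted from it
   zfs get primarycache,secondarycache tank/db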