On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling <richard.ell...@sun.com> wrote:
> Scott Laird wrote:
>>
>> On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai
>> <mritun+opensola...@gmail.com> wrote:
>>
>>>
>>> As for source, here you go :)
>>>
>>>
>>> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650
>>>
>>
>> Thanks.  It's in the middle of get_replication, so I suspect it's a
>> bug--zpool tries to check on the replication status of existing vdevs
>> and croaks in the process.  As it turns out, I was able to add the
>> cache devices just fine once the resilver completed.
>>
>
> It is a bug because the assertion failed.  Please file one.
> http://en.wikipedia.org/wiki/Assertion_(computing)
> http://bugs.opensolaris.org
>
>> Out of curiosity, what's the easiest way to shove a file into the
>> L2ARC?  Repeated reads with dd if=file of=/dev/null don't appear to
>> do the trick.
>>
>
> To put something in the L2ARC, it has to be purged from the ARC.
> So until you run out of space in the ARC, nothing will be placed into
> the L2ARC.
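
(For reference, the add that eventually worked was just the ordinary
cache-vdev form, roughly:

  # pool and device names here are only placeholders
  zpool add tank cache c5t0d0 c5t1d0

and the same sort of command is what tripped the assertion while the
resilver was still running.)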

I have a ~50 GB working set and 8 GB of RAM, so I'm out of space in
my ARC.  My read rate is low enough for the disks to keep up, but I'd
like to see lower latency.  Also, 30 GB SSDs were cheap last week :-).
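
For what it's worth, I've been watching the ARC and L2ARC sizes with
kstat while testing, along these lines (assuming the usual arcstats
kstat names):

  # current ARC size, ARC target size, and L2ARC size, in bytes
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:l2_size

and the ARC stays pegged near its limit while l2_size barely grows.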

My big problem is that dd if=file of=/dev/null doesn't appear to
actually read the whole file--I can loop over 50 GB of data in about
20 seconds (roughly 2.5 GB/sec apparent throughput) while doing under
100 MB/sec of actual disk I/O.  Does Solaris's dd have some sort of
of=/dev/null optimization?  Adding conv=swab seems to make it work
better, but I'm still only seeing write rates of ~1 MB/sec per SSD,
even though the SSDs are mostly empty.
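
For concreteness, the sort of loop I mean looks something like this
(the path and block size are just placeholders for my working set):

  # read every file end to end with a sane block size,
  # rather than dd's 512-byte default
  find /tank/data -type f -exec dd if={} of=/dev/null bs=128k \;

Either way, the reads "finish" far faster than the disks could
actually be delivering the data.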


Scott
