Thanks for the reply.
I didn't get very much further.
Yes, ZFS loves raw devices. If I had two devices I wouldn't be in this mess.
I would simply install OpenSolaris on the first disk and add the second SSD to
the data pool with "zpool add mpool cache cxtydz". Notice that no slices or
On 30/03/2010 10:13, Erik Trimble wrote:
Add this zvol as the cache device (L2arc) for your other pool
# zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname
That won't work L2ARC devices can not be a ZVOL of another pool, they
can't be a file either. An L2ARC device must be a
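Darren's constraint points at the supported alternative: give the pool a real
block device (a whole disk, a partition, or a slice) as its cache. A minimal
sketch, with hypothetical device names:

```shell
# Create a mirrored data pool from two whole disks.
# c1t0d0 / c1t1d0 and the SSD slice c2t0d0s1 below are hypothetical.
zpool create tank mirror c1t0d0 c1t1d0

# Add an SSD slice as the cache (L2ARC) device; a slice is a plain
# block device, so this satisfies the restriction Darren describes.
zpool add tank cache c2t0d0s1

# The cache device should now appear under the "cache" section.
zpool status tank
```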
Thank you Erik for the reply.
I misunderstood Dan's suggestion about the zvol in the first place. Now you
make the same suggestion as well. Doesn't ZFS prefer raw devices? Following
this route, the zvol used as the cache device for tank goes through the ARC of
rpool, which doesn't seem right. Or is
Thank you Darren.
So no zvols as L2ARC cache devices. That leaves partitions and slices.
When I tried to add a second partition (the first contained the slices with the
root pool) as a cache device, zpool refused: it reported that the device
CxTyDzP2 (note P2) wasn't supported. Perhaps I did
Just clarifying Darren's comment - we got bitten by this pretty badly, so I
figure it's worth saying again here. ZFS will *allow* you to use a ZVOL of
one pool as a vdev in another pool, but it results in race conditions and an
unstable system (at least on Solaris 10 update 8).
We tried to use
> you can't use anything but a block device for the L2ARC device.

Sure you can...
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/039228.html
It even lives through a reboot (rpool is mounted before other pools):

  zpool create -f test c9t3d0s0 c9t4d0s0
  zfs create -V 3G rpool/cache
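The snippet is cut off after the zvol is created; presumably the linked post
then attaches it as the cache device. A sketch of that last step, assuming
the standard zvol block-device path under /dev/zvol/dsk:

```shell
# Attach the zvol carved out of rpool as the L2ARC of the "test" pool.
# This is exactly the arrangement Darren warns against elsewhere in the
# thread (race conditions, instability), even where zpool accepts it.
zpool add test cache /dev/zvol/dsk/rpool/cache
```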
On Mar 29, 2010, at 1:10 PM, F. Wessels wrote:
Hi,
as Richard Elling wrote earlier:
For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a typical home system)
Hi all,
yes, it works with partitions.
I think I made a typo during the initial testing of adding a partition as
cache, probably swapped the 0 for an o.
Tested with the b134 GUI and text installers on the x86 platform.
So here it goes:
Install OpenSolaris into a partition and leave some
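The working recipe boils down to one command; the device name below is
hypothetical (on x86, p1/p2/... name the fdisk partitions of a disk):

```shell
# Add the second fdisk partition of the SSD as the L2ARC of the data pool.
# c8t1d0p2 is a placeholder - substitute the partition zpool earlier
# rejected when the device name was mistyped.
zpool add mpool cache c8t1d0p2
```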
On Mon, Mar 29, 2010 at 01:10:22PM -0700, F. Wessels wrote:
The caiman installer allows you to control the size of the partition
on the boot disk, but it doesn't allow you (at least I couldn't
figure out how) to control the size of the slices. So you end up with
slice 0 filling the entire
On Tue, Mar 30, 2010 at 03:13:45PM +1100, Daniel Carosone wrote:
You can:
- install to a partition that's the size you want rpool
- expand the partition to the full disk
- expand the s2 slice to the full disk
- leave the s0 slice for rpool alone
- make another slice for l2arc in the
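A sketch of how those steps might look from the shell. Device names and
slice numbers are hypothetical, and the slice editing itself is done
interactively inside format(1M):

```shell
# After installing rpool to a deliberately undersized partition:

# 1. Grow the Solaris fdisk partition to the whole disk (interactive).
fdisk /dev/rdsk/c8t0d0p0

# 2. In format(1M) -> partition: grow slice 2 (the backup slice) to the
#    full disk, leave s0 (holding rpool) untouched, and create a new
#    slice, say s3, spanning the freed space.
format c8t0d0

# 3. Hand the new slice to the data pool as its L2ARC.
zpool add tank cache c8t0d0s3
```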