90 reads and not a single comment? Not the slightest hint of what's going on?
Thanks!
On 05/ 5/10 11:09 AM, Brad wrote:
I yanked a disk from the test pool to simulate a failure, to test hot spare failover
- everything seemed fine until the copy-back completed. The hot spare is still
showing as in use... do we need to remove the spare from the pool to get it to
detach?
Once the
Can anyone confirm my action plan is the proper way to do this? The reason I'm
doing this is that I want to create 2xraidz2 pools instead of expanding my current
2xraidz1 pool. So I'll create a 1xraidz2 vdev, migrate my current 2xraidz1
pool over, destroy that pool and then add it as a 1xraidz2 vdev.
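For what it's worth, a rough sketch of what that migration could look like, assuming
the new disks are c1t0d0 through c1t5d0 and the existing pool is called 'tank'
(device and pool names are made up for illustration):
# zfs snapshot -r tank@migrate
# zpool create tank2 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# zfs send -R tank@migrate | zfs receive -Fd tank2
# zpool destroy tank
# zpool add tank2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
I'd verify the received data (and scrub tank2) before the destroy step.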
On 05/ 4/10 05:37 PM, Vadim Comanescu wrote:
I'm wondering, is there a way to actually delete a zvol while ignoring the fact
that it has an attached LU?
You didn't say what version of what OS you are running. As of b134
or so it seems to be impossible to delete a zfs iscsi target. You might
look at the th
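In case it helps, on a COMSTAR setup the usual order is to delete the LU first and
then destroy the zvol; roughly (the GUID comes from stmfadm list-lu, and the
dataset name here is just an example):
# stmfadm list-lu -v
# stmfadm delete-lu <GUID-of-the-LU-backed-by-the-zvol>
# zfs destroy tank/myzvol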
I yanked a disk from the test pool to simulate a failure, to test hot spare failover
- everything seemed fine until the copy-back completed. The hot spare is still
showing as in use... do we need to remove the spare from the pool to get it to
detach?
# zpool status
pool: ZPOOL.TEST
state: ONLINE
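If the copy-back really has completed, detaching the spare by hand should return it
to the AVAILABLE state; a sketch, with the spare's device name invented for the
example:
# zpool detach ZPOOL.TEST c3t5d0
# zpool status ZPOOL.TEST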
Hello,
I'm new to these discussion lists, so I hope I'm posting in the right place. I
started using ZFS not too long ago. I'm trying to figure out iSCSI and
NFS sharing for the moment. For iSCSI sharing, at the moment I'm using
COMSTAR. I created the appropriate target, and also an LU corresponding to t
Hi Dick,
Experts on the cifs-discuss list could probably advise you better.
You might even check the cifs-discuss archive because I hear that
the SMB/NFS sharing scenario has been covered previously on that
list.
Thanks,
Cindy
On 05/04/10 03:06, Dick Hoogendijk wrote:
I have some ZFS datasets
Thanks! I might just have to order a few for the next time I take the server
apart. Not that my bent-up versions don't work, but I might as well have them
be pretty too. :)
On 04/05/2010 18:19, Tony MacDoodle wrote:
How would one determine if I should have a separate ZIL disk? We are
using ZFS as the backend of our Guest Domains' boot drives using
LDoms, and we are seeing bad/very slow write performance.
if you can disable the ZIL and compare the performance to when
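On builds of that era the ZIL can only be disabled globally, and only for testing;
a hedged sketch of the usual comparison (the tunable takes effect at the next boot):
# echo "set zfs:zil_disable = 1" >> /etc/system
# reboot
Run the same write workload with and without the setting, then remove the line and
reboot again. If the gap is large, a dedicated (ideally mirrored) slog device is
likely to help.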
Ok, thanks.
So, if I understand correctly, it will just remove the device from the VDEV and
continue to use the good ones in the stripe.
Mike
---
Michael Sullivan
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034
On 5
The L2ARC will continue to function.
-marc
On 5/4/10, Michael Sullivan wrote:
> HI,
>
> I have a question I cannot seem to find an answer to.
>
> I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.
>
> I know if I set up ZIL on SSD and the SSD goes bad, the ZIL will be
> relocated
On Tue, May 4, 2010 at 12:16 PM, Michael Sullivan <
michael.p.sulli...@mac.com> wrote:
> I have a question I cannot seem to find an answer to.
>
> I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.
>
> I know if I set up ZIL on SSD and the SSD goes bad, the ZIL will be
> relocated
Does anybody have an idea what I can do about it?
On 04/05/2010 16:43, "eXeC001er" wrote:
> Perhaps the problem is that the old pool version had shareiscsi, but the new
> version does not have this option, and to share a LUN via iSCSI you need to set
> up LUN mapping.
>
>
>
> 2010/5/4 Przemyslaw Ceglowski
On 05 May, 2010 - Michael Sullivan sent me these 0,9K bytes:
> HI,
>
> I have a question I cannot seem to find an answer to.
>
> I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.
>
> I know if I set up ZIL on SSD and the SSD goes bad, the ZIL will
> be relocated back to the spo
Hi,
I have a question I cannot seem to find an answer to.
I know I can set up a stripe of L2ARC SSDs with, say, 4 SSDs.
I know if I set up the ZIL on an SSD and the SSD goes bad, the ZIL will be
relocated back to the pool. I'd probably have it mirrored anyway, just in
case. However you cannot
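For reference, cache devices are simply striped and log devices can be mirrored;
a sketch with made-up device names:
# zpool add tank cache c4t0d0 c4t1d0 c4t2d0 c4t3d0   (striped L2ARC, no redundancy needed)
# zpool add tank log mirror c5t0d0 c5t1d0            (mirrored slog, survives one SSD failure)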
No, beadm doesn't take care of all the steps that I provided
previously and included below.
Cindy
You can use the OpenSolaris beadm command to migrate a ZFS BE over
to another root pool, but you will also need to perform some manual
migration steps, such as
- copy over your other rpool datasets
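A rough outline of those steps as I understand them (pool, BE, snapshot, and device
names are placeholders, and the boot-block step depends on the release):
# zpool create rpool2 c2t0d0s0
# beadm create -p rpool2 osol-migrated
# zfs send -R rpool/export@snap | zfs receive -d rpool2
# zpool set bootfs=rpool2/ROOT/osol-migrated rpool2
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0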
On Tue, May 4, 2010 at 10:19 AM, Tony MacDoodle wrote:
> How would one determine if I should have a separate ZIL disk? We are using
> ZFS as the backend of our Guest Domains' boot drives using LDoms, and we are
> seeing bad/very slow write performance.
There's a dtrace script that Richard Elling
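(The script being referred to is, I believe, zilstat, which samples how much
synchronous write traffic actually hits the ZIL; it is run much like iostat, e.g.:)
# ./zilstat.ksh 1 10
If the numbers are consistently high, a separate log device is worth testing.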
How would one determine if I should have a separate ZIL disk? We are using
ZFS as the backend of our Guest Domains' boot drives using LDoms, and we are
seeing bad/very slow write performance.
Thanks
I just wanted to share this useful info as I haven't seen it anywhere.
My scrounging-genius colleague, Lawrence, found standard PCI-e
replacement brackets for the justifiably popular Supermicro AOC-USAS-L8i
cards. They cost a few bucks each, fit perfectly and allow us to use
these cards exte
On Tue, May 4, 2010 at 7:19 AM, Cindy Swearingen
wrote:
> Using beadm to migrate your BEs to another root pool (and then
> performing all the steps to get the system to boot) is different
> than just outright renaming your existing root pool on import.
Does beadm take care of all the other steps
My pool panic'd while updating to Lucid Lynx hosted inside an iSCSI LUN, and
now it won't come back up. I have dedup and compression on.
These are my current findings:
* iostat -En won't list 8 of my disks
* zdb lists all my disks except my cache device
* The following commands panic the box in
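One thing that can be tried without importing the pool is pointing zdb at the
exported on-disk state; it runs in userland, so at worst it dumps core rather than
panicking the kernel. Pool name assumed:
# zdb -e -d tank     (list the datasets)
# zdb -e -u tank     (dump the current uberblock)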
Perhaps the problem is that the old pool version had shareiscsi, but the new
version does not have this option, and to share a LUN via iSCSI you need to set
up LUN mapping.
2010/5/4 Przemyslaw Ceglowski
> Jim,
>
> On May 4, 2010, at 3:45 PM, Jim Dunham wrote:
>
> >>
> >> On May 4, 2010, at 2:43 PM, Ri
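For the archives, the COMSTAR replacement for the old shareiscsi behaviour is
roughly: create an LU over the zvol, add a view, and make sure a target exists.
Dataset and GUID values below are only examples:
# svcadm enable stmf
# sbdadm create-lu /dev/zvol/rdsk/vol01/zvol01
# stmfadm add-view <GUID-printed-by-sbdadm>
# itadm create-target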
Jim,
On May 4, 2010, at 3:45 PM, Jim Dunham wrote:
>>
>> On May 4, 2010, at 2:43 PM, Richard Elling wrote:
>>
>> >On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
>> >
>> >> It does not look like it is:
>> >>
>> >> r...@san01a:/export/home/admin# svcs -a | grep iscsi
>> >> online
Brandon,
Using beadm to migrate your BEs to another root pool (and then
performing all the steps to get the system to boot) is different
than just outright renaming your existing root pool on import.
Since pool renaming isn't supported, I don't think we have identified
all the boot/mount-at-boot
On Mon, 3 May 2010, Richard Elling wrote:
This is not a problem on Solaris 10. It can affect OpenSolaris, though.
That's precisely the opposite of what I thought. Care to explain?
In Solaris 10, you are stuck with LiveUpgrade, so the root pool is
not shared with other boot environments.
R
On Mon, 3 May 2010, Edward Ned Harvey wrote:
That's precisely the opposite of what I thought. Care to explain?
If you have a primary OS disk, and you apply OS updates... in order to
access those updates in Sol10, you need a registered account and login, with
paid Solaris support. Then, if yo
Przem,
> On May 4, 2010, at 2:43 PM, Richard Elling wrote:
>
>> On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
>>
>>> It does not look like it is:
>>>
>>> r...@san01a:/export/home/admin# svcs -a | grep iscsi
>>> online May_01 svc:/network/iscsi/initiator:default
>>> online
On May 4, 2010, at 2:43 PM, Richard Elling wrote:
>On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
>
>> It does not look like it is:
>>
>> r...@san01a:/export/home/admin# svcs -a | grep iscsi
>> online May_01 svc:/network/iscsi/initiator:default
>> online May_01 svc:/ne
On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
> It does not look like it is:
>
> r...@san01a:/export/home/admin# svcs -a | grep iscsi
> online May_01 svc:/network/iscsi/initiator:default
> online May_01 svc:/network/iscsi/target:default
This is COMSTAR.
> _
> Przem
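A quick way to confirm which stack is actually serving the targets is to look at
the STMF side directly, e.g.:
# svcs stmf
# stmfadm list-lu -v
# stmfadm list-target -v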
On 05/04/2010 09:29 AM, Kyle McDonald wrote:
> On 3/2/2010 10:15 AM, Kjetil Torgrim Homme wrote:
>> "valrh...@gmail.com" writes:
>>
>>
>>> I have been using DVDs for small backups here and there for a decade
>>> now, and have a huge pile of several hundred. They have a lot of
>>> overlapping co
On 3/2/2010 10:15 AM, Kjetil Torgrim Homme wrote:
> "valrh...@gmail.com" writes:
>
>
>> I have been using DVDs for small backups here and there for a decade
>> now, and have a huge pile of several hundred. They have a lot of
>> overlapping content, so I was thinking of feeding the entire stack
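One straightforward way to consolidate a stack of DVDs like that is a dedup-enabled
filesystem that each disc gets copied into; a sketch (pool, dataset, and media paths
are examples):
# zfs create -o dedup=on -o compression=on tank/dvd-archive
# rsync -a /media/DVD_LABEL/ /tank/dvd-archive/disc001/
# zpool list tank     (the DEDUP column shows the ratio as discs are added)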
Hi Matt,
Don't know if it's recommended or not, but I've been doing it for close
to 3 years on my OpenSolaris laptop; it saved me a few times, like last
week when my internal drive died :)
/peter
On 2010-05-04 20.33, Matt Keenan wrote:
Hi,
Just wondering whether mirroring a USB drive with m
Hi,
Just wondering whether mirroring a USB drive with the main laptop disk for
backup purposes is recommended or not.
Current setup: a single root pool on a 200GB internal laptop drive:
$ zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME    STATE    RE
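For reference, attaching the USB disk as the second half of the root mirror looks
roughly like this (device names are examples; the USB disk needs an SMI label and
boot blocks installed if you want it to be bootable on its own):
# zpool attach rpool c5t0d0s0 c6t0d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6t0d0s0
# zpool status rpool     (wait for the resilver to finish before unplugging)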
It does not look like it is:
r...@san01a:/export/home/admin# svcs -a | grep iscsi
online May_01 svc:/network/iscsi/initiator:default
online May_01 svc:/network/iscsi/target:default
_
Przem
>
>
>
>From: Rick McNeal [ramcn...@gmail.com]
>
Hi,
I am posting my question to both storage-discuss and zfs-discuss as I am not
quite sure what is causing the messages I am receiving.
I have recently migrated my zfs volume from b104 to b134 and upgraded it from
zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and
'vol01/zvol02
On May 4, 2010, at 2:02 PM, Robert Milkowski wrote:
> On 16/02/2010 21:54, Jeff Bonwick wrote:
>>> People used fastfs for years in specific environments (hopefully
>>> understanding the risks), and disabling the ZIL is safer than fastfs.
>>> Seems like it would be a useful ZFS dataset parameter.
[...]
> To answer Richard's question, if you have to rename a
> pool during
> import due to a conflict, the only way to change it
> back is to
> re-import it with the original name. You'll have to
> either export the
> conflicting pool, or (if it's rpool) boot off of a
> LiveCD which
> doesn't use
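In other words, the rename back is just another import/export cycle; with made-up
names:
# zpool export tank2
# zpool import tank2 tank     (import it back under its original name)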
On 16/02/2010 21:54, Jeff Bonwick wrote:
People used fastfs for years in specific environments (hopefully
understanding the risks), and disabling the ZIL is safer than fastfs.
Seems like it would be a useful ZFS dataset parameter.
We agree. There's an open RFE for this:
6280630 zil synch
I have some ZFS datasets that are shared through CIFS/NFS, so I created
them with the sharenfs/sharesmb options.
I have full access from Windows (through CIFS) to the datasets; however,
all files and directories are created with (UNIX) permissions of
(--)/(d--). So, although I can access th
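When I've run into that, it usually came down to ACL handling on the dataset rather
than the share itself; a hedged example of how such a dataset is often created for
mixed SMB/NFS use (names are examples, and whether it changes how the mode bits are
reported depends on the build):
# zfs create -o casesensitivity=mixed -o nbmand=on \
    -o sharesmb=on -o sharenfs=on \
    -o aclinherit=passthrough tank/shared
# /usr/bin/ls -V /tank/shared     (shows the full ACL instead of just the mode bits)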