Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-15 Thread Khushil Dep
Could you not also pin processes to cores? Preventing switching should help
too. I've done this for performance reasons before on a 24-core Linux box.
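For example (the PID and CPU IDs below are just placeholders):

  taskset -cp 0-3 <pid>   # Linux: bind an existing process to CPUs 0-3
  pbind -b 0 <pid>        # Solaris: bind a process to processor 0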

Sent from my HTC Desire
On 16 Feb 2011 05:12, "Richard Elling"  wrote:
> On Feb 15, 2011, at 7:46 PM, ian W wrote:
>
>> Thanks..
>>
>> given this box runs 18 hours a day and is idle for maybe 17.5 hrs of that,
>> I'd rather have the best power management I can...
>>
>> I would have loved to have upgraded to an i3 or even SB, but the Solaris 11
>> Express support for both is marginal (H55 chipset issues, no Sandy Bridge
>> support at all, etc.)
>
> I think there are options here, but there are few who will care enough to
> spend the time required to optimize... it is less expensive to buy lower-power
> processors than to spend even one man-hour trying to get savings out of a
> high-power processor. But if you are up to the challenge :-) try disabling
> cores entirely and leave the remaining two or three cores running without
> C-states. You will need to measure the actual power consumption, but you
> might be surprised at how much better that works for performance and power
> savings.
> -- richard
>
>


Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-15 Thread Richard Elling
On Feb 15, 2011, at 7:46 PM, ian W wrote:

> Thanks..
> 
> given this box runs 18 hours a day and is idle for maybe 17.5 hrs of that, 
> I'd rather have the best power management I can...
> 
> I would have loved to have upgraded to an i3 or even SB, but the Solaris 11
> Express support for both is marginal (H55 chipset issues, no Sandy Bridge
> support at all, etc.)

I think there are options here, but there are few who will care enough to spend
the time required to optimize... it is less expensive to buy lower-power
processors than to spend even one man-hour trying to get savings out of a
high-power processor. But if you are up to the challenge :-) try disabling cores
entirely and leave the remaining two or three cores running without C-states.
You will need to measure the actual power consumption, but you might be
surprised at how much better that works for performance and power savings.
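A rough sketch of the sort of thing I mean (CPU IDs are placeholders -- check
psrinfo first, and verify the power.conf keywords against your release):

  # psrinfo              # list processor IDs and their current state
  # psradm -f 2 3        # take CPUs 2 and 3 offline
  # psradm -n 2 3        # bring them back online later if needed

Deep C-states can then be reined in through power.conf(4) (the cpupm and
cpu-deep-idle keywords), re-read with pmconfig.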
 -- richard




Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-15 Thread ian W
Thanks..

given this box runs 18 hours a day and is idle for maybe 17.5 hrs of that, I'd 
rather have the best power management I can...

I would have loved to have upgraded to an i3 or even SB, but the Solaris 11
Express support for both is marginal (H55 chipset issues, no Sandy Bridge
support at all, etc.)


Re: [zfs-discuss] One LUN per RAID group

2011-02-15 Thread Erik Trimble

On 2/15/2011 1:37 PM, Torrey McMahon wrote:


On 2/14/2011 10:37 PM, Erik Trimble wrote:
That said, given that SAN NVRAM caches are true write caches (and not 
a ZIL-like thing), it should be relatively simple to swamp one with 
write requests (most SANs have little more than 1GB of cache), at 
which point, the SAN will be blocking on flushing its cache to disk. 


Actually, most array controllers now have 10s if not 100s of GB of 
cache. The 6780 has 32GB, DMX-4 has - if I remember correctly - 256. 
The latest HDS box is probably close if not more.


Of course you still have to flush to disk and the cache flush 
algorithms of the boxes themselves come into play but 1GB was a long 
time ago.




STK2540 and the STK6140 have at most 1GB.
STK6180 has 4GB.


The move to large caches is only recent - only large setups (i.e. big arrays
with a dedicated SAN head) have had multi-GB NVRAM caches for any length of
time.


In particular, pretty much all base arrays still have 4GB or less on the 
enclosure controller - only in the SAN heads do you find big multi-GB 
caches. And, lots (I'm going to be brave and say the vast majority) of 
ZFS deployments use direct-attach arrays or internal storage, rather 
than large SAN configs. Lots of places with older SAN heads are also 
going to have much smaller caches. Given the price tag of most large 
SANs, I'm thinking that there are still huge numbers of 5+ year-old SANs 
out there, and practically all of them have only a dozen GB of cache or less.


So, yes, big modern SAN configurations have lots of cache. But they're
also the ones most likely to be hammered with huge amounts of I/O from 
multiple machines. All of which makes it relatively easy to blow through 
the cache capacity and slow I/O back down to the disk speed.


Once you get back down to raw disk speed, having multiple LUNs per raid
array is almost certainly going to perform worse than a single LUN, due
to thrashing.  That is, it would certainly be better (i.e. faster) for
an array to have to commit one 128k slab than four 32k slabs.



So, the original recommendation is interesting, but needs to have the 
caveat that you'd really only use it if you can either limit the amount 
of sustained I/O you have, or are using very-large-cache disk setups.


I would think the idea might also apply (i.e. be useful) to something
like the F5100 or similar RAM/Flash arrays.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



[zfs-discuss] cfgadm MPxIO aware yet in Solaris 10 U9?

2011-02-15 Thread Ray Van Dolson
I just replaced a failing disk on one of my servers running Solaris 10
U9.  The system was MPxIO enabled and I now have the old device hanging
around in the cfgadm list.

I understand from searching around that cfgadm may not be MPxIO aware
-- at least not in Solaris 10.  I see a fix was pushed to OpenSolaris
but I'm hoping someone can confirm whether or not this is in Sol10U9
yet or what my other options are (short of rebooting) to clean this old
device out.

Maybe luxadm can do it...
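Or maybe a devfsadm cleanup pass would do it -- something like the following,
though I haven't tried it for this particular case:

  # devfsadm -Cv    # remove dangling /dev links for devices that are gone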

FYI, the resilver triggered by my zpool replace has completed, so the disk is
no longer tied to the zpool.

Thanks,
Ray


Re: [zfs-discuss] [storage-discuss] multipath used inadvertantly?

2011-02-15 Thread Ray Van Dolson
Thanks Cindy.

Are you (or anyone else reading) aware of a way to disable MPxIO at
install time?

I imagine there's no harm* in leaving MPxIO enabled with single-pathed
devices -- we'll likely just keep this in mind for future installs.

Thanks,
Ray

* performance penalty -- we do see errors in our logs from time to time
  from mpathd letting us know disks have only one path

On Tue, Feb 15, 2011 at 01:50:47PM -0800, Cindy Swearingen wrote:
> Hi Ray,
> 
> MPxIO is on by default for x86 systems that run the Solaris 10 9/10
> release.
> 
> On my Solaris 10 9/10 SPARC system, I see this:
> 
> # stmsboot -L
> stmsboot: MPxIO is not enabled
> stmsboot: MPxIO disabled
> 
> You can use the stmsboot CLI to disable multipathing. You are prompted
> to reboot the system after disabling MPxIO. See stmsboot.1m for more
> info.
> 
> With an x86 whitebox, I would export your ZFS storage pools first,
> but maybe it doesn't matter if the system is rebooted.
> 
> ZFS should be able to identify the devices by their internal device
> IDs but I can't speak for unknown hardware. When you make hardware
> changes, always have current backups.
> 
> Thanks,
> 
> Cindy
> 
> On 02/15/11 14:32, Ray Van Dolson wrote:
> > Thanks Torrey.  I definitely see that multipathing is enabled... I
> > mainly want to understand whether or not there are installation
> > scenarios where multipathing is enabled by default (if the mpt driver
> > thinks it can support it will it enable mpathd at install time?) as
> > well as the consequences of disabling it now...
> > 
> > It looks to me as if disabling it will result in some pain. :)
> > 
> > Ray
> > 
> > On Tue, Feb 15, 2011 at 01:24:20PM -0800, Torrey McMahon wrote:
> >> in.mpathd is the IP multipath daemon. (Yes, it's a bit confusing that 
> >> mpathadm is the storage multipath admin tool. )
> >>
> >> If scsi_vhci is loaded in the kernel you have storage multipathing 
> >> enabled. (Check with modinfo.)
> >>
> >> On 2/15/2011 3:53 PM, Ray Van Dolson wrote:
> >>> I'm troubleshooting an existing Solaris 10U9 server (x86 whitebox) and
> >>> noticed its device names are extremely hairy -- very similar to the
> >>> multipath device names: c0t5000C50026F8ACAAd0, etc, etc.
> >>>
> >>> mpathadm seems to confirm:
> >>>
> >>> # mpathadm list lu
> >>>  /dev/rdsk/c0t50015179591CE0C1d0s2
> >>>  Total Path Count: 1
> >>>  Operational Path Count: 1
> >>>
> >>> # ps -ef | grep mpath
> >>>  root   245 1   0   Jan 05 ?  16:38 
> >>> /usr/lib/inet/in.mpathd -a
> >>>
> >>> The system is SuperMicro based with an LSI SAS2008 controller in it.
> >>> To my knowledge it has no multipath capabilities (or at least not as
> >>> it's wired up currently).
> >>>
> >>> The mpt_sas driver is in use per prtconf and modinfo.
> >>>
> >>> My questions are:
> >>>
> >>> - What scenario would the multipath driver get loaded up at
> >>>installation time for this LSI controller?  I'm guessing this is what
> >>>happened?
> >>>
> >>> - If I disabled mpathd would I get the shorter disk device names back
> >>>again?  How would this impact existing zpools that are already on the
> >>>system tied to these disks?  I have a feeling doing this might be a
> >>>little bit painful. :)
> >>>
> >>> I tried to glean the "original" device names from stmsboot -L, but it
> >>> didn't show any mappings...
> >>>
> >>> Thanks,
> >>> Ray


Re: [zfs-discuss] [storage-discuss] multipath used inadvertantly?

2011-02-15 Thread Cindy Swearingen

Hi Ray,

MPxIO is on by default for x86 systems that run the Solaris 10 9/10
release.

On my Solaris 10 9/10 SPARC system, I see this:

# stmsboot -L
stmsboot: MPxIO is not enabled
stmsboot: MPxIO disabled

You can use the stmsboot CLI to disable multipathing. You are prompted
to reboot the system after disabling MPxIO. See stmsboot.1m for more
info.
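For example (please double-check stmsboot.1m on your release first):

  # stmsboot -d     # disable MPxIO on supported controllers; reboot when prompted
  # stmsboot -L     # after the reboot, list the old-vs-new device name mappings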

With an x86 whitebox, I would export your ZFS storage pools first,
but maybe it doesn't matter if the system is rebooted.

ZFS should be able to identify the devices by their internal device
IDs but I can't speak for unknown hardware. When you make hardware
changes, always have current backups.

Thanks,

Cindy

On 02/15/11 14:32, Ray Van Dolson wrote:

Thanks Torrey.  I definitely see that multipathing is enabled... I
mainly want to understand whether or not there are installation
scenarios where multipathing is enabled by default (if the mpt driver
thinks it can support it will it enable mpathd at install time?) as
well as the consequences of disabling it now...

It looks to me as if disabling it will result in some pain. :)

Ray

On Tue, Feb 15, 2011 at 01:24:20PM -0800, Torrey McMahon wrote:
in.mpathd is the IP multipath daemon. (Yes, it's a bit confusing that 
mpathadm is the storage multipath admin tool. )


If scsi_vhci is loaded in the kernel you have storage multipathing 
enabled. (Check with modinfo.)


On 2/15/2011 3:53 PM, Ray Van Dolson wrote:

I'm troubleshooting an existing Solaris 10U9 server (x86 whitebox) and
noticed its device names are extremely hairy -- very similar to the
multipath device names: c0t5000C50026F8ACAAd0, etc, etc.

mpathadm seems to confirm:

# mpathadm list lu
 /dev/rdsk/c0t50015179591CE0C1d0s2
 Total Path Count: 1
 Operational Path Count: 1

# ps -ef | grep mpath
 root   245 1   0   Jan 05 ?  16:38 /usr/lib/inet/in.mpathd -a

The system is SuperMicro based with an LSI SAS2008 controller in it.
To my knowledge it has no multipath capabilities (or at least not as
it's wired up currently).

The mpt_sas driver is in use per prtconf and modinfo.

My questions are:

- What scenario would the multipath driver get loaded up at
   installation time for this LSI controller?  I'm guessing this is what
   happened?

- If I disabled mpathd would I get the shorter disk device names back
   again?  How would this impact existing zpools that are already on the
   system tied to these disks?  I have a feeling doing this might be a
   little bit painful. :)

I tried to glean the "original" device names from stmsboot -L, but it
didn't show any mappings...

Thanks,
Ray



Re: [zfs-discuss] One LUN per RAID group

2011-02-15 Thread Torrey McMahon


On 2/14/2011 10:37 PM, Erik Trimble wrote:
That said, given that SAN NVRAM caches are true write caches (and not 
a ZIL-like thing), it should be relatively simple to swamp one with 
write requests (most SANs have little more than 1GB of cache), at 
which point, the SAN will be blocking on flushing its cache to disk. 


Actually, most array controllers now have 10s if not 100s of GB of 
cache. The 6780 has 32GB, DMX-4 has - if I remember correctly - 256. The 
latest HDS box is probably close if not more.


Of course you still have to flush to disk and the cache flush algorithms 
of the boxes themselves come into play but 1GB was a long time ago.



Re: [zfs-discuss] [storage-discuss] multipath used inadvertantly?

2011-02-15 Thread Ray Van Dolson
Thanks Torrey.  I definitely see that multipathing is enabled... I
mainly want to understand whether or not there are installation
scenarios where multipathing is enabled by default (if the mpt driver
thinks it can support it will it enable mpathd at install time?) as
well as the consequences of disabling it now...

It looks to me as if disabling it will result in some pain. :)

Ray

On Tue, Feb 15, 2011 at 01:24:20PM -0800, Torrey McMahon wrote:
> in.mpathd is the IP multipath daemon. (Yes, it's a bit confusing that 
> mpathadm is the storage multipath admin tool. )
> 
> If scsi_vhci is loaded in the kernel you have storage multipathing 
> enabled. (Check with modinfo.)
> 
> On 2/15/2011 3:53 PM, Ray Van Dolson wrote:
> > I'm troubleshooting an existing Solaris 10U9 server (x86 whitebox) and
> > noticed its device names are extremely hairy -- very similar to the
> > multipath device names: c0t5000C50026F8ACAAd0, etc, etc.
> >
> > mpathadm seems to confirm:
> >
> > # mpathadm list lu
> >  /dev/rdsk/c0t50015179591CE0C1d0s2
> >  Total Path Count: 1
> >  Operational Path Count: 1
> >
> > # ps -ef | grep mpath
> >  root   245 1   0   Jan 05 ?  16:38 /usr/lib/inet/in.mpathd 
> > -a
> >
> > The system is SuperMicro based with an LSI SAS2008 controller in it.
> > To my knowledge it has no multipath capabilities (or at least not as
> > it's wired up currently).
> >
> > The mpt_sas driver is in use per prtconf and modinfo.
> >
> > My questions are:
> >
> > - What scenario would the multipath driver get loaded up at
> >installation time for this LSI controller?  I'm guessing this is what
> >happened?
> >
> > - If I disabled mpathd would I get the shorter disk device names back
> >again?  How would this impact existing zpools that are already on the
> >system tied to these disks?  I have a feeling doing this might be a
> >little bit painful. :)
> >
> > I tried to glean the "original" device names from stmsboot -L, but it
> > didn't show any mappings...
> >
> > Thanks,
> > Ray


Re: [zfs-discuss] [storage-discuss] multipath used inadvertantly?

2011-02-15 Thread Torrey McMahon
in.mpathd is the IP multipath daemon. (Yes, it's a bit confusing that 
mpathadm is the storage multipath admin tool. )


If scsi_vhci is loaded in the kernel you have storage multipathing 
enabled. (Check with modinfo.)
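For example:

  # modinfo | grep vhci    # scsi_vhci listed here means multipathing is loaded
  # mpathadm list lu       # shows the path count for each logical unit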


On 2/15/2011 3:53 PM, Ray Van Dolson wrote:

I'm troubleshooting an existing Solaris 10U9 server (x86 whitebox) and
noticed its device names are extremely hairy -- very similar to the
multipath device names: c0t5000C50026F8ACAAd0, etc, etc.

mpathadm seems to confirm:

# mpathadm list lu
 /dev/rdsk/c0t50015179591CE0C1d0s2
 Total Path Count: 1
 Operational Path Count: 1

# ps -ef | grep mpath
 root   245 1   0   Jan 05 ?  16:38 /usr/lib/inet/in.mpathd -a

The system is SuperMicro based with an LSI SAS2008 controller in it.
To my knowledge it has no multipath capabilities (or at least not as
it's wired up currently).

The mpt_sas driver is in use per prtconf and modinfo.

My questions are:

- What scenario would the multipath driver get loaded up at
   installation time for this LSI controller?  I'm guessing this is what
   happened?

- If I disabled mpathd would I get the shorter disk device names back
   again?  How would this impact existing zpools that are already on the
   system tied to these disks?  I have a feeling doing this might be a
   little bit painful. :)

I tried to glean the "original" device names from stmsboot -L, but it
didn't show any mappings...

Thanks,
Ray


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-02-15 Thread Ian Collins

 On 02/16/11 09:50 AM, David Strom wrote:

Up to the moderator whether this will add anything:

I dedicated the 2nd NICs on 2 V440s to transport the 9.5TB of ZFS data between
SANs. I configured a private subnet & allowed rsh on the receiving V440.


command:  zfs send | (rsh  zfs receive ...)

It took a whole week (7 days) and brought the receiving host's networking down
to the point of being unusable. I could not ssh in to the first NIC, as the
host would not respond before timing out. Some Oracle db connections stayed up,
but were horribly slow. This was Gigabit Ethernet on a nice fast Cisco 4006
switch, Solaris 10 update 5 sending to Solaris 10 update 3, both V440s.


You were lucky to get away with that; the sending filesystems' versions
must have been old enough to be received by U3.


ZFS has come a long way since those releases; we had a lot of lock-up
problems with update <6, but none now.


--
Ian.



[zfs-discuss] multipath used inadvertantly?

2011-02-15 Thread Ray Van Dolson
I'm troubleshooting an existing Solaris 10U9 server (x86 whitebox) and
noticed its device names are extremely hairy -- very similar to the
multipath device names: c0t5000C50026F8ACAAd0, etc, etc.

mpathadm seems to confirm:

# mpathadm list lu
/dev/rdsk/c0t50015179591CE0C1d0s2
Total Path Count: 1
Operational Path Count: 1

# ps -ef | grep mpath
root   245 1   0   Jan 05 ?  16:38 /usr/lib/inet/in.mpathd -a

The system is SuperMicro based with an LSI SAS2008 controller in it.
To my knowledge it has no multipath capabilities (or at least not as
it's wired up currently).

The mpt_sas driver is in use per prtconf and modinfo.

My questions are:

- What scenario would the multipath driver get loaded up at
  installation time for this LSI controller?  I'm guessing this is what
  happened?

- If I disabled mpathd would I get the shorter disk device names back
  again?  How would this impact existing zpools that are already on the
  system tied to these disks?  I have a feeling doing this might be a
  little bit painful. :)

I tried to glean the "original" device names from stmsboot -L, but it
didn't show any mappings...

Thanks,
Ray


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-02-15 Thread David Strom

Up to the moderator whether this will add anything:

I dedicated the 2nd NICs on 2 V440s to transport the 9.5TB of ZFS data between
SANs. I configured a private subnet & allowed rsh on the receiving V440.


command:  zfs send | (rsh  zfs receive ...)
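With placeholder host and dataset names, the pipeline was essentially:

  zfs send sourcepool/fs@migrate | rsh otherhost zfs receive destpool/fs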

It took a whole week (7 days) and brought the receiving host's networking down
to the point of being unusable. I could not ssh in to the first NIC, as the
host would not respond before timing out. Some Oracle db connections stayed up,
but were horribly slow. This was Gigabit Ethernet on a nice fast Cisco 4006
switch, Solaris 10 update 5 sending to Solaris 10 update 3, both V440s.


I'm not going to do this ever again, I hope, so I'm not concerned with 
the why or how, but it was pretty bad.  Seems like a zfsdump would be a 
good thing.


--
David Strom


On 1/13/2011 11:46 AM, David Magda wrote:

On Thu, January 13, 2011 09:00, David Strom wrote:

Moving to a new SAN, both LUNs will not be accessible at the same time.

Thanks for the several replies I've received, sounds like the dd to tape
mechanism is broken for zfs send, unless someone knows otherwise or has
some trick?

I'm just going to try a tar to tape then (maybe using dd), as I
don't have any extended attributes/ACLs.  Would appreciate any
suggestions for block sizes for LTO5 tape drive, writing to LTO4 tapes
(what I have).

Might send it across the (Gigabit Ethernet) network to a server that's
already on the new SAN, but I was trying to avoid hogging down the
network or the other server's NIC.

I've seen examples online for sending via network, involves piping zfs
send over ssh to zfs receive, right?  Could I maybe use rsh, if I enable
it temporarily between the two hosts?


If you don't already have a backup infrastructure (remember: RAID !=
backup), this may be a good opportunity. Something like Amanda or Bacula
is gratis, and it could be useful for other circumstances.

If this is a one-off it may not be worth it, but having important data
without having (offline) backups is usually tempting fate.

If you're just going to go to tape, then suntar/gnutar/star can write
directly to it (or via rmt over the network), and there's no sense
necessarily going through dd; 'tar' is short for TApe aRchiver after all.

(However this is getting a bit OT for ZFS, and heading towards general
sysadmin related.)




Re: [zfs-discuss] Incremental send/recv interoperability

2011-02-15 Thread Eric D. Mudama

On Tue, Feb 15 at 11:18, Erik ABLESON wrote:

Just wondering if an expert can chime in on this one.

I have an older machine running 2009.11 with a zpool at version
14. I have a new machine running Solaris Express 11 with the zpool
at version 31.

I can use zfs send/recv to send a filesystem from the older machine
to the new one without any difficulties. However, as soon as I try
to update the remote copy with an incremental send/recv I get back
the error of "cannot receive incremental stream: invalid backup
stream".

I was under the impression that the streams were backwards
compatible (ie a newer version could receive older streams) which
appears to be correct for the initial send/recv operation, but
failing on the incremental.


Sounds like you may need to force an older pool version on the
destination machine to use it in this fashion, since the incremental is
adding data to a dataset that was converted to the new pool version when
you received it.
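For example, if recreating the destination pool is an option (the pool and
device names here are placeholders, and this is untested):

  # zpool create -o version=14 backuppool c0t1d0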

I could be wrong though, we update our pools in lockstep and err on
the side of backwards compliance with our multi-system backup.


--
Eric D. Mudama
edmud...@bounceswoosh.org



Re: [zfs-discuss] How to get rid of phantom pool ?

2011-02-15 Thread Casper . Dik

>I had a pool on an external drive. Recently the drive failed, but the pool
>still shows up when I run 'zpool status'.
>
>Any attempt to remove/delete/export the pool ends up with unresponsiveness
>(the system is still up and running perfectly, it's just this specific
>command that kind of hangs, so I have to open a new ssh session).
>
>zpool status shows state: UNAVAIL
>
>When I try zpool clear I get "cannot clear errors for backup: I/O error".
>
>Please help me out to get rid of this phantom pool.

Remove the zfs cache file: /etc/zfs/zpool.cache.

Then reboot.
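For example:

  # mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
  # init 6

The cache file is rebuilt from the pools that are actually present.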

Casper



Re: [zfs-discuss] How to get rid of phantom pool ?

2011-02-15 Thread Cindy Swearingen

The best way to remove the pool is to reconnect the device and then
destroy the pool, but if the device is faulted or no longer available,
then you'll need a workaround.

If the external drive with the FAULTED pool remnants isn't connected to
the system, then rename the /etc/zfs/zpool.cache file and reboot the
system. The zpool.cache content will be rebuilt based on existing
devices with pool info.

Thanks,

Cindy



On 02/15/11 01:10, Alxen4 wrote:

I had a pool on an external drive. Recently the drive failed, but the pool
still shows up when I run 'zpool status'.

Any attempt to remove/delete/export the pool ends up with unresponsiveness
(the system is still up and running perfectly, it's just this specific command
that kind of hangs, so I have to open a new ssh session).

zpool status shows state: UNAVAIL

When I try zpool clear I get "cannot clear errors for backup: I/O error".

Please help me out to get rid of this phantom pool.


Many, many thanks.



Re: [zfs-discuss] smbd becomes unresponsive on snv_151a

2011-02-15 Thread Marcis Lielturks
The deduped dataset is 2.1TB, there is no L2ARC, and the server has 64GB of
RAM. For now we have ruled out the possibility that this is related to dedup
and ZFS, and we are working to get a fix for "6996574 smbd intermittently
hangs".

Thanks!


[zfs-discuss] How to get rid of phantom pool ?

2011-02-15 Thread Alxen4
I had a pool on an external drive. Recently the drive failed, but the pool
still shows up when I run 'zpool status'.

Any attempt to remove/delete/export the pool ends up with unresponsiveness
(the system is still up and running perfectly, it's just this specific command
that kind of hangs, so I have to open a new ssh session).

zpool status shows state: UNAVAIL

When I try zpool clear I get "cannot clear errors for backup: I/O error".

Please help me out to get rid of this phantom pool.


Many, many thanks.


Re: [zfs-discuss] Incremental send/recv interoperability

2011-02-15 Thread Erik ABLESON
Doh - 2008.11

On 15 févr. 2011, at 11:18, Erik ABLESON wrote:

> I have an older machine running 2009.11 with a zpool at version 14. I have a 
> new machine running Solaris Express 11 with the zpool at version 31.



Re: [zfs-discuss] ZFS and Virtual Disks

2011-02-15 Thread Karl Wagner
Hi

I am no expert, but I have used several virtualisation environments, and I
am always in favour of passing iSCSI  straight through to the VM. It creates
a much more portable system, often able to be booted on a different
virtualisation environment, or even on a dedicated server, if you choose at
a later date (sometimes takes a little work, but it is easier than the
alternatives).

For ZFS, I would suggest this is even more useful. One could, theoretically,
export a pool from one VM, then easily import it on another, or on a random
machine.
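For example (the pool name is just a placeholder):

  vm1# zpool export tank
  vm2# zpool import tank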

If you are looking for a solution for this, I would suggest looking at gPXE
(http://etherboot.org/wiki/start). It allows booting from iSCSI fairly
easily, and they have a guide for booting opensolaris.

Just my 2p :)

Regards
Karl

> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
> Sent: 14 February 2011 23:26
> To: 'Mark Creamer'; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] ZFS and Virtual Disks
> 
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Mark Creamer
> >
> > 1. Should I create individual iSCSI LUNs and present those to the VMware
> > ESXi host as iSCSI storage, and then create virtual disks from there on
> > each Solaris VM?
> >
> >  - or -
> >
> > 2. Should I (assuming this is possible), let the Solaris VM mount the
> > iSCSI LUNs directly (that is, NOT show them as VMware storage but let the
> > VM connect to the iSCSI across the network.) ?
> 
> If you do #1 you'll have a layer of vmware in between your guest machine
> and
> the storage.  This will add a little overhead and possibly reduce
> performance slightly.
> 
> If you do #2 you won't have access to snapshot features in vmware.
> Personally I would recommend using #2 and rely on ZFS snapshots instead of
> vmware snapshots.  But maybe you have a good reason for using vmware
> snapshots... I don't want to make assumptions.
> 
> 
> > Part of the issue is I have no idea if having a hardware RAID 5 or 6 disk
> > set will create a problem if I then create a bunch of virtual disks and
> > then use ZFS to create RAIDZ for the VM to use. Seems like that might be
> > asking for trouble.
> 
> Where is there any hardware raid5 or raid6 in this system?  Whenever
> possible, you want to allow ZFS to manage the raid...  configure the
> hardware to just pass-thru single disk jbod to the guest...  Because when
> ZFS detects disk errors, if ZFS has the redundancy, it can correct them.
> But if there are disk problems on the hardware raid, the hardware raid
> will
> never know about it and it will never be correctable except by luck.
> 


[zfs-discuss] Incremental send/recv interoperability

2011-02-15 Thread Erik ABLESON
Just wondering if an expert can chime in on this one.

I have an older machine running 2009.11 with a zpool at version 14. I have a 
new machine running Solaris Express 11 with the zpool at version 31.

I can use zfs send/recv to send a filesystem from the older machine to the new 
one without any difficulties. However, as soon as I try to update the remote 
copy with an incremental send/recv I get back the error of "cannot receive 
incremental stream: invalid backup stream".

I was under the impression that the streams were backwards compatible (ie a 
newer version could receive older streams) which appears to be correct for the 
initial send/recv operation, but failing on the incremental.

Cheers,

Erik