Re: [zfs-discuss] FC HBA for openindiana

2012-10-20 Thread Tim Cook
The built-in drivers support multipathing (MPxIO), so you're good to go.
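
If you want to double-check that multipathing is actually active once the
HBA is in, something along these lines should do it (commands from memory,
so verify against the man pages on your build):

# stmsboot -L        (lists the non-MPxIO to MPxIO device name mappings)
# mpathadm list lu   (shows each LU and its operational path count)
# stmsboot -e        (enables MPxIO if it's currently off; needs a reboot)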

On Friday, October 19, 2012, Christof Haemmerle wrote:

> Yep, I need 4 Gig with multipathing if possible.
>
> On Oct 19, 2012, at 10:34 PM, Tim Cook <t...@cook.ms> wrote:
>
>
>
> On Friday, October 19, 2012, Christof Haemmerle wrote:
>
>> hi there,
>> i need to connect some old raid subsystems to a opensolaris box via fibre
>> channel. can you recommend any FC HBA?
>>
>> thanx
>>
>
>
> How old?  If it's 1Gbit you'll need a 4Gb or slower HBA. Qlogic would be my
> preference. You should be able to find a 2340 for cheap on eBay.  Or a 2460
> if you want 4Gb.
>
>
>


Re: [zfs-discuss] [zfs] portable zfs send streams (preview webrev)

2012-10-20 Thread Tim Cook
On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen wrote:

> On 10/20/2012 01:10 AM, Tim Cook wrote:
> >
> >
> > On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen wrote:
> >
> > On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
> > > On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen wrote:
> > >
> > > We have finished a beta version of the feature. A webrev for it
> > > can be found here:
> > >
> > > http://cr.illumos.org/~webrev/sensille/fits-send/
> > >
> > > It adds a command 'zfs fits-send'. The resulting streams can
> > > currently only be received on btrfs, but more receivers will
> > > follow.
> > > It would be great if anyone interested could give it some testing
> > > and/or review. If there are no objections, I'll send a formal
> > > webrev soon.
> > >
> > >
> > >
> > > Please don't bother changing libzfs (and proliferating the copypasta
> > > there) -- do it like lzc_send().
> > >
> >
> > ok. It would be easier though if zfs_send would also already use the
> > new style. Is it in the pipeline already?
> >
> > > Likewise, zfs_ioc_fits_send should use the new-style API.  See the
> > > comment at the beginning of zfs_ioctl.c.
> > >
> > > I'm not a fan of the name "FITS" but I suppose somebody else already
> > > named the format.  If we are going to follow someone else's format
> > > though, it at least needs to be well-documented.  Where can we find
> > > the documentation?
> > >
> > > FYI, #1 google hit for "FITS":  http://en.wikipedia.org/wiki/FITS
> > > #3 hit:  http://code.google.com/p/fits/
> > >
> > > Both have to do with file formats.  The entire first page of google
> > > results for "FITS format" and "FITS file format" are related to these
> > > two formats.  "FITS btrfs" didn't return anything specific to the file
> > > format, either.
> >
> > It's not too late to change it, but I have a hard time coming up with
> > some better name. Also, the format is still very new and I'm sure it'll
> > need some adjustments.
> >
> > -arne
> >
> > >
> > > --matt
> >
> >
> >
> > I'm sure we can come up with something.  Are you planning on this being
> > solely for ZFS, or a larger architecture for replication in both directions
> > in the future?
>
> We have senders for zfs and btrfs. The planned receiver will be mostly
> filesystem agnostic and can work on a much broader range. It basically
> only needs to know how to create snapshots and where to store a small
> amount of metadata.
> It would be great if more filesystems would join on the sending side,
> but I have no involvement there.
>
> I see no basic problem in choosing a name that's already in use.
> Especially with file extensions, most will already be taken. How about
> something with 'portable' and 'backup', like pib or pibs? 'i' for
> incremental.
>
> -Arne
>
>
Re-using names generally isn't a big deal, but in this case the existing
name belongs to a technology that's extremely similar to what you're doing,
which WILL cause a ton of confusion in the user base and make
troubleshooting far more difficult when searching Google and the like for
applicable documentation.

Maybe something like FAR, for filesystem-agnostic replication?


[zfs-discuss] ARC de-allocation with large ram

2012-10-20 Thread Chris Nagele
Hi. We're running OmniOS as a ZFS storage server. For some reason, our
ARC will grow to a certain point and then suddenly drop. I used
arcstat to catch it in action, but I was not able to capture what else
was going on in the system at the time. I'll do that next.

read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size
 166   166     0   100       0       0       0       0    85G    225G
5.9K  5.9K     0   100       0       0       0       0    85G    225G
 755   715    40    94      40       0      40       0    84G    225G
 17K   17K     0   100       0       0       0       0    67G    225G
 409   395    14    96      14       0      14       0    49G    225G
 388   364    24    93      24       0      24       0    41G    225G
 37K   37K    20    99      20       6      14      30    40G    225G

For reference, it's a 12TB pool with a 512GB SSD L2ARC and 198GB of RAM.
We have nothing else running on the system except NFS. We are also not
using dedupe. Here is the output of memstat at one point:

# echo ::memstat | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                   19061902             74460   38%
ZFS File Data            28237282            110301   56%
Anon                        43112               168    0%
Exec and libs                1522                 5    0%
Page cache                  13509                52    0%
Free (cachelist)             6366                24    0%
Free (freelist)           2958527             11556    6%

Total                    50322220            196571
Physical                 50322219            196571

According to "prstat -s rss" nothing else is consuming the memory.

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
   592 root       33M   26M sleep   59    0   0:00:33 0.0% fmd/27
    12 root       13M   11M sleep   59    0   0:00:08 0.0% svc.configd/21
   641 root       12M   11M sleep   59    0   0:04:48 0.0% snmpd/1
    10 root       14M   10M sleep   59    0   0:00:03 0.0% svc.startd/16
   342 root       12M 9084K sleep   59    0   0:00:15 0.0% hald/5
   321 root       14M 8652K sleep   59    0   0:03:00 0.0% nscd/52

So far I can't figure out what could be causing this. The only other
thing I can think of is that we have a bunch of zfs send/receive
operations going on as backups across 10 datasets in the pool. I am
not sure how snapshots and send/receive affect the ARC. Does anyone
else have any ideas?
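
In case it helps, here is roughly what I plan to log next time (a quick
sketch using the arcstats kstats; the 10-second interval is arbitrary):

  while true; do
      date '+%Y-%m-%d %H:%M:%S'
      # ARC current size, target size and maximum target size
      kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
      # kernel / ZFS file data / free memory breakdown
      echo ::memstat | mdb -k
      sleep 10
  done >> /var/tmp/arc-watch.log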

Thanks,
Chris


Re: [zfs-discuss] vm server storage mirror

2012-10-20 Thread Timothy Coalson
On Sat, Oct 20, 2012 at 7:39 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:

> > From: Timothy Coalson [mailto:tsc...@mst.edu]
> > Sent: Friday, October 19, 2012 9:43 PM
> >
> > A shot in the dark here, but perhaps one of the disks involved is taking
> a long
> > time to return from reads, but is returning eventually, so ZFS doesn't
> notice
> > the problem?  Watching 'iostat -x' for busy time while a VM is hung
> might tell
> > you something.
>
> Oh yeah - this is also bizarre.  I watched "zpool iostat" for a while.  It
> was showing me :
> Operations (read and write) consistently 0
> Bandwidth (read and write) consistently non-zero, but something small,
> like 1k-20k or so.
>
> Maybe that is normal to someone who uses zpool iostat more often than I
> do.  But to me, zero operations resulting in non-zero bandwidth defies
> logic.
>
It might be operations per second, rounded down (I know this happens in
DTrace normalization; I'm not sure about zpool/zfs). Try an interval of 1
(perhaps with -v) and see if you still get 0 operations.  I haven't seen
zero operations with nonzero bandwidth on my pools; I always see lots of
operations in bursts, so it sounds like you might be on to something.

Also, iostat -x shows device busy time, which is usually higher on the
slowest disk when there is an imbalance, while zpool iostat does not.  So,
if it happens to be a single device's fault, iostat -nx has a better chance
of finding it (the n flag translates the disk names to the device names
used by the system, so you can figure out which one is the problem).
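
Concretely, I'd watch both of these side by side while a VM is hung (the
pool name below is just a placeholder):

# zpool iostat -v tank 1    (per-vdev operations and bandwidth, 1s samples)
# iostat -xn 1              (per-device busy/service stats, cXtYdZ names)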

Tim


Re: [zfs-discuss] zfs send to older version

2012-10-20 Thread Jim Klimov
2012-10-20 3:59, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling


At some point, people will bitterly regret some "zpool upgrade" with no way
back.


uhm... and how is that different than anything else in the software world?


No attempt at backward compatibility, and no downgrade path, not even by going 
back to an older snapshot before the upgrade.



The way I understand feature flags, if your pool or dataset does
not have a feature enabled and/or in use, another version of ZFS
should have no problem at least reading it properly, provided that
build knows of the feature-flags concept. (Versions limited to v28
would probably refuse to access a v5000 pool, though.)

Things written in an unknown on-disk format, be it some unknown
feature or an encrypted Solaris 11 dataset, look like trash to a
reader that doesn't understand them; nothing new here (except that
with feature flags you can know in advance which particular features
a dataset uses that your system is missing for RO or RW access).
Hopefully, modules implementing new ZFS features are also easier to
distribute and include on other systems as needed.
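
For example (with a hypothetical pool name), something like this shows up
front which features a pool has enabled or active, and which features the
local build even knows about:

# zpool get all tank | grep feature@   (disabled / enabled / active per feature)
# zpool upgrade -v                     (feature flags supported by this build)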

HTH,
//Jim Klimov


Re: [zfs-discuss] What happens when you rm zpool.cache?

2012-10-20 Thread Jim Klimov
2012-10-20 16:30, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

If you rm /etc/zfs/zpool.cache and reboot...The system is smart enough
(at least in my case) to re-import rpool, and another pool, but it
didn't figure out to re-import some other pool.

How does the system decide, in the absence of zpool.cache, which pools
it's going to import at boot?



It should only import pools that are explicitly named as parameters
to zpool import: rpool gets imported during boot, and your other
pool probably comes from the SMF method you crafted for iSCSI.

Nobody asked it to import the third pool ;)
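
For example (pool name hypothetical), importing it once by hand should put
it back into the default cache file so it comes up again on the next boot:

# zpool import thirdpool
# zpool get cachefile thirdpool   (the default "-" means /etc/zfs/zpool.cache)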

//Jim


Re: [zfs-discuss] vm server storage mirror

2012-10-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: Timothy Coalson [mailto:tsc...@mst.edu]
> Sent: Friday, October 19, 2012 9:43 PM
> 
> A shot in the dark here, but perhaps one of the disks involved is taking a 
> long
> time to return from reads, but is returning eventually, so ZFS doesn't notice
> the problem?  Watching 'iostat -x' for busy time while a VM is hung might tell
> you something.

Oh yeah - this is also bizarre.  I watched "zpool iostat" for a while.  It was 
showing me :
Operations (read and write) consistently 0
Bandwidth (read and write) consistently non-zero, but something small, like 
1k-20k or so.

Maybe that is normal to someone who uses zpool iostat more often than I do.  
But to me, zero operations resulting in non-zero bandwidth defies logic.



[zfs-discuss] What happens when you rm zpool.cache?

2012-10-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
If you rm /etc/zfs/zpool.cache and reboot...  The system is smart enough (at 
least in my case) to re-import rpool, and another pool, but it didn't figure 
out to re-import some other pool.

How does the system decide, in the absence of zpool.cache, which pools it's 
going to import at boot?


Re: [zfs-discuss] [zfs] portable zfs send streams (preview webrev)

2012-10-20 Thread Arne Jansen
On 10/20/2012 01:21 AM, Matthew Ahrens wrote:
> On Fri, Oct 19, 2012 at 1:46 PM, Arne Jansen wrote:
> 
> On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
> > Please don't bother changing libzfs (and proliferating the copypasta
> > there) -- do it like lzc_send().
> >
> 
> ok. It would be easier though if zfs_send would also already use the
> new style. Is it in the pipeline already?
> 
> > Likewise, zfs_ioc_fits_send should use the new-style API.  See the
> > comment at the beginning of zfs_ioctl.c.
> 
> 
> I'm saying to use lzc_send() as an example, rather than zfs_send().
>  lzc_send() already uses the new style.  I don't see how your job would
> be made easier by converting zfs_send().

Yeah, but the zfs util still uses the old version.
> 
> It would be nice to convert ZFS_IOC_SEND to the new IOCTL format
> someday, but I don't think that the complexities of zfs_send() would be
> appropriate for libzfs_core.  Programmatic consumers typically know
> exactly what snapshots they want sent and would prefer the clean error
> handling of lzc_send().

What I meant was that if you want the full-blown zfs send functionality
with its ton of options, it would be much easier to reuse the existing
logic and only call *_send_fits instead of *_send when requested.
If you're content with just the -i option I've currently implemented,
it's certainly easy to convert. I, for my part, have mostly programmatic
consumers.
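
For the simple incremental case the usage is roughly like this (snapshot
and host names made up, the receiving end is the btrfs receiver mentioned
above, and the stream format may still change):

# zfs snapshot tank/data@snap2
# zfs fits-send -i tank/data@snap1 tank/data@snap2 | \
      ssh backuphost btrfs receive /backup/data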

-Arne

> 
> --matt



Re: [zfs-discuss] [zfs] portable zfs send streams (preview webrev)

2012-10-20 Thread Arne Jansen
On 10/20/2012 01:10 AM, Tim Cook wrote:
> 
> 
> On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen wrote:
> 
> On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
> > On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen wrote:
> >
> > We have finished a beta version of the feature. A webrev for it
> > can be found here:
> >
> > http://cr.illumos.org/~webrev/sensille/fits-send/
> >
> > It adds a command 'zfs fits-send'. The resulting streams can
> > currently only be received on btrfs, but more receivers will
> > follow.
> > It would be great if anyone interested could give it some testing
> > and/or review. If there are no objections, I'll send a formal
> > webrev soon.
> >
> >
> >
> > Please don't bother changing libzfs (and proliferating the copypasta
> > there) -- do it like lzc_send().
> >
> 
> ok. It would be easier though if zfs_send would also already use the
> new style. Is it in the pipeline already?
> 
> > Likewise, zfs_ioc_fits_send should use the new-style API.  See the
> > comment at the beginning of zfs_ioctl.c.
> >
> > I'm not a fan of the name "FITS" but I suppose somebody else already
> > named the format.  If we are going to follow someone else's format
> > though, it at least needs to be well-documented.  Where can we find
> > the documentation?
> >
> > FYI, #1 google hit for "FITS":  http://en.wikipedia.org/wiki/FITS
> > #3 hit:  http://code.google.com/p/fits/
> >
> > Both have to do with file formats.  The entire first page of google
> > results for "FITS format" and "FITS file format" are related to these
> > two formats.  "FITS btrfs" didn't return anything specific to the file
> > format, either.
> 
> It's not too late to change it, but I have a hard time coming up with
> some better name. Also, the format is still very new and I'm sure it'll
> need some adjustments.
> 
> -arne
> 
> >
> > --matt
> 
> 
> 
> I'm sure we can come up with something.  Are you planning on this being
> solely for ZFS, or a larger architecture for replication in both directions
> in the future?

We have senders for zfs and btrfs. The planned receiver will be mostly
filesystem agnostic and can work on a much broader range. It basically
only needs to know how to create snapshots and where to store a small
amount of metadata.
It would be great if more filesystems would join on the sending side,
but I have no involvement there.

I see no basic problem in choosing a name that's already in use.
Especially with file extensions, most will already be taken. How about
something with 'portable' and 'backup', like pib or pibs? 'i' for
incremental.

-Arne


> 
> --Tim
>  
> 
