Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-26 Thread Olli Lehtola
> Certainly there is a simpler option; although I don't
> think anybody 
> actually suggested a "good" 2-port SATA card for
> Solaris. Do you have 
> one in mind?  Pci-e, I've even got an x16 slot free
> (and slower ones). 
> (I haven't pulled the trigger on the order yet.)

Hi, you could always get a Sil3124-based card from eBay. There are PCIe x1 
cards available; a 4-port card comes to about $50. At least the PCI 
versions (~$40) work with OpenSolaris (I tried one yesterday, though I only 
checked whether the disks show up and whether I could create a zpool).
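For anyone repeating that sanity check, it amounts to something like the following sketch. The controller/disk names (c5t0d0, c5t1d0) and the pool name are hypothetical and will differ on your system:

```shell
# List the disks the new controller exposes (format prints the disk
# menu and exits on EOF)
format < /dev/null

# Build a throwaway mirrored pool on two of the new ports, check it,
# then destroy it
zpool create testpool mirror c5t0d0 c5t1d0
zpool status testpool
zpool destroy testpool
```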

Cheers,
Olli
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-01-26 Thread Mark Bennett
An update:

Well things didn't quite turn out as expected.
I decided to follow the path right to the disks for clues.
Digging into the adapter diags with LSIUTIL revealed an adapter link issue.

Adapter Phy 5:  Link Down
  Invalid DWord Count   5,969,575
  Running Disparity Error Count 5,782,581
  Loss of DWord Synch Count 0
  Phy Reset Problem Count   0

After replacing cables, I eventually replaced the controller, and then things 
really went pear-shaped.
It turns out the backplane, which ran without major issues on the Supermicro 
controller, refused to operate with the LSI SAS3081E-R (with the latest code): the card 
wouldn't initialise, links only ran at 1.5Gb/s, most disks went offline, etc. 
Replacing the backplane (the whole JBOD) fixed the adapter link problems, but 
timeouts still occur when scrubbing. 
Oh look, the dev names moved: they used to start at c4t8d0, but it has "made it 
right" all by itself. EYHOBG!

 iostat -X -e -n
s/w h/w trn tot device
  0   0   0   0 c4t0d0
  0   0   0   0 c4t1d0
  0   2   8  10 c4t2d0
  0   3  18  21 c4t3d0
  0   0   0   0 c4t4d0
  0   2  12  14 c4t5d0
  0   1   8   9 c4t6d0
  0   2  15  17 c4t7d0
  0   0   0   0 c4t8d0
  0   0   0   0 c4t9d0
  0   0   0   0 c4t10d0
  0   0   0   0 c4t11d0
  0   0   0   0 c4t12d0
  0   0   0   0 c4t13d0
  0  11  84  95 c4t41d0
  0   8  62  70 c4t42d0
  0  10  72  82 c4t43d0
  0  19 147 166 c4t44d0
  0  12 102 114 c4t45d0
  0  19 145 164 c4t46d0
  0  13 108 121 c4t47d0
  0   7  62  69 c4t48d0
  0  14 113 127 c4t49d0
  0  11  96 107 c4t50d0
  0  11  91 102 c4t51d0
  0   8  64  72 c4t52d0
  0  13 108 121 c4t53d0
  0  11 106 117 c4t54d0
  0  10  82  92 c4t55d0
  0  10  88  98 c4t56d0
  0  12  85  97 c4t57d0
  0   6  38  44 c4t58d0
and from zpool status:
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
c4t2d0   ONLINE   0 0 1  25.5K repaired
c4t55d0  ONLINE   0 0 4  102K repaired

I do note that after these errors, there are no errors in the LSI adapter diag 
logs.

Data disks are all new WD10EARS.

If the OpenSolaris and ZFS combination wasn't so robust, this would have ended 
badly.

Next step will be trying different timeout settings on the controller to see 
if that helps.
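For anyone wanting to try the same, there is also a host-side knob (separate from the controller-firmware settings): the sd driver's per-command timeout. A sketch only; the value below is an example, not a recommendation, and a reboot is needed for /etc/system changes to take effect:

```shell
# Per-command timeout used by the Solaris sd driver (default 0x3C = 60
# seconds). Lowering it makes a flaky disk fail faster instead of
# stalling the pool; test before trusting.
echo 'set sd:sd_io_time = 0x1E' >> /etc/system
```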

P.S. I have a client with a "suspect", nearly full, 20TB zpool to try to scrub, 
so this is a big issue for me. A resilver of a 1TB disk takes up to 40 hrs, so 
I expect a scrub to take a week (or two), and at present it would probably result 
in multiple disk failures.

Mark.


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-26 Thread David Dyer-Bennet

On 1/26/2010 9:39 PM, Daniel Carosone wrote:
> On Tue, Jan 26, 2010 at 07:32:05PM -0800, David Dyer-Bennet wrote:
>> Okay, so this SuperMicro AOC-USAS-L8i is an "SAS" card?  I've never
>> done SAS; is it essentially a controller as flexible as SCSI that
>> then talks to SATA disks out the back?
>
> Yes, or SAS disks.

Ah, so there's another level of complexity there.  Okay, interesting.  
Well, I'm definitely not interested in spending more than SATA prices on 
disks, so I'll be going that route.

>> Amazon seems to be the only obvious place to buy it (Newegg and Tiger
>> Direct have nothing).
>>
>> And do I understand that it doesn't come with the cables it needs?
>
> Because the cables you need depend on what you have at the other end.

Right; since there are multiple possibilities, I wasn't sure.  Makes 
reasonable sense, though I cringe at the cable prices (and I've spent 
40 years in this industry; you'd think I'd be somewhat desensitized 
by now).

>> And that what I need are SAS-to-4-SATA breakout cables?
>
> Likely, yes - and yes, measuring would be a good idea.

Glad I thought of it in time.

>> I'm up over $450 for a "simple" upgrade
>
> Well, no.  The "simple" upgrade would be a 2-port SATA card to enable
> your extra two hotswap bays, like I suggested, plus the extra disks
> you already have.  By all means go for extra and better, at
> corresponding cost, if you want.

Certainly there is a simpler option, although I don't think anybody 
actually suggested a "good" 2-port SATA card for Solaris.  Do you have 
one in mind?  PCIe; I've even got an x16 slot free (and slower ones). 
(I haven't pulled the trigger on the order yet.)


--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-26 Thread Daniel Carosone
On Tue, Jan 26, 2010 at 07:32:05PM -0800, David Dyer-Bennet wrote:
> Okay, so this SuperMicro AOC-USAS-L8i is an "SAS" card?  I've never
> done SAS; is it essentially a controller as flexible as SCSI that
> then talks to SATA disks out the back?   

Yes, or SAS disks.

> Amazon seems to be the only obvious place to buy it (Newegg and Tiger Direct 
> have nothing).  
> 
> And do I understand that it doesn't come with the cables it needs?  

Because the cables you need depend on what you have at the other end.

> And that what I need are SAS-to-4-SATA breakout cables? 

Likely, yes - and yes, measuring would be a good idea.

> I'm up over $450 for a "simple" upgrade 

Well, no.  The "simple" upgrade would be a 2-port SATA card to enable
your extra two hotswap bays, like I suggested, plus the extra disks
you already have.  By all means go for extra and better, at
corresponding cost, if you want.

--
Dan.





Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-26 Thread David Dyer-Bennet
Okay, so this SuperMicro AOC-USAS-L8i is an "SAS" card?  I've never done SAS; 
is it essentially a controller as flexible as SCSI that then talks to SATA 
disks out the back?  

Amazon seems to be the only obvious place to buy it (Newegg and Tiger Direct 
have nothing).  

And do I understand that it doesn't come with the cables it needs?  And that 
what I need are SAS-to-4-SATA breakout cables?  And that those m*f* b*ds cost 
$30 or so each and I'll need two of them?  Bloody connector conspiracy.  So I'd 
better open the system out and measure a bunch of things, because I need to 
make sure I can reach everything I need to reach (this is an oversize case, 
it's actually a 4u rackmount up on end, and it's full rack depth, so the 
distance from motherboard to drives can be significant).  

I'm up over $450 for a "simple" upgrade (that includes the 4x2.5"-in-5.25"-bay 
box, controller, 2x 7200rpm enterprise 2.5" drives, cables, and a 2GB memory 
upgrade) at best-mainstream-retailer mailorder prices.  The dratted drives are 
four times the size I need, too; nobody carries the 80GB enterprise models, 
which are only twice the size I need.  The 160GB enterprise drives are only $7 
each more expensive than the 80GB consumer models, though.  Weird world.


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread Peter Jeremy
On 2010-Jan-27 05:38:57 +0800, "F. Wessels"  wrote:
>But that wasn't my point. Vibration, in the drive and excited by the
>drive, increases with the spindle speed.

There's also vibration caused by head actuator movements.  This is unlikely
to suffer from resonance amplification but may be of higher amplitude than
spindle-related vibration.

And finally, there's vibration from the various fans in the case.

-- 
Peter Jeremy




Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread Simon Breden
On the subject of vibrations when using multiple drives in a case (tower): I'm 
using silicone grommets on all the drive screws to isolate vibrations. This 
does seem to greatly reduce the vibration reaching the chassis, and it makes the 
machine a lot quieter, so I would expect that it minimises the vibration 
transferred between drives via the chassis. In turn, I would expect that this 
greatly reduces errors related to high vibration levels when reading and 
writing: less vertical head movement, leading to less variation in write signal 
strength.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] sharesmb name not working

2010-01-26 Thread Thomas Burgess
Cool. Question:

what is the significance of the group? Will all users still be able to
attach to it?


On Tue, Jan 26, 2010 at 5:27 PM, Cindy Swearingen wrote:

> Thomas,
>
> I think I've got sharemgr shares working in build 131 after all.
>
> See the steps below.
>
> Thanks,
>
> Cindy
>
>
> 1. Create a share group, like this:
>
> # sharemgr create -P smb myshare
>
> 2. Add the share and specify a resource name:
>
> # sharemgr add-share -r mystuff -s /tank/cindys myshare
>
> 3. Confirm the share.
>
> # sharemgr show -vp
> default nfs=()
> zfs
> myshare smb=()
>  mystuff=/tank/cindys
>
> # cat /etc/dfs/sharetab
> /tank/cindys   mystuff@myshare   smb   ""
>
> On 01/26/10 13:30, Thomas Burgess wrote:
>
>
>>
>> On Tue, Jan 26, 2010 at 2:36 PM, Cindy Swearingen <cindy.swearin...@sun.com> wrote:
>>
>>D'oh. I didn't test the workaround because I was running off to a
>> prezo.
>>
>>A quick test looks like you are correct. I apologize.
>>
>>Let me see if I can do some testing or get some more info.
>>
>>Thanks,
>>
>>Cindy
>>
>>
>>
>> hehe, it's cool.  I know how THAT goes.
>>
>> if you figure out a work around i'd LOVE to hear it, if not i will just
>> have to wait for 132 i guess...This isn't the ONLY problem i'm having but
>> it's one of the bigger ones.I'm also unable to get the gui for xen
>> working but i'm going to go to the xen discuss and see if that's a known
>> thing as well.
>>
>> Thanks for checking for me =)
>>
>>
>


Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2010-01-26 Thread Cindy Swearingen

Brad,

If you are referring to the thread that started in 2006, then I would
review this updated section:

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes

Check to see if your array is described or let us know which array you
are referring to...

Thanks,

Cindy

On 01/25/10 16:12, Brad wrote:

Hi!  So after reading through this thread and checking the bug report...do we 
still need to tell zfs to disable cache flush?

set zfs:zfs_nocacheflush=1
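For reference, that tunable can be applied either persistently or on the live kernel. A sketch only; whether you *should* set it depends on the array, per the Evil Tuning Guide section above:

```shell
# Persistent: takes effect at next boot
echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system

# Immediate, on the running kernel (0t1 is decimal 1 in mdb notation)
echo 'zfs_nocacheflush/W 0t1' | mdb -kw
```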



[zfs-discuss] compression ratio

2010-01-26 Thread Brad
With the default compression scheme (LZJB), how does one calculate the ratio 
or amount compressed ahead of time when allocating storage?
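One hedged approach: the achievable ratio depends entirely on the data, so rather than predicting it, write a representative sample and read back the `compressratio` property afterwards. A sketch; the dataset name and mountpoint are hypothetical:

```shell
# Create a scratch dataset with LZJB compression (mounts at
# /tank/sample by default)
zfs create -o compression=lzjb tank/sample

# Copy in a representative chunk of the real data, then flush
cp -r /path/to/representative/data /tank/sample/
sync

# compressratio reports the ratio actually achieved so far
zfs get compressratio tank/sample
```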


Re: [zfs-discuss] sharesmb name not working

2010-01-26 Thread Cindy Swearingen

Thomas,

I think I've got sharemgr shares working in build 131 after all.

See the steps below.

Thanks,

Cindy


1. Create a share group, like this:

# sharemgr create -P smb myshare

2. Add the share and specify a resource name:

# sharemgr add-share -r mystuff -s /tank/cindys myshare

3. Confirm the share.

# sharemgr show -vp
default nfs=()
zfs
myshare smb=()
  mystuff=/tank/cindys

# cat /etc/dfs/sharetab
/tank/cindys   mystuff@myshare   smb   ""

On 01/26/10 13:30, Thomas Burgess wrote:



On Tue, Jan 26, 2010 at 2:36 PM, Cindy Swearingen <cindy.swearin...@sun.com> wrote:


D'oh. I didn't test the workaround because I was running off to a prezo.

A quick test looks like you are correct. I apologize.

Let me see if I can do some testing or get some more info.

Thanks,

Cindy



hehe, it's cool.  I know how THAT goes.

if you figure out a work around i'd LOVE to hear it, if not i will just 
have to wait for 132 i guess...This isn't the ONLY problem i'm having 
but it's one of the bigger ones.I'm also unable to get the gui for 
xen working but i'm going to go to the xen discuss and see if that's a 
known thing as well.


Thanks for checking for me =)
 



Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread R.G. Keen
Good observation. It seems that I'm only keeping ahead of the folks in this 
forum by running as hard as I can. 

I just bought the sheet aluminum for making my drive cages. I'm going for the 
drives-in-a-cage setup, but I'm also floating each drive on vinyl (and hence 
dissipative, not resonant) vibration dampers, per drive; they're a stock item 
at McMaster-Carr. This lets each drive float from the chassis and gives you 
both spring isolation and dissipative isolation from the supporting member. 

I'll see if I can take some pictures. A seven-drive cage version of an ATX 
case/corpse is donating itself for a drilling template for the drive mounting 
holes. 

This is complemented by my lucky purchase from craigslist of two 4U rackmount 
cases for $25. The cages in these are rubber-mounted to the outer case, which 
will help further damp feed-in of airborne vibration picked up from the large 
flat panels of the case. The vinyl dampers @ 4/drive will both help keep the 
individual drive's vibration in and the other drives' vibration out, while 
dissipating it as heat.

R.G.


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread F. Wessels
@Bob, yes you're completely right. This kind of engineering is what you get 
when buying a 2540 for example. All parts are nicely matched. When you build 
your own whitebox the parts might not match. 

But that wasn't my point. Vibration, in the drive and excited by the drive, 
increases with the spindle speed. Despite fluid bearings or other measures, the 
platters are always imperfect, and at some point this imbalance can no longer 
be compensated for. The result is vibration. The amount of energy stored in the 
platters increases, I presume with the square of the spindle speed, so at 
higher speeds the effect gets greater.
Now back to resonance. If all drives are vibrating AND in sync, then nice 
standing waves will ripple through your chassis. I've seen this, ages ago, in 
the extreme on arrays where the drives were synced by an external clock signal. 
That was specialty hardware with a HIPPI interface.
Certain modern drives have circuitry to prevent this. Still, preventing the 
vibration in the first place is easier at lower speeds; a non-revolving disk 
emits zero vibration. It's mechanically easier at 5400rpm than at 15000rpm. No 
vibration equals no drive-induced resonance.

Back to the topic: since TLER/ERC/CCTL drives usually have this feature as 
well, and I know the difference between the drives with and without it, I 
thought it would be relevant to the discussion.

Regards,

Frederik


Re: [zfs-discuss] sharesmb name not working

2010-01-26 Thread Thomas Burgess
On Tue, Jan 26, 2010 at 2:36 PM, Cindy Swearingen wrote:

> D'oh. I didn't test the workaround because I was running off to a prezo.
>
> A quick test looks like you are correct. I apologize.
>
> Let me see if I can do some testing or get some more info.
>
> Thanks,
>
> Cindy
>
>
>
hehe, it's cool.  I know how THAT goes.

if you figure out a workaround i'd LOVE to hear it; if not i will just have
to wait for 132 i guess...This isn't the ONLY problem i'm having but it's
one of the bigger ones.  I'm also unable to get the gui for xen working
but i'm going to go to the xen discuss list and see if that's a known thing as
well.

Thanks for checking for me =)


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread Miles Nordin
> "dc" == Daniel Carosone  writes:

dc> There's a family of platypus in the creek just down the bike
dc> path from my house.

yeah, happy australiaday. :)

What I didn't understand in school is that egg layers like echidnas
are not exotic but are pettingzoo/farm/roadkill type animals.  IMHO
there's a severe taxonomic bias among humans, like a form of OCD, that
brings us ridiculous things like ``the seven layer OSI model'' and
Tetris and belief in ``RAID edition'' drives.

dc> Typically, once a sector fails to read, this is true for that
dc> sector (or range of sectors).  However, if the sector is
dc> overwritten and can be remapped, drives often continue working
dc> flawlessly for a long time thereafter.

While I don't doubt you had that experience (I've had it too), I was
mostly thinking of the google paper:

 http://labs.google.com/papers/disk_failures.html

They focus on temperature, which makes sense because it's $$$: spend
on cooling, or on replacing drives?  and they find even >45C does not
increase failures until the third year, so basically just forget about
it, and forget also about MTBF estimates based on silly temperature
timewarp claims and pay attention to their numbers instead.

But the interesting result for TLER/ERC is on page 7 figure 7, where
you see within the first two years the effect of reallocation on
expected life is very pronounced, and they say ``after their first
reallocation, drives are over 14 times more likely to fail within 60
days than drives without reallocation counts, making the critical
threshold for this parameter also '1'.''

It also says drives which fail the 'smartctl -t long' test (again,
this part of smartctl is broken on solaris :( plz keep in the back of
your mind :), which checks that every sector on the medium is
readable, are ``39 times more likely to fail within 60 days than
drives without scan errors.''  so...this suggests to me that read
errors are not so much things that happen from time to time even with
good drives, and therefore there is not much point in trying to write
data into an unreadable sector (to remap it) or to worry about
squeezing one marginal sector out of an unredundant desktop drive (the
drive's bad---warn OS, recover data, replace it).

One of the things that's known to cause bad sectors is high-flying
writes, and all the google-studied drives were in data centers, so
some of this might not be true of laptop drives that get knocked
around a fair bit.

dc> Once they've run out of remapped sectors, or have started
dc> consistently producing errors, then they're cactus.  Do pay
dc> attention to the smart error counts and predictors.

yes, well, you can't even read these counters on Solaris because
smartctl doesn't make it through the SATA stack, so ``do pay attention
to'' isn't very practical advice.  but if you have Linux, the advice
of the google paper is to look at the remapped sector count (is it
zero, or more than zero?), and IIRC that sometimes the ``seek error
rate'' can be compared among identical model drives but is useless
otherwise.  The ``overall health assessment'' is obviously useless,
but hopefully I don't need to tell anyone that.  The 'smartctl -t
long' test is my favorite, but it's proactive.  Anyway the main result
I'm interested here is what I just said, that unreadable sectors are
not a poisson process.  They're strong indicators of drives about to
fail, ``the critical threshold is '1' '', and not things around which
you can usefully plan cargocult baroque spaghetti rereading strategies.
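For the Linux case described above, checking those counters looks roughly like this sketch (/dev/sda is a placeholder; attribute names vary slightly between drive vendors):

```shell
# Per the Google paper, a Reallocated_Sector_Ct above zero is the signal
smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'

# Kick off the full-surface read test; it runs in the background on the
# drive, and results appear in the self-test log when it finishes
smartctl -t long /dev/sda
smartctl -l selftest /dev/sda
```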

dc> The best practices of regular scrubs and sufficiently
dc> redundant pools and separate backups stand, in spite of and
dc> indeed because of such idiocy.

ok, but the new thing that I'm arguing is that TLER/ERC is a
completely useless adaptation to a quirk of RAID card firmware and has
nothing to do with ZFS, nor with best RAID practices in general.  I'm
not certain this statement is true, but from what I've heard so far
that's what I think.




Re: [zfs-discuss] sharesmb name not working

2010-01-26 Thread Cindy Swearingen

D'oh. I didn't test the workaround because I was running off to a prezo.

A quick test looks like you are correct. I apologize.

Let me see if I can do some testing or get some more info.

Thanks,

Cindy

On 01/26/10 09:50, Thomas Burgess wrote:

i tried using sharemgr. it didn't work either.

or maybe i just did it wrong, could you explain what i need to do?

let's say i was trying to do this command:

zfs set sharesmb=name=wonslung tank/nas/Wonslung

how do i do that in sharemgr?

when i tried i either got an error or it didn't change. i realize i 
may be doing it wrong.  When does 132 drop? if it's pretty soon i guess 
i could just wait.


Thanks for the reply.

On Tue, Jan 26, 2010 at 10:42 AM, Cindy Swearingen <cindy.swearin...@sun.com> wrote:


Hi Thomas,

Looks like a known problem in b131 that is fixed in b132:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6912791
Unable to set sharename using zfs set sharesmb=name=

The workaround is to use sharemgr instead.

Thanks,

Cindy


On 01/23/10 21:50, Thomas Burgess wrote:

I can't get sharesmb=name= to work...it worked in b130...i'm
not sure if it's broken in 131 or if my machine is being a pain.


anyways, when i try to do this:

zfs set sharesmb=name=wonslung tank/nas/Wonslung

i get this:


cannot set property for 'tank/nas/Wonslung': 'sharesmb' cannot
be set to invalid options


i've googled this...and it seems to pop up a lot but so far i
can't find any solutions...it's really driving me nuts.


Also, when i try to create a NEW share the same thing happens
when i use -o triggers.

Please help










Re: [zfs-discuss] RAW Device on ZFS

2010-01-26 Thread Francois Napoleoni

# zfs create -V <size> <pool>/<volume>

accessible under:

/dev/zvol/dsk/<pool>/<volume>
/dev/zvol/rdsk/<pool>/<volume>

HTH.

F.

Tony MacDoodle wrote:

Is it possible to create a RAW device on a ZFS pool?

Thanks






--
Francois Napoleoni / Sun Support Engineer
mail  : francois.napole...@sun.com
phone : +33 (0)1 3403 1707
fax   : +33 (0)1 3403 1114



Re: [zfs-discuss] RAW Device on ZFS

2010-01-26 Thread Darren J Moffat

On 26/01/2010 17:08, Tony MacDoodle wrote:

Is it possible to create a RAW device on a ZFS pool?


Do you mean a ZVOL ?

zfs create -V 1g tank/myvol

this will appear as a block and char device in:

/dev/zvol/dsk/tank/myvol
/dev/zvol/rdsk/tank/myvol

Is this what you mean by RAW ?  If not please explain what you do mean 
by RAW.
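Assuming a zvol is what's wanted, it behaves like any other disk device. A sketch using the names from the example above; the newfs/swap uses are illustrative, not the only options:

```shell
# Put a UFS filesystem on the volume via the character (raw) device...
newfs /dev/zvol/rdsk/tank/myvol

# ...or hand the block device to something else entirely, e.g. swap
swap -a /dev/zvol/dsk/tank/myvol
```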


--
Darren J Moffat


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-01-26 Thread Mark Nipper
I would definitely be interested to see if the newer firmware fixes the problem 
for you.  I have a very similar setup to yours, and finally forcing the 
firmware flash to 1.26.00 of my on-board LSI 1068E on a SuperMicro H8DI3+ 
running snv_131 seemed to address the issue.  I'm still waiting to see if 
that's entirely the case, but so far so good (even with a LOT of disk activity 
to clean up the very messy zpool which had resulted from all of the disk 
timeouts/errors previously).


[zfs-discuss] RAW Device on ZFS

2010-01-26 Thread Tony MacDoodle
Is it possible to create a RAW device on a ZFS pool?

Thanks


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-26 Thread Mark Nipper
> It may depend on the firmware you're running. We've
> got a SAS1068E based 
> card in Dell R710 at the moment, connected to an
> external SAS JBOD, and 
> we did have problems with the as shipped firmware.

Well, I may have misspoke.  I just spent a good portion of yesterday upgrading 
to the latest firmware myself (downloaded from SuperMicro's FTP site, also version 
1.26.00, after I figured out I had to pass the -o option to mptutil to 
force the flash, since it was complaining about a mismatched card or some such), 
and I thought the machine had locked up again later in the day yesterday 
because I couldn't ssh into it.

To my surprise though, I was able to log into the machine just fine this 
morning directly on the console from the command line.  It seems the snv_125 
bug with /dev/ptmx bit me (the "error: /dev/ptmx: Permission denied" problem 
that required me tracking down the release notes for snv_125 to figure out the 
problem) and the server was happy otherwise.  More importantly, the zpool 
activity had all finished and I have three clean spares again!  Normally this 
amount of I/O would have totally killed the machine!

So somewhere between upgrading the firmware to the latest version and upgrading 
to snv_131, it looks like the problem may have actually been addressed.  I'm 
guardedly optimistic at this point, given the previous problems I've had so far 
with this on-board controller.

Interesting to hear that someone else with the same chip, but on an expansion 
card, has no problems (my problems were with the on-board chip).


Re: [zfs-discuss] sharesmb name not working

2010-01-26 Thread Thomas Burgess
i tried using sharemgr. it didn't work either.

or maybe i just did it wrong, could you explain what i need to do?

let's say i was trying to do this command:

zfs set sharesmb=name=wonslung tank/nas/Wonslung

how do i do that in sharemgr?

when i tried i either got an error or it didn't change. i realize i may
be doing it wrong.  When does 132 drop? if it's pretty soon i guess i could
just wait.

Thanks for the reply.

On Tue, Jan 26, 2010 at 10:42 AM, Cindy Swearingen  wrote:

> Hi Thomas,
>
> Looks like a known problem in b131 that is fixed in b132:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6912791
> Unable to set sharename using zfs set sharesmb=name=
>
> The workaround is to use sharemgr instead.
>
> Thanks,
>
> Cindy
>
>
> On 01/23/10 21:50, Thomas Burgess wrote:
>
>> I can't get sharesmb=name= to work...it worked in b130...i'm not sure if
>> it's broken in 131 or if my machine is being a pain.
>>
>>
>> anyways, when i try to do this:
>>
>> zfs set sharesmb=name=wonslung tank/nas/Wonslung
>>
>> i get this:
>>
>>
>> cannot set property for 'tank/nas/Wonslung': 'sharesmb' cannot be set to
>> invalid options
>>
>>
>> i've googled this...and it seems to pop up a lot but so far i can't find
>> any solutions...it's really driving me nuts.
>>
>>
>> Also, when i try to create a NEW share the same thing happens when i use
>> -o triggers.
>>
>> Please help
>>
>>
>> 
>>
>>
>


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread Bob Friesenhahn

On Tue, 26 Jan 2010, F. Wessels wrote:

The "green" drives with their lower spindle speeds reduces this 
effect.


I don't agree that lowering the spindle speed necessarily reduces 
resonance.  Reducing resonance is accomplished by assuring that 
chassis components do not resonate (produce standing waves) at a 
harmonic of the frequency that the rotating media produces. 
Adjusting the size, length, and weight of chassis components to avoid 
the harmonics alleviates the problem.  A chassis designed for 
high-speed drives may not do so well with slow-speed drives, and 
vice-versa.


Anyone who has played with audio frequency sweeps and a large 
subwoofer soon becomes familiar with resonance and that the lower 
frequencies often cause more problems than the higher ones.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread F. Wessels
After following this topic over the last few days (nearly everybody has 
contributed to it), I think it's time to add a new factor.

Vibration.

First, some proof of how sensitive modern drives are:
http://blogs.sun.com/brendan/entry/unusual_disk_latency

Most "enterprise" drives also contain circuitry to handle the vibration 
resulting from multi-drive setups; resonance, for example, is avoided by 
adjusting the spindle speed. Enterprise chassis with drive sleds contain 
mechanical dampening. In a typical soho case all drives are screwed to a shared 
chassis, so the vibration problem is much worse in such a setup.
The "green" drives with their lower spindle speeds reduce this effect.
Personally I've had good experience with the WD RE2 1TB green drives: I have an 
8-drive pool with these and have seen no problems to date. On another system with 6 
Seagate consumer drives I've already lost two drives. Both have been running 24/7 for 
almost two years.

I would like to thank the people who brought to my attention that TLER and the 
idle timeout CAN be configured on some drives, although I'm only interested in 
these options if they survive a power cycle.

Yes, the enterprise drives are expensive, but so are my time and data.
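On the configuration point: drives that implement SCT ERC can, with a sufficiently recent smartmontools, have the error-recovery timers read and set from the host. A sketch only; /dev/sda is a placeholder, drive support varies, and on many drives the setting does not survive a power cycle, so it would need reapplying at boot:

```shell
# Read the current error-recovery-control timers, if the drive exposes
# them at all
smartctl -l scterc /dev/sda

# Set read/write ERC to 7.0 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sda
```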

Regards,

Frederik


Re: [zfs-discuss] sharesmb name not working

2010-01-26 Thread Cindy Swearingen

Hi Thomas,

Looks like a known problem in b131 that is fixed in b132:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6912791
Unable to set sharename using zfs set sharesmb=name=

The workaround is to use sharemgr instead.

Thanks,

Cindy

On 01/23/10 21:50, Thomas Burgess wrote:
I can't get sharesmb=name= to work...it worked in b130...i'm not sure 
if it's broken in 131 or if my machine is being a pain.



anyways, when i try to do this:

zfs set sharesmb=name=wonslung tank/nas/Wonslung

i get this:


cannot set property for 'tank/nas/Wonslung': 'sharesmb' cannot be set to 
invalid options



i've googled this...and it seems to pop up a lot but so far i can't find 
any solutions...it's really driving me nuts.



Also, when i try to create a NEW share the same thing happens when i use 
-o triggers.


Please help







Re: [zfs-discuss] sharesmb name not working

2010-01-26 Thread Thomas Burgess
so this is a bug in b131?

because this worked in b130 for SURE.


i have shares i made in b130 with different names, but in b131 i can't do
it...it sucks because in b130 i can't use Xorg but in b131 i can't name
shares...oh well...how do i report this as an error so they can fix it for
b132?


On Mon, Jan 25, 2010 at 7:10 PM, Chris Du  wrote:

> I just tried to create a new share and got the same error.
> --
> This message posted from opensolaris.org
>