Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-18 Thread Simon Breden
So are we all agreed then, that a vdev failure will cause pool loss?


Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-18 Thread Simon Breden
OK, thanks Freddie, that's pretty clear.

Cheers,
Simon


Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-17 Thread Simon Breden
OK, thanks Ian.

Another example:

Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2) a 
two-drive mirror vdev, and three drives in the RAID-Z2 vdev failed?


[zfs-discuss] vdev failure -> pool loss ?

2010-10-17 Thread Simon Breden
I would just like to confirm whether or not a vdev failure would lead to 
failure of the whole pool.

For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail 
within the first vdev, is all the data within the whole pool unrecoverable?
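
For concreteness, such a pool would be built with something like the following 
(device names hypothetical):

# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

Since ZFS stripes data across all top-level vdevs, my understanding is that a 
third drive failure within either RAID-Z2 vdev would take that vdev, and the 
whole pool with it.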


Re: [zfs-discuss] TLER and ZFS

2010-10-06 Thread Simon Breden
> Hi all
> 
> I just discovered WD Black drives are rumored not to
> be set to allow TLER.

Yep: http://opensolaris.org/jive/message.jspa?messageID=501159#501159

> Enterprise drives will cost
> about 60% more, and on a large install, that means a
> lot of money...

True, sometimes more than twice the price.

If these are for a business, personally I would invest in TLER-capable drives 
like the WD REx models (RAID Edition). These allow fast fails on read/write 
errors so that the data can be remapped, which reduces the chance of the drive 
being kicked from the array.
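
For drives whose firmware permits it, the error recovery timeout can sometimes 
be inspected and set with smartmontools -- a sketch only, with a hypothetical 
device path; many consumer drives reject the command or forget the setting 
after a power cycle:

# smartctl -l scterc /dev/rdsk/c7t0d0
# smartctl -l scterc,70,70 /dev/rdsk/c7t0d0   (7.0 s read / 7.0 s write limits)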

If these are for home and you don't have, or are not willing to spend, a lot 
more on TLER-capable drives, then go for something reliable. Forget WD Green 
drives (see links below). After WD removed the TLER setting on their 
non-enterprise drives, I switched to Samsung HD203WI drives and so far these 
have been flawless. I believe it's a 4-platter model. Samsung have very 
recently (last month?) brought out the HD204UI, a 3-platter (667GB per platter) 
model, which should be even better -- check the newegg ratings for good/bad 
news etc.

http://opensolaris.org/jive/thread.jspa?threadID=121871&tstart=0
http://breden.org.uk/2009/05/01/home-fileserver-a-year-in-zfs/#drives
http://jmlittle.blogspot.com/2010/03/wd-caviar-green-drives-and-zfs.html 

Cheers,
Simon


Re: [zfs-discuss] Migrating to an aclmode-less world

2010-10-05 Thread Simon Breden
Hi Cindy,

That sounds very reassuring.

Thanks a lot.

Simon


Re: [zfs-discuss] Migrating to an aclmode-less world

2010-10-04 Thread Simon Breden
Any ideas anyone?


[zfs-discuss] Migrating to an aclmode-less world

2010-09-29 Thread Simon Breden
Currently I'm still using OpenSolaris b134 and I had used the 'aclmode' 
property on my file systems. However, the aclmode property has been dropped 
now: 
http://arc.opensolaris.org/caselog/PSARC/2010/029/20100126_mark.shellenbaum 

I'm wondering what will happen to the ACLs on these files and directories if I 
upgrade to a newer Solaris version (OpenIndiana b147 perhaps).

I'm sharing the file systems using CIFS.

I was using very simple ACLs like below for easy inheritance of ACLs, which 
worked OK for my needs.

# zfs set aclinherit=passthrough tank/home/fred/projects
# zfs set aclmode=passthrough tank/home/fred/projects
# chmod A=\
owner@:rwxpdDaARWcCos:fd-:allow,\
group@:rwxpdDaARWcCos:fd-:allow,\
everyone@:rwxpdDaARWcCos:fd-:deny \
/tank/home/fred/projects
# chown fred:fred /tank/home/fred/projects
# zfs set sharesmb=name=projects tank/home/fred/projects
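
To verify the result, something like this should show the properties and the 
inherited ACL entries (ls -V lists ACLs on Solaris):

# zfs get aclinherit,aclmode tank/home/fred/projects
# ls -dV /tank/home/fred/projects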

Cheers,
Simon


Re: [zfs-discuss] drive speeds etc

2010-09-28 Thread Simon Breden
> described problems with WD aren't okay for non-critical
> development/backup/home use either.

Indeed. I don't use WD drives for RAID any longer.

>  The statement
> from WD is nothing
> but an attempt to upsell you, to differentiate the
> market so they can
> tap into the demand curve at multiple points

Yes, I'm quite aware of this.


> Don't let this
> stuff get a foothold inside your brain.

Ok, thanks, I'll try to ensure that never happens :P

> ``mixing'' drives within a stripe is a good idea
> because it protects
> you from bad batches and bad models/firmwares, which
> are not rare in
> recent experience!

Yep, that's one way, although you also multiply the risk of at least one type 
of drive being a lemon.

Another is to research good drives & firmwares and stick with those. On both of 
my two drive choosing/buying occasions, this latter approach has served me 
well. Zero read/write/checksum errors so far in almost 3 years. I must be 
lucky, very lucky :)

>  I always mix drives and included
> WD in that mix up
> until this latest rash of problems.

I avoided WD (for RAID) as soon as these problems showed up and bought another 
manufacturer's drives.

I still buy their Caviar Black drives as scratch video editing drives though, 
as they're pretty good.


Re: [zfs-discuss] drive speeds etc

2010-09-28 Thread Simon Breden
IIRC the currently available WD Caviar Black models no longer allow TLER to be 
set. For WD drives, to have TLER capability you will need to buy their 
enterprise models, like the REx models, which cost mucho $$$.


Re: [zfs-discuss] drive speeds etc

2010-09-28 Thread Simon Breden
Regarding vdevs and mixing WD Green drives with other drives, you might find it 
interesting that WD itself does not recommend them for 'business critical' RAID 
use - this quoted from the WD20EARS page here 
(http://www.wdc.com/en/products/Products.asp?DriveID=773):


Desktop / Consumer RAID Environments - WD Caviar Green Hard Drives are tested 
and recommended for use in consumer-type RAID applications (i.e., Intel Matrix 
RAID technology).*

*Business Critical RAID Environments – WD Caviar Green Hard Drives are not 
recommended for and are not warranted for use in RAID environments utilizing 
Enterprise HBAs and/or expanders and in multi-bay chassis, as they are not 
designed for, nor tested in, these specific types of RAID applications. For all 
Business Critical RAID applications, please consider WD’s Enterprise Hard 
Drives that are specifically designed with RAID-specific, time-limited error 
recovery (TLER), are tested extensively in 24x7 RAID applications, and include 
features like enhanced RAFF technology and thermal extended burn-in testing.


Further reading:
http://breden.org.uk/2009/05/01/home-fileserver-a-year-in-zfs/#drives
http://opensolaris.org/jive/thread.jspa?threadID=121871&tstart=0
http://jmlittle.blogspot.com/2010/03/wd-caviar-green-drives-and-zfs.html 
(mixing WD Green & Hitachi)


Re: [zfs-discuss] Best practice for full stystem backup - equivelent of ufsdump/ufsrestor

2010-05-05 Thread Simon Breden
Hi Euan,

You might find some of this useful:

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/
http://breden.org.uk/2009/08/30/home-fileserver-zfs-boot-pool-recovery/

I backed up the rpool to a single file, which I believe is frowned upon due to 
the consequences of an error occurring within the sent stream, but sending to a 
file system instead will fix this aspect, and you may still find the rest of it 
useful.
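
In other words, rather than redirecting the stream into a file, receive it into 
a pool, where each block is checksummed on receipt -- a sketch, with 'bpool' as 
a hypothetical backup pool:

# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup > /backup/rpool.zfs   (single file: hard to verify)
# zfs send -R rpool@backup | zfs recv -Fd bpool  (checked as it arrives)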

Cheers,
Simon


Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-03 Thread Simon Breden
Thanks Miles, I'll take a look.

Cheers,
Simon


Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-03 Thread Simon Breden
I ran smbios and for the memory-related section I saw the following:

ID    SIZE TYPE
64    15   SMB_TYPE_MEMARRAY (physical memory array)

  Location: 3 (system board or motherboard)
  Use: 3 (system memory)
  ECC: 3 (none)
  Number of Slots/Sockets: 4
  Memory Error Data: Not Supported
  Max Capacity: 4294967296 bytes

ID    SIZE TYPE
65    62   SMB_TYPE_MEMDEVICE (memory device)

  Manufacturer: None
  Serial Number: None
  Asset Tag: None
  Location Tag: DIMM_B1
  Part Number: None

  Physical Memory Array: 64
  Memory Error Data: Not Supported
  Total Width: 72 bits
  Data Width: 64 bits
  Size: 1073741824 bytes
  Form Factor: 9 (DIMM)
  Set: None
  Memory Type: 18 (DDR)
  Flags: 0x0
  Speed: 1ns
  Device Locator: DIMM_B1
  Bank Locator: Bank0/1
...

From this output it appears, via the BIOS I presume, that my BIOS thinks it 
doesn't have ECC RAM, even though all the memory modules are indeed ECC 
modules.

Might be time to (1) check my current BIOS settings, even though I felt sure 
ECC was enabled in the BIOS already, and (2) check for a newer BIOS update. A 
pity, as the machine has been rock-solid so far, and I don't like changing 
stable BIOSes...

Here's the start of the SMBIOS output:

# smbios
ID    SIZE TYPE
0     104  SMB_TYPE_BIOS (BIOS information)

  Vendor: Phoenix Technologies, LTD
  Version String: ASUS M2N-SLI DELUXE ACPI BIOS Revision 1502
  Release Date: 03/31/2008
  Address Segment: 0xe000
  ROM Size: 524288 bytes
  Image Size: 131072 bytes
  Characteristics: 0x7fcb9e80
SMB_BIOSFL_PCI (PCI is supported)
SMB_BIOSFL_PLUGNPLAY (Plug and Play is supported)
SMB_BIOSFL_APM (APM is supported)
SMB_BIOSFL_FLASH (BIOS is Flash Upgradeable)
SMB_BIOSFL_SHADOW (BIOS shadowing is allowed)
SMB_BIOSFL_CDBOOT (Boot from CD is supported)
SMB_BIOSFL_SELBOOT (Selectable Boot supported)
SMB_BIOSFL_ROMSOCK (BIOS ROM is socketed)
SMB_BIOSFL_EDD (EDD Spec is supported)
SMB_BIOSFL_525_360K (int 0x13 5.25" 360K floppy)
SMB_BIOSFL_525_12M (int 0x13 5.25" 1.2M floppy)
SMB_BIOSFL_35_720K (int 0x13 3.5" 720K floppy)
SMB_BIOSFL_35_288M (int 0x13 3.5" 2.88M floppy)
SMB_BIOSFL_I5_PRINT (int 0x5 print screen svcs)
SMB_BIOSFL_I9_KBD (int 0x9 8042 keyboard svcs)
SMB_BIOSFL_I14_SER (int 0x14 serial svcs)
SMB_BIOSFL_I17_PRINTER (int 0x17 printer svcs)
SMB_BIOSFL_I10_CGA (int 0x10 CGA svcs)
  Characteristics Extension Byte 1: 0x33
SMB_BIOSXB1_ACPI (ACPI is supported)
SMB_BIOSXB1_USBL (USB legacy is supported)
SMB_BIOSXB1_LS120 (LS-120 boot is supported)
SMB_BIOSXB1_ATZIP (ATAPI ZIP drive boot is supported)
  Characteristics Extension Byte 2: 0x5
SMB_BIOSXB2_BBOOT (BIOS Boot Specification supported)
SMB_BIOSXB2_ETCDIST (Enable Targeted Content Distrib.)
  Version Number: 0.0
  Embedded Ctlr Firmware Version Number: 0.0

Cheers,
Simon


Re: [zfs-discuss] What's the advantage of using multiple filesystems in a

2010-03-03 Thread Simon Breden
Hi Tom,

My input:
Create one file system per type of data to:
1. help organise your data logically
2. increase file system granularity, which allows different properties to be 
set per file system: such as copies, compression etc.
3. allow separate shares to be easily set up: via CIFS (SMB/Samba-like) or NFS
4. allow finer control over user access depending on data type

For example I split my file systems into logical types like:
1. user home directory file systems
2. media (music, photo, video etc)
3. software archive
4. backups
5. test area

Also, within each of these file systems, ZFS allows file system nesting, to 
allow better grouping.

If we assume 'tank' as the pool name, then the default file system created 
along with the pool is also called 'tank'.

So for media file systems, I might create 'tank/media' as the base file system.
Within 'tank/media' I can create 'tank/media/music', 'tank/media/photo', 
'tank/media/video' etc.

For the home file systems, I might create 'tank/home' and then nest 
'tank/home/fred', 'tank/home/wilma' etc.
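
A minimal sketch of creating that layout, using the same names as above:

# zfs create tank/media
# zfs create tank/media/music
# zfs create tank/media/photo
# zfs create tank/media/video
# zfs create tank/home
# zfs create tank/home/fred
# zfs set compression=on tank/home/fred   (an example per-file-system property)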

For easy, regular snapshotting of all the file systems, you can issue:
# zfs snapshot -r tank@20100303

This will give snapshot names for each nested file system under the root 'tank' 
file system like 'tank/home@20100303', 'tank/home/fred@20100303', 
'tank/home/wilma@20100303', 'tank/media@20100303', 'tank/media/music@20100303', 
'tank/media/photo@20100303', 'tank/media/video@20100303' etc.
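
And to list them all afterwards:

# zfs list -t snapshot -r tank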

If you want some more stuff to read then try these:
http://breden.org.uk/2008/03/08/home-fileserver-zfs-setup/
http://breden.org.uk/2009/05/10/home-fileserver-zfs-file-systems/
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/

You'll also find stuff on snapshots, backups there too.

Hope this helps :)

Cheers,
Simon


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Simon Breden
Probably 6 in a RAID-Z2 vdev.

Cheers,
Simon


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-03 Thread Simon Breden
Hi Tonmaus,

> they are the new revision. 

OK.

> I got the impression as well that the complaints you reported were mainly
> related to embedded Linux systems probably running LVM / mda (thecus, Qnap,
> ). Other reports I had seen related to typical HW raids. I don't think the
> situation is comparable to ZFS.

That could be the case, but maybe I'll have to create a specific thread along 
the lines of "Anyone having success / problems with WD Green drives?" in order 
to know a bit more details. There were Mac users also complaining -- see the 
WDC links in the "Best 1.5TB drives" thread.

> I have also followed some TLER related threads here.
> I am not sure if there was ever a clear assertion if
> consumer drive related Error correction will affect a
> ZFS pool or not. Statistically we should have a lot
> of "restrictive TLER settings helped me to solve my
> ZFS pool issues" success reports here, if it were.

IIRC, Richard said that he thought a troublesome non-RAID drive would affect 
MTTR and not reliability. I.e. you'll have to manually intervene and replace a 
consumer drive if it causes the system to hang, whereas the RAID edition drives 
will probably report the error quickly, ZFS will rewrite the data elsewhere, 
and the drive will maybe not be kicked.

So it sounds preferable to have TLER in operation, if one can find a 
consumer-priced drive that allows it, or just take the hit, go with whatever 
non-TLER drive you choose, and expect to have to manually intervene if a drive 
plays up. That's OK for a home user, who is not too badly affected, but not 
good for businesses, which need things recovered quickly.

> That all rather points to singular issues with
> firmware bugs or similar than to a systematic issue,
> doesn't it?

I'm not sure.

Cheers,
Simon


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Simon Breden
Sounds good.

I was taking a look at the 1TB Caviar Black drives which are WD1001FALS I think.
They seem to have superb user ratings and good reliability comments from many 
people.

I consider these "full fat" drives as opposed to the LITE (green) drives, as 
they spin at 7200 rpm instead of 5400 rpm, have higher performance and burn 
more juice than the Green models, but they have superb reviews from almost 
everyone regarding behaviour and reliability, and at the end of the day, we 
need good, reliable drives that work well in a RAID system.

I can get them for around the same price as the cheapest 1.5TB green drives 
from Samsung.
Somewhere I saw people saying that WDTLER.EXE works to allow reduction of the 
error reporting time, like the enterprise RE (RAID Edition) versions. However, 
I then saw another user saying that on the newer revisions WD have disabled 
this. I need to check a bit more to see what's really the case.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Simon Breden
That's a pity about smartmontools not working. Which controllers are you using?

Good news about no sleeping though, although perhaps not so economical.

I think I'd rather burn a bit more power and have drives that respond properly 
than weird timeout issues some people seem to be experiencing with some of the 
green low power drives.

Cheers,
Simon


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-02 Thread Simon Breden
IIRC the Black range are meant to be the 'performance' models and so are a bit 
noisy. What's your opinion? And the 2TB models are not cheap either for a home 
user. The 1TB seem a good price. And from what little I read, it seems you can 
control the error reporting time with the WDTLER.EXE utility :)

Cheers,
Simon


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-02 Thread Simon Breden
The thing that puts me off the 7K2000 is that it is a 5 platter model. The 
latest 2TB drives use 4 x 500GB platters. A bit less noise, vibration and heat, 
in theory :)

And the latest 1.5TB drives use only 3 x 500GB platters.


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-02 Thread Simon Breden
If I'm not mistaken then the WD2002FYPS is an enterprise model: WD RE4-GP (RAID 
Edition, Green Power), so you almost certainly have the firmware that allows 
(1) the idle time before spindown to be modified with WDIDLE3.EXE and (2) the 
error reporting time to be modified with WDTLER.EXE. 

So I expect your drives are spinning down to save power as they are Green 
series drives. But if this spindown is causing odd things to happen you could 
see if it's possible to increase the spindown time with WDIDLE3.EXE.

Let us know if you get any news back from WD.

Cheers,
Simon


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-02 Thread Simon Breden
Hi Tonmaus,

That's good to hear. Which revision are they: 00R6B0 or 00P8B0? It's marked on 
the drive top.

From what I've seen elsewhere, people seem to be complaining about the newer 
00P8B0 revision, so I'd be interested to hear from you. These revision numbers 
are listed in the first post of the thread below, and refer to the 1.5TB model 
(WD15EADS), but might also apply to the WD20EADS model.

http://opensolaris.org/jive/thread.jspa?threadID=121871

Cheers,
Simon


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-02 Thread Simon Breden
> My timeout issue is definitely the WD10EARS disks. WD has chosen to cripple
> their consumer grade disks when used in quantities greater than one.
> 
> I'll now need to evaluate alternative suppliers of low cost disks for low
> end high volume storage.
> 
> Mark.
> 
> typo ST32000542AS not NS

This was the conclusion I came to. I'm also on the hunt for some decent 
consumer-priced drives for use in a ZFS RAID setup, and I created a thread to 
try to find which ones people recommend. See here:
http://opensolaris.org/jive/thread.jspa?threadID=121871

So far, I'm inclined to think that the Samsung HD154UI 1.5TB, and possibly the 
Samsung HD203WI 2TB drives might be the most reliable choices at the moment, 
based on the data in that thread and checking user reports.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Media server build

2010-01-30 Thread Simon Breden
Good to hear someone else confirming the greatness of this ION platform for an 
HTPC. BTW, how do you keep all those drives quiet? Do you use a lot of silicone 
grommets on the drive screws, or some other form of vibration damping?

Cheers,
Simon


Re: [zfs-discuss] Media server build

2010-01-29 Thread Simon Breden
Yep, you're right, the topic was media server build :)

Cheers,
Simon


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-29 Thread Simon Breden
> Aren't the EARS drives the first ones using 4k
> sectors? Does OpenSolaris support that properly yet?
> From what I've read using the 512-byte emulation mode
> in the drives is not good for performance (lots of
> read/modify/write), though I don't know whether that
> could cause these kind of problems.

Yes, I think the 'EARS' series are the first WD consumer drives using the new 
4K sector 'Advanced Formatting':
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701229.pdf :
"Advanced Formatting: Technology being adopted by WD and other drive 
manufacturers to increase media format efficiencies, thus enabling larger drive 
capacities. (WDxxEARS and WDAARS models only)"

I saw some discussion of performance and alignment, but it was in relation to 
Linux; perhaps it doesn't apply to Solaris?:
http://www.gossamer-threads.com/lists/linux/kernel/1039141

Cheers,
Simon


Re: [zfs-discuss] Media server build

2010-01-29 Thread Simon Breden
> Same here, although I use a normal modded XBOX. I am
> thinking of
> switching to a Mac Mini w/ Plex soon (a friend's
> setup is really
> awesome) - I want more horsepower under the hood. The
> XBOX is dated
> now, and won't even play certain DVDs.

Yes, a modded XBOX will play a lot of things but will struggle with highly 
compressed streams and will fail at HD etc. The ION platform is especially 
interesting as these boxes are really cheap, and you can slap Linux + XBMC on 
there for free. Yes, I also hear that Plex running on a Mac Mini is good, but 
they are more expensive than an ION-based box. ION can play HD apparently. And 
what about Plex? I think it's a fork of XBMC. Does it have the same level of 
development support as XBMC?

Cheers,
Simon


Re: [zfs-discuss] Media server build

2010-01-29 Thread Simon Breden
I have used OpenSolaris on the NAS and XBMC as the media player, and it works 
great. See:

http://breden.org.uk/2008/03/08/home-fileserver-zfs-setup/ and
http://breden.org.uk/2009/05/10/home-fileserver-zfs-file-systems/ and
http://breden.org.uk/2009/06/20/home-fileserver-media-center/

For the HTPC (media client) computer, the NVidia ION platform looks good, 
running Linux and XBMC.

ASRock ION 330 + Linux + XBMC = A nice reasonably priced HTPC
This small, quiet HTPC based on a low-power Intel Atom 330 dual core processor, 
running XBMC on Linux looks like it should be a nice little Home Theatre PC. It 
uses the NVidia ION graphics platform and seems to have sufficient power for 
displaying most types of video. Price around £250 / €280 / $350 as of October 
2009. Includes 2 GB RAM and a 320 GB internal hard drive, but as it has built 
in wired GbE, this gizmo will hook up to your NAS and act as a video client. 
Looks good. Add a USB infra-red receiver dongle and remote control and you’re 
set!

See here for more info:
http://xbmc.org/blittan/2009/10/12/asrock-ion-330/

Cheers,
Simon


Re: [zfs-discuss] Best practice for setting ACL

2010-01-28 Thread Simon Breden
I don't have a lot of time to help here, but this post of mine might possibly 
help with ACLs:

http://breden.org.uk/2009/05/10/home-fileserver-zfs-file-systems/

Cheers,
Simon


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Simon Breden
Are you using the latest IT mode firmware? (1.26.00 I think, as listed above -- 
I haven't checked mine, an AOC-USAS-L8i, which uses the same controller.)

Also, I noticed you're using 'EARS' series drives.
Again, I'm not sure if the WD10EARS drives suffer from a problem mentioned in 
these posts, but it might be worth looking into -- especially the last link:

1. On synology site, seems like older 4-platter 1.5TB EADS OK 
(WD15EADS-00R6B0), but newer 3 platter EADS have problems (WD15EADS-00P8B0):
http://forum.synology.com/enu/viewtopic.php?f=151&t=19131&sid=c1c446863595a5addb8652a4af2d09ca
2. A mac user has problems with WD15EARS-00Z5B1:
http://community.wdc.com/t5/Desktop/WD-1-5TB-Green-drives-Useful-as-door-stops/td-p/1217/page/2
 (WD 1.5TB Green drives - Useful as door stops)
http://community.wdc.com/t5/Desktop/WDC-WD15EARS-00Z5B1-awful-performance/m-p/5242
 (WDC WD15EARS-00Z5B1 awful performance)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Simon Breden
> On 1/25/2010 6:23 PM, Simon Breden wrote:
> > By mixing randomly purchased drives of unknown
> quality, people are 
> > taking unnecessary chances. But often, they refuse
> to see that, 
> > thinking that all drives are the same and they will
> all fail one day 
> > anyway...

My use of the word random was a little joke to refer to drives that are bought 
without checking basic failure reports made by users, and then the purchaser 
later says 'oh no, these drives are c**p'. A little checking goes a long way 
IMO. But each to his own.

> I would say, though, that buying different drives
> isn't inherently 
> either "random" or "drives of unknown quality".  Most
> of the time, I 
> know no reason other than price to prefer one major
> manufacturer to 
> another.

Price is an important choice driver for all of us, I think. But the 'drives of 
unknown quality' bit is still possible to mitigate by checking, if one is 
willing to spend the time and knows where to look. We're never going to be 100% 
certain, but if I read numerous reports that drives of a particular revision 
number are seriously substandard, then I am going to take that info on board 
and steer away from purchasing them. That's all.

> And, over and over again, I've heard of bad batches of drives. Small
> manufacturing or design or component sourcing errors. Given how the
> resilvering process can be quite long (on modern large drives) and quite
> stressful (when the system remains in production use during resilvering, so
> that load is on top of the normal load), I'd rather not have all my drives
> in the set be from the same bad batch!

Indeed. This is why it's good to research, buy what you think is a good drive & 
revision, then load your data onto them and test them out over a period of 
time. But one has to keep original data safely backed up.

> Google is working heavily with the philosophy that things WILL fail, so they
> plan for it, and have enough redundancy to survive it -- and then save lots
> of money by not paying for premium components. I like that approach.

Yep, as mentioned elsewhere, Google have enormous resources to be hugely 
redundant and safe. And yes, we all try to use our common sense to build in as 
much redundancy as we deem necessary and can reasonably afford. And we have 
backups.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Simon Breden
If you choose the AOC-USAS-L8i controller route, don't worry too much about the 
exotic looking nature of these SAS/SATA controllers. These controllers drive 
both SAS and SATA drives. As you will be using SATA drives, you'll just get 
cables that plug into the card. The card has 2 ports; you buy a cable that 
plugs into a port and fans out into 4 SATA connectors. Just buy 2 cables if you 
need to drive 8 drives, or at least more than 4.

SuperMicro sell a few different cable lengths for these cables, so once you've 
measured, you can choose. Take a look at this post of mine and look for the 
card, cables and text where I also remarked on the scariness factor of dealing 
with 'exotic' hardware.

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/

And cables are here:
http://supermicro.com/products/accessories/index.cfm
http://64.174.237.178/products/accessories/index.cfm (DNS failed so I gave IP 
address version too)
Then select 'cables' from the list. From the cables listed, search for 'IPASS 
to 4 SATA Cable' and you will find they have a 23cm version (CBL-0118L-02) and 
a 50cm version (CBL-0097L-02). Sounds like your larger case will probably need 
the 50cm version.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread Simon Breden
On the subject of vibrations when using multiple drives in a case (tower), I'm 
using silicone grommets on all the drive screws to isolate vibrations. This 
does seem to greatly reduce the vibrations reaching the chassis, and makes the 
machine a lot quieter, and so I would expect that this minimises the vibrations 
transferred between drives via the chassis. In turn I would expect that this 
greatly reduces errors related to high vibration levels when reading and 
writing: less vertical head movement, leading to less variation in write signal 
strength.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-25 Thread Simon Breden
> I got over the reluctance to do drive replacements in
> larger batches
> quite some time ago (well before there was zfs),
> though I can
> certainly sympathise.

Yep, it's not so much of a big deal. One has to think a moment to see what is 
needed, check out any possible gotchas in order to carry out the upgrade 
safely, and then go ahead and do the upgrade.

>  For me, drives bought
> incrementally never
> matched up (vendors change specs too often,
> especially for consumer
> units) and the previous matched set is still a useful
> matched backup
> set. 

I agree, better to research good drives, as far as is reasonably possible, and 
then buy a batch of them. Test them out for a period, and always keep your old 
data. And backups.

By mixing randomly purchased drives of unknown quality, people are taking 
unnecessary chances. But often, they refuse to see that, thinking that all 
drives are the same and they will all fail one day anyway...

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-25 Thread Simon Breden
> this sounds convincing to fetishists of an ordered
> world where
> egg-laying mammals do not exist, but it's utter
> rubbish.

Very insightful! :)

> As drives go bad they return errors frequently, and...

Yep, so have good regular backups, and move quickly once probs start.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-25 Thread Simon Breden
> Well, they'll be in a space designated as a drive
> bay, so it should have
> some airflow.  I'll certainly check.

Yes, it's certainly worth checking.

> It's an OCZ Core II, I believe.  I've got an Intel -M
> waiting to replace
> it when I can find time (probably when I install
> Windows 7).

AFAIK the Intel ones should be good as they do serious amounts of testing and 
have huge R&D to develop great drives.

To cut it short, another idea is to:
1. build another box to make a new NAS using cheaper higher capacity drives
2. zfs send/recv the pool contents to the new NAS (see the sketch after this 
list)
3. use the old box as a backup machine containing as many old drives as you've 
got in a RAID-Z1 or RAID-Z2 vdev(s) so that you make efficient use of the 
capacity available in these 400GB drives.
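
Step 2 would look roughly like this (hostname and snapshot name hypothetical):

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | ssh newnas zfs recv -Fd tank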

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-25 Thread Simon Breden
> In general, any system which detects and acts upon
> faults, would like
> to detect faults sooner rather than later.

Yes, it makes sense. I think my main concern was about loss - in question 2.
 
> > 2. Does having shorter error reporting times
> provide any significant data safety through, for
> example, preventing ZFS from kicking a drive from a
> vdev?
> 
> On Solaris, ZFS doesn't kick out drives, FMA does.

Thanks for the info. I'll take a look at those links to gain a better 
understanding of when a drive gets kicked.
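
For reference, FMA's view of suspect devices can be inspected with, for 
example:

# fmadm faulty   (resources FMA has diagnosed as faulty)
# fmdump -e      (the underlying error telemetry)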

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-25 Thread Simon Breden
> I've got at least one available 5.25" bay.  I hadn't
> considered 2.5" HDs;
> that's a tempting way to get the physical space I
> need.

Yes, it is an interesting option. But remember to allow for any necessary 
cooling if moving them from a currently cooled area. As I used SSDs this turned 
out to be irrelevant, as they don't seem to get hot, but for mechanical drives 
this is not the case.

> I'm running an SSD boot disk in my desktop box and so
> far I'm very
> disappointed (about half a generation too early, is
> my analysis).  And I
> don't need the theoretical performance for this boot
> disk.  I don't see
> the expense as buying me anything, and they're still
> pretty darned
> expensive.

Which model/capacity are you using?
Yes, they are not quite there yet, and from the price perspective I probably 
shouldn't have bothered buying these ones, as two 2.5" drives would have been 
fine. But for a desktop machine I'm quite surprised you're disappointed. There 
is currently enormous variation in quality, as firmware makes huge differences. 
They can only improve :)

> I've considered having the boot disks not hot-swap.
>  I could live with
> hat, although getting into the case is a pain (it
> lives on a shelf over
> my desk, so I either work on it in place or else I
> disconnect and
> reconnect all the external cabling; either way is
> ugly).

I think I would be tempted to maximise the available hot-swap bay space for 
data drives -- but only if it's required.

> Logging to flash-drives is slow, yes, and will wear
> them out, yes.  But if
> a $40 drive lasts two years, I'm very happy.  And the
> demise is
> write-based in this scenario, not random failure, so
> it should be fairly
> predictable.

Not an expert on this, but I seem to remember that constant log-writing wore 
these thumbdrives out -- don't quote me on that. 2.5" drives are very cheap 
too, and would be my personal choice in this case.

> I'm trying to simplify here!  But yeah, if nobody
> comes along with a
> significantly cheaper robust card of fewer ports,
> I'll probably do the
> same.

If you find the extra ports & capacity upgrade options useful then you won't go 
wrong with that card. It's worked flawlessly for me. Along with the 8-ports on 
the card, you have the 6 additional ones remaining on the mobo, so lack of SATA 
ports will never be a problem again :) It gives you lots of space to juggle 
things around if you want to.

One example, if one has a large case, is to make a backup pool from old drives 
within the same case. I haven't done this, but it has crossed my mind. As all 
the drives are local, the backup speed should be terrific, as there's no 
network involved... and if the drives were on a second PSU, which is only 
switched on to perform backups, no electricity needs to be wasted. I have to 
look into whether this is a workable idea though...

> 6 or 8 hot-swap bays and enough controllers gives me
> relatively few
> interesting choices.  6: 2 three-way, or three
> two-way; 8: four two-way,
> or...still only 2 three-way.  I don't think double
> redundancy is worth
> much to me in this case (daily backups to two or more
> external media sets,
> and hot-swap so I don't wait to replace a bad drive).

Indeed. Often forgotten by home builders is that if you have dependable regular 
backups which employ redundancy in the backup pool, then you don't need to be 
so paranoid about your main storage pool, although I personally prefer to have 
double parity. Extra insurance is a good thing :)

> Actually, if I move the boot disks somewhere and have
> 8 hot-swap bays for
> data, I might well go with three two-way mirrors plus
> two hot spares. Or
> at least one.

Yep, it gives you a lot of options :)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-25 Thread Simon Breden
Good news. Are those the HD154UI models?

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-25 Thread Simon Breden
> We have the WDC WD15EADS-00P8B0 1.5 TB Caviar Green
> drives.
> 
> Unfortunately, these drives have the "fixed" firmware
> and the 8 second idle timeout cannot be changed.
> Since we starting replacing these drives in our pool
> about 6 weeks ago (replacing 1 drive per week), the
> drives has registered almost 40,000 Load Cycles
> (head parking cycles).  At this rate, they won't
> last more than a year.  :(  Neither the wdidle3 nor
>  the wdtler utilities will work with these drives.

Thanks for posting your experiences here.

This could be where the attempt to use less energy too aggressively ends up 
making the drives fail prematurely, and thus costing you more in the long 
run... I wonder if someone has done the math on the cost of failed drives 
versus the money saved by drives using less energy?

How many load cycles are those drives quoted to be good for?
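
For anyone tracking this on their own drives, the counter is SMART attribute 
193 (Load_Cycle_Count); something like this should show it -- device path 
hypothetical:

# smartctl -A /dev/rdsk/c7t0d0 | grep -i load_cycle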

As the revision '00P8B0' was the one quoted in the initial post's WDC / 
Synology links, I would appreciate any further reliability / problem feedback 
you have regarding using these drives in a RAID array.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-25 Thread Simon Breden
> One of those EIDE ports is running the optical drive,
> so I don't actually
> have two free ports there even if I replaced the two
> boot drives with IDE
> drives.

Yep, as I expected.

> I've given some though to booting from a thumb drive
> instead of disks. 
> That would free up two SATA ports AND two hot-swap
> disk bays, which would
> be nice.  And by simply keeping an image of the thumb
> drive contents, I
> could replace it very quickly if something died in
> it, so I could live
> without automatic failover redundancy in the boot
> disks.  Obviously thumb
> drives are slow, but other than boot time, it should
> slow down anything
> important too much (especially if I increase memory).

I've seen anecdotal evidence against using thumb drives (slow, error-prone 
under constant log writing, etc.), but maybe someone else can provide more 
info. If you have a 5.25" or 3.5" slot outside of your 8-drive cage, you could 
use two 2.5" HDs as a boot mirror, leaving all 8 bays free for data drives and 
future expansion needs. If six data drives are enough then this becomes less 
interesting, but it's a possible option. I chucked 2 SSDs into one 5.25" slot 
for my boot mirror, which worked out nicely, and a cheaper option is to use 
2.5" HDs instead -- with a twin mounter, as described here: 
http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/

> My current chassis has 8 hot-swap bays, so unless I
> change that, nothing I
> can do will consume more than two additional
> controller ports.  Seems like
> a two-port card would be cheaper than an 8-port card
> (although as you say
> that 8-port card isn't that bad, around $150 last I
> looked it up).
> 
> But does anybody have a good 2-port card to recommend
> that's significantly
> cheaper?  If there is none, then future flexibility
> does start to look
> interesting.

Maybe others can recommend a 2 or 4 port card. When I looked mid-2009 I found 
some card but I didn't really feel like the hardware or possibly the driver was 
that robust, and I prefer not to lose my data or get more grey 
hairs/headaches... so I chose the 8-port known robust card/driver option :) And 
you just know that you'll need that extra port or two one day...

> I could have had more space initially by using the 4
> disks in RAIDZ
> instead of two mirror pairs.  I decided not to
> because that left me only
> very bad expansion options -- replacing all 4 drives
> at once and risking
> other drives failing during resilver 4 times in a row
> (and the removed
> drive isn't much use in recovery in that scenario I
> don't think).  Whereas
> with the mirror pairs I run much less risk of errors
> during resilver
> simply based on less time, two disks vs. four disks.
>  I actually started
> ith just one mirror pair, and then added a second
> mirror vdev to the pool
> when the first one started to get full.  I basically
> settled on mirror
> pairs as my building blocks for this fileserver.

Indeed, mirrors have a lot of interesting properties. But if you're upgrading 
now, you might want to consider using 3 way mirrors instead of 2 as this gives 
extra protection.

> Ooh, looks like there's lots of interesting detail
> there, too.

Yes, I documented most of my ZFS discoveries there so others can hopefully 
benefit from my headaches :)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-25 Thread Simon Breden
Hi David,

I have the same motherboard and have been through this upgrade head-scratching 
before with my system, so hopefully I can give some useful tips.

First of all, unless the situation has changed, forget trying to get the extra 
2 SATA devices on the motherboard to work, as last time I looked, OpenSolaris 
had no JMicron JMB363 driver.

So, unless you add an extra SATA card, you'll be limited to using the existing 
6 SATA ports. There are also 2 EIDE ports you could use for your mirrored boot 
drives, but from what you say, it sounds like you have SATA devices for your 
two mirrored boot drives.

So like you say, if you don't add a new SATA controller card then you will have 
to replace each existing half of your 2 mirrors and resilver, which leaves your 
current 2-way mirrors a little vulnerable, although not too vulnerable, as 
you'll have removed a working drive from a working mirror presumably. So that 
is the mirror upgrade process.

Another possibility is to do what I did and add a SATA controller card. For 
this motherboard, to avoid restricting yourself too much, you might be better 
going for a PCIe-based 8-port SATA card, and the best I found is the SuperMicro 
AOC-USAS-L8i card, which is reasonably priced.

Using this card, you could move your existing mirrors to the card, then add 
your new larger disks to each of the mirrors to grow your pool, or just move 
the mirrors as they are, and add new drives as additional mirrors to your pool. 
Depending on your case space available, your choice might be dictated by the 
space available.

Anyway hope this helps.

Last thing, you could create a RAID-Z2 vdev with all those new drives, giving 
double-parity -- i.e. your data still survives even if any 2 drives die. With 
2-way mirrors, you lose all your data if 2 drives die in any of your mirrors. 
So another option could be to use 3-way mirrors with all of your new drives. So 
many options... :)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-25 Thread Simon Breden
> Any comments on this Dec. 2005 study on disk failure and error rates?
> http://research.microsoft.com/apps/pubs/default.aspx?id=64599

Will take a read...

> The OP originally asked "Best 1.5TB drives for
> consumer RAID?". Despite
> the entertainment value of the comments, it isn't
> clear that this has been
> answered. I suspect the OP was expecting a discussion
> of WD vs. Seagate
> vs. Hitachi, etc., but the discussion didn't go that
> way, perhaps because
> they are equally good (or bad) based on the TLER
> discussion? Has anyone
> actually experienced an extended timeout from one of
> these drives (from
> any manufacturer) causing a problem?

From what I've managed to discover, rightly or wrongly, here is how I see it:
* it appears as if the most recent revisions of some models of the WD Green 
'EADS' and newer Advanced Format 'EARS' drives have some problem which puts me 
off using them for now (see links in first post + google). They also appear to 
have disabled user setting of TLER.
* Some Seagate 1.5TB models, like the one discussed in this discussion, appear 
to have low user ratings, and many of the user comments mention clicking noises 
& failures.
* Hitachi models I don't know enough about yet, but I would rather avoid using 
5-platter models like one of the 2TB models.
* Samsung have a 3-platter 1.5TB model (HD154UI), which seems to have quite 
high user satisfaction levels and you can set the error reporting time, but it 
will not persist after power off.
* Samsung also have a 4-platter 2TB model (HD203WI), which appears to have 
excellent user ratings, and no DOAs listed, but as only a small number of 
ratings have been left (<20), it is too early to make a judgement; early data 
seems to be very promising.

Based on the above, and with further reading required, at this stage, I will 
almost certainly be choosing the Samsung HD154UI.

But let's keep an eye on the HD203WI, because when the price drops a bit 
further and if more positive data appears, this might be a great model to 
consider for those people replacing / upgrading drives.

And regarding your reply here to a comment from Bob on the Seagate model 
discussed:

> You seem to have it in for Seagate :-). Newegg by
> default displays reviews
> worst to best.

Bob was joking around about the Seagates :)

And newegg don't list reviews/ratings by default in worst to best order -- I 
posted that link using that order so that it was easy to see the kind of 
problems people were commonly listing. The things one wants to see before 
choosing.

> > Be sure to mark any failed drive using a
> sledgehammer so that you don't
> > accidentally use it again by mistake.

Again, humour alert from Bob :)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-25 Thread Simon Breden
> Extended timeouts lead to manual intervention, not a
> change in the 
> probability of data loss.  In other words, they
> affect the MTTR, not
> the reliability. For a 7x24x365 deployments, MTTR is
> a concern because
> it impacts availability. For home use, perhaps not so
> much.
>  -- richard

As a home user, not running 24/7, replacing a problematic drive manually is not 
a problem.
My main concern would be not to lose the whole array due to additional failing 
drives during the recovery process.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-01-24 Thread Simon Breden
> Thank you for the effort Simon.

Thank you too Dusan, for creating this post that made me aware of this new card 
-- it looks like a good one, and doesn't have the unnecessary RAID stuff 
included :)

> Good to know from the feedback in your thread that the mpt_sas(7d) driver is 
> actually responsible for the SuperMicro AOC-USAS2-L8e support.

Yes, the next step will be to find out from people using the mpt_sas driver how 
well it's been working for them, although nobody has volunteered any feedback 
yet, probably because the card is so new. Maybe we'll see some posts appear on 
this card and the mpt_sas driver over the next few weeks and months?

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Simon Breden
Hey Dan,

Thanks for the reply.

Yes, I'd forgotten that it's often the heads that degrade -- something like 
lubricant buildup, IIRC.
As well as SMART data, which I must admit to never looking at, presumably scrub 
errors are also a good indication of looming trouble due to head problems etc? 
As I've seen zero read/write/checksum errors after regular scrubs over 2 years, 
hopefully this is a reasonably good sign of r/w head health.
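
For the record, the routine I mean is simply the following, with 'tank' 
standing in for the pool name:

# zpool scrub tank
# zpool status -v tank   (check the READ/WRITE/CKSUM columns)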

Good to see you're already using a backup solution I have envisaged using. It 
seems to make sense: making use of old kit for backups to help preserve ROI on 
drive purchases -- even, no especially, for home users.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Simon Breden
How does a previously highly rated drive that cost >$100 suddenly become 
substandard when it costs <$100?

I can think of possible reasons, but they might not be printable here ;-)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Simon Breden
Ha ha -- regarding the drive comments, it looks like my humour detector was 
working just fine ;-)

And regarding mirror vdevs etc, I can see the usefulness of being able to build 
a mirror vdev of multiple drives for cases where you have really critical data 
-- e.g. a single 4-drive mirror vdev. I suppose regular backups can help with 
critical data too.

I use a 2-drive mirror vdev for ZFS boot, but prefer RAID-Z2 for the main data 
pool, although I may consider RAID-Z3 in future.

For 2-drive mirror vdevs, 2 drives die and data=toast. A RAID-Z2 vdev would 
still be readable, although whether you'd have enough time/luck to survive the 
required two resilvers is debatable. With mirrors though, I suppose resilver 
time would be quicker. I expect you would have some insightful comments in this 
area.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Simon Breden
Hi Bob,

Why do you consider that model a good drive?

Why do you like to use mirrors instead of something like RAID-Z2 / RAID-Z3?

And how many drives do you (recommend to) use within each mirror vdev?

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Simon Breden
Reading through your post brought back many memories of how I used to manage my 
data.

I also found SuperDuper and Carbon Copy Cloner great for making a duplicate of 
my Mac's boot drive, which also contained my data.

After juggling around with cloning boot/data drives and using non-redundant 
Time Machine backups etc, plus some manual copies here and there, I said 'there 
must be a better way' and so the long search ended up with the idea of having 
fairly 'dumb' boot drives containing OS and apps for each desktop PC and moving 
the data itself onto a redundant RAID NAS using ZFS. I won't bore you with the 
details any more -- see the link below if it's interesting. BTW, I still use 
SuperDuper for cloning my boot drive and it IS terrific.

Regardless of where the data is, one still needs to do backups, like you say. 
Indeed, I know all about scrub and do that regularly and that is a great tool 
to guard against silent failure aka bit rot.

Once your data is centralised, making data backups becomes easier, although 
other problems like the human factor still come into play :)

If I left my backup system switched on 24/7 it would in theory be fairly easy 
to (1) automate NAS snapshots and then (2) automate zfs sends of the 
incremental differences between snapshots, but I don't want to spend the money 
on electricity for that.
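
For the curious, a minimal sketch of what that automation might look like -- 
pool, dataset and snapshot names are made up, and it assumes the backup pool is 
already imported and online:

#!/bin/sh
# Snapshot today's state of the dataset, then send only the incremental
# difference since the previous snapshot to the backup pool.
PREV=tank/data@2010-01-16          # the previous snapshot (illustrative)
TODAY=tank/data@`date +%Y-%m-%d`
zfs snapshot $TODAY
zfs send -i $PREV $TODAY | zfs receive -F backup/data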

And when buying drives every few years, I always try to take advantage of 
Moore's law.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Simon Breden
In general I agree completely with what you are saying. Making reliable large 
capacity drives does appear to have become very difficult for the drive 
manufacturers, judging by the many sad comments from drive buyers listed on 
popular, highly-trafficked sales outlets' websites, like newegg.

And I think your 750GB choice should be a good one. I'm currently using 750GB 
drives (WD7500AAKS) and they have worked flawlessly over the last 2 years. But 
knowing that drives don't last forever, it's time I looked for some new ones, 
assuming they can be reasonably assumed to be reliable from customer ratings 
and reports.

If there's one manufacturer that *may* possibly have proved the exception, it 
might be Samsung with their 1.5TB and 2TB drives -- see my post just a little 
further up.

And using triple-parity RAID-Z3 does seem a good idea now when using these 
higher capacity drives. Or perhaps RAID-Z2 with a hot spare? I don't know which 
is better -- I guess RAID-Z3 is better, AND having a spare available ready to 
replace a failed drive when it happens. But I think I read that drive bearings 
can seize up if unused, so I don't know. Any comments?

For resilvering to be required, I presume this will occur mostly in the event 
of a mechanical failure. Soft failures like bad sectors will presumably not 
require resilvering of the whole drive to occur, as these types of error are 
probably easily fixable by re-writing the bad sector(s) elsewhere using 
available parity data in redundant arrays. So in this case larger capacities 
and resilvering time shouldn't become an issue, right?

And there's one big item of huge importance here, which is often overlooked by 
people, and that is the fact that one should always have a reasonably current 
backup available. Home RAID users often pay out the money for a high-capacity 
NAS and then think they're safe from failure, but a backup is still required to 
guard against loss.

I do have a separate Solaris / ZFS machine dedicated to backups, but I do admit 
to not using it enough -- something I should improve. It contains a backup but 
an old one. Part of the reason for that is that to save money, I filled it with 
old drives of varying capacity in a *non-redundant* config to maximise 
available space from smaller drives mixed with larger drives. Being 
non-redundant, I shouldn't depend on its integrity, as there is a high 
likelihood of it containing multiple latent errors (bit rot).

What might be a good idea for a backup box, is to use a large case to house all 
your old drives using multiple matched drive-capacity redundant vdevs. This 
way, each time you upgrade, you can still make use of your old drives in your 
backup machine, without disturbing the backup pool - i.e. simply adding a new 
vdev each time...
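
To make that concrete, a sketch of growing such a backup pool -- pool name and 
device names are made up:

# zpool add backuptank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

Each batch of retired drives becomes its own matched-capacity RAID-Z2 vdev, and 
the existing vdevs (and the backup data on them) are left untouched.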
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Simon Breden
I just took a look at customer feedback on this drive. 36% rate it with 
one star, which I would consider alarming. Take a look at the link below, 
ordered from lowest rating to highest rating, and note the recency of the 
comments and the descriptions:

Seagate Barracuda LP ST31500541AS 1.5TB 5900 RPM:
http://www.newegg.com/Product/ProductReview.aspx?Item=22-148-412&SortField=3&SummaryType=0&Pagesize=10&SelectedRating=-1&PurchaseMark=&VideoOnlyMark=False&VendorMark=&Page=1&Keywords=%28keywords%29

Is this the model you mean? If so, I might look at some other alternative 
possibilities.

So, we have apparently problematic newest revision WD Green 'EADS' and 
'EARS' models, and an apparently problematic Seagate model described here.

That leaves Hitachi and Samsung.

I had past 'experiences' with post IBM 'deathstar' Hitachi drives, so I 
think for now I shall be looking into the Samsungs, as from the customer 
reviews it seems these could be the most reliable consumer-priced high-capacity 
drives available right now.

It does seem that it is proving to be a big challenge for the drive 
manufacturers to produce reliable high-capacity consumer-priced drives. Maybe 
this is Samsung's opportunity to prove how good they are?

Samsung 1.5TB HD154UI 3-platter drive:
http://www.newegg.com/Product/Product.aspx?Item=N82E16822152175&Tpk=HD154UI

Samsung 2TB HD203WI 4-platter drive:
http://www.newegg.com/Product/Product.aspx?Item=N82E16822152202&Tpk=HD203WI
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-23 Thread Simon Breden
Thanks a lot.

I'd looked at SO many different RAID boxes and never had a good feeling about 
them from a data-safety point of view, so when I read the 'A Conversation with 
Jeff Bonwick and Bill Moore – The future of file systems' article 
(http://queue.acm.org/detail.cfm?id=1317400), I was convinced that ZFS sounded 
like what I needed, and thought I'd try to help others see how good ZFS was and 
how to make their own home systems that work. Publishing the notes as articles 
had the side-benefit of allowing me to refer back to them when I was 
reinstalling a new SXCE build etc afresh... :)

It's good to see that you've been able to set the error reporting time using 
HDAT2 for your Samsung HD154UI drives, but it is a pity that the change does 
not persist through cold starts.

From a brief look, it looks like the utility runs under DOS, so I wonder if it 
would be possible to convert the code into C and run it immediately after 
OpenSolaris has booted? That would seem a reasonable automated workaround. I 
might take a little look at the code.
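
For what it's worth, more recent versions of smartmontools expose the 
underlying SCT ERC command directly, which would avoid the DOS utility entirely 
-- assuming the drive honours it, and bearing in mind that, like the HDAT2 
change, the setting may not survive a power cycle, so it would need to run at 
every boot (device name is illustrative):

# smartctl -l scterc,70,70 /dev/rdsk/c1t0d0

That sets both the read and write error recovery limits to 7.0 seconds -- the 
values are given in tenths of a second.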

However, the big questions still remain:
1. Does ZFS benefit from shorter error reporting times?
2. Do shorter error reporting times provide any significant data-safety benefit 
-- for example, by preventing ZFS from kicking a drive from a vdev?

Those are the questions I would like to hear somebody give an authoritative 
answer to.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-22 Thread Simon Breden
OK, gotcha.

Relating to my request for robustness feedback of the other driver, I was 
referring in fact to the mpt_sas driver that James says is used for the 
non-RAID LSI SAS2008-based cards like the SuperMicro AOC-USAS2-L8e (as opposed 
to the RAID-capable AOC-USAS2-L8i & LSI SAS 9211-8i cards, which use the mr_sas 
driver).

As far as I'm aware, the standard mpt driver is used for the card I already 
own, the LSI SAS1068E-based AOC-USAS-L8i etc.

Cheers,
Simon

http://breden.org.uk/2009/05/01/home-fileserver-a-year-in-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-22 Thread Simon Breden
Thanks for your reply Miles.

I think I understand your points, but unfortunately my historical knowledge of 
the need for TLER-type solutions is lacking.

How I've understood it to be (as generic as possible, but possibly inaccurate 
as a result):

1. In simple non-RAID single-drive 'desktop' PC scenarios, if your drive is 
experiencing read/write errors, it is the only drive you have, so there is no 
redundant source of data to help with the required reconstruction/recovery. You 
REALLY NEED the drive to try as hard as possible to recover from the error, so 
a long 'deep recovery' process may be kicked off to try to fix/recover the 
problematic data being read/written.

2. Historically, in hardware RAID arrays, where redundant data *IS* available, 
you really DON'T want a drive with trivial occasional block read errors to be 
kicked from the array, so the idea was to have drives experiencing read errors 
report the problem quickly to the hardware RAID controller, which can then 
quickly reconstruct the missing data using the redundant parity data.

3. With ZFS, I think you're saying that if, for example, there's a block read 
error, then even with a RAID EDITION (TLER) drive you're still looking at a 
circa 7 second delay before the error is reported to ZFS, and if you're using a 
cheapo standard non-RAID edition drive then you're looking at a likely circa 
60/70 second delay before ZFS is notified. Either way, you say that ZFS won't 
kick the drive, yes? And the worst case is that, depending on arbitrary 
'unknowns' in the particular drive's firmware and storage stack and on the 
storage array's responsiveness, 'some time' could be 'mucho time' if you're 
unlucky.

And to summarise, you don't see any point in spending a high premium on 
RAID-edition drives when using them with ZFS, yes? And also, you don't think 
that using non-RAID edition drives presents a significant additional risk of 
data loss?

Cheers,
Simon

http://breden.org.uk/2009/05/01/home-fileserver-a-year-in-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-21 Thread Simon Breden
Thanks!

Yep, I was about to buy six or so WD15EADS or WD15EARS drives, but it looks 
like I will not be ordering them now.

The bad news is that, after looking at the Samsungs, it seems they too have no 
way of changing the error reporting time on the 'desktop' drives. I hope I'm 
wrong though. I refuse to pay silly money for 'raid editions' of these drives.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-21 Thread Simon Breden
Ouch. Was that on the original 2009.06 vanilla install, or a later updated 
build? Hopefully a lot of the original bugs have been fixed by now, or soon 
will be.

Has anyone got any "from the trenches" experience of using the mpt_sas driver? 
Any comments?

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-21 Thread Simon Breden
+1

I agree 100%

I have a website whose ZFS Home File Server articles are read around 1 million 
times a year, and so far I have recommended Western Digital drives 
wholeheartedly, as I have found them to work flawlessly within my RAID system 
using ZFS.

With this recent action by Western Digital of disabling the ability to 
time-limit the error reporting period, thus effectively forcing consumer RAID 
users to buy their RAID-edition drives at a 50%-100% price premium, I have 
decided not to use Western Digital drives any longer, and have explained why 
here:

http://breden.org.uk/2009/05/01/home-fileserver-a-year-in-zfs/ (look in the 
Drives section)

Like yourself, I too am searching for consumer-priced drives where it's still 
possible to set the error reporting period.

I'm also looking at the Samsung models at the moment -- either the HD154UI 
1.5TB drive or the HD203WI 2TB drives... and if it's possible to set the error 
reporting time then these will be my next purchase. They have quite good user 
ratings at newegg.com...

If WD lose money over this, they might rethink their strategy. Until then, bye 
bye WD.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-21 Thread Simon Breden
> Correct. I only know the internal chip code names, not what the actual 
> shipping products are called :|

Now 'knew' ;-)

It's reassuring to hear your points a thru d regarding the development/test 
cycle.

I could always use the 'try before you buy' approach: others try it, and if it 
works, I buy it ;-)

Thanks a lot.
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-01-21 Thread Simon Breden
Thanks a lot for the info James.

For the benefit of myself and others then:
1. mpt_sas driver is used for the SuperMicro AOC-USAS2-L8e
2. mr_sas driver is used for the SuperMicro AOC-USAS2-L8i and LSI SAS 9211-8i

And how does the maturity/robustness of the mpt_sas & mr_sas drivers compare to 
the mpt driver which I'm currently using for my LSI 1068-based AOC-USAS-L8i 
card? (in the default IT mode)

It might be hard to answer that one, but I thought I'd ask anyway, as it would 
make choosing new kit for OpenSolaris + ZFS a bit easier.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-01-21 Thread Simon Breden
That looks promising.

As the main thing here is that OpenSolaris supports the LSI SAS2008 controller, 
I have created a new post to ask for confirmation of driver support -- see here:
http://opensolaris.org/jive/thread.jspa?threadID=122156&tstart=0

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller ?

2010-01-21 Thread Simon Breden
Does anyone know if the current OpenSolaris mpt driver supports the recent LSI 
SAS2008 controller?

This controller/ASIC is used in the next generation SAS-2 6Gbps PCIe cards from 
LSI and SuperMicro etc, e.g.:
1. SuperMicro AOC-USAS2-L8e and the AOC-USAS2-L8i
2. LSI SAS 9211-8i

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-20 Thread Simon Breden
Hi Constantin,

It's good to hear your setup with the Samsung drives is working well. Which 
model/revision are they?

My personal preference is to use drives of the same model & revision.

However, in order to help ensure that the drives will perform reliably, I 
prefer to do a fair amount of research first, in order to find drives that are 
reported by many users to be working reliably in their systems. I did this for 
my current WD7500AAKS drives and have never seen even one read/write or 
checksum error in 2 years - they have worked flawlessly.

As a crude method of checking the reliability of any particular drive, I take a 
look at newegg.com and see the percentage of users rating the drive with 4 or 
5 stars, and read the reviews to see what kinds of problems the drives may 
have.

If you read the WDC links I list in the first post above, there does appear to 
be some problem that many users are experiencing with the most recent revisions 
of the WD Green 'EADS' drives and also the new Green models in the 'EARS' 
range. I don't know the cause of the problem though.

I did wonder if the problems people are experiencing might be caused by 
spindown/power-saving features of the drives, which might cause a long delay 
before data is accessible again after spin-up, but this is just a guess.

For now, I am looking at the 1.5TB Samsung HD154UI (revision 1AG01118 ?), or 
possibly the 2TB Samsung HD203WI when more user ratings are available.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-20 Thread Simon Breden
I see also that Samsung have very recently released the HD203WI 2TB 4-platter 
model.

It seems to have good customer ratings so far at newegg.com, but currently 
there are only 13 reviews so it's a bit early to tell if it's reliable.

Has anyone tried this model with ZFS?

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-01-20 Thread Simon Breden
Yes, this model looks to be interesting.

SuperMicro seem to have produced two new models that satisfy the SATA III 
requirement of 6Gbps per channel:

1. AOC-USAS2-L8e: 
http://www.supermicro.com/products/accessories/addon/AOC-USAS2-L8i.cfm?TYP=E
2. AOC-USAS2-L8i: 
http://www.supermicro.com/products/accessories/addon/AOC-USAS2-L8i.cfm?TYP=I

The main difference appears to be that the L8i model has RAID capabilities, 
whereas the L8e model does not.

As ZFS does its own RAID calculations in software it needs JBOD, and doesn't 
need the adapter to have RAID capabilities, so the AOC-USAS2-L8e model looks to 
be ideal. If we're lucky maybe it's also a little cheaper too.

Sorry I can't help you with your questions though. Hopefully someone else will 
be able to help. I will also be interested to hear any further info on this 
card.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-18 Thread Simon Breden
Thanks. Newegg shows quite a good customer rating for that drive: 70% rated it 
with 5 stars and 11% with 4 stars, out of 240 ratings.

Seems like some people have complained about the drives sleeping - presumably 
to save power - although others report they don't, so I'll need to look into 
that more. Did yours sleep?

Also, someone reported some issues with smartctl and understanding some of the 
attributes. Does checking your drive temperatures using smartctl work? Like 
with this script: http://breden.org.uk/2008/05/16/home-fileserver-drive-temps/
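
For reference, the temperature check in that script boils down to something 
like the following -- device path is illustrative, and the exact device/-d 
syntax varies by platform and controller:

# smartctl -A /dev/rdsk/c1t0d0 | grep -i temperature

which prints the Temperature_Celsius attribute, if the drive reports it.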
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-17 Thread Simon Breden
Good to hear about the Samsungs working for you. Which model/revision are you 
using?

I think your cautious method of not trusting drives, and making an initial 
trial mirror is a good one, and I might well do what you've done for the next 
batch I buy.

I also use RAID-Z2 vdevs and it feels a lot safer than having only single 
parity.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-16 Thread Simon Breden
Which drive model/revision number are you using?
I presume you are using the 4-platter version: WD15EADS-00R6B0, but perhaps I 
am wrong.

Also did you run WDTLER.EXE on the drives first, to hasten error reporting 
times?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-16 Thread Simon Breden
Which consumer-priced 1.5TB drives do people currently recommend?

I have had zero read/write/checksum errors in 2 years with my trusty old 
Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of 
drives that are big, reliable and cheap.

As of Jan 2010 it seems the price sweet spot is the 1.5TB drives.

As I had a lot of success with Western Digital drives I thought I would stick 
with WD.

However, this time I might have to avoid Western Digital (see below), so I 
wondered which other recent drives people have found to be decent drives.

WD15EADS:
The model I was looking at was the WD15EADS.
The older 4-platter WD15EADS-00R6B0 revision seems to work OK, from what I 
found, but I prefer fewer platters from noise, vibration, heat & reliability 
perspectives.
The newer 3-platter WD15EADS-00P8B0 revision seems to have serious problems - 
see links below.

WD15EARS:
Also, very recently WD brought out a 3-platter WD15EARS-00Z5B1 revision, based 
on 'Advanced format' where it uses 4KB sector sizes instead of the old 
traditional 512 byte sector sizes.
Again, these drives seem to have serious issues - see links below.
Does ZFS handle this new 4KB sector size automatically and transparently, or 
does something need to be done for it to work?
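
(For what it's worth, one way to see what sector size a drive reports -- device 
name is illustrative:

# prtvtoc /dev/rdsk/c1t0d0s2 | grep 'bytes/sector'

The catch is that these 'Advanced Format' drives emulate 512-byte sectors to 
the host, so the tool will most likely still report 512 even though the 
physical sectors are 4KB -- which is exactly why automatic detection is in 
doubt.)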

Reference:
1. On synology site, seems like older 4-platter 1.5TB EADS OK 
(WD15EADS-00R6B0), but newer 3 platter EADS have problems (WD15EADS-00P8B0):
http://forum.synology.com/enu/viewtopic.php?f=151&t=19131&sid=c1c446863595a5addb8652a4af2d09ca
2. A mac user has problems with WD15EARS-00Z5B1:
http://community.wdc.com/t5/Desktop/WD-1-5TB-Green-drives-Useful-as-door-stops/td-p/1217/page/2
  (WD 1.5TB Green drives - Useful as door stops)
http://community.wdc.com/t5/Desktop/WDC-WD15EARS-00Z5B1-awful-performance/m-p/5242
  (WDC WD15EARS-00Z5B1 awful performance)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-03 Thread Simon Breden
Thanks Gaëtan.

What's the bug id for this iommu bug on Intel platforms?

In my case, I have an AMD processor with ECC RAM, so probably not related to 
the Intel iommu bug.

I'm seeing the checksum errors in a mirrored rpool using SSDs, so maybe it could 
be something like cosmic rays causing occasional random bits to flip? After 
clearing the errors and scrubbing the pool a couple of times until the errors 
were fixed, I have not seen any new checksum errors. I'm using 121 at the 
moment, though I should probably drop back to 117 to avoid the RAID-Z bug, 
although I have a RAID-Z2 vdev and not a RAID-Z1 vdev, so I should not 
encounter the more serious problem mentioned.

>After the errors reported during the scrub on snv 121, I run a scrub on snv 
>118 and find the same
> amount of error, all on rpool/dump. I dropped that zvol, rerun the scrub 
> again still on snv 118 
>without any error. After a reboot on snv 121 and a new scrub, no checksum 
>error are reported.

You did #zfs destroy rpool/dump ?
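
(In case anyone tries the same thing: my understanding is that the dump zvol 
can simply be recreated and re-registered afterwards -- size and names are 
illustrative:

# zfs create -V 2G rpool/dump
# dumpadm -d /dev/zvol/dsk/rpool/dump
)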
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with RAID-Z in builds snv_120 - snv_123

2009-09-03 Thread Simon Breden
OK, thanks Adam.
I'll look elsewhere for the mirror checksum error issue. In fact there's 
already a response here, which I shall check up on:
http://opensolaris.org/jive/thread.jspa?messageID=413169#413169

Thanks again, and I look forward to grabbing 124 soon.

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-03 Thread Simon Breden
So what's the consensus on checksum errors appearing within mirror vdevs?
Is it caused by the same bug announced by Adam, or is something else causing it?
If the latter, what's the bug id?

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with RAID-Z in builds snv_120 - snv_123

2009-09-03 Thread Simon Breden
Hi Adam,

Thanks for the info on this. Some people, including myself, reported seeing 
checksum errors within mirrors too. Is it considered that these checksum errors 
within mirrors could also be related to this bug, or is there another bug 
related to checksum errors within mirrors that I should take a look at?
Search for 'mirror' here:
http://opensolaris.org/jive/thread.jspa?threadID=111316&tstart=0

Cheers,
Simon

And good luck with the fix for build 124. Are we talking days or weeks for the 
fix to be available, do you think? :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Simon Breden
And in addition to which Solaris version people are using, is it relevant which 
ZFS version their pool is using?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Simon Breden
Hi Richard, I just took a look at that link and it only mentions problems with 
RAID-Z vdevs, but some people here, including myself, have checksum errors with 
mirrors too, so maybe the link could be updated with this info?

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Simon Breden
Cheers Frank, I'll give it a try... also, doesn't sound good if the problem 
goes back pre snv_100...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Simon Breden
Thanks Markus, I'll give that a try.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Simon Breden
I too see checksum errors occurring for the first time using OpenSolaris 
2009.06 on the /dev package repository at version snv_121.

I see the problem occur within a mirrored boot pool (rpool) using SSDs.

Hardware is AMD BE-2350 (ECC) processor with 4GB ECC memory on MCP55 chipset, 
although SATA is using mpt driver on a SuperMicro AOC-USAS-L8i controller card.

More here:
http://breden.org.uk/2009/09/02/home-fileserver-handling-pool-errors/

So I'm going to check my other boot environments to see if a rollback makes 
sense (< snv_121).
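
(That check is just a couple of beadm one-liners -- the BE name below is 
illustrative:

# beadm list
# beadm activate opensolaris-117

then reboot into the older build.)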

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-29 Thread Simon Breden
BTW, if you're interested in seeing my attempts to migrate from a 160 GB IDE 
drive-based root boot pool to a pair of mirrored 30 GB SSDs, then take a look 
here:

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-29 Thread Simon Breden
Yes, setting the Boot Environment repository URL to 
http://pkg.opensolaris.org/dev/ worked.

My pool had been upgraded to ZFS version 16 previously using the dev repo.
'zpool get all tank' shows the ZFS version. But you can't use this command 
unless the pool is imported, so when you encounter problems like I did, you 
can't see which version the pool's using.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
Looks like my last IDE-based boot environment may have been pointing to the 
/dev package repository, so that might explain how the data pool version got 
ahead of the official 2009.06 one.

Will try to fix the problem by pointing the SSD-based BE towards the dev repo 
and see if I get success.

Will update this thread with my findings, although I expect it will work :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
Some more info that might help:

I have the old IDE boot drive which I can reconnect if I get no help with this 
problem. I just hope it will allow me to import the data pool, as this is not 
guaranteed.

Way back, I was using SXCE and the pool was upgraded to the latest ZFS version 
at the time.
Then around May 2009 I installed "OpenSolaris 2009.06 preview", which appeared 
a couple of weeks before the release of the final OpenSolaris 2009.06. I used 
Package Manager to update all packages etc. At some point I ran "zpool upgrade" 
on the data pool to bring it up to the latest ZFS version, as it was saying 
that it was not using the latest ZFS version when I did a "zpool status" on the 
pool.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
I was using OpenSolaris 2009.06 on an IDE drive, and decided to reinstall onto 
a mirror (smaller SSDs).

My data pool was a separate pool and before reinstalling onto the new SSDs I 
exported the data pool.

After rebooting and installing OpenSolaris 2009.06 onto the first SSD I tried 
to import my data pool and saw the following message:

# zpool import tank
cannot import 'tank': pool is formatted using a newer ZFS version

I then used Package Manager to do an "update all" to bring the OS up to the 
latest version, and so hopefully also the latest ZFS version, then I retried 
the import with the same result -- i.e. it won't import.

Here's some additional info:
SunOS zfsnas 5.11 snv_111b i86pc i386 i86pc Solaris

# zpool upgrade -v
This system is currently running ZFS pool version 14.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit support
For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.

If the OS is at ZFS version 14, which I assume is the latest version, then my 
data pool presumably can't be using a newer version.

So is there a bug, workaround or simple solution to this problem?

If I could query the ZFS version of the unimported data pool that would be 
handy, but I suspect this is a bug anyway...

Here's hoping for a quick reply as right now, I cannot access my data :(((

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Best controller card for 2 to 4 SATA drives ?

2009-06-27 Thread Simon Breden
Hi,

Does anyone know of a reliable 2 or 4 port SATA card with a solid driver, that 
plugs into a PCIe slot, so that I can benefit from the high read speeds 
available from adding a couple of SSDs to form my ZFS root/boot pool?
 (Each SSD is capable of reading at around 150-200 MBytes/sec)

After initially thinking I would move my existing 6-drive RAID-Z2 array to a 
new 8-port SATA controller, I finally decided to leave the drives connected to 
the motherboard SATA ports, and instead to get an additional smaller SATA card 
to allow me to connect 2 boot drives to form a mirror.

For anyone considering a controller card to support 8 SATA drives, see this 
thread which has got some great comments from people experienced with using 
these larger cards. No doubt I will refer to it again when I build another 
storage system one day :)
See: http://www.opensolaris.org/jive/thread.jspa?threadID=106210

Thanks,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
OK, thanks James.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
That sounds even better :)

So what's the procedure to create a zpool using the 1068?

Also, are any special tricks/tips/commands required for using a 1068-based 
SAS/SATA device?

Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
> I think the confusion is because the 1068 can do "hardware" RAID, it
can and does write its own labels, as well as reserve space for replacements
of disks with slightly different sizes. But that is only one mode of 
operation.

So, it sounds like if I use a 1068-based device, and I *don't* want it to write 
labels to the drives to allow easy portability of drives to a different 
controller, then I need to avoid the "RAID" mode of the device and instead 
force it to use JBOD mode. Is this easily selectable? I guess you just avoid 
the "Use RAID mode" option in the controller's BIOS or something?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
Miles, thanks for helping clear up the confusion surrounding this subject!

My decision is now as above: for my existing NAS, to leave the pool as-is and 
seek a card with 2+ SATA ports for the mirror of 2 x 30GB SATA boot SSDs that I 
want to add.

For the next NAS build later on this summer, I will go for an LSI 1068-based 
SAS/SATA configuration using a PCIe expansion slot, rather than the ageing 
PCI-X slots.

Using PCIe instead of PCI-X also opens up a load more possible motherboards, 
although as I want ECC support this still limits the choice of mobos. I was 
thinking of using something like a Xeon E5504 (Nehalem) in the new NAS, and 
I've been hunting for a good, highly compatible mobo that will give the least 
aggro (trouble) with OpenSolaris. This one looks good: it's pretty much all 
Intel chipsets, it has an LSI SAS1068E which I trust should be supported by 
Solaris, and it also has additional PCIe slots for future expansion, a basic 
onboard graphics chip, and dual Intel GbE NICs:
SuperMicro X8STi-3F: 
http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8STi-3F.cfm

Any comments on this mobo welcome, plus suggestions for a possible PCIe-based 
2+ port SATA card that is reliable and has a solid driver.

Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
The situation regarding the lack of open source drivers for these LSI 
1068/1078-based cards is quite scary.

And did I understand you correctly when you say that these LSI 1068/1078 
drivers write labels to drives, meaning you can't move drives from an LSI 
controlled array to another arbitrary array due to these labels?

If this is the case then surely my best bet would be to go for the non-LSI 
controllers -- e.g. the AOC-SAT2-MV8 instead, which I presume does not write 
labels to the array drives?

Please correct me if I have misunderstood.

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Migration: 1 x 160GB IDE boot drive ---> 2 x 30GB SATA SSDs

2009-06-24 Thread Simon Breden
Hi,

I have OpenSolaris 2009.06 currently installed on a 160 GB IDE drive.
I want to replace this with a 2-way mirror 30 GB SATA SSD boot setup.

I found these 2 threads which seem to answer some questions I had, but I still 
have some questions.
http://opensolaris.org/jive/thread.jspa?messageID=386577
http://opensolaris.org/jive/thread.jspa?threadID=104656

FIRST QUESTION:
Although it seems possible to attach a drive to form a mirror for the ZFS boot 
pool 'rpool', the main problem I see is that in my case I would be attempting 
to form a mirror using a smaller drive (30GB) than the initial 160GB drive.
Is there an easy solution to this problem, or would it be simpler to just do a 
fresh reinstall of OpenSolaris 2009.06 onto 2 brand new 30GB SSDs? I have the 
option of the fresh install, as I haven't invested much time in configuring 
this OS2009.06 boot environment yet.
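
For reference, the attach I mean is the one below -- device names are made up. 
My understanding is that attaching a device smaller than the existing one 
simply fails with an error along the lines of 'device is too small', which is 
why I'm asking:

# zpool attach rpool c0d0s0 c1t0d0s0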

SECOND QUESTION:
I also want the possibility to have multiple boot environments within 
OpenSolaris 2009.06 to allow easy rollback to a working boot environment in 
case of an IPS update problem. I presume this will not cause any additional 
complications?


THIRD QUESTION:
This is for a home fileserver so I don't want to spend too much, but does 
anyone see any problem with having the OS installed on MLC SSDs, which are 
cheaper than SLC SSDs? I'm thinking here specifically about wearing out the 
SSDs if the OS does too many writes to them.

I agree SSDs are a bit overkill, and using standard spinning metal would be 
cheaper, but the case is vibrating like crazy as I ran out of drive slots and 
had to use non-grommeted attachments for the boot drive. But the SSDs should be 
silent and should certainly speed up boot and shutdown times dramatically :)

Thanks,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering fs's

2009-06-22 Thread Simon Breden
Hi Matt!

As kim0 says, that PhotoRec s/w looks like it might work, if it can work with 
ZFS... I'd be interested to hear if it does.

Good luck,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering fs's

2009-06-22 Thread Simon Breden
Yep, normally you can't get the data back, especially if new files have been 
written to the drives AND the files were written over the old ones.

You have a slight chance, or a big one, depending on how many files have been 
written since the deletion, and on whether ZFS reuses the space that was just 
freed...

You could maybe try appealing to someone who knows these things well, as you 
might be able to find some software that can scan sectors and search for a 
known pattern of bytes.

Example: If your gf knows that one of her needed docs contains an expression 
such as 'blah blah blah'... then you need to scan the blocks for that byte 
pattern.

It's a long shot, and I don't know how to do that with ZFS file systems, but 
using this technique I managed to recover all my photo library from a failed 
Windows drive a few years ago using some software called GetDataBackNT.
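
(If you wanted to try that by hand on a raw device, the crude version is 
something like the following -- assuming GNU grep, a made-up device name, and 
that the blocks haven't been reused. It's read-only, so it can't make things 
worse, but -b only gives you a byte offset to start digging from:

# dd if=/dev/rdsk/c1t0d0s0 bs=1048576 | grep -a -b 'blah blah blah'
)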

Good luck!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Narrow escape!

2009-06-22 Thread Simon Breden
Lucky one there Ross!

Makes me glad I also upgraded to RAID-Z2 ;-)

Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-22 Thread Simon Breden
Also, is anybody using the AOC-USAS-L8i?

If so, what's your experience with it, particularly with identifying and 
replacing failed drives?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-22 Thread Simon Breden
Thanks guys, keep your experiences coming.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Things I Like About ZFS

2009-06-21 Thread Simon Breden
OK, my turn:

- combining file system + volume manager + RAID + pool + scrub + resilvering + 
snapshots + rollback + end-to-end integrity + 256-bit block checksums + 
on-the-fly healing of blocks with checksum errors on read
- one liners that are mostly remembered, and simple to guess if forgotten (see 
the examples below)
- with RAID-Z2, superb protection + hot spares
- easy sharing via CIFS and NFS
- iSCSI as a block device target
- open source software RAID & no proprietary hardware RAID card required
- free
- did I forget something? :)
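
A few of those one-liners, with made-up pool and dataset names:

# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# zfs snapshot tank/data@before-upgrade
# zfs rollback tank/data@before-upgrade
# zfs set sharesmb=on tank/data
# zpool scrub tank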
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread Simon Breden
Hey Kebabber, long time no hear! :)

It's great to hear that you've had good experiences with the card. It's a great 
pity to have throughput drop from a potential 1GB/s to 150MB/s, but as most of 
my use of the NAS is across the network, and not local intra-NAS transfers, 
this should not be a problem. Of course, with a single GbE connection speeds 
are normally limited to around 50MB/s or so anyway...

Tell me, have you had any drives fail and had to figure out how to identify the 
failed drive, replace it and resilver using the AOC-SAT2-MV8? Or have you tried 
any experiments to test resilvering? I'm just curious as to how easy it is to 
do this with this controller card.

Like yourself, I was toying with the idea of upgrading and buying a shiny new 
mobo with dual 64-bit PCI-X slots and socket LGA1366 for Xeon 5500 series 
(Nehalem) processors -- the SuperMicro X8SAX here: 
http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm

Then I added up the price of all the components and decided to try to make do 
with the existing kit and just do an upgrade.

So I narrowed down possible SATA controllers to the above choices and I'm 
interested in people's experiences of using these controllers to help me decide.

Seems like the cheapest and simplest choice will be the AOC-SAT2-MV8, and I 
just take a hit on the reduced speed -- but that won't be a big problem.

However, as I have 2 x PCIe x16 slots available, if the AOC-USAS-L8i is 
reliable, no longer has issues with identifying drive ids, and supports JBOD 
mode, then it looks like the better choice. It uses the more modern PCI Express 
(PCIe) interface rather than the ageing PCI-X interface, fine as I'm sure the 
latter is.

Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread Simon Breden
After checking some more sources, it seems that if I used the AOC-SAT2-MV8 with 
this motherboard, I would need to run it in a standard PCI slot. Here is the 
full listing of the motherboard's expansion slots:

2 x PCI Express x16 slot at x16, x8 speed 
2 x PCI Express x1 
3 x PCI 2.2   <<--- this one
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

