Re: [zfs-discuss] Identifying drives (SATA)

2011-02-05 Thread rwalists

On Feb 5, 2011, at 2:43 PM, David Dyer-Bennet wrote:

> Is there a clever way to figure out which drive is which?  And if I have to 
> fall back on removing a drive I think is right, and seeing if that's true, 
> what admin actions will I have to perform to get the pool back to safety?  
> (I've got backups, but it's a pain to restore of course.) (Hmmm; in 
> single-user mode, use dd to read huge chunks of one disk, and see which 
> lights come on?  Do I even need to be in single-user mode to do that?)

Obviously this depends on your lights working to some extent (the right light 
doing something when the right disk is accessed), but I've used:

dd if=/dev/rdsk/c8t3d0s0 of=/dev/null bs=4k count=10

which someone mentioned on this list.  Assuming you can actually read from the 
disk (it isn't completely dead), it should allow you to direct traffic to each 
drive individually.
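
If there are several candidates, a small loop makes it easy to step through 
them one at a time while watching the lights.  Just a sketch, with made-up 
device names; substitute your own c#t#d# entries from /dev/rdsk:

for d in c8t0d0s0 c8t1d0s0 c8t2d0s0 c8t3d0s0; do
    echo "reading $d - watch for activity"
    dd if=/dev/rdsk/$d of=/dev/null bs=4k count=100000
    sleep 5
done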

Good luck,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirrored drive

2010-11-29 Thread rwalists

On Nov 29, 2010, at 8:05 AM, Dick Hoogendijk wrote:

> OK, I've got a problem I can't solve by myself. I've installed Solaris 11 
> using just one drive.
> Now I want to create a mirror by attaching a second one to the rpool.
> However, the first one has NO partition 9 but the second one does. This way 
> the sizes differ if I create a partition 0 (needed because it's a boot 
> disk).
> 
> How can I make the second disk look exactly like the first?
> Or can't that be done?

I haven't done this on Solaris 11 Express, but this worked on OpenSolaris 
2009.06:

 prtvtoc /dev/rdsk/c5t0d0s0 | fmthard -s - /dev/rdsk/c5t1d0s0

Where the first disk is the current root and the second one is the new mirror.
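
Once the labels match, attaching the mirror and installing the boot blocks is 
only a couple more commands.  A sketch under the same assumptions (the same 
hypothetical device names, rpool booting from slice 0, x86 hardware; SPARC 
uses installboot rather than installgrub):

 zpool attach rpool c5t0d0s0 c5t1d0s0
 installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0

Then watch 'zpool status rpool' until the resilver completes.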

This is taken from here:

http://blogs.warwick.ac.uk/chrismay/entry/opensolaris_adventure_part_1

Good luck,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-29 Thread rwalists

On Jul 28, 2010, at 3:11 PM, sol wrote:

> A partial workaround was to turn off access time on the share and to mount 
> with 
> noatime,actimeo=60
> 
> But that's not perfect, because when left alone the VM got into a "stuck" 
> state. 
> I've never seen that state before when the VM was hosted on a local disk. 
> Hosting VMs on NFS is not working well so far...

We host a lot of VMs on NFS shares (from a 7000 series) on ESXi with no issues 
other than an occasional Ubuntu machine that would do something similar to what 
you describe.  For us it was this:

http://communities.vmware.com/thread/237699?tstart=30

When the timeout is set to 180, the issue is completely eliminated.  The 
VMware Tools shipped with current ESXi (and, I think, with all versions of 4) 
set this properly.
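
For anyone who needs to apply it by hand inside a guest that is already 
misbehaving, the change amounts to raising the SCSI disk timeout in the Linux 
guest.  A rough sketch, with sda as a stand-in device name; current VMware 
Tools do this automatically, and you would want a udev rule or init script to 
make it survive reboots:

# check the current value (typically 30 or 60 seconds)
cat /sys/block/sda/device/timeout
# raise it to 180 seconds
echo 180 > /sys/block/sda/device/timeout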

Also, EMC and NetApp have this description of using NFS shares for VMWare:

http://virtualgeek.typepad.com/virtual_geek/2009/06/a-multivendor-post-to-help-our-mutual-nfs-customers-using-vmware.html

which by and large applies to any NFS server, not just their equipment.  We 
found it helpful, though nothing in it is particularly surprising.

Good luck,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread rwalists
On May 24, 2010, at 4:28 AM, Erik Trimble wrote:

> yes, the X25-M (both G1 and G2) and the X25-E have a DRAM buffer on the 
> controller, and neither has a supercapacitor (or other battery) to back it 
> up, so there is the potential for data loss (but /not/ data corruption) in a 
> power-loss scenario.
> 
> Sadly, we're pretty much at the point where no current retail-available SSD 
> has battery backup for its on-controller DRAM cache (and they /all/ use 
> DRAM caches).

I haven't seen where anyone has tested this, but the MemoRight SSD (sold by 
RocketDisk in the US) seems to claim all the right things:

http://www.rocketdisk.com/vProduct.aspx?ID=1

pdf specs:

http://www.rocketdisk.com/Local/Files/Product-PdfDataSheet-1_MemoRight%20SSD%20GT%20Specification.pdf

They claim to support the cache flush command, and with respect to DRAM cache 
backup they say (p. 14/section 3.9 in that pdf):

> The MemoRight’s NSSD have an on-drive backup power system. It saves energy 
> when the power supply is applied to drive. When power-off occurring, the 
> saved energy will be released to keep the drive working for a while. The 
> saved energy ensures the data in the cache can be flushed to the nonvolatile 
> flash media, which prevents the data loss to happen.
> It will take about 5 seconds to save enough energy for discharge at least 1 
> second. The write cache will be disabled automatically before the backup 
> power system saved enough energy.

That certainly sounds like an on-board capacitor used to flush the cache, with 
the write cache disabled while the capacitor charges.  But I can't see where 
anyone has tested this on ZFS.

--Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-09 Thread rwalists
On Mar 8, 2010, at 7:55 AM, Erik Trimble wrote:

> Assume your machine has died the True Death, and you are starting with new 
> disks (and, at least a similar hardware setup).
> 
> I'm going to assume that you named the original snapshot 
> 'rpool/ROOT/whate...@today'
> 
> (1)   Boot off the OpenSolaris LiveCD
> 
> 
...
> 
> (10)  Activate the restored BE:
>   # beadm activate New
> 
> 
> You should now be all set.   Note:  I have not /explicitly/ tried the above - 
> I should go do that now to see what happens.  :-)

If anyone is going to implement this, much the same procedure is documented at 
Simon Breden's blog:

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/

which walks through the commands for executing the backup and the restore.
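
The backup half boils down to a recursive snapshot streamed into another pool. 
A rough sketch with assumed pool names (rpool for the root pool, backup for 
the destination pool), not the exact commands from either write-up:

 zfs snapshot -r rpool@today
 zfs send -R rpool@today | zfs receive -Fd backup

The restore is essentially the same stream run in the other direction from the 
LiveCD, followed by the beadm activation step quoted above.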

--Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?

2010-03-07 Thread rwalists
On Mar 8, 2010, at 12:05 AM, Dedhi Sujatmiko wrote:

> 2. OpenSolaris (and EON) does not have proper implementation of SMART 
> monitoring. Therefore I cannot get to know the temperature of my hard disks. 
> Since they are DIY storage without chassis environment monitoring, I consider 
> this an important regression
> 3. OpenSolaris (and EON) does not have proper serial number display of the 
> Seagate hard disks I am using. If I use the format to read the serial number, 
> I always miss the last character. If I read them using the "hd" or "hdparm" 
> utility, I will miss the first character

Both of these can be handled via smartctl 
(http://smartmontools.sourceforge.net/) as described here:

http://breden.org.uk/2008/05/16/home-fileserver-drive-temps/

As for the serial number, at least for Western Digital drives it was reported accurately.
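
For reference, the sort of invocation the blog describes looks roughly like 
this; the device path and the -d option are only examples, and the right 
device type depends on your controller:

 # full SMART report, including temperature and serial number
 smartctl -d sat,12 -a /dev/rdsk/c7t0d0s0
 # or just the temperature line
 smartctl -d sat,12 -a /dev/rdsk/c7t0d0s0 | grep -i temperature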

Good luck,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] suggested ssd for zil

2010-02-28 Thread rwalists

On Mar 1, 2010, at 12:05 AM, Daniel Carosone wrote:

>> Is there anything that is safe to use as a ZIL, faster than the
>> Mtron but more appropriate for home than a Stec?  
> 
> ACARD ANS-9010, as mentioned several times here recently (also sold as
> hyperdrive5) 

You are right.  I saw that in a recent thread.  In my case I don't have a spare 
bay for it.  I'm similarly constrained on some of the PCI solutions that have 
either battery backup or external power.

But this seems like a good solution if someone has the space.

Thanks,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] suggested ssd for zil

2010-02-28 Thread rwalists

On Feb 28, 2010, at 11:51 PM, rwali...@washdcmail.com wrote:

> And what won't work are:
> 
> - Intel X-25M
> - Most/all of the consumer drives priced beneath the X-25M
> 
> all because they use capacitors to get write speed w/o respecting cache flush 
> requests. 

Sorry, meant to say "they use cache to get write speed w/o respecting cache 
flush requests."

--Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] suggested ssd for zil

2010-02-28 Thread rwalists
If anyone has specific SSD drives they would recommend for ZIL use, would you 
mind a quick response to the list?  My understanding is I need to look for:

1) Respect cache flush commands (which is my real question...the answer to this 
isn't very obvious in most cases)
2) Fast on small writes

It seems even the smallest sizes should be sufficient.  This is for a home NAS 
where most write work is for iSCSI volumes hosting backups for OS X Time 
Machine.  There is also some small amount of MySQL (InnoDB) shared via NFS.
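
For context, once a device is chosen, adding it as a log vdev is a single 
step.  A sketch with hypothetical pool and device names; a mirrored log is 
worth considering, since a failed slog is painful to deal with on older pool 
versions:

 # single log device
 zpool add tank log c9t0d0
 # or mirrored, which is safer
 zpool add tank log mirror c9t0d0 c9t1d0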

From what I can gather, workable options would be:

- Stec, which are in the 7000 series and extremely expensive

- Mtron Pro 7500 16GB SLC, which seems to respect cache flushes but isn't 
particularly fast doing it:
http://opensolaris.org/jive/thread.jspa?messageID=459872&tstart=0

- Intel X-25E with the cache turned off, which seems to behave like the Mtron

- Seagate's marketing page for their new SSD implies it has a capacitor to 
protect data in cache, as I believe the Stec does.  But I don't think they are 
available at retail yet.
"Power loss data protection to ensure against data loss upon power failure"
http://www.seagate.com/www/en-us/products/servers/pulsar/pulsar/

And what won't work are:

- Intel X-25M
- Most/all of the consumer drives priced beneath the X-25M

all because they use capacitors to get write speed w/o respecting cache flush 
requests.  Is there anything that is safe to use as a ZIL, faster than the 
Mtron but more appropriate for home than a Stec?  Maybe the answer is to wait 
on Seagate, but I thought maybe someone has other ideas.

Thanks,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-10 Thread rwalists
On Feb 9, 2010, at 1:55 PM, matthew patton wrote:

>> It might help people to understand how ridiculous they
>> sound going on and on
>> about buying a premium storage appliance without any
>> storage.
> 
> Since I started this, let me explain to those who can't begin to understand 
> why I proposed something so "stupid". At work (branch of a federal gov't 
> big-5 Department) I need 40TB but have next to nothing in budget. (For some 
> reason all you damn citizens think you're entitled to keep most of your 
> paychecks to yourself instead of living off what I decide to give you in 
> foodstamps and rent-controlled housing.) Therefore, I can't afford, let alone 
> justify, the preposterous premium demanded by "enterprise" EMC/Sun/IBM/NetApp. 
> I can really use dedup, (integrity would be nice), and reasonable rack and 
> power footprint since I'm out of that too.
> 
> I can't exactly march into my boss' office and propose that we build my own 
> at-home special which is 16 WD RE2/3 drives $(60) in a $70 case, $100 power 
> supply, four 4-in-3 modules ($30) and a Chenbro SAS expander ($250) now can 
> I...
> 
> Aside: I find it laughable for anyone to claim a J4500 is "premium" anything. 
> IBM DS800, EMC Symmetrix, NetApp FAS5xxx, sure. But a glorified JBOD 
> enclosure? Put down the damn Kool-Aid!

I don't disagree with any of the facts you list, but I don't think the 
alternatives are fully described by "Sun vs. much cheaper retail parts."

We face exactly this same decision with buying RAM for our servers (maybe more 
so since it is probably even more difficult to argue there is a difference in 
RAM chip quality when the same manufacturer's part is sourced from Sun vs. 
elsewhere).

Sun's RAM prices are much higher than retail, exactly as you describe here.  
The first thing to do is negotiate...they discount RAM heavily when threatened 
with sourcing it elsewhere.  But you'll still wind up with a difference, and 
not necessarily a tiny one.

The thing we consider is how we'll live with a failure.  If it's 100% Sun RAM 
there's no question: we call, they come out, and we're back in business.  If it 
isn't, Sun will blame the third-party RAM and insist we try without it.  
Sometimes that's not possible (we don't have enough Sun RAM to run the 
application).  The third-party vendor might blame the Sun RAM (which is 
generally easier to test without).  We then have to spend time testing and/or 
debating with different warranty providers to get it resolved.  That increases 
our hours to fix it and/or the downtime (or we just throw out all the RAM, 
skip the warranty claim, and buy third-party replacements).

Sometimes that makes sense (older server being repurposed for non-critical 
stuff), sometimes it doesn't (our most critical processes).

So we really don't view it as a RAM vs. RAM comparison.  It's more RAM + easy 
warranty service vs. RAM + more difficult warranty service.  Both can provide 
the same service to us, but under different cost/restoration conditions down 
the road.  I think it's the same thing here.  If you want a fully supported 
product where everything that goes wrong is Sun's fault, buy it from Sun.  If 
you want something much cheaper, but where you will need to negotiate future 
fixes, assemble it from various sources.  If you want a hybrid, buy as much 
hardware as you can from a single vendor (preferably pre-integrated as a 
single product rather than assembled yourself) and then run OpenSolaris on it 
(maybe with a Sun support contract).

In some ways it's nice to have the option.  You can get similar services at 
hugely different price points, from a 7410 cluster all the way down to a white 
box home NAS.

Good luck,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sharemgr

2009-11-25 Thread rwalists
On Nov 24, 2009, at 3:41 PM, dick hoogendijk wrote:

>> I have a solution using 'zfs set sharenfs=rw,nosuid zpool', but I prefer to
>> use the sharemgr command.
> 
> Then you prefer wrongly. ZFS filesystems are not shared this way.
> Read up on ZFS and NFS.

It can also be done with sharemgr.  Sharing via ZFS creates a sharemgr group 
called 'zfs', but you can also share things directly via the sharemgr commands.  
It is fairly well spelled out in the manpage:

http://docs.sun.com/app/docs/doc/819-2240/sharemgr-1m?a=view

Basically you want to create a group, set the group's properties and add a 
share to the group.
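
A minimal sketch of that sequence, using made-up group and path names (see the 
manpage above for the full list of NFS share properties):

 # create an NFS share group, set its options, and add a filesystem to it
 sharemgr create -P nfs mygroup
 sharemgr set -P nfs -p nosuid=true mygroup
 sharemgr add-share -s /tank/export mygroup
 # verify the result
 sharemgr show -vp mygroup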


--Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-11 Thread rwalists

On Nov 11, 2009, at 12:01 AM, Tim Cook wrote:


> On Tue, Nov 10, 2009 at 5:15 PM, Tim Cook wrote:
>> One thing I'm noticing is a lot of checksum errors being generated during 
>> the resilver.
>> 
>> Is this normal?
> 
> Anyone?  It's up to 7.35M checksum errors and it's rebuilding extremely 
> slowly (as evidenced by the 10 hour time).  The errors are only showing on 
> the "replacing-9" line, not the individual drive.


I've only replaced a drive once, but it didn't show any checksum errors during 
the resilver.  This was a 2 TB WD Green drive in a mirror pool that had 
started to show write errors.  It was attached to a SuperMicro AOC-SAT2-MV8.


Good luck,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-29 Thread rwalists

On Sep 29, 2009, at 2:41 AM, Eugen Leitl wrote:


> On Mon, Sep 28, 2009 at 06:04:01PM -0400, Thomas Burgess wrote:
>> personally i like this case:
>> 
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021
>> 
>> it's got 20 hot swap bays, and it's surprisingly well built.  For the money, 
>> it's an amazing deal.
> 
> You don't like http://www.supermicro.com/products/nfo/chassis_storage.cfm ?
> 
> I must admit I don't have a price list of these.
> 
> When running that many hard drives I would insist on redundant power 
> supplies, and server motherboards with ECC memory. Unless it's for home use, 
> where a downtime of days or weeks is not critical.


I hadn't thought of going that way because I was looking for at least a 
somewhat pre-packaged system, but another poster pointed out how many more 
drives I could get by choosing the case and motherboard separately.  I agree; 
with this much trouble it doesn't make sense to settle for fewer drive slots 
than I can get.

I agree completely with the ECC.  It's for home use, so the power supply issue 
isn't huge (though if it's possible that's a plus).  My concern with this 
particular option is noise.  It will be in a closet, but one with louvered 
doors right off a room where people watch TV.  Anything particularly loud 
would be an issue.  The comments on Newegg make this sound pretty loud.  Have 
you tried one outside of a server room environment?


Thanks,
Ware
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss