Re: [zfs-discuss] ZFS HW RAID

2009-09-19 Thread Scott Lawson



Bob Friesenhahn wrote:

On Fri, 18 Sep 2009, David Magda wrote:


If you care to keep your pool up and alive as much as possible, then 
mirroring across SAN devices is recommended.


One suggestion I heard was to get a LUN that's twice the size, and 
set copies=2. This way you have some redundancy for incorrect 
checksums.


This only helps for block-level corruption.  It does not help much at 
all if a whole LUN goes away.  It seems best for single disk rpools.
I second this. In my experience you are more likely to have a single LUN
go missing for some reason or another, and it seems most prudent to back
any production data volume with, at the very minimum, a mirror. This also
gives you two copies in a far more resilient way generally. (And, per my
other post, there can be other niceties that come with it as well when
coupled with SAN-based LUNs.)
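
For illustration, a minimal sketch of the two alternatives, using hypothetical
SAN LUN device names:

# Alternative 1: one double-sized LUN with two copies of each block
# (guards against bad blocks, not against the LUN disappearing)
zpool create tank c0t600A0B80001122AAd0
zfs set copies=2 tank

# Alternative 2: mirror two LUNs, ideally from different arrays, so the
# pool survives the loss of a whole LUN
zpool create tank mirror c0t600A0B80001122AAd0 c0t600A0B80001133BBd0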


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, 
http://www.simplesystems.org/users/bfriesen/

GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS HW RAID

2009-09-19 Thread Erik Trimble
All this reminds me: how much work (if any) has been done on the
asynchronous mirroring option? That is, on supporting mirrors with
radically different access times? (Useful for supporting a mirror
across a WAN, where you have hundreds of milliseconds of latency to the
other side of the mirror.)


-Erik






--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS HW RAID

2009-09-19 Thread Orvar Korvar
I asked the same question about one year ago here, and the posts poured in.
Search for my user ID. There is more info in that thread about which is best:
ZFS vs. ZFS + HW RAID.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Crazy Phantom Zpools Again

2009-09-19 Thread Victor Latushkin

On 18.09.09 22:18, Dave Abrahams wrote:

I just did a fresh reinstall of OpenSolaris and I'm again seeing
the phenomenon described in 
http://article.gmane.org/gmane.os.solaris.opensolaris.zfs/26259

which I posted many months ago and got no reply to.

Can someone *please* help me figure out what's going on here?


Can you provide output of

zdb -l /dev/rdsk/c8t1d0p0
zdb -l /dev/rdsk/c8t1d0s0

zdb -l /dev/rdsk/c9t0d0p0
zdb -l /dev/rdsk/c9t0d0s0

zdb -l /dev/rdsk/c9t1d0p0
zdb -l /dev/rdsk/c9t1d0s0

as a starter?

I suspect there are some stale labels accessible through the ...p0 devices
(maybe the back labels only) that unfortunately allow some pools that existed
before to be opened.


So let's start finding this out.
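
(For convenience, a minimal loop that collects the same label dumps; the
device names are simply the ones listed above:)

for dev in c8t1d0 c9t0d0 c9t1d0; do
    for slice in p0 s0; do
        echo "== /dev/rdsk/${dev}${slice} =="
        zdb -l /dev/rdsk/${dev}${slice}
    done
done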

victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Persistent errors - do I believe?

2009-09-19 Thread Victor Latushkin

On 17.09.09 21:44, Chris Murray wrote:

Thanks David. Maybe I misunderstand how a replace works? When I added disk E
and used 'zpool replace [A] [E]' (still can't remember those drive names), I
thought that disk A would still be part of the pool, and would be read from in
order to build the contents of disk E?


Exactly. Disks A and E will be arranged into a special vdev of type 'replacing' 
beneath the raidz vdev, and that vdev behaves like a mirror. As soon as 
resilvering is complete, disk A will be removed from this 'replacing' mirror, 
leaving disk E on its own in the raidz vdev.
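
A minimal sketch of what this looks like from the command line, with
hypothetical device names standing in for disks A and E:

# start the replacement; both devices stay attached while the resilver runs
zpool replace tank c1t2d0 c1t5d0
# while the resilver is in progress, the status output shows a temporary
# 'replacing' vdev under the raidz that contains both the old and the new disk
zpool status tank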


victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Crazy Phantom Zpools Again

2009-09-19 Thread David Abrahams

Hey, thanks for following up.

on Sat Sep 19 2009, Victor Latushkin Victor.Latushkin-AT-Sun.COM wrote:

 Can you provide output of

 zdb -l /dev/rdsk/c8t1d0p0
 zdb -l /dev/rdsk/c8t1d0s0

 zdb -l /dev/rdsk/c9t0d0p0
 zdb -l /dev/rdsk/c9t0d0s0

 zdb -l /dev/rdsk/c9t1d0p0
 zdb -l /dev/rdsk/c9t1d0s0

 as a starter?

 I suspect there are some stale labels accessible through the ...p0 devices
 (maybe the back labels only) that unfortunately allow some pools that existed
 before to be opened.

 So let's start finding this out.

d...@hoss:~# zdb -l /dev/rdsk/c8t1d0p0

LABEL 0

version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
top_guid=14688829453117747875
guid=14688829453117747875
vdev_tree
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0

LABEL 1

version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
top_guid=14688829453117747875
guid=14688829453117747875
vdev_tree
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0

LABEL 2

version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
top_guid=14688829453117747875
guid=14688829453117747875
vdev_tree
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0

LABEL 3

version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
top_guid=14688829453117747875
guid=14688829453117747875
vdev_tree
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0
d...@hoss:~# zdb -l /dev/rdsk/c8t1d0s0


LABEL 0

version=14
name='tank'
state=0
txg=321059
pool_guid=18040158237637153559
hostid=674932
hostname='hoss'
top_guid=17370712873548817583
guid=5539950970989281033
vdev_tree
type='raidz'
id=0
guid=17370712873548817583
nparity=2
metaslab_array=23
metaslab_shift=35
ashift=9
asize=4000755744768
is_log=0
children[0]
type='disk'
id=0
guid=17720655760296015906
path='/dev/dsk/c8t0d0s0'
devid='id1,s...@sata_st3500641as_3pm0j4rw/a'
phys_path='/p...@0,0/pci10f1,2...@7/d...@0,0:a'
whole_disk=1
DTL=35
children[1]
type='disk'
id=1
guid=5539950970989281033
path='/dev/dsk/c8t1d0s0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/a'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:a'
whole_disk=1
DTL=34
children[2]
type='disk'
id=2
guid=11100368085398512076
path='/dev/dsk/c9t0d0s0'
devid='id1,s...@sata_wdc_wd5000aacs-0_wd-wcasu4279114/a'
phys_path='/p...@0,0/pci10f1,2...@8/d...@0,0:a'
whole_disk=1
DTL=33
children[3]
type='disk'
id=3
guid=6967063319981472993
path='/dev/dsk/c9t1d0s0'

Re: [zfs-discuss] Crazy Phantom Zpools Again

2009-09-19 Thread David Abrahams

on Fri Sep 18 2009, Cindy Swearingen Cindy.Swearingen-AT-Sun.COM wrote:


 Not much help, but some ideas:

 1. What does the zpool history -l output say for the phantom pools?

d...@hoss:~#  zpool history -l Xc8t1d0p0
History for 'Xc8t1d0p0':
2009-05-14.06:00:20 zpool create Xc8t1d0p0 c8t1d0p0 [user root on 
hydrasol:global]
2009-06-07.21:42:44 zpool export Xc8t1d0p0 Xc9t0d0p0 [user root on hoss:global]

d...@hoss:~# zpool history -l Xc9t0d0p0
History for 'Xc9t0d0p0':
2009-05-14.06:00:24 zpool create Xc9t0d0p0 c9t0d0p0 [user root on 
hydrasol:global]

d...@hoss:~# zpool history -l Xc9t1d0p0
History for 'Xc9t1d0p0':
2009-05-14.06:00:26 zpool create Xc9t1d0p0 c9t1d0p0 [user root on 
hydrasol:global]
2009-06-07.21:30:42 zpool import -a -f [user root on hoss:global]
2009-06-07.21:42:51 zpool export Xc8t1d0p0 Xc9t1d0p0 [user root on hoss:global]
2009-09-17.15:04:23 zpool import -a [user root on hoss:global]

d...@hoss:~# 

 Were they created at the same time as the root pool or the same time
 as tank?

No, earlier apparently, and they've been through a few OS reinstalls.

 2. The phantom pools contain the c8t1* and c9t1* fdisk partitions (p0s) that 
 are in
 your tank pool as whole disks. A strange coincidence.

 Does zdb output or fmdump output identify the relationship, if
 any, between the c8 and c9 devices in the phantom pools and tank?

I don't know how to read that stuff, but I've attached my zdb output.
fmdump is essentially empty.

 3. I can file a bug for you. Please provide the system information,
 such as hardware, disks, OS release.

Thanks.  The hardware is all described at
http://techarcana.net/hydra/hardware/.  The OS release is OpenSolaris
2009.06 with the latest updates.

d...@hoss:~# zdb
Xc8t1d0p0
version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=799109629249470450
children[0]
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0
Xc9t0d0p0
version=14
name='Xc9t0d0p0'
state=0
txg=66
pool_guid=12655905567020654415
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=12655905567020654415
children[0]
type='disk'
id=0
guid=611575587582790566
path='/dev/dsk/c9t0d0p0'
devid='id1,s...@sata_wdc_wd5000aacs-0_wd-wcasu4279114/q'
phys_path='/p...@0,0/pci10f1,2...@8/d...@0,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0
Xc9t1d0p0
version=14
name='Xc9t1d0p0'
state=0
txg=67
pool_guid=13088732420232844728
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=13088732420232844728
children[0]
type='disk'
id=0
guid=5881429924050167143
path='/dev/dsk/c9t1d0p0'
devid='id1,s...@sata_wdc_wd5000aacs-0_wd-wcasu3010505/q'
phys_path='/p...@0,0/pci10f1,2...@8/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0
rpool
version=14
name='rpool'
state=0
txg=3151
pool_guid=8480802010740526288
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=8480802010740526288
children[0]
type='disk'
id=0
guid=10153492011253799981
path='/dev/dsk/c7d0s0'
devid='id1,c...@awdc_wd1600aajb-00j3a0=_wd-wcav30909252/a'
phys_path='/p...@0,0/pci-...@6/i...@0/c...@0,0:a'
whole_disk=0
metaslab_array=23
metaslab_shift=30
ashift=9
asize=160001425408
is_log=0
tank
version=14
name='tank'
state=0
txg=321059
pool_guid=18040158237637153559
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=18040158237637153559
children[0]
type='raidz'
id=0
guid=17370712873548817583
nparity=2
metaslab_array=23
metaslab_shift=35
ashift=9
asize=4000755744768
 

Re: [zfs-discuss] Adding new disks and ditto block behaviour

2009-09-19 Thread Joseph Toppi
Yeah, after I learned that ditto blocks don't protect against failed
drives, I started working on a plan to move to raidz. I couldn't find
any good documentation on setting up multiple filesystems in one pool,
though I know it is possible. I think I have enough storage to work
this out somehow, but I need to read more and plan more.
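
For what it's worth, creating several filesystems in one pool is just repeated
'zfs create'; a minimal sketch with hypothetical device and dataset names:

# a raidz pool built from four drives, then independent filesystems inside it
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0
zfs create tank/photos
zfs create tank/backups
zfs set quota=500G tank/backups   # optional per-filesystem properties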

Thanks for your response

On Thu, Sep 17, 2009 at 8:12 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Thu, 17 Sep 2009, Joe Toppi wrote:

 I filled this with data. So I added a 1.5 TB drive to the pool. Where will
 my ditto blocks and checksums go? Will it migrate data from the other drives
 automatically? Will it migrate data if I scrub or re-silver? will it never
 migrate data and just store all the new blocks and checksums on the new
 drive?

 ZFS does not automatically migrate data just because you added more drives.
  Scrub will only migrate failing data blocks.  Resilver clones a failing
 disk.  When a vdev becomes very full, more writes will be directed to the
 empty devices.

 If you have enough free disk space to store everything you had before, plus
 lots of space to spare, you could try creating a new filesystem in the pool
 and using zfs send to send from the existing filesystem to the new
 filesystem, and then destroy the old filesystem once you are satisfied with
 the new one.  This would only work if there is considerably more free space
 than existing data and the result will still be lop-sided.

 If you have a whole lot of reliable storage space elsewhere, you could use
 zfs send to send to a file in that other storage space, destroy the old
 filesystem, and then recreate it with your zfs send file.
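
For illustration, a minimal sketch of the shuffle described above, with
hypothetical dataset names:

# rebalance by copying into a new filesystem within the same pool
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive tank/data2
zfs destroy -r tank/data            # once satisfied with the copy

# or stage the stream as a file on other reliable storage first
zfs send tank/data@migrate > /otherstorage/data.zfs
zfs destroy -r tank/data
zfs receive tank/data < /otherstorage/data.zfs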

 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/




-- 
- Joe Toppi
(402) 714-7539
top...@gmail.com
http://www.assuredts.com/toppij/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and HW RAID

2009-09-19 Thread Lloyd H. Gill

Hello folks,

I am sure this topic has been asked before, but I am new to this list. I have
read a ton of docs on the web, but wanted to get some opinions from you all.
Also, if someone has a digest of the last time this was discussed, you can
just send that to me. In any case, I am reading a lot of mixed reviews
related to ZFS on HW RAID devices.

The Sun docs seem to indicate it is possible, but not a recommended course. I
realize there are some advantages, such as snapshots, etc. But the h/w RAID
will handle 'most' disk problems, basically diminishing the capabilities that
are among the big reasons to deploy ZFS. One suggestion would be to create the
h/w RAID LUNs as usual, present them to the OS, then do simple striping with
ZFS (roughly as sketched below). Here are my two applications, where I am
presented with this possibility:

Sun Messaging Environment:
We currently use EMC storage. The storage team manages all Enterprise
storage. We currently have 10x300gb UFS mailstores presented to the OS. Each
LUN is a HW RAID 5 device. We will be upgrading the application and doing a
hardware refresh of this environment, which will give us the chance to move
to ZFS, but stay on EMC storage. I am sure the storage team will not want to
present us with JBOD. It is their practice to create the HW LUNs and present
them to the application teams. I don't want to end up with a complicated
scenario, but would like to leverage ZFS as much as I can, on the EMC
array as I mentioned.

Sun Directory Environment:
The directory team is running HP DL385 G2, which also has a built-in HW RAID
controller for 5 internal SAS disks. The team currently has DS5.2 deployed
on RHEL3, but as we move to DS6.3.1, they may want to move to Solaris 10. We
have an opportunity to move to ZFS in this environment, but am curious how
to best leverage ZFS capabilities in this scenario. JBOD is very clear, but
a lot of manufacturers out there are still offering HW RAID technologies,
with high-speed caches. Using ZFS with these is not very clear to me, and as
I mentioned, there are very mixed reviews, not on ZFS features, but on how it's
used in HW RAID settings.
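
A minimal sketch of the "HW RAID LUNs presented to the OS, pooled by ZFS"
layout mentioned above, with hypothetical device names:

# each device below is assumed to be a hardware RAID LUN exported by the array
# option 1: plain stripe of LUNs -- ZFS detects corruption but cannot repair it
zpool create mailpool c2t0d0 c2t1d0
# option 2: mirror LUNs (ideally from separate arrays) so ZFS can also self-heal
#   zpool create mailpool mirror c2t0d0 c3t0d0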

Thanks for any observations.

Lloyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] addendum: zpool UNAVAIL even though disk is online: another label issue?

2009-09-19 Thread michael schuster

Victor Latushkin wrote:


I think you need to take a closer look at your other disk.

Is it possible to get the result of the following (change controller/target 
numbers as appropriate, if needed)


dd if=/dev/rdsk/c8t0d0p0 bs=1024k count=4 | bzip2 -9 > c8t0d0p0.front.bz2

while booted off OpenSolaris CD?


not anymore - I realised I had no relevant data on the box, so I 
re-installed to get going again.


thx

Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-19 Thread Blake
On Fri, Sep 18, 2009 at 1:51 PM, Steffen Weiberle
steffen.weibe...@sun.com wrote:
 I am trying to compile some deployment scenarios of ZFS.

 # of systems
3

 amount of storage
10 TB on storage server (can scale to 30)

 application profile(s)
NFS and CIFS

 type of workload (low, high; random, sequential; read-only, read-write,
 write-only)
Boot drives, Nearline backup, Postgres DB (OpenNMS)

 storage type(s)
SATA

 industry
Software

 whether it is private or I can share in a summary
 anything else that might be of interest
You can share my info :)


 Thanks in advance!!

 Steffen
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-19 Thread Erik Trimble

In the Eat-Your-Own-Dogfood mode:

Here in CSG at Sun (which is mainly all Java-related things):


Steffen Weiberle wrote:

I am trying to compile some deployment scenarios of ZFS.

If you are running ZFS in production, would you be willing to provide 
(publicly or privately)?


# of systems
All our central file servers plus the various Source Code repositories 
(of particular note:  http://hg.openjdk.java.net  which holds all of the 
OpenJDK and related source code).  Approximately 20 major machines, plus 
several dozen smaller ones.  And that's only what I know about (maybe 
about 50% of the total organization).



amount of storage

45TB+ just on the fileservers and Source Code repos

application profile(s)
NFS servers, Mercurial Source Code Repositories, Teamware Source Code 
Repositories, Lightweight web databases (db, postgresql, MySql), Web 
twikis, flat file data profile storage, Virtualized Host centralized 
storage.  Starting with roll-your-own VTLs.




type of workload (low, high; random, sequential; read-only, 
read-write, write-only)
NFS servers:  high load (100s of clients per server), random read & 
write, mostly small files. 
Hg & TW source code repos:  low load (only on putbacks), huge amounts of 
small file read/writes (i.e. mostly random)

Testing apps:  mostly mid-size sequential writes
VTL (disk backups):  high load streaming writes almost exclusively.
xVM systems:  moderate to high load, heavy random read, modest random write.


storage type(s)
Almost exclusively FC-attached SAN.  Small amounts of dedicated FC 
arrays (STK2540 / STK6140), and the odd iSCSI thing here and there.  NFS 
servers are pretty much all T2000.  Source Code repos are X4000-series 
Opteron systems (usually X4200, X4140, or X4240).  Thumpers (X4500) are 
scattered around, and  the  rest is a total mishmash of both Sun and others.



industry

Software development

whether it is private or I can share in a summary

I can't see any reason not to summarize.

anything else that might be of interest

Right now we're hardly using SSDs at all, and we unfortunately 
haven't done much with the Amber Road storage devices (7000-series). 
Our new interest is the Thumper/Thor (X4500 / X4540) machines being 
used as disk backup devices:  we're moving our backups to disk (i.e. 
client backups go to disk first, then to tape as needed).  This is 
possible due to ZFS.  We're replacing virtually all our VxFS systems 
with ZFS.


Also, the primary development build/test system depends heavily on ZFS 
for storage, and will lean even more on it as we convert to xVM-based 
virtualization. I plan on using snapshots to radically reduce disk space 
required by multiple identical clients, and to make adding and retiring 
clients simpler.  In the case of our Windows clients, I expect ZFS 
snapshotting to enable me to automatically wipe the virtual client after 
every test run, which is really nice considering the flakiness that 
testing on Windows causes.




Thanks in advance!!

Steffen


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Fwd: ZFS HW RAID

2009-09-19 Thread Al Hopper
-- Forwarded message --
From: Al Hopper a...@logical-approach.com
Date: Sat, Sep 19, 2009 at 5:55 PM
Subject: Re: [zfs-discuss] ZFS & HW RAID
To: Scott Lawson scott.law...@manukau.ac.nz




On Fri, Sep 18, 2009 at 4:38 PM, Scott Lawson scott.law...@manukau.ac.nz wrote:

. snip ..


  Sun Directory Environment:
 The directory team is running HP DL385 G2, which also has a built-in HW
 RAID controller for 5 internal SAS disks. The team currently has DS5.2
 deployed on RHEL3, but as we move to DS6.3.1, they may want to move to
 Solaris 10. We have an opportunity to move to ZFS in this environment, but
 am curious how to best leverage ZFS capabilities in this scenario. JBOD is
 very clear, but a lot of manufacturers out there are still offering HW RAID
 technologies, with high-speed caches. Using ZFS with these is not very clear
 to me, and as I mentioned, there are very mixed reviews, not on ZFS
 features, but how it’s used in HW RAID settings.

 The Sun Directory environment generally isn't very IO intensive, except for
 massive data reloads or indexing operations. Other than this it is an ideal
 candidate for ZFS and its rather nice ARC cache. Memory is cheap on a lot of
 boxes and it will make read-only-type filesystems fly. I imagine your actual
 living LDAP data set on disk probably won't be larger than 10 gigs or so? I
 have around 400K objects in mine and it's only about 2 gigs or so including
 all our indexes. I tend to tune DS up so that everything it needs is in RAM
 anyway. As far as directory server goes, are you using the 64-bit version on
 Linux? If not, you should be.
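
(To see how much of that working set the ARC is actually holding, the standard
Solaris ZFS kstats can be read directly; values are in bytes:)

kstat -p zfs:0:arcstats:size    # current ARC size
kstat -p zfs:0:arcstats:c_max   # configured ARC ceiling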


It would make more sense IMHO to spend your budget on enterprise-grade SSDs
for this use case than to use EMC-based storage.  Imagine a 3-way or 4-way
mirror of SSDs and the I/O ops/sec you'd get from it!
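
(A minimal sketch of such a pool, with hypothetical SSD device names:)

# three-way mirror of SSDs holding the directory data
zpool create ldap mirror c4t0d0 c4t1d0 c4t2d0
zfs create ldap/ds6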

Bear in mind that Intel will soon release their 2nd-generation E series
SSD products, which are based on SLC flash.

I know that politics may get in the way - but for certain workloads, the
price/performance of EMC is difficult to justify IMHO.

Thanks for any observations.

 Lloyd

 --

 ___
 zfs-discuss mailing list
 zfs-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


 --
 _

 Scott Lawson
 Systems Architect
 Information Communication Technology Services

 Manukau Institute of Technology
 Private Bag 94006
 South Auckland Mail Centre
 Manukau 2240
 Auckland
 New Zealand

 Phone  : +64 09 968 7611
 Fax: +64 09 968 7641
 Mobile : +64 27 568 7611
 mailto:sc...@manukau.ac.nz sc...@manukau.ac.nz
 http://www.manukau.ac.nz

 __

 perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

 __




 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
Al Hopper  Logical Approach Inc,Plano,TX a...@logical-approach.com
  Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/



-- 
Al Hopper  Logical Approach Inc,Plano,TX a...@logical-approach.com
  Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] backup disk of rpool on solaris

2009-09-19 Thread Jeremy Kister

I added a disk to the rpool of my zfs root:
# zpool attach rpool c1t0d0s0 c1t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

I waited for the resilver to complete, then I shut the system down.

Then I physically removed c1t0d0 and put c1t1d0 in its place.

I tried to boot the system, but it panics:

SunOS Release 5.10 Version Generic_141415-10 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
NOTICE:
spa_import_rootpool: error 6

Cannot mount root on /p...@0,0/pci1022,7...@a/pci17c2,1...@4/s...@0,0:a
/p...@0,0/pci1022,7...@a/pci17c2,1...@4/s...@1,0:a fstype zfs

panic[cpu0]/thread=fbc283a0: vfs_mountroot: cannot mount root

fbc4ab50 genunix:vfs_mountroot+323 ()
fbc4ab90 genunix:main+af ()
fbc4aba0 unix:_start+95 ()

skipping system dump - no dump device configured

rebooting...

I've googled plenty, but don't see what's going on.

Can anyone tell me how to make this work?

--

Jeremy Kister
http://jeremy.kister.net./
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss