Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-09-01 Thread James Andrewartha

Jorgen Lundman wrote:
The MV8 is a Marvell-based chipset, and it appears there are no Solaris drivers for it. There doesn't appear to be any movement from Sun or Marvell to provide any, either.


Do you mean specifically Marvell 6480 drivers? I use both the DAC-SATA-MV8 
and the AOC-SAT2-MV8, which use the Marvell MV88SX and work very well in 
Solaris (package SUNWmv88sx).


They're PCI-X SATA cards; the AOC-SASLP-MV8 is a PCIe SAS card and has no 
(Open)Solaris driver.
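
For anyone checking what their own board does or doesn't get picked up by, a
rough sketch of the usual checks (the PCI vendor id 11ab is Marvell's;
everything else here is illustrative, not specific to any of these cards):

  prtconf -pv | grep -i 11ab       # is the controller visible on the PCI bus?
  grep 11ab /etc/driver_aliases    # does any installed driver have an alias for it?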


--
James Andrewartha


Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-09-01 Thread Henrik Bjornstrom - Sun Microsystems

Thanks for the answers.

Lori Alt wrote:

On 08/31/09 08:30, Henrik Bjornstrom - Sun Microsystems wrote:

Hi !

Has anyone given an answer to this that I have missed? I have a 
customer who has the same question, and I want to give him a correct 
answer.


/Henrik

Ketan wrote:
I created a snapshot and a subsequent clone of a ZFS volume, but now I'm 
not able to remove the snapshot; it gives me the following error:

zfs destroy newpool/ldom2/zdi...@bootimg
cannot destroy 'newpool/ldom2/zdi...@bootimg': snapshot has 
dependent clones

use '-R' to destroy the following datasets:
newpool/ldom2/zdisk0

And if I promote the clone, then the original volume becomes the 
dependent clone. Is there a way to destroy just the snapshot, 
leaving the clone and the original volume intact?

No. As long as a clone exists, its origin snapshot must exist as well.
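
To make the dependency concrete, a minimal sketch (hypothetical pool and
filesystem names) of what promote actually changes:

  zfs snapshot tank/fs@base
  zfs clone tank/fs@base tank/clone
  zfs get origin tank/clone        # -> tank/fs@base
  zfs destroy tank/fs@base         # fails: snapshot has dependent clones

  zfs promote tank/clone           # the snapshot moves under the clone...
  zfs get origin tank/fs           # -> tank/clone@base: now tank/fs is the dependent one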

lori





--
Henrik Bjornstrom

Sun Microsystems    Email: henrik.bjornst...@sun.com
Box 51              Phone: +46 8 631 1315
164 94 KISTA
SWEDEN




Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-09-01 Thread Andrew Robert Nicols
On Sat, Aug 29, 2009 at 10:09:00AM +1200, Ian Collins wrote:
 I have a case open for this problem on Solaris 10u7.

Interesting. One of our thumpers was previously running snv_112 and
experiencing these issues. Switching to 10u7 has cured it and it's been
stable now for several months.

 The case has been identified and I've just received an IDR, which I will  
 test next week.  I've been told the issue is fixed in update 8, but I'm  
 not sure if there is an nv fix target.

 I'll post back once I've abused a test system for a while.

Cheers, much appreciated!

Andrew

-- 
Systems Developer

e: andrew.nic...@luns.net.uk
im: a.nic...@jabber.lancs.ac.uk
t: +44 (0)1524 5 10147

Lancaster University Network Services is a limited company registered in
England and Wales. Registered number: 4311892. Registered office:
University House, Lancaster University, Lancaster, LA1 4YW




Re: [zfs-discuss] ub_guid_sum and vdev guids

2009-09-01 Thread P. Anil Kumar
14408718082181993222 + 4867536591080553814 - 2^64 + 4015976099930560107 = 4845486699483555527

there was an overflow in between that I overlooked.
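
For anyone re-checking it, the wrap at 2^64 can be reproduced with bc(1):

  $ echo '(14408718082181993222 + 4867536591080553814 + 4015976099930560107) % 2^64' | bc
  4845486699483555527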

pak


[zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Per Öberg
When I check 
--
# pfexec zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c8t0d0s0  ONLINE   0 0 0

errors: No known data errors
--
Shouldn't the image-update take care of that ?
And is it safe to do an upgrade ?

/Per


Re: [zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Casper . Dik

When I check 
--
# pfexec zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c8t0d0s0  ONLINE   0 0 0

errors: No known data errors
--
Shouldn't the image-update take care of that ?
And is it safe to do an upgrade ?


No, that would make your other BEs unbootable.

This should really be automatic; there needs to be a way for some 
utility to determine what the minimal ZFS version is.
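
In the meantime a manual check is straightforward, if clumsy (a sketch, not 
an official tool):

  zpool get version rpool    # on-disk version the pool is at now
  zpool upgrade -v           # versions the currently booted software supports
  beadm list                 # older BEs that would still need to read this pool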

Casper



Re: [zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Peter Dennis - Sustaining Engineer



Per Öberg wrote:
When I check 
--

# pfexec zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c8t0d0s0  ONLINE   0 0 0

errors: No known data errors
--
Shouldn't the image-update take care of that ?


No. The 'issue' here is that if you find that the new boot environment
does not work for you for whatever reason, you will need to revert to
the previous environment. However, if you have upgraded the pool,
this will not be possible, as the older version of the software
will have no knowledge of the new version.


And is it safe to do an upgrade ?


As long as you are happy with the new environment.

Thanks
pete



/Per



Re: [zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Darren J Moffat

Per Öberg wrote:
When I check 
--

# pfexec zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c8t0d0s0  ONLINE   0 0 0

errors: No known data errors
--
Shouldn't the image-update take care of that ?


Most definitely not.  Upgrading the pool could actually mean you can't 
boot back into your older BE.



And is it safe to do an upgrade ?


That depends.

1) Do you actually need to use the features of the newer pool version 
(you get the bug fixes regardless)?

2) Do you have any older BEs left that you may wish to boot into that don't 
support the pool version you will be running if you do the upgrade?

If you do decide to upgrade, the sketch below shows the mechanics.
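
Purely as an illustration (the BE name is made up, and destroying BEs is
irreversible, so check the beadm list output first):

  beadm list                       # nothing you still need may predate the new pool version
  pfexec beadm destroy snv_111b    # hypothetical old BE
  pfexec zpool upgrade rpool       # then upgrade the on-disk format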




--
Darren J Moffat


Re: [zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Per Öberg
Thanks for all the answers, I've now cleared out the old BEs and upgraded the 
pools and everything just works as expected.

/Per


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread John-Paul Drawneek
i did not migrate my disks.

I now have 2 pools - rpool is at 60% and is still dog slow.

Also scrubbing the rpool causes the box to lock up.


Re: [zfs-discuss] order bug, legacy mount and nfs sharing

2009-09-01 Thread kurosan
 Hi kurosan,

 I met the same but probably it cannot work.
 check zfs get all your_pool_mounted_/pathname
 you can see 'mountpoint' is 'legacy'
 so you have to use zfs sharenfs=on again to try

Hi,
thanks for the reply... I've only had time today to retry.
I've re-enabled zfs sharenfs=on, but the NFS server doesn't start because it can't 
find the shared directories... if I go back to automounting the ZFS filesystem, the 
NFS server works again...
I can't get around this... maybe I'll simply wait for the mount-order bug to be 
fixed.
Thanks for your help.

P.S.: I'm Italian and not Japanese ;)


[zfs-discuss] ZFS incremental backup restore

2009-09-01 Thread Amir Javanshir

Hi all;

I'm currently working on a small cookbook to showcase the backup and 
restore capabilities of ZFS using snapshots.
I chose to back up the data directory of a MySQL 5.1 server for the 
example, using several backup/restore scenarios. The simplest is to 
snapshot the file system where the data directory resides, and use a 
zfs rollback to come back to a previous state.


But a better scenario would be to back up the data on a remote server. I 
thus have two servers, both running OpenSolaris 2009.06 (ZFS version 
14): one running the MySQL server (Server A) and the second for the 
remote backup (Server B).


1) Do the snapshots:

[Server A] # zfs snapshot -r rpool/mysql-d...@mysql-snap1
[Server A] # zfs snapshot -r rpool/mysql-d...@mysql-snap2
   ...
[Server A] # zfs snapshot -r rpool/mysql-d...@mysql-snap4

Between each snapshot I changed the content of my database (dropped 
tables, created new ones, updated or deleted rows, etc.).


2) Send the snapshots to the remote server:

[Server A] # zfs send -R rpool/mysql-d...@mysql-snap1 | ssh -l mysql 
ServerB /usr/sbin/zfs receive -du npool

This will send the full data stream for the first time.

Here is my first question: concerning the incremental sends, are the two 
following solutions equivalent, or is there a nuance I missed between -i 
and -I?


   [Server A] # zfs send -R -I mysql-snap1 rpool/mysql-d...@mysql-snap4 | ssh -l mysql ServerB /usr/sbin/zfs receive -du npool

 or

   [Server A] # zfs send -R -i mysql-snap1 rpool/mysql-d...@mysql-snap2 | ssh -l mysql ServerB /usr/sbin/zfs receive -du npool
   [Server A] # zfs send -R -i mysql-snap2 rpool/mysql-d...@mysql-snap3 | ssh -l mysql ServerB /usr/sbin/zfs receive -du npool
   [Server A] # zfs send -R -i mysql-snap3 rpool/mysql-d...@mysql-snap4 | ssh -l mysql ServerB /usr/sbin/zfs receive -du npool
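
Either way, a quick sanity check on the receiving side shows what actually
arrived; with -R plus -I, or with -R plus the three consecutive -i sends, all
of the intermediate snapshots should end up present under npool:

   [Server A] # ssh -l mysql ServerB /usr/sbin/zfs list -t snapshot -r npool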


3) Restore from the remote server

But here is my main question: I can obviously restore my database data 
directory from the remote server:


[ServerB] # zfs send -R npool/mysql-d...@mysql-snap4 | ssh -l mysql 
ServerA /usr/sbin/zfs receive -d rpool


However, this only works if I recreate my data directory from scratch. I 
must either destroy the entire file system on ServerA or rename it 
before the restore.
The problem is that I need to resend the entire dataset over the network 
to do this full restore. That's OK for small databases, but I'm thinking 
more of real life, with multi-gigabyte databases.


Is there a way to do a simple incremental receive, the same way we did 
an incremental send?

Let me give a precise scenario:

Imagine I have destroyed my pool/mysql-d...@mysql-snap4 on ServerA, and 
a user deleted a table by mistake on the DB. I would like to go back to 
the state of snap4. Since the snapshot is not on ServerA, I cannot just 
do a rollback. However, the snapshot still exists on ServerB. I would just 
want to restore snap4, but it looks as if this is impossible: ZFS asks 
me to first destroy (or rename) the existing file system on ServerA, so 
the restore will be a complete restore, not an incremental one.


Can you please let me know if I got something wrong? Is it possible to 
do what I want to do with ZFS?
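
For what it's worth, here is a sketch of the incremental path the question
seems to be after. It assumes ServerA still has at least one snapshot in
common with ServerB (snap3 below), and uses a made-up dataset name (mydata)
in place of the real one:

# ServerA still has @mysql-snap3 but has lost @mysql-snap4; ServerB has both.
# Send only the snap3->snap4 increment back; 'zfs receive -F' first rolls the
# target back to snap3, discarding anything written on ServerA after snap3.
[ServerB] # zfs send -i mysql-snap3 npool/mydata@mysql-snap4 | ssh -l mysql ServerA /usr/sbin/zfs receive -F rpool/mydata
# After the receive, rpool/mydata on ServerA is at the snap4 state and
# @mysql-snap4 exists locally again, so a later 'zfs rollback' would also work.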



Regards
Amir


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread Bob Friesenhahn

On Tue, 1 Sep 2009, John-Paul Drawneek wrote:


i did not migrate my disks.

I now have 2 pools - rpool is at 60% as is still dog slow.

Also scrubbing the rpool causes the box to lock up.


This sounds like a hardware problem and not something related to 
fragmentation.  Probably you have a slow/failing disk.
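
A few quick, non-destructive checks that usually point at a sick disk (a 
sketch; adjust the pool name):

  iostat -En               # per-device soft/hard/transport error counters
  iostat -xn 5             # watch for one disk with asvc_t far above its peers
  fmdump -eV | tail -40    # recent FMA error telemetry, if any
  zpool status -v rpool    # read/write/cksum errors charged to a device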


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Jason
So aside from the NFS debate, would this 2-tier approach work?  I am a bit 
fuzzy on how I would get the RAIDZ2 redundancy but still present the volume to 
the VMware host as a raw device.  Is that possible, or is my understanding 
wrong?  Also, could it be defined as a clustered resource?


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Richard Elling

On Sep 1, 2009, at 11:45 AM, Jason wrote:

So aside from the NFS debate, would this 2 tier approach work?  I am  
a bit fuzzy on how I would get the RAIDZ2 redundancy but still  
present the volume to the VMware host as a raw device.  Is that  
possible or is my understanding wrong?  Also could it be defined as  
a clustered resource?


The easiest and proven method is to use shared disks, two heads,
ZFS, and Open HA Cluster to provide highly available NFS or iSCSI
targets. This is the fundamental architecture for most HA implementations.

An implementation, which does not use Open HA Cluster, is available
in appliance form as the Sun Storage 7310 or 7410 Cluster System.
But if you are building your own, Open HA Cluster may be a better
choice than rolling your own cluster software.
 -- richard



Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread Bob Friesenhahn

On Tue, 1 Sep 2009, Jpd wrote:


Thanks.

Any idea on how to work out which one.

I can't find smart in ips, so what other ways are there?


You could try using a script like this one to find pokey disks:

#!/bin/ksh

# Date: Mon, 14 Apr 2008 15:49:41 -0700
# From: Jeff Bonwick jeff.bonw...@sun.com
# To: Henrik Hjort hj...@dhs.nu
# Cc: zfs-discuss@opensolaris.org
# Subject: Re: [zfs-discuss] Performance of one single 'cp'
# 
# No, that is definitely not expected.
# 
# One thing that can hose you is having a single disk that performs

# really badly.  I've seen disks as slow as 5 MB/sec due to vibration,
# bad sectors, etc.  To see if you have such a disk, try my diskqual.sh
# script (below).  On my desktop system, which has 8 drives, I get:
# 
# # ./diskqual.sh

# c1t0d0 65 MB/sec
# c1t1d0 63 MB/sec
# c2t0d0 59 MB/sec
# c2t1d0 63 MB/sec
# c3t0d0 60 MB/sec
# c3t1d0 57 MB/sec
# c4t0d0 61 MB/sec
# c4t1d0 61 MB/sec
# 
# The diskqual test is non-destructive (it only does reads), but to

# get valid numbers you should run it on an otherwise idle system.

disks=`format </dev/null | grep ' c.t' | nawk '{print $2}'`

getspeed1()
{
	ptime dd if=/dev/rdsk/${1}s0 of=/dev/null bs=64k count=1024 2>&1 |
	    nawk '$1 == "real" { printf("%.0f\n", 67.108864 / $2) }'
}

getspeed()
{
# Best out of 6
for iter in 1 2 3 4 5 6
do
getspeed1 $1
done | sort -n | tail -2 | head -1
}

for disk in $disks
do
echo $disk `getspeed $disk` MB/sec
done
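
If it helps, one way to run it (assuming the script above was saved as 
diskqual.sh; it only reads, so it is non-destructive, but run it on an 
otherwise idle box):

  pfexec ksh ./diskqual.sh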


--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Jason
True, though an enclosure for shared disks is expensive.  This isn't for 
production but for me to explore what I can do with x86/x64 hardware.  The idea 
being that I can just throw up another x86/x64 box to add more storage.  Has 
anyone tried anything similar?


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Tim Cook
On Tue, Sep 1, 2009 at 2:17 PM, Jason wheelz...@hotmail.com wrote:

 True, though an enclosure for shared disks is expensive.  This isn't for
 production but for me to explore what I can do with x86/x64 hardware.  The
 idea being that I can just throw up another x86/x64 box to add more storage.
  Has anyone tried anything similar?



I still don't understand why you need this two-layer architecture.  Just add
a server to the mix, and add the new storage to VMware.  If you're doing
iSCSI, you'll hit the LUN size limitations long before you'll need a second
box.

--Tim


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Richard Elling

On Sep 1, 2009, at 12:17 PM, Jason wrote:

True, though an enclosure for shared disks is expensive.  This isn't  
for production but for me to explore what I can do with x86/x64  
hardware.  The idea being that I can just throw up another x86/x64  
box to add more storage.  Has anyone tried anything similar?


You mean something like this?
   disk <---> server ---+
                        +-- server --- network --- client
   disk <---> server ---+

I'm not sure how that can be less expensive in the TCO sense.
 -- richard



Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Jason
I guess I should come at it from the other side:

If you have 1 iscsi target box and it goes down, you're dead in the water.

If you have 2 iscsi target boxes that replicate and one dies, you are OK but 
you then have to have a 2:1 total storage to usable ratio (excluding expensive 
shared disks).

If you have 2 tiers where you have n cheap back-end iSCSI targets that have the 
physical disks in them and present them to 2 clustered virtual iSCSI target 
servers (assuming this can be done with disks over iSCSI) that are presenting 
the iSCSI targets to the VMware hosts, then any one server could go down but 
everything would keep running.  It would create a virtual clustered pair that 
is basically doing RAID over the network (iSCSI).  Since you already have the 
VMware hosts, the 2 virtual ones are free.  None of the back-end servers 
would need redundant components because any one can fail, so you should be able 
to build them with inexpensive parts.  

This would also allow you to add/replace storage easily (I hope).  Perhaps 
you'd have to RAIDZ the backend disks together and then present them to the 
front-end which would RAIDZ all the back-ends together.  For example, if you 
had 5 backend boxes with 8 drives each you'd have a 10:7 ratio.  I'm sure the 
RAID combinations could be played with to get the balance of redundancy and 
capacity that you need.  I don't know what kind of performance hit you would 
take doing that over iSCSI but I thought it might work as long as you have 
gigabit speeds.  Or I could be completely off my rocker. :) Am I?
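
Purely to make the moving parts concrete, and not an endorsement - the 
addresses, device names and sizes below are made up, and this assumes the 
OpenSolaris iSCSI initiator plus COMSTAR on the front-end box:

# 1. Point the front-end's initiator at the cheap back-end targets.
iscsiadm add discovery-address 192.168.10.11:3260
iscsiadm add discovery-address 192.168.10.12:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                       # the remote LUNs appear as local disks

# 2. Build the redundancy across back-end boxes, not inside them.
zpool create tank raidz c2t1d0 c3t1d0 c4t1d0    # placeholder device names

# 3. Carve out a zvol and re-export it to the VMware hosts over iSCSI.
zfs create -V 500G tank/vmfs01
sbdadm create-lu /dev/zvol/rdsk/tank/vmfs01
stmfadm add-view <GUID-printed-by-sbdadm>
itadm create-target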


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Scott Meilicke
You are completely off your rocker :)

No, just kidding. Assuming the virtual front-end servers are running on 
different hosts, and you are doing some sort of raid, you should be fine. 
Performance may be poor due to the inexpensive targets on the back end, but you 
probably know that. A while back I thought of doing similar stuff using local 
storage on my ESX hosts, and abstracting that with an OpenSolaris VM and 
iSCSI/NFS.

Perhaps consider inexpensive but decent NAS/SAN devices from Synology. They are 
not expensive, offer NFS and iSCSI, and you can also replicate/backup between 
two of them using rsync. Yes, you would be 'wasting' the storage space by 
having two, but like I said, they are inexpensive. Then you would not have the 
two layer architecture.  

I just tested a two disk model, using ESXi 3.5u4 and a Windows VM. I used 
iometer, realworld test, and IOs were about what you would expect from mirrored 
7200 SATA drives - 138 IOPS, about 1.1 Mbps. The internal CPU was around 20%, 
RAM usage was 128MB out of the 512MB on board, so it was disk limited. 

The Dell 2950 that I have 2009.06 installed on (16GB of RAM and an LSI HBA with 
an external SAS enclosure) with a single mirror using two 7200 drives gave me 
about 200 IOPS using the same test, presumably because of the large amounts of 
RAM for the L2ARC cache.

-Scott


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Richard Elling

On Sep 1, 2009, at 1:28 PM, Jason wrote:


I guess I should come at it from the other side:

If you have 1 iscsi target box and it goes down, you're dead in the  
water.


Yep.

If you have 2 iscsi target boxes that replicate and one dies, you  
are OK but you then have to have a 2:1 total storage to usable ratio  
(excluding expensive shared disks).


Servers cost more than storage, especially when you consider power.

If you have 2 tiers where you have n cheap back-end iSCSI targets  
that have the physical disks in them and present them to 2 clustered  
virtual iSCSI target servers (assuming this can be done with disks  
over iSCSI) that are presenting the iSCSI targets to the VMware  
hosts, then any one server could go down but everything would keep  
running.  It would create a virtual clustered pair that is basically  
doing RAID over the network (iSCSI).  Since you already have the  
VMware hosts, the 2 virtual ones are free.  None of the back-end  
servers would need redundant components because any one can fail, so  
you should be able to build them with inexpensive parts.


This will certainly work. But it is, IMHO, too complicated to be effective
at producing high availability services. Too many parts means too many
opportunities for failure (yes, even VMware fails). The problem with your
approach is that you seem to only be considering failures of the type
"it's broke, so it is completely dead." Those aren't the kind of failures
that dominate real life.

When we design highly available systems for the datacenter, we spend
a lot of time on rapid recovery. We know things will break, so we try to
build systems and processes that can recover as quickly as possible. This
leads to the observation that reliability trumps redundancy -- though we
build fast recovery systems, it is better to not need to recover. Hence we
developed dependability benchmarks which expose the cost/dependability
trade-offs. More reliable parts tend to cost more, but the best approach is
to have fewer reliable parts rather than more unreliable parts.

This would also allow you to add/replace storage easily (I hope).   
Perhaps you'd have to RAIDZ the backend disks together and then  
present them to the front-end which would RAIDZ all the back-ends  
together.  For example, if you had 5 backend boxes with 8 drives  
each you'd have a 10:7 ratio.  I'm sure the RAID combinations could  
be played with to get the balance of redundancy and capacity that  
you need.  I don't know what kind of performance hit you would take  
doing that over iSCSI but I thought it might work as long as you  
have gigabit speeds.  Or I could be completely off my rocker. :) Am I?


Don't worry about bandwidth. It is the latency that will kill performance.
Adding more stuff between your CPU and the media means increasing latency.
 -- richard



[zfs-discuss] high speed at 7,200 rpm

2009-09-01 Thread Richard Elling

FYI,
Western Digital shipping high-speed 2TB hard drive
http://news.cnet.com/8301-17938_105-10322886-1.html?tag=newsEditorsPicksArea.0

I'm not sure how many people think 7,200 rpm is high speed
but, hey, it is better than 5,900 rpm :-)
 -- richard



Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-09-01 Thread Adam Leventhal

Hi James,

After investigating this problem a bit I'd suggest avoiding deploying RAID-Z
until this issue is resolved. I anticipate having it fixed in build 124.

Apologies for the inconvenience.

Adam

On Aug 28, 2009, at 8:20 PM, James Lever wrote:



On 28/08/2009, at 3:23 AM, Adam Leventhal wrote:

There appears to be a bug in the RAID-Z code that can generate  
spurious checksum errors. I'm looking into it now and hope to have  
it fixed in build 123 or 124. Apologies for the inconvenience.


Are the errors being generated likely to cause any significant  
problem running 121 with a RAID-Z volume or should users of RAID-Z*  
wait until this issue is resolved?


cheers,
James




--
Adam Leventhal, Fishworks            http://blogs.sun.com/ahl



Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-09-01 Thread James Lever


On 02/09/2009, at 9:54 AM, Adam Leventhal wrote:

After investigating this problem a bit I'd suggest avoiding  
deploying RAID-Z
until this issue is resolved. I anticipate having it fixed in build  
124.


Thanks for the status update on this Adam.

cheers,
James



Re: [zfs-discuss] order bug, legacy mount and nfs sharing

2009-09-01 Thread Masafumi Ohta


On 2009/09/01, at 22:15, kurosan wrote:


Hi kurosan,



I met the same but probably it cannot work.
check zfs get all your_pool_mounted_/pathname
you can see 'mountpoint' is 'legacy'
so you have to use zfs sharenfs=on again to try


Hi,
thanks for the reply... I've had time only today to retry.
I've re-enabled zfs sharenfs=on but nfs server doesn't start because  
it can't find the shared directories... if I go back to automount  
the zfs filesystem the nfs server works again...
Can't go around this... maybe I'll simply wait for the mount order  
bug to be fixed.

Tnx for your help.



Hi,

Sorry, I didn't explain in detail.
Have you set the mountpoint again with zfs set mountpoint?
If your mountpoint is still set to 'legacy', you have to change it with
zfs set mountpoint=$MOUNTPOINT your_pool_mounted_/pathname

and then zfs set sharenfs=on

e.g.  zfs set mountpoint=/tank rpool/tank
      zfs set sharenfs=on rpool/tank
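
And to double-check the result afterwards (same example names):

  zfs get mountpoint,sharenfs rpool/tank   # should no longer show 'legacy' / 'off'
  share                                    # the new mountpoint should be listed
  showmount -e localhost                   # what NFS clients will actually see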



P.S.: I'm italian and not japanese ;)


UUps! sorry :(




