Re: [zfs-discuss] iscsitadm local_name in ZFS

2007-05-11 Thread Adam Leventhal
That would be a great RFE. Currently the iSCSI Alias is the dataset name
which should help with identification.
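
For reference, a minimal sketch of how that alias shows up today -- the zvol
name tank/vol0 and its size are only placeholders:

  # create a zvol and share it over iSCSI; the target's Alias is set to
  # the dataset name
  zfs create -V 10G tank/vol0
  zfs set shareiscsi=on tank/vol0

  # list the targets the iSCSI target daemon now exposes; the Alias field
  # should read "tank/vol0"
  iscsitadm list target -v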

Adam

On Fri, May 04, 2007 at 02:02:34PM +0200, cedric briner wrote:
> cedric briner wrote:
> >hello dear community,
> >
> >Is there a way to have a ``local_name'' as defined in iscsitadm.1m when 
> >you shareiscsi a zvol? This would give an even easier 
> >way to identify a device through its IQN.
> >
> >Ced.
> >
> 
> Okay, no reply from you, so... maybe I didn't make myself clear.
> 
> Let me try to re-explain what I mean:
> when you use a zvol and enable shareiscsi, could you add a suffix to the 
> IQN (iSCSI Qualified Name)? This suffix would be given by me and would 
> help me identify which IQN corresponds to which zvol: it is just a 
> more human-readable tag on an IQN.
> 
> Similarly, this tag is also given when you use iscsitadm, and in the 
> man page of iscsitadm it is called a local_name.
> 
> iscsitadm create target -b /dev/dsk/c0d0s5 tiger
> or
> iscsitadm create target -b /dev/dsk/c0d0s5 hd-1
> 
> tiger and hd-1 are local_names.
> 
> Ced.
> 
> -- 
> 
> Cedric BRINER
> Geneva - Switzerland

-- 
Adam Leventhal, Solaris Kernel Development   http://blogs.sun.com/ahl
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Snapshot "destroy to"

2007-05-11 Thread Matthew Ahrens

Jason J. W. Williams wrote:

Hi Mark,

Thank you very much. That's what I was kind of afraid of. It's fine to
script it; it just would be nice to have a built-in function. :-) Thank
you again. 


Note, when writing such a script, you will get the best performance by 
destroying the snapshots in order from oldest to newest.  (And FYI, 'zfs 
destroy -R' does the snapshot destroys in this order too.)
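
A rough sketch of such a script (untested), using the -p creation time Mark
mentions and destroying oldest-first as suggested above; the dataset name
tank/fs and the 30-day cutoff are only placeholders, and perl stands in for a
seconds-since-epoch date command on Solaris:

  #!/bin/sh
  FS=tank/fs
  # cutoff = now minus 30 days, in seconds since the epoch
  CUTOFF=`perl -e 'print time() - 30*24*60*60'`

  # list snapshots of $FS with their creation times, keep only those older
  # than the cutoff, sort oldest-first, then destroy them in that order
  zfs list -H -t snapshot -o name | grep "^$FS@" | while read snap; do
      ctime=`zfs get -H -p -o value creation "$snap"`
      [ "$ctime" -lt "$CUTOFF" ] && echo "$ctime $snap"
  done | sort -n | while read ctime snap; do
      zfs destroy "$snap"
  done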


--matt



Best Regards,
Jason

On 5/11/07, Mark J Musante <[EMAIL PROTECTED]> wrote:

On Fri, 11 May 2007, Jason J. W. Williams wrote:

> Is it possible (or even technically feasible) for zfs to have a "destroy
> to" feature? Basically destroy any snapshot older than a certain date?

Sorta-kinda.  You can use 'zfs get' to get the creation time of a
snapshot.  If you give it -p, it'll provide the seconds-since-epoch time
so, with a little fancy footwork, this is scriptable.


Regards,
markm


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is this a workable ORACLE disaster recovery solution?

2007-05-11 Thread Matthew Ahrens

Bruce Shaw wrote:

Mark J Musante [EMAIL PROTECTED] wrote:


Maybe I'm misunderstanding what you're saying, but 'zfs clone' is exactly
the way to mount a snapshot.  Creating a clone uses up a negligible amount
of disk space, provided you never write to it.  And you can always set
readonly=on if that's a concern.

So something like:

zfs snapshot fastsan/[EMAIL PROTECTED]
zfs clone fastsan/[EMAIL PROTECTED] fastsan/zfs3/night
zfs set readonly=on fastsan/zfs3/night

...do backup...

zfs destroy fastsan/zfs3/night


Yep.  Don't forget to destroy the snapshot as well, if you want your 
space back ('zfs destroy fastsan/[EMAIL PROTECTED]').


That said, if it works to point Legato at the .zfs/snapshot/nightly 
directory, then that seems like fewer steps.

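If the .zfs route is chosen, the only knob that may need touching is the
snapdir property (hidden by default, though the path works either way); a
sketch, reusing the fastsan/zfs3 names from the example above:

  # make the per-filesystem snapshot directory show up in listings
  zfs set snapdir=visible fastsan/zfs3

  # Legato (or any other reader) can then be pointed at the snapshot's
  # read-only image under the filesystem's mountpoint:
  #   <mountpoint-of-fastsan/zfs3>/.zfs/snapshot/nightly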

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Snapshot "destroy to"

2007-05-11 Thread Jason J. W. Williams

Hi Mark,

Thank you very much. That's what I was kind of afraid of. It's fine to
script it; it just would be nice to have a built-in function. :-) Thank
you again.

Best Regards,
Jason

On 5/11/07, Mark J Musante <[EMAIL PROTECTED]> wrote:

On Fri, 11 May 2007, Jason J. W. Williams wrote:

> Is it possible (or even technically feasible) for zfs to have a "destroy
> to" feature? Basically destroy any snapshot older than a certain date?

Sorta-kinda.  You can use 'zfs get' to get the creation time of a
snapshot.  If you give it -p, it'll provide the seconds-since-epoch time
so, with a little fancy footwork, this is scriptable.


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Snapshot "destroy to"

2007-05-11 Thread Mark J Musante
On Fri, 11 May 2007, Jason J. W. Williams wrote:

> Is it possible (or even technically feasible) for zfs to have a "destroy
> to" feature? Basically destroy any snapshot older than a certain date?

Sorta-kinda.  You can use 'zfs get' to get the creation time of a
snapshot.  If you give it -p, it'll provide the seconds-since-epoch time
so, with a little fancy footwork, this is scriptable.


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Snapshot "destroy to"

2007-05-11 Thread Jason J. W. Williams

Hey All,

Is it possible (or even technically feasible) for zfs to have a
"destroy to" feature? Basically destroy any snapshot older than a
certain date?

Best Regards,
Jason
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lost in boot loop..

2007-05-11 Thread Lori Alt

zfs boot should work on b62, but does not work on b63 (see bug
6553537).  This bug is supposed to be fixed in b65 (I'm testing the
most recent nevada bits today to verify the fix).

I'm not sure what's up with the build 62 problem that Steffen
is having.  Steffen, if you'll send me more information I'll try
to help figure it out.

Lori


Matthew B Sweeney - Sun Microsystems Inc. wrote:

Hey Steve,

Not that I can help you out, but I'm in the same boat. I'm using nv63 
with the zfsbootkit.  I built a DVD after patching the netinstall.  
The instructions work fine, but I get the boot loop you describe. I 
haven't been able to catch the error yet, even with a boot 
-allthezfsstuff  -avr.  I can see it reads the disk and loads the 
ata driver, and then something goes by very quickly and I get into the 
loop.


I'm running nv B63 on a Toshiba Tecra M3.  Previously I had a windoze 
partition in addition to the Solaris partition.  I've tried with the 
whole disk to see if that improves the situation, but alas, I'm still 
looping.

Matt



Steffen Weinreich wrote:

Hi!

I have installed snv_62 a few days ago and thought to give zfs boot a 
try. I followed the manual instructions and was able to get 
everything up and running with a zfs boot/root environment.
After some tweaking (adding a 2nd disk to the rootpool, moving some 
files from some old zfs fs to the rootpool etc.) I tried a reboot 
to see if everything comes up again, and got a boot loop. The error 
message isn't catchable, but I suppose it is a kernel panic since it 
does not find its root fs. The system is an Intel P4-based NN system with 
2 PATA disks @ 250GB on c0d0 and c0d1 and 2 SATA disks @ 80GB on 
c2d0 and c3d0. The root pool is a mirror of c0d0s0 and c0d1s0, and I am 
able to boot into failsafe mode and import the rootpool there. 
Any hints on how I can tell the boot process to find its root fs?


cheerio
   Steve
 
 
This message posted from opensolaris.org



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lost in boot loop..

2007-05-11 Thread Matthew B Sweeney - Sun Microsystems Inc.

Hey Steve,

Not that I can help you out, but I'm in the same boat. I'm using nv63 
with the zfsbootkit.  I built a DVD after patching the netinstall.  
The instructions work fine, but I get the boot loop you describe. I 
haven't been able to catch the error yet, even with a boot 
-allthezfsstuff  -avr.  I can see it reads the disk and loads the ata 
driver, and then something goes by very quickly and I get into the loop.


I'm running nv B63 on a Toshiba Tecra M3.  Previously I had a windoze 
partition in addition to the Solaris partition.  I've tried with the whole 
disk to see if that improves the situation, but alas, I'm still looping. 


Matt



Steffen Weinreich wrote:

Hi!

I have installed svn_62 a few days ago and thought to give zfs boot a try. I followed the manual instructions and was able to get everything up and running with a zfs boot/root environment. 

After some tweaking (adding 2. Disk to the rootpool, moving some files from some old zfs fs to the rootpool etc ) I've tried a reboot to see if everything comes up again, I got boot loop. The error message isn't detectable, but I suppose it is a kernel panic since he does not find his root fs 
System is a Intel PiV based NN system with 2 PATA Disks @ 250GB on c0d0 and c0d1  and 2 SATA Disks @80 GB on c2d0 and c3d0. The root pool is on mirror c0d0s0 and c0d1s0 and I am able to boot into the failsafe mode and import the rootpool there.  


Any hints how I can tell the boot process to find it's root fs?

cheerio
   Steve
 
 
This message posted from opensolaris.org



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: How does ZFS write data to disks?

2007-05-11 Thread Neil . Perrin

lonny wrote:

On May 11, 2007, at 9:09 AM, Bob Netherton wrote:

**On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
**I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
** around 5 seconds. I assume it's just something to do with caching?

^Yep - the ZFS equivalent of fsflush.  Runs more often so the pipes don't
^get as clogged.   We've had lots of rain here recently, so I'm sort of
^sensitive to stories of clogged pipes.
^
**Is this behavior ok? seems it would be better to have the disks writing
** the whole time instead of in bursts.
^
^Perhaps - although not in all cases (probably not in most cases).
^Wouldn't it be cool to actually do some nice sequential writes to
^the sweet spot of the disk bandwidth curve, but not depend on it
^so much that a single random I/O here and there throws you for
^a loop ?
^
^Human analogy - it's often more wise to work smarter than harder :-)
^
^Directly to your question - are you seeing any anomalies in file
^system read or write performance (bandwidth or latency) ?

^Bob


No performance problems so far; the thumper and zfs seem to handle everything 
we throw at them. On the T2000 internal disks we were seeing a bottleneck when 
using a single disk for our apps, but moving to a 3-disk raidz alleviated that.

The only issue is that when using iostat commands the bursts make it a little harder 
to gauge performance. Is it safe to assume that if those bursts were to reach 
the upper performance limit, the writes would get spread out a bit more?


The burst of activity every 5 seconds is when the transaction group is 
committed.
Batching up the writes in this way can lead to a number of efficiencies (as Bob 
hinted).
With heavier activity the writes will not get spread out, but will just take 
longer.
Another way to look at the gaps of IO inactivity is that they indicate 
underutilisation.

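A quick way to watch that cadence directly, rather than inferring it from
drive lights (the pool name is whatever yours is, vault1 in the earlier post):

  # one-second samples; write bandwidth should spike roughly every 5 seconds
  # while a txg commits, and drop back down in between
  zpool iostat vault1 1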

Neil.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: How does ZFS write data to disks?

2007-05-11 Thread lonny
On May 11, 2007, at 9:09 AM, Bob Netherton wrote:

**On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
**I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
** around 5 seconds. I assume it's just something to do with caching?

^Yep - the ZFS equivalent of fsflush.  Runs more often so the pipes don't
^get as clogged.   We've had lots of rain here recently, so I'm sort of
^sensitive to stories of clogged pipes.
^
**Is this behavior ok? seems it would be better to have the disks writing
** the whole time instead of in bursts.
^
^Perhaps - although not in all cases (probably not in most cases).
^Wouldn't it be cool to actually do some nice sequential writes to
^the sweet spot of the disk bandwidth curve, but not depend on it
^so much that a single random I/O here and there throws you for
^a loop ?
^
^Human analogy - it's often more wise to work smarter than harder :-)
^
^Directly to your question - are you seeing any anomalies in file
^system read or write performance (bandwidth or latency) ?

^Bob


No performance problems so far; the thumper and zfs seem to handle everything 
we throw at them. On the T2000 internal disks we were seeing a bottleneck when 
using a single disk for our apps, but moving to a 3-disk raidz alleviated that.

The only issue is that when using iostat commands the bursts make it a little harder 
to gauge performance. Is it safe to assume that if those bursts were to reach 
the upper performance limit, the writes would get spread out a bit more?

thanks
lonny
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Optimal strategy (add or replace disks) to build a cheap and raidz?

2007-05-11 Thread Pål Baltzersen
Just my problem too ;) And ZFS disappointed me big time here!
I know ZFS is new and not every desired feature is implemented yet. I hope and 
believe more features are coming "soon", so I think I'll stay with ZFS and 
wait..

My idea was to start out with just as many state-of-the-art-size disks as I really 
needed and could afford, and add disks as prices dropped and the zpool grew near 
full.
So I bought 4 Seagate 500GB drives. Now they are full, and meanwhile the price has 
dropped to ~1/3 and will continue to drop to ~1/5 I expect (I've just seen 1TB disks in 
stock at the same price the 500GB started at when released ~2 years ago).
I thought I could buy one disk at a time and expand the raidz. I have realized that 
is (currently) *not* an option! -- You can't (currently) add (attach) disks to 
a raidz vdev - period!

What you can do, i.e. the only thing you can do (currently), is add a new 
raidz to an existing pool, somewhat like concatenating two or more raidz vdevs 
into a logical volume.
So ZFS does not help you here; you'll have to buy an economically optimal 
bunch of disks each time you run out of space and group them into a new raidz 
vdev each time. The new raidz vdev may be part of an/the existing pool (volume) 
(most likely), or a new one.
So with 8-port controller(s) you'd buy 4+4+4+4 or 4+4+8 or 8+8+8 or any number 
greater than 3 that fits your need and wallet at the time. For each set you lose one 
disk to redundancy. Buying 5+1+1+1+1... is not an option (yet).
Alternatively, of course, you could buy 2+2+2+2... and add mirrored pairs to the 
pool, but then you lose 50% to redundancy, which is not a budget approach..
Buying 4+4+4+4 gives you at best 75% usable space for your money ((N-1)/N for 
each set, i.e. 3/4); that is when your pool is 100% full. But if your usage 
grows slowly from nothing, then adding mirror pairs could actually be more 
economic, and if it accelerates you could later add groups of raidz1 or raidz2.

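For completeness, a sketch of what that currently looks like at the command
line -- growing a pool by whole top-level vdevs rather than by single disks
(tank and the cXtYdZ device names are made up):

  # initial pool: one 4-disk raidz vdev
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

  # later: the only supported way to grow it is to add another complete
  # top-level vdev, e.g. a second 4-disk raidz set
  zpool add tank raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # what is *not* possible today is widening the existing raidz vdev by
  # attaching a single new disk; zpool attach only applies to mirrors
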
Note! You can't even undo what you have added to a pool. Being able to 
evacuate a vdev and replace it with a bigger one would have helped. But this 
isn't possible either (currently).

I'd really like to see adding disks to a raidz vdev implemented. I'm no expert 
on the details, but having read a bit about ZFS, I think it shouldn't be that 
hard to implement (just extremely I/O intensive while in progress - like a scrub 
where every stripe needed correction). It should be possible to read 
stripe-by-stripe and recalculate/reshape to a more narrow but longer stripe 
spanning the added disk(s), and as more space is added and the recalculated 
stripes would be narrower (at least not wider), everything should fit as a 
sequential process. One would need some persistent way to keep track of the 
progress that would survive and resume after power loss, panic etc. A 
bitmap could do. Labeling the stripes with a version could be a way to 
make it possible to have a mix of old short and new longer stripes coexisting 
for a while, say write new stripes (i.e. files) with the new size and 
recalculate and reshape everything as (an optional) part of the next scrub. A 
constraint would probably be that you would each time have to add at least as 
much space as your biggest file (file|metadata=stripe as far as I have 
understood) -- at least true if the reshape process could save the 
biggest/temporarily non-fitting stripes for the end of the process, to make sure 
there is always one good copy of every stripe on disk at any time, which is 
much of the point of ZFS.
An implementation of something like this would be *very* welcome!
I would then also like to be able to convert my initial raidz1 to raidz2, so I 
could, ideally, start with a 2-disk raidz1 and end up with a giant raidz2, then 
split it into a reasonable number of disks per group, start a new raidz1 
growing from 2 disks every 10 disks or so, and probably at the same time step 
up to the then-current state-of-the-art disk size for each new vdev (and, just before I 
run out of slots, start replacing the by-then ridiculously small disks (and 
slow controller) in the first raidz, and thus grow forever without necessarily 
needing a bigger chassis or more rack units).

Backing up the whole thing, destroying and recreating the pool, and restoring 
everything every couple of months isn't really an option for me..
Actually I have no clue how to back up such a thing on a private budget. 
Tape drives that could cope are way too expensive, and tapes aren't that cheap 
compared to mid-range SATA disks.. The best thing I can come up with is rsync to a 
clone system (built around your old PC/server but with similar disk capacity; 
less or no redundancy could do, since with this budget HW there is no 
significantly cheaper way to build a downscaled clone except reducing/reusing 
old CPU and RAM and so on).

-- And by the way, yes, I think this applies to professional use too. It could 
give substantial savings at any scale. Buying things you don't really need until 
next year has usually been a 50%

Re: [zfs-discuss] Re: How does ZFS write data to disks?

2007-05-11 Thread Bob Netherton
On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
> I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
>  around 5 seconds. I assume it's just something to do with caching? 

Yep - the ZFS equivalent of fsflush.  Runs more often so the pipes don't
get as clogged.   We've had lots of rain here recently, so I'm sort of
sensitive to stories of clogged pipes.

> Is this behavior ok? seems it would be better to have the disks writing
>  the whole time instead of in bursts.

Perhaps - although not in all cases (probably not in most cases). 
Wouldn't it be cool to actually do some nice sequential writes to
the sweet spot of the disk bandwidth curve, but not depend on it
so much that a single random I/O here and there throws you for
a loop ?

Human analogy - it's often more wise to work smarter than harder :-)

Directly to your question - are you seeing any anomalies in file
system read or write performance (bandwidth or latency) ?

Bob



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: How does ZFS write data to disks?

2007-05-11 Thread lonny
I've noticed a similar behavior in my writes. ZFS seems to write in bursts of 
around 5 seconds. I assume it's just something to do with caching? I was 
watching the drive lights on the T2000s with a 3-disk raidz, and the disks all 
blink for a couple of seconds, then are solid for a few seconds. 

Is this behavior OK? It seems it would be better to have the disks writing the 
whole time instead of in bursts.

On my thumper
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vault1      10.7T  8.32T    108    561  7.23M  24.8M
vault1      10.7T  8.32T    108    152  2.68M  5.90M
vault1      10.7T  8.32T    143    177  6.49M  11.4M
vault1      10.7T  8.32T    147    429  6.59M  27.0M
[b]vault1      10.7T  8.32T    111  3.89K  2.84M   131M[/b]
vault1      10.7T  8.32T     74    151   460K  6.72M
vault1      10.7T  8.32T    103    180  1.71M  7.21M
vault1      10.7T  8.32T    119    144   832K  5.69M
vault1      10.7T  8.32T    110    185  2.51M  4.75M
[b]vault1      10.7T  8.32T     94  2.17K  1.07M   137M
vault1      10.7T  8.32T     36  2.87K   354K  24.9M[/b]
vault1      10.7T  8.32T     69    140  3.36M  6.00M
vault1      10.7T  8.32T     60    177  4.78M  12.9M
vault1      10.7T  8.32T     90    198  2.82M  5.22M
[b]vault1      10.7T  8.32T     94  1.12K  2.22M  18.1M
vault1      10.7T  8.32T     37  3.79K  2.06M   130M[/b]
vault1      10.7T  8.32T     88    254  2.43M  10.2M
vault1      10.7T  8.32T    137    147  3.64M  7.05M
vault1      10.7T  8.32T    307    415  5.84M  9.38M
[b]vault1      10.7T  8.32T    132  4.13K  2.26M   158M
vault1      10.7T  8.32T     57  1.45K  1.89M  13.2M[/b]
vault1      10.7T  8.32T     78    148   577K  8.47M
vault1      10.7T  8.32T     17    159   749K  6.26M
vault1      10.7T  8.32T     74    248   598K  6.56M
[b]vault1      10.7T  8.32T    178  1.20K  1.62M  23.8M
vault1      10.7T  8.32T     46  5.23K  1.01M   168M[/b]
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool create -f ... fails on disk with previous UFS on it

2007-05-11 Thread eric kustarz


On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:


Hi,

I have a test server that I use for testing my different jumpstart  
installations. This system is continuously installed and  
reinstalled with different system builds.
For some builds I have a finish script that creates a zpool using  
the utility found in the Solaris 10 update 3 miniroot.


I have found an issue where the zpool command fails to create a new  
zpool if the system previously had a UFS filesystem on the same slice.


The command and error is:

zpool create -f -R /a -m /srv srv c1t0d0s6
cannot create 'srv': one or more vdevs refer to the same device



Works fine for me:
# df -kh
Filesystem size   used  avail capacity  Mounted on
/dev/dsk/c1t1d0s0   17G   4.1G13G24%/
...
/dev/dsk/c1t1d0s6   24G24M24G 1%/zfs0
# umount /zfs0
# zpool create -f -R /a -m /srv srv c1t1d0s6
# zpool status
  pool: srv
state: ONLINE
scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
srv ONLINE   0 0 0
  c1t1d0s6  ONLINE   0 0 0

errors: No known data errors
#

eric




The steps to reproduce are:

1. build a Solaris 10 Update 3 system via jumpstart with the  
following partitioning and only UFS filesystems:


partitioning explicit
filesys rootdisk.s0 6144 / logging
filesys rootdisk.s1 1024 swap
filesys rootdisk.s3 4096 /var logging,nosuid
filesys rootdisk.s6 free /srv logging
filesys rootdisk.s7 50 unnamed

2. Then rebuild the same system via jumpstart with the following  
partitioning with slice 6 left unnamed so that a finish script may  
create a zpool with the command 'zpool create -f -R /a -m /srv srv  
cntndns6':


partitioning explicit
filesys rootdisk.s0 6144 / logging
filesys rootdisk.s1 1024 swap
filesys rootdisk.s3 4096 /var logging,nosuid
filesys rootdisk.s6 free unnamed
filesys rootdisk.s7 50 unnamed

Has anyone hit this issue and is this a known bug with a workaround?

regards

matthew


This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [Fwd: Re: [zfs-discuss] Will this work?]

2007-05-11 Thread Al Hopper
On Fri, 11 May 2007, Sophia Li wrote:

>   Original Message 
> On 5/10/07, Al Hopper <[EMAIL PROTECTED]> wrote:
> 
> > My personal opinion is that USB is not robust enough under (Open)Solaris
> > to provide the reliability that someone considering ZFS is looking for.
> > I base this on experience with two 7 port powered USB hubs, each with 4 *
> > 2Gb Kingston flash drives, connected via 2 ports to a Solaris (update 3)
> > desktop box which runs ZFS on two internal 500Gb drives.  I see about 24
> > to 28Mb/Sec (bytes) maximum of bandwidth over each USB bus.  One time,
> > after disconnecting one hub (to show someone the hub with 4*USB drives) it
> > hung the OS and reset the box.  A subsequent import of the ZFS volume that
> > was disconnected, failed.  (Yes it was exported, but failed to import).
> > So my take on USB is ... it's not sufficiently robust - and a USB related
> > failure is likely to cause loss of the entire ZFS dataset;  i.e., it's
> > likely to trash more than one drive in a raidz config.
>
> I am interested in this comment of yours on USB, but it seems too general
> and not helpful for solving problems. Several issues have been mixed
> together which may not necessarily be USB's fault. If you believe there

Agreed.

> is a USB issue, a better practice is to file a bug. And please make sure
> the problem is reproducible and give a detailed problem description. :-)

Understood.

> I play with USB devices a lot and seldom see hotplugging hang a system.
> The hang looks very exceptional to me. Could you experiment more with
> the devices and combinations of filesystem configuration? e.g., if you
> put UFS on the drives instead of ZFS, would it hang? Is there a way that
> you can reproduce the hang much more reliably?

I really can't experiment with this particular machine, because it's my
main desktop that drives 22" and 30" LCDs, has about 18 Gnome workspaces
and 80+ windows active.  If the Xserver dies it takes 10 to 15 minutes just
to get everything set up again so that I can get my productive development
environment in place.  And that is aside from the ZFS mirrored pool on the
machine that has 45+ filesystems and 6 zones defined. So - experimenting
with it is not possible for now.  ... more below ...

> Another question is: if you are using ZFS on USB drives, the system hangs
> due to a non-USB-related reason and you reset the box, can data integrity
> on the USB drives be ensured?
>
> Yet another question is: if you are using non-USB drives, the system
> hangs due to whatever reason and you reset the box, can data integrity
> on the non-USB drives be ensured? And how, by SW or HW?
>
> We need to think about these questions and make clear whether this kind of data
> loss is particular to USB or not before coming to a conclusion too quickly.

The only conclusion I've reached is that attaching 6 or 8 750Gb disk
drives via USB for use as a ZFS pool is not a good idea - because USB is not
robust enough to guarantee that, in the event of a USB failure or "event",
no more than one disk configured in a raidz configuration will be
negatively impacted.  If more than one disk drive in a raidz storage
pool is negatively impacted, then you'll lose the entire pool.  USB is
not an appropriate bus to support that type of usage scenario IMHO.

USB is fine as a demonstrator of ZFS capabilities by connecting multiple
USB flash drives (as I've done) and can also be used as a way to archive
files reliably on removable media (the flash drives).  And using ZFS
with flash drives solves the problem of corrupted or bad sectors on
low-cost flash drives - which may or may not have been 100% tested before
they were sold.

>
> 
> > I'd be interested in hearing other opinions on USB connected drives
> > under (Open)Solaris 
> 
>
> Any bus can have errors. USB is nothing special; it's just that the chance of
> encountering errors is bigger since USB devices are cheap. But isn't the

Every bus topology has appropriate and inappropriate uses - and USB is no
exception to that rule.

> file system expected to handle possible errors?

see above.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Optimal strategy (add or replace disks) to build a cheap and raidz?

2007-05-11 Thread Pål Baltzersen
I use Supermicro AOC-SAT2-MV8

It is 8-port SATA2, JBOD only, literally plug&play (sol10u3), and just 
~100 Euro.
It is PCI-X, but mine is plugged into a plain PCI slot/mobo and works fine.
(Don't know how much better it would perform on a PCI-X slot/mobo).
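
Assuming the card's eight ports enumerate as c2t0d0 through c2t7d0 (device
names will differ per system), turning them into a single double-parity set
for a budget build like the one discussed above is one line:

  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0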

I bought mine here:
http://www.mullet.se/sortiment/product.htm?product_id=133690&category_id=5907&search_page=
google for AOC-SAT2-MV8 should give you lots of webshops

Pål
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [Fwd: Re: [zfs-discuss] Will this work?]

2007-05-11 Thread Sophia Li

 Original Message 
On 5/10/07, Al Hopper <[EMAIL PROTECTED]> wrote:


My personal opinion is that USB is not robust enough under (Open)Solaris
to provide the reliability that someone considering ZFS is looking for.
I base this on experience with two 7 port powered USB hubs, each with 4 *
2Gb Kingston flash drives, connected via 2 ports to a Solaris (update 3)
desktop box which runs ZFS on two internal 500Gb drives.  I see about 24
to 28Mb/Sec (bytes) maximum of bandwidth over each USB bus.  One time,
after disconnecting one hub (to show someone the hub with 4*USB drives) it
hung the OS and reset the box.  A subsequent import of the ZFS volume that
was disconnected, failed.  (Yes it was exported, but failed to import).
So my take on USB is ... it's not sufficiently robust - and a USB related
failure is likely to cause loss of the entire ZFS dataset;  i.e., it's
likely to trash more than one drive in a raidz config.


I am interested in this comment of yours on USB, but it seems too general 
and not helpful for solving problems. Several issues have been mixed 
together which may not necessarily be USB's fault. If you believe there 
is a USB issue, a better practice is to file a bug. And please make sure 
the problem is reproducible and give a detailed problem description. :-)


I play with USB devices a lot and seldom see hotplugging hang a system. 
The hang looks very exceptional to me. Could you experiment more with 
the devices and combinations of filesystem configuration? e.g., if you 
put UFS on the drives instead of ZFS, would it hang? Is there a way that 
you can reproduce the hang much more reliably?


Another question is: if you are using ZFS on USB drives, the system hangs 
due to a non-USB-related reason and you reset the box, can data integrity 
on the USB drives be ensured?


Yet another question is: if you are using non-USB drives, the system 
hangs due to whatever reason and you reset the box, can data integrity 
on the non-USB drives be ensured? And how, by SW or HW?


We need to think about these questions and make clear whether this kind of data 
loss is particular to USB or not before coming to a conclusion too quickly.






I'd be interested in hearing other opinions on USB connected drives
under (Open)Solaris 




Any bus can have errors. USB is nothing special; it's just that the chance of 
encountering errors is bigger since USB devices are cheap. But isn't the 
file system expected to handle possible errors?


Thanks,
Sophia
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Will this work?

2007-05-11 Thread Manoj Joseph

Robert Thurlow wrote:

I've written some about a 4-drive Firewire-attached box based on the
Oxford 911 chipset, and I've had I/O grind to a halt in the face of
media errors - see bugid 6539587.  I haven't played with USB drives
enough to trust them more, but this was a hole I fell in with Firewire.
I've had fabulous luck with a Firewire attached DVD burner, though.


6539587 does not seem to be visible on the opensolaris bugs database. :-/

-Manoj
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Re: gzip compression throttles system?

2007-05-11 Thread Darren J Moffat

Bill Sommerfeld wrote:

On Thu, 2007-05-10 at 10:10 -0700, Jürgen Keil wrote:

Btw: In one experiment I tried to boot the kernel under kmdb
control (-kd), patched "minclsyspri := 61" and used a
breakpoint inside spa_active() to patch the spa_zio_* taskq
to use prio 60 when importing the gzip compressed pool
(so that the gzip compressed pool was using prio 60 threads
and usb and other stuff was using prio >= 61 threads).
That didn't help interactive performance...


oops.  sounds like cpu-intensive compression (and encryption/decryption
once that's upon us) should ideally be handed off to worker threads that
compete on a "fair" footing with compute-intensive userspace threads, or
(better yet) are scheduled like the thread which initiated the I/O.


This will be different with encryption.  The crypto framework already 
tries to do "fair" scheduling; it can be called in sync and async mode. 
We use per-provider taskqs for hardware, and for async software 
requests we have a taskq per cpu that can have its scheduling priority/class 
set by putting the svc://system/cryptosvc service into an 
appropriate project.


I haven't done any performance testing of crypto yet, so I don't know how 
it will work in this case, but we do know that the current method works 
well for networking.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss