Re: [zfs-discuss] kernel panic - was it zfs related?

2008-07-16 Thread Thommy M.
Michael Hale wrote:
 Around 9:45 this morning, our mailserver (SunOS 5.11 snv_91 i86pc i386  
 i86pc) rebooted.
[...]
 dumping to /dev/zvol/dsk/rootpool/dump, offset 65536, content: kernel
 
 Is there a way to tell if ZFS caused the kernel panic?  I notice that
 it says "imapd:" in the middle of the message buffer -- does that mean imapd
 caused the kernel panic?  I'm just trying to figure out what to do
 here and determine whether a bug caused the panic, so that I can submit the
 proper information to get it fixed :^)

Is it really true that you run your company's mailserver on snv_91 and
with root on ZFS? No offense, but in that case I think the proper thing
to do is to switch to Solaris 10 5/08.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs patches in latest sol10 u2 patch bundle

2008-07-16 Thread Manyam
Hi ZFS gurus -- I have a V240 running the Solaris 10 U2 release with ZFS. Could
you please tell me whether applying the latest Update 2 patch bundle will get
all the ZFS patches installed as well?

Thanks much for your support 

~Balu
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs patches in latest sol10 u2 patch bundle

2008-07-16 Thread Chris Cosby
S10U2 has zfs version=1. Any patches are just bug fixes (I'm not sure if
there are any). If your intention is to get to a newer, later, greater ZFS,
you'll need to upgrade. S10U5 has, for example, version=4. Differences in
the versions of zfs can be found at
http://opensolaris.org/os/community/zfs/version/4/ (change the 4 to any
number 1-11 for details).
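
A quick sketch of the commands involved (the pool name 'mypool' is just a
placeholder):

zpool upgrade -v        # list the on-disk versions this kernel supports
zpool upgrade           # show any pools still running an older version
zpool upgrade mypool    # upgrade one pool; note this is one-way -- older
                        # kernels can no longer import the pool afterwards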

On Wed, Jul 16, 2008 at 9:55 AM, Manyam [EMAIL PROTECTED] wrote:

 Hi ZFS gurus -- I have a V240 running the Solaris 10 U2 release with ZFS. Could
 you please tell me whether applying the latest Update 2 patch bundle will get
 all the ZFS patches installed as well?

 Thanks much for your support

 ~Balu


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




-- 
chris -at- microcozm -dot- net
=== Si Hoc Legere Scis Nimium Eruditionis Habes
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs patches in latest sol10 u2 patch bundle

2008-07-16 Thread Brian H. Nelson
Manyam wrote:
 Hi ZFS gurus -- I have a V240 running the Solaris 10 U2 release with ZFS. Could
 you please tell me whether applying the latest Update 2 patch bundle will get
 all the ZFS patches installed as well?

   

It is possible to patch your way up to the U5 kernel and related
patches, which should give you all the latest ZFS bits (available in
Solaris anyway). I have done this from U3, but I believe coming from U2
wouldn't be much different. I assume that the required patches are in
the latest bundle, but I believe 'smpatch update' is the prescribed
method these days. Be aware that there is at least one obsolete patch
that must be installed by hand in order to satisfy a dependency. I don't
recall the patch number, but I know the dependent patch will print out a
notice to that effect if the required patch is not installed. You will
have to go through several patch-reboot iterations (one for each kernel
patch, U2 through U5) in order to get all the way there. Once you're done
patching, you should be able to do a 'zpool upgrade' to the current
version (4).
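
Roughly, the loop looks like this (assuming the machine is registered with
the update service; 'mypool' is a placeholder, and you repeat the
analyze/update/reboot cycle until nothing new is offered):

smpatch analyze          # list the patches the update service thinks are missing
smpatch update           # download and apply them
init 6                   # reboot after each kernel patch, then repeat
zpool upgrade mypool     # only once all the patching is finished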

Depending on your situation though, it may just be easier to do an 
upgrade :)

-Brian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] [Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]

2008-07-16 Thread Matthew Huang

Dear ALL,

IHAC (I have a customer) who would like to use a Sun Fire X4500 as the NFS
server for their backend services, and would like to see the potential
performance gain compared with their existing systems. However, the output of
the iozone I/O stress test shows mixed results, as follows:


   * The read performance sharply degrades (almost down to 1/20, i.e
 from 2,000,000 down to 100,000) when the file sizes are larger
 than 256KBytes.
   * The write performance remains good (roughly 1,000,000) even with
 the file sizes larger than 100MBytes.

The NFS/ZFS server configuration and the test environment are, briefly:

   * The ZFS pool for NFS is composed of 6 disks in a stripe, one on
 each SATA controller.
   * Solaris 10 Update 5 (Solaris Factory Installation)
   * The on-board GigE ports are trunked for better I/O and network
 throughput.
   * Single NFS client on which the I/O stress tool iozone is deployed
 and run.

The iozone output is attached, and some results extracted from the attached
file are presented later in this email. Any input, such as NFS/ZFS tuning or
troubleshooting suggestions, is very welcome. Many thanks for your time and
support.
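
For reference, an iozone invocation along these lines (file name, sizes and
output path are illustrative only), run once against the NFS mount on the
client and once directly against the ZFS file system on the X4500, would
help separate client-side effects from server-side ones:

iozone -a -i 0 -i 1 -g 2g -f /mnt/nfs/testfile -R -b nfs-results.xls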


Regards,
Matthew



Writer report (file size in KB down the left, record size in KB across the top; throughput in KB/s)

             4        8       16       32       64      128      256      512     1024     2048     4096     8192    16384
64      779620  1032956  1086132  1030728  1083646
128         05  1048964  1049389  1131926  1123528  1075981
256     914273   841924   885664   914105   920652   933973  1062429
512     840779   853200   953428   927458   936045   942734   948038  1030096
1024    847660   872161   941096   942849   973457   962306   970583  1008880  1072174
2048    850868   918755   960554   992736  1000505  1019961  1019952  1042211  1037506  1061164
4096    889644   986289  1035360      105  1078489  1098988  1097544  1105839  1099925  1071938   938146
8192    915000  1020690  1085911  1102268  1112897  1130393  1142702  1147031  1139839  1107481   972561   814472
16384   916635  1031351  1095623  1109493  1123586  1134307  1142546  1142854  1148374  1121658   993808   862090   801800
32768        0        0        0        0  1125390  1135211  1141106  1139796  1136396  1117100  1001529   862452   830220
65536        0        0        0        0  1122712  1129913  1134172  1141482  1143636  1127615  1009551   864830   834768
*131072      0        0        0        0  1118828  1130799  1133341  1133643  1143385  1128063  1010421   861890   833680*
262144       0        0        0        0   951793   963910   939610   953132   957778   925947   848510   751680   727216
524288       0        0        0        0   902587   932762   900618   911183   883468   851441   752104   689376   656806



Reader report (file size in KB down the left, record size in KB across the top; throughput in KB/s)

             4        8       16       32       64      128      256      512     1024     2048     4096     8192    16384
64     1278121  1882241  2063082  2136657  2291288
128    1542262  1936240  2000479  2170065  2288046  2202688
*256    105303   106272   106619   107021   107116   105002   107026*
512     110011   110654   110679   110896   111402    98158    98860    97191
1024    107314   108612   109332   109321   109554    96658    97413    96304    96449
2048        17   111839   111863   112158   112114   100171    93958    93730    93653    93703
4096    112831   113058   113268   113346   113286   108239    98639    98651   100903    99805    95888
8192    113939   113965   114112   114205   114177   111413   106600   105741   107029   106588   101477    96805
16384   113757   113999   114529   114538   114577   112883   111071   110964   110951   105530   106261   102985   101311
32768        0        0        0        0   113580   113973   112219   112981   111017       38   110726   107683   104093
65536        0        0        0        0   114672   113785   112932   113050   113672   113219   112366   110995   108959
131072       0        0        0        0   114295   114519   113907   113827   114091   114142   112111   112946   112200
262144       0        0        0        0   114687   114522   114532   114478   114436   114371   114177   113774   113154
524288       0        0        0        0   114669   114691   114578   114715   114487   114628   114434   114149   114141



Iozone: Performance Test of File I/O
Version $Revision: 3.303 $
Compiled for 32 bit mode.
Build: linux 

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
 Al Slater, Scott Rhine, Mike Wisner, Ken Goss
 Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
 Randy Dunlap, Mark Montague, Dan Million, 
 Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
 Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

Run began: Mon Jul 14 18:00:55 2008

Excel chart generation enabled
Auto Mode
Command line used: 

Re: [zfs-discuss] Raid-Z with N^2+1 disks

2008-07-16 Thread Richard Elling
Frank Cusack wrote:
 On July 14, 2008 9:54:43 PM -0700 Frank Cusack [EMAIL PROTECTED] wrote:
   
 On July 14, 2008 7:49:58 PM -0500 Bob Friesenhahn 
 
 [EMAIL PROTECTED] wrote:
   
 It sounds like they're talking more about traditional hardware RAID
 but is this also true for ZFS?  Right now I've got four 750GB drives
 that I'm planning to use in a raid-z 3+1 array.  Will I get markedly
 better performance with 5 drives (2^2+1) or 6 drives 2*(2^1+1)
 because the parity calculations are more efficient across N^2
 drives?
 
 With ZFS and modern CPUs, the parity calculation is surely in the noise
 to the point of being unmeasurable.
   
 I would agree with that.  The parity calculation has *never* been a
 factor in and of itself.  The problem is having to read the rest of
 the stripe and then having to wait for a disk revolution before writing.
 

 oh, you know what though?  raid-z had this bug, or maybe we should just
 call it a behavior, where you only want an {even,odd} number of drives
 in the vdev.  I can't remember if it was even or odd.  Or maybe it was
 that you wanted only N^2+1 disks, choose any N.  Otherwise you had
 suboptimal performance in certain cases.  I can't remember the exact
 details but it wasn't because of more efficient parity calculations.
 Maybe something about block sizes having to be powers of two and the
 wrong number of disks forcing a read?

 Anybody know what I'm referring to?  Has it been fixed?  I see the
 zfs best practices guide says to use only odd numbers of disks, but
 it doesn't say why.  (don't you hate that?)
   

See the Metaslab alignment thread.
http://www.opensolaris.org/jive/thread.jspa?messageID=60241#60241
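
As a rough back-of-the-envelope illustration of the alignment point
discussed in that thread (my reading of it, not a definitive description of
the raid-z allocator): a 128 KiB record is split across the data disks of
the vdev, and the split is only sector-aligned when the number of data
disks divides it evenly.

echo $((131072 / 4))   # 4 data disks (4+1 raid-z): 32768 bytes per disk,
                       # a whole number of 512-byte sectors
echo $((131072 / 5))   # 5 data disks (5+1 raid-z): 26214 bytes plus a
                       # remainder, so padding/rounding is needed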
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] kernel panic - was it zfs related?

2008-07-16 Thread Michael Hale

On Jul 15, 2008, at 4:31 PM, Richard Elling wrote:

 Michael Hale wrote:
 Around 9:45 this morning, our mailserver (SunOS 5.11 snv_91 i86pc  
 i386  i86pc) rebooted.

 Looking at /var/crash/HOSTNAME, I saw the unix.0 and vmcore0 files.

 Loading them up in MDB, I get the following:

  ::panicinfo

 In general, if you get a panic that cannot be directly attributed
 to hardware, please file a bug.
 -- richard



What's the proper way to file a bug report for opensolaris?  Is there  
a web form and a way to upload the core file?
--
Michael Hale  [EMAIL PROTECTED]
Manager of Engineering Support, Enterprise Engineering Group
Transcom Enhanced Services
http://www.transcomus.com





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]

2008-07-16 Thread Bob Friesenhahn
On Wed, 16 Jul 2008, Matthew Huang wrote:
 comparing to their existing systems. However the outputs of the I/O stress 
 test with iozone show the mixed results as follows:

   * The read performance sharply degrades (almost down to 1/20, i.e
 from 2,000,000 down to 100,000) when the file sizes are larger
 than 256KBytes.

This issue is almost certainly client-side rather than server side. 
The 256KByte threshold is likely the NFS buffer cache size (could be 
overall filesystem cache size) on the client. In order to know for 
sure, run iozone directly on the server as well.

If tests directly on the server don't show a slowdown at the 256KByte 
threshold, then the abrupt slowdown is due to client caching combined 
with inadequate network transfer performance or excessive network 
latency.  If sequential read performance is important to you, then you 
should investigate NFS client tuning parameters (mount parameters) 
related to the amount of sequential read-ahead performed by the 
client.  If clients request an unnecessary amount of read-ahead, then 
network performance could suffer due to transferring data which is 
never used.  When using NFSv3 or later, TCP tuning parameters can be a 
factor as well.
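
As a hedged example of the kind of client-side knobs meant here (host name,
path and values are illustrative, not recommendations):

# Solaris NFS client: pick the protocol and transfer sizes explicitly
mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 x4500:/export/data /mnt

# Client read-ahead count, set in /etc/system (takes effect after a reboot)
# set nfs:nfs3_nra=4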

You can expect that ZFS read performance will slow down on the server 
once the ZFS ARC size becomes significant as compared to the amount of 
installed memory on the server.  For re-reads, if the file is larger 
than the ARC can grow, then ZFS needs to go to disk rather than use 
its cache.

Do an ftp transfer from the server to the client.  A well-tuned NFS 
should be at least as fast as this.

Bob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] kernel panic - was it zfs related?

2008-07-16 Thread Richard Elling
Michael Hale wrote:

 What's the proper way to file a bug report for opensolaris?  Is there 
 a web form and a way to upload the core file?

http://bugs.opensolaris.org
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs patches in latest sol10 u2 patch bundle

2008-07-16 Thread Manyam
 It is possible to patch your way up to the U5 kernel and related
 patches, which should give you all the latest ZFS bits (available in
 Solaris anyway). I have done this from U3, but I believe coming from U2
 wouldn't be much different. I assume that the required patches are in
 the latest bundle, but I believe 'smpatch update' is the prescribed
 method these days. Be aware that there is at least one obsolete patch
 that must be installed by hand in order to satisfy a dependency. I don't
 recall the patch number, but I know the dependent patch will print out a
 notice to that effect if the required patch is not installed. You will
 have to go through several patch-reboot iterations (one for each kernel
 patch, U2 through U5) in order to get all the way there. Once you're done
 patching, you should be able to do a 'zpool upgrade' to the current
 version (4).

 Depending on your situation though, it may just be easier to do an
 upgrade :)

 -Brian

Thanks! Brian -- that helps.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs patches in latest sol10 u2 patch bundle

2008-07-16 Thread Ian Collins
Brian H. Nelson wrote:
 Manyam wrote:
   
  Hi ZFS gurus -- I have a V240 running the Solaris 10 U2 release with ZFS. Could
  you please tell me whether applying the latest Update 2 patch bundle will get
  all the ZFS patches installed as well?

   
 

 It is possible to patch your way up to the U5 kernel and related
 patches, which should give you all the latest ZFS bits (available in
 Solaris anyway). I have done this from U3, but I believe coming from U2
 wouldn't be much different. I assume that the required patches are in
 the latest bundle, but I believe 'smpatch update' is the prescribed
 method these days.

There is a (very large) pre-packaged Update 5 patch bundle on the
SunSolve patch page to handle this situation.

Ian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 40min ls in empty directory

2008-07-16 Thread Ben Rockwood
I've run into an odd problem which I lovingly refer to as a "black hole"
directory.

On a Thumper used for mail stores, we've found that find runs take an
exceptionally long time. There are directories with as many as 400,000 files,
which I immediately considered the culprit. However, on investigation, they
aren't the problem at all. The problem shows up in this truss output (the
first column is delta time):


 0.0001 lstat64(tmp, 0x08046A20)  = 0
 0. openat(AT_FDCWD, tmp, O_RDONLY|O_NDELAY|O_LARGEFILE) = 8
 0.0001 fcntl(8, F_SETFD, 0x0001)   = 0
 0. fstat64(8, 0x08046920)  = 0
 0. fstat64(8, 0x08046AB0)  = 0
 0. fchdir(8)   = 0
1321.3133   getdents64(8, 0xFEE48000, 8192) = 48
1255.8416   getdents64(8, 0xFEE48000, 8192) = 0
 0.0001 fchdir(7)   = 0
 0.0001 close(8)= 0

These two getdents64 syscalls take approximately 20 minutes each.  Notice that
the directory structure is only 48 bytes -- the directory is empty:

drwx--   2 102  1022 Feb 21 02:24 tmp

My assumption is that the directory is corrupt, but I'd like to prove that.  I
have a scrub running on the pool, but it's got about 16 hours to go before it
completes.  It's 20% complete so far and nothing has been reported.

No errors are logged when I trigger this problem.

Does anyone have suggestions on how to get additional data on this issue?  I've
used DTrace flow tracing to investigate; however, what I really want to see is
the ZIOs issued as a result of the getdents, and I can't see how to do that.
Ideally I'd quiesce the system and watch all ZIOs occurring while I trigger the
problem, but this is production and that's not possible.  If anyone knows how
to watch DMU/ZIO activity that pertains _only_ to a certain PID, please let me
know. ;)
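
One rough DTrace sketch that stays at the syscall layer (it won't show the
resulting ZIOs, since the actual disk I/O is issued by pool worker threads
rather than by the target PID, but it does quantify where the getdents64 time
goes for a single process; replace PID with the process id of the stuck find
or ls):

dtrace -p PID -n '
syscall::getdents64:entry /pid == $target/ { self->ts = timestamp; }
syscall::getdents64:return /self->ts/ {
    @lat["getdents64 latency (ns)"] = quantize(timestamp - self->ts);
    self->ts = 0;
}'

Going deeper (fbt probes in the zfs/zio modules) is possible, but the probe
names vary by build, so I won't guess at them here.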

Suggestions on how to proactively catch these sorts of instances are welcome,
as are alternative explanations.

benr.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Raid-Z with N^2+1 disks

2008-07-16 Thread David Magda
On Jul 14, 2008, at 20:49, Bob Friesenhahn wrote:

 Any time you see even a single statement which is incorrect, it is  
 best to ignore that forum poster entirely and if no one corrects  
 him, then ignore the entire forum.

Yes, because each and every one of us must correct inaccuracies on the  
Internet:

http://xkcd.com/386/

:)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS with STMS/MPXIO

2008-07-16 Thread Arif Khan
Hi Guys,
Can I use MPXIO/STMS with ZFS to do multipathing among pools/devices?

Any issues, any specific version of STMS to avoid/use ?

Thanks

Arif

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with STMS/MPXIO

2008-07-16 Thread Bob Friesenhahn
On Wed, 16 Jul 2008, Arif Khan wrote:

 Hi Guys,
 Can I use MPXIO/STMS with ZFS to do multipathing among pools/devices?

 Any issues, any specific version of STMS to avoid/use ?

By STMS I assume that you are talking about MPXIO.  Solaris 10 comes
with a quite usable MPXIO and it works great with ZFS.  MPXIO
manages the devices so that ZFS only sees one path to each LUN.  Old
versions did not seem to support SAS, but recent releases do.  The
ability to do true load sharing or active/standby failover depends on
the model of storage array used.
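
For what it's worth, a quick way to sanity-check what MPXIO is managing
before handing the devices to ZFS:

stmsboot -L        # map the old per-path device names to the new scsi_vhci names
mpathadm list lu   # enumerate the multipathed logical units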

MPXIO is quite ugly and rough around the edges (at least compared with 
ZFS) but it works.

Bob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with STMS/MPXIO

2008-07-16 Thread James C. McPherson
Arif Khan wrote:
 Hi Guys,
 Can I use MPXIO/STMS with ZFS to do multipathing among pools/devices?

Yes. It just works (tm).

 Any issues, any specific version of STMS to avoid/use ?

One issue which I've come across recently is that stmsboot is not
behaving itself properly when it does its reboot. I'm redesigning
and reimplementing stmsboot to cope with this and a few other things
as well.


My recommendation:

(1) only have zpools in your host
(2) run stmsboot -e. DON'T reboot when it asks you to
(3) reboot, and make sure you add the -B \$ZFS-BOOTFS arg
     to the kernel command line (an x86/GRUB sketch follows below;
     can't recall the sparc version, sorry)
(4) when your system comes back up to single-user and tells you
     that it can't mount root, run

 svcadm disable mpxio-upgrade

(5) either hit ^D (ctrl D) or reboot
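
For step (3) on x86, the GRUB kernel line would look something like this
(a sketch only -- adjust it to your existing menu.lst entry; the SPARC boot
syntax differs):

kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive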


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with STMS/MPXIO

2008-07-16 Thread James C. McPherson
Bob Friesenhahn wrote:
 On Wed, 16 Jul 2008, Arif Khan wrote:
 
 Hi Guys,
 Can I use MPXIO/STMS with ZFS to do multipathing among pools/devices?

 Any issues, any specific version of STMS to avoid/use ?
 
 By STMS I assume that you are talking about MPXIO.  Solaris 10 comes 
 with a quite usable MPXIO and it does work great with ZFS.  MPXIO 
 manages the devices so that ZFS only sees one path to each LUN.  Old 
 versions did not seem to support SAS but recent releases do. 

You're right, we added MPxIO support for SAS to S10 via patch
125081/125082, rev -14 or later.

...
 MPXIO is quite ugly and rough around the edges (at least compared with 
 ZFS) but it works.

Just curious - what do you see as the ugliness in MPxIO? I don't
have an agenda to push, I'd just like to get feedback from you
on what you see as opportunities for improvement.

thankyou,
James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with STMS/MPXIO

2008-07-16 Thread Bob Friesenhahn
On Thu, 17 Jul 2008, James C. McPherson wrote:
 ...
 MPXIO is quite ugly and rough around the edges (at least compared with 
 ZFS) but it works.

 Just curious - what do you see as the ugliness in MPxIO? I don't
 have an agenda to push, I'd just like to get feedback from you
 on what you see as opportunities for improvement.

The most obvious thing is the ungainly long device names and the odd
requirement to update /etc/vfstab.  The other thing I noticed is that
while 'mpathadm' works as documented, it is lacking in user-friendliness.
It almost always requires executing a command to show what is
available, and then doing a specific query based on the items listed.
A few 'composite' type commands which list the devices along with
certain commonly requested information would be useful.  Currently we
get a list of logical units and then we do 'show lu' to dump massive
amounts of detail.  When there are large numbers of logical units (as
I have), this approach becomes overwhelming if all I want to learn is
the current path status for the devices in a ZFS pool.

When I first used 'stmsboot -e' (as the StorageTek 2540 docs
recommend), it caused my system not to boot, since it uses SAS disks
and there were apparently problems with local SAS disks at that time
(and it sounds like there still are).  I learned the hard way to only
request multipathing for what actually needs it.
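
If your stmsboot supports the -D option (I believe the build that added the
SAS/mpt support does, but treat that as an assumption), you can scope the
change to a single driver instead of enabling it globally:

stmsboot -D fp -e     # enable MPxIO only on fibre-channel (fp) ports
stmsboot -D mpt -e    # or only on mpt (SAS/parallel SCSI) ports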

Here is a small work-around script I coded up recently to list path 
status:

#!/bin/sh
# Test path access to multipathed devices
devs=`mpathadm list lu | grep /dev/rdsk/`
for dev in $devs
do
   echo === $dev ===
   mpathadm show lu $dev | grep 'Access State'
done

It should provide a small example of useful composite functionality 
(which can obviously be far better than my script!).

Bob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with STMS/MPXIO

2008-07-16 Thread James C. McPherson
Hi Bob,
thanks for the quick response. Comments inline below

Bob Friesenhahn wrote:
 On Thu, 17 Jul 2008, James C. McPherson wrote:
 ...
 MPXIO is quite ugly and rough around the edges (at least compared 
 with ZFS) but it works.

 Just curious - what do you see as the ugliness in MPxIO? I don't
 have an agenda to push, I'd just like to get feedback from you
 on what you see as opportunities for improvement.
 
 The most obvious thing is the ungainly long device names and the odd 
 requirement to update /etc/vfstab.  

I'm fairly sure that the long device names aspect won't change.

I don't understand what you mean by the "odd requirement to update
/etc/vfstab" - when we turn on MPxIO the device paths change, so any fs
that's not ZFS will require repointing, as it were. One of the issues I've
come up against with stmsboot and ZFS root is that stmsboot has no clue
whatsoever about how to deal with ZFS, so that's something I'm building
into the changes I'm making.

  The other thing I noticed is that
 while 'mpathadm' works as documented, it lacks in user friendlyness. 
[snip]

I totally agree. Some time ago I logged a bug against mpathadm's
command line behaviour but got no satisfactory response from the
group which maintains it.

 When I first used 'stmsboot -e' (as the StorageTek 2540 docs recommend),
 it caused my system not to boot, since it uses SAS disks and there were
 apparently problems with local SAS disks at that time (and it sounds
 like there still are).  I learned the hard way to only request
 multipathing for what actually needs it.

I'm the bloke on the hook for the S10 MPxIO/mpt backport since I'm the
one who integrated the changes into the relevant gate. Could you give
me more detail please on what problems you saw with the ST2540?

While you're at it, if you've got any suggestions that you'd like
me to consider in my reimplementation of stmsboot, then please let
me know.

Cheers,
James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and 2530 jbod

2008-07-16 Thread Frank Cusack
On June 14, 2007 1:56:05 AM -0700 Frank Cusack [EMAIL PROTECTED] wrote:
 Anyway I agree Sun should fill this hole, but the 2530 misses the mark.
 I'd like to see a chassis that takes 750GB/1TB SATA drives, with SAS
 host ports.  And sell just the chassis, so I can skip the 100%+ drive
 markup.  I guess I'm looking for a Promise J300s, but at twice the
 price (which is worth it to get better engineered hardware).

I'm going to go ahead and give myself credit for the J4000 line of
products. :) :) :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and 2530 jbod

2008-07-16 Thread Rich Teer
On Wed, 16 Jul 2008, Frank Cusack wrote:

 On June 14, 2007 1:56:05 AM -0700 Frank Cusack [EMAIL PROTECTED] wrote:
  Anyway I agree Sun should fill this hole, but the 2530 misses the mark.
  I'd like to see a chassis that takes 750GB/1TB SATA drives, with SAS
  host ports.  And sell just the chassis, so I can skip the 100%+ drive
  markup.  I guess I'm looking for a Promise J300s, but at twice the
  price (which is worth it to get better engineered hardware).
 
 I'm going to go ahead and give myself credit for the J4000 line of
 products. :) :) :)

*Ahem*

http://richteer.blogspot.com/2007/04/close-but-no-cigar.html

and


http://richteer.blogspot.com/2006/05/sun-storage-product-i-would-like-to.html

:-)

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with STMS/MPXIO

2008-07-16 Thread Bob Friesenhahn
On Thu, 17 Jul 2008, James C. McPherson wrote:
 I'm fairly sure that the long device names aspect won't change.

 I don't understand what you mean by Odd requirement to update /etc/vfstab
 - when we turn on mpxio the device paths change, so any fs that's not
 ZFS will require repointing, as it were. One of the issues I've come

It was a design choice to change the device paths.  There are other 
approaches which would not have necessitated device path changes.

 I'm the bloke on the hook for the S10 MPxIO/mpt backport since I'm the
 one who integrated the changes into the relevant gate. Could you give
 me more detail please on what problems you saw with the ST2540?

There were no problems with the ST2540.  The issues were with the SAS 
drives in the local backplane (Sun Ultra 40 M2).

Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with STMS/MPXIO

2008-07-16 Thread James C. McPherson
Bob Friesenhahn wrote:
 On Thu, 17 Jul 2008, James C. McPherson wrote:
 I'm fairly sure that the long device names aspect won't change.

 I don't understand what you mean by Odd requirement to update /etc/vfstab
 - when we turn on mpxio the device paths change, so any fs that's not
 ZFS will require repointing, as it were. One of the issues I've come
 
 It was a design choice to change the device paths.  There are other 
 approaches which would not have necessitated device path changes.

What would you have done instead?

 I'm the bloke on the hook for the S10 MPxIO/mpt backport since I'm the
 one who integrated the changes into the relevant gate. Could you give
 me more detail please on what problems you saw with the ST2540?
 
 There were no problems with the ST2540.  The issues were with the SAS 
 drives in the local backplane (Sun Ultra 40 M2).

Bob, I actually do want to know the specifics. As I mentioned,
I'm the person who delivered the backport of those features.
That means that I need to follow up on issues such as the one
you are alluding to.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and 2530 jbod

2008-07-16 Thread Frank Cusack
On July 16, 2008 9:40:03 PM -0700 Rich Teer [EMAIL PROTECTED] 
wrote:
 
http://richteer.blogspot.com/2006/05/sun-storage-product-i-would-like-to.html

I remember that!  The 2.5" disks don't really count as low cost, but
still, your other post beats me to it. :P

Let's say it was a team effort. :)

-frank
ps. kudos to Sun.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to delete hundreds of empty snapshots

2008-07-16 Thread Joe S
I got overzealous with snapshot creation. Every 5 mins is a bad idea.
Way too many.

What's the easiest way to delete the empty ones?

zfs list takes FOREVER
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to delete hundreds of empty snapshots [SEC=PERSONAL]

2008-07-16 Thread LEES, Cooper
Hi there,

Yes, every 5 minutes is too often.

If it's easy, you have enough room, and you don't want to keep any of
the snapshots, I would create another temporary ZFS file system, copy your
data there, and destroy the current ZFS file system. Then recreate it and
copy the data back (a rough sketch follows below). I don't know if there is
a better way... if there is, I'd be interested to know in case I ever
actually need to fix something like this :)
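
A rough sketch of that approach (the dataset names are made up; adjust them
to your layout, and make sure nothing is writing to the file system while
you copy):

zfs create tank/mail-temp
rsync -a /tank/mail/ /tank/mail-temp/    # or cp -rp, or zfs send | zfs recv
zfs destroy -r tank/mail                 # removes the dataset and every snapshot under it
zfs create tank/mail
rsync -a /tank/mail-temp/ /tank/mail/
zfs destroy tank/mail-temp

If you'd rather avoid the copy entirely, another option (not the approach
above, just an alternative) is to script 'zfs destroy' over the output of
'zfs list -H -t snapshot -o name,used' and remove only the snapshots whose
used space is 0.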

Ta,
---
Cooper Ry Lees
UNIX Evangelist
Information Management Services (IMS)
Australian Nuclear Science and Technology Organisation (ANSTO)
[p] +61 2 9717 3853
[e] [EMAIL PROTECTED]

On 17/07/2008, at 3:16 PM, Joe S wrote:

 I got overzealous with snapshot creation. Every 5 mins is a bad idea.
 Way too many.

 What's the easiest way to delete the empty ones?

 zfs list takes FOREVER
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss