Re: [zfs-discuss] WD caviar/mpt issues

2010-06-22 Thread Giovanni Tirloni
On Fri, Jun 18, 2010 at 9:53 AM, Jeff Bacon  wrote:
> I know that this has been well-discussed already, but it's been a few months 
> - WD caviars with mpt/mpt_sas generating lots of retryable read errors, 
> spitting out lots of beloved " Log info 3108 received for target" 
> messages, and just generally not working right.
>
> (SM 836EL1 and 836TQ chassis - though I have several variations on theme 
> depending on date of purchase: 836EL2s, 846s and 847s - sol10u8, 
> 1.26/1.29/1.30 LSI firmware on LSI retail 3801 and 3081E controllers. Not 
> that it works any better on the brace of 9211-8is I also tried these drives 
> on.)
>
> Before signing up for the list, I "accidentally" bought a wad of caviar black 
> 2TBs. No, they are new enough to not respond to WDTLER.EXE, and yes, they are 
> generally unhappy with my boxen. I have them "working" now, running 
> direct-attach off 3 3081E-Rs with breakout cables in the SC836TQ (passthru 
> backplane) chassis, set up as one pool of 2 6+2 raidz2 vdevs (16 drives 
> total), but they still toss the occasional error and performance is, well, 
> abysmal - zpool scrub runs at about a third the speed of the 1TB cudas that 
> they share the machine with, in terms of iostat reported ops/sec or 
> bytes/sec. They don't want to work in an expander chassis at all - spin up 
> the drives and connect them and they'll run great for a while, then after 
> about 12 hours they start throwing errors. (Cycling power on the enclosure 
> does seem to reset them to run for another 12 hours, but...)
>
> I've caved in and bought a brace of replacement cuda XTs, and I am currently 
> going to resign these drives to other lesser purposes (attached to si3132s 
> and ICH10 in a box to be used to store backups, running Windoze). It's kind 
> of a shame, because their single-drive performance is quite good - I've been 
> doing single-drive tests in another chassis against cudas and constellations, 
> and they seem quite a bit faster except on random-seek.
>
> Have I missed any changes/updates in the situation?

I've been getting very bad performance out of an LSI 9211-4i card
(mpt_sas) with Seagate Constellation 2TB SAS disks, an SM SC846E1, and
Intel X-25E/M SSDs. Long story short, I/O will hang for over 1 minute
at random under heavy load.
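For what it's worth, the hangs are visible with nothing more exotic than
iostat; roughly what I run while the load is going (the interval is
arbitrary):

  # extended per-device stats, skipping idle devices, once a second
  iostat -xnz 1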

Swapping the 9211-4i for a MegaRAID ELP (mega_sas) improves
performance by 30-40% instantly and there are no more hangs, so I'm
guessing it's something related to the mpt_sas driver.

I submitted bug #6963321 a few minutes ago (not available yet).

-- 
Giovanni Tirloni
gtirl...@sysdroid.com


Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Ron Mexico
I ran into the same thing where I had to manually delete directories. 

Once you export the pool you can plug in the drives anywhere else. Reimport the 
pool and the file systems come right up — as long as the drives can be seen by 
the system.
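
In other words, roughly (the pool name is just an example):

  zpool export tank    # cleanly detach the pool from this host
  # ...move the drives/cables around...
  zpool import         # list the pools the system can see
  zpool import tank    # import it again; the filesystems mount on import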


Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Brian
OK - so I unmounted all of the filesystems, deleted the leftover directories
from /media, and rebooted. Everything remounted correctly and the system is
functioning again.

OK, time for a zpool scrub, then I will try my export and import.
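
For my own notes, roughly the commands I'm planning on (tank is the raidz2
pool):

  zpool scrub tank
  zpool status -v tank    # wait for the scrub to finish clean
  zpool export tank
  zpool import tank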

whew :-)


Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Brian
Did some more reading... I should have exported first... gulp.

So I powered down and moved the drives around until the system came back up
and zpool status was clean.

However, now I can't seem to boot. During boot it finds all 17 ZFS filesystems
and starts mounting them. I have several filesystems under /media: media/TV,
media/Movies, media/Music and media/HomeVideos. Part way through, I get an
error that says "cannot mount '/media': directory is not empty" and it drops
me into the recovery console.

When I go into /media, all the filesystems are there, I can go into each one,
and the directories contain my content. If I unmount each filesystem under
/media, the directory each one mounts on stays there, as expected, and I don't
see any other files in there besides the mountpoints.

Any advice on how to correct the /media mount problem? I am not even sure
where to look, since apart from the mountpoints the /media directory appears
empty.
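
If it helps, this is roughly what I've been poking at from the recovery
console (happy to post the actual output):

  zfs list -o name,mountpoint,canmount,mounted | grep media
  ls -lA /media    # with the filesystems unmounted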


Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread David Magda
On Tue, June 22, 2010 17:32, Bob Friesenhahn wrote:
> On Tue, 22 Jun 2010, Brian wrote:
>>
>> Is what I did wrong? I was under the impression that zfs wrote a
>> label to each disk so you can move it around between controllers...?
>
> You are correct. Normally, exporting and then re-importing the pool lets
> ZFS find the disks wherever they end up.  Moving disks around without
> first exporting the pool is best avoided.

Perhaps an issue with stale data in "/etc/zfs/zpool.cache"?
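
Something along these lines should show it (a sketch; the export/import is
what actually rewrites the cache entry for the pool):

  zdb -C               # dump the cached pool configs and compare device paths
  zpool export tank
  zpool import tank    # rescan the devices and refresh /etc/zfs/zpool.cache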




Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Bob Friesenhahn

On Tue, 22 Jun 2010, Brian wrote:

> Is what I did wrong? I was under the impression that zfs wrote a
> label to each disk so you can move it around between controllers...?

You are correct. Normally, exporting and then re-importing the pool lets
ZFS find the disks wherever they end up.  Moving disks around without
first exporting the pool is best avoided.
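
For what it's worth, you can inspect those labels directly with zdb; the
device below is just one of yours taken from your format output:

  zdb -l /dev/rdsk/c4t17d0s0    # prints the ZFS labels stored on that disk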


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


[zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Brian
Did a search, but could not find the info I am looking for.

I built out my OSOL system about a month ago and have been gradually making
changes before I move it into production.  I have set up a mirrored rpool and a
6-drive raidz2 pool for data.  In the system I have two 8-port SAS cards and 6
ports on the motherboard.  I was short on SAS-to-SATA cables, so I originally
built the system out using the 6 ports on the motherboard and one SAS-to-SATA
breakout cable.  My new cables came and I reran all the cabling to spread the
drives across the controllers...

Rebooted and my raidz2 pool is unavailable:
  pool: tank
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
tank UNAVAIL  0 0 0  insufficient replicas
  raidz2-0   UNAVAIL  0 0 0  insufficient replicas
c4t16d0  UNAVAIL  0 0 0  cannot open
c4t17d0  ONLINE   0 0 0
c10t0d0  ONLINE   0 0 0
c10t2d0  ONLINE   0 0 0
c10t3d0  UNAVAIL  0 0 0  cannot open
c10t5d0  FAULTED  0 0 0  corrupted data

However, when I look at the format output I see all my disks, but the device
names don't seem to match up with the pool:
AVAILABLE DISK SELECTIONS:
   0. c4t17d0 
  /p...@0,0/pci1002,5...@3/pci1014,3...@0/s...@11,0
   1. c4t18d0 
  /p...@0,0/pci1002,5...@3/pci1014,3...@0/s...@12,0
   2. c6t4d0 
  /p...@0,0/pci1002,5...@2/pci1014,3...@0/s...@4,0
   3. c6t9d0 
  /p...@0,0/pci1002,5...@2/pci1014,3...@0/s...@9,0
   4. c10t1d0 
  /p...@0,0/pci1462,7...@11/d...@1,0
   5. c10t2d0 
  /p...@0,0/pci1462,7...@11/d...@2,0
   6. c10t4d0 
  /p...@0,0/pci1462,7...@11/d...@4,0
   7. c10t5d0 
  /p...@0,0/pci1462,7...@11/d...@5,0

How do I recover my raidz2 pool?

Is what I did wrong? I was under the impression that zfs wrote a label to each 
disk so you can move it around between controllers...?


Re: [zfs-discuss] One dataset per user?

2010-06-22 Thread Arne Jansen

Arne Jansen wrote:
> Paul B. Henson wrote:
>> On Sun, 20 Jun 2010, Arne Jansen wrote:
>>
>>> In my experience the boot time mainly depends on the number of datasets,
>>> not the number of snapshots. 200 datasets is fairly easy (we have >7000,
>>> but did some boot-time tuning).
>>
>> What kind of boot tuning are you referring to? We've got about 8k
>> filesystems on an x4500, it takes about 2 hours for a full boot cycle
>> which is kind of annoying. The majority of that time is taken up with
>> NFS sharing, which currently scales very poorly :(.
>
> As you said most of the time is spent for nfs sharing, but mounting also
> isn't as fast as it could be. We found that the zfs utility is very
> inefficient as it does a lot of unnecessary and costly checks. We set
> mountpoint to legacy and handle mounting/sharing ourselves in a massively
> parallel fashion (50 processes). Using the system utilities makes things
> a lot better, but you can speed up sharing a lot more by setting the
> SHARE_NOINUSE_CHECK environment variable before invoking share(1M). With
> this you should be able to share your tree in about 10 seconds.

I forgot the disclaimer: with this flag set you can crash your machine if you
call share with improper arguments. IIRC it skips the check of whether the fs
is already shared, so it cannot handle a re-share properly.

> Good luck,
> Arne


Re: [zfs-discuss] One dataset per user?

2010-06-22 Thread Arne Jansen

Paul B. Henson wrote:
> On Sun, 20 Jun 2010, Arne Jansen wrote:
>>
>> In my experience the boot time mainly depends on the number of datasets,
>> not the number of snapshots. 200 datasets is fairly easy (we have >7000,
>> but did some boot-time tuning).
>
> What kind of boot tuning are you referring to? We've got about 8k
> filesystems on an x4500, it takes about 2 hours for a full boot cycle which
> is kind of annoying. The majority of that time is taken up with NFS
> sharing, which currently scales very poorly :(.


As you said most of the time is spent for nfs sharing, but mounting also isn't
as fast as it could be. We found that the zfs utility is very inefficient as
it does a lot of unnecessary and costly checks. We set mountpoint to legacy
and handle mounting/sharing ourselves in a massively parallel fashion (50
processes). Using the system utilities makes things a lot better, but you
can speed up sharing a lot more by setting the SHARE_NOINUSE_CHECK environment
variable before invoking share(1M). With this you should be able to share your
tree in about 10 seconds.
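
A stripped-down sketch of what our script does (untested as posted; the
dataset tree and mountpoint scheme are placeholders, and it runs the work in
batches of 50 rather than a real worker pool):

  #!/bin/ksh
  # share(1M) skips its in-use check when this is set; don't use it to
  # re-share filesystems that are already shared
  SHARE_NOINUSE_CHECK=1
  export SHARE_NOINUSE_CHECK

  i=0
  for fs in $(zfs list -H -o name -t filesystem -r tank/home); do
      (
          mp=/export/home/${fs##*/}      # placeholder mountpoint scheme
          mkdir -p "$mp"
          mount -F zfs "$fs" "$mp"       # datasets all have mountpoint=legacy
          share -F nfs -o rw "$mp"
      ) &
      i=$((i + 1))
      [ $((i % 50)) -eq 0 ] && wait      # let each batch of 50 finish
  done
  wait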

Good luck,
Arne





Re: [zfs-discuss] One dataset per user?

2010-06-22 Thread Paul B. Henson
On Sun, 20 Jun 2010, Arne Jansen wrote:

> In my experience the boot time mainly depends on the number of datasets,
> not the number of snapshots. 200 datasets is fairly easy (we have >7000,
> but did some boot-time tuning).

What kind of boot tuning are you referring to? We've got about 8k
filesystems on an x4500, it takes about 2 hours for a full boot cycle which
is kind of annoying. The majority of that time is taken up with NFS
sharing, which currently scales very poorly :(.

Thanks...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] ls says: /tank/ws/fubar: Operation not applicable

2010-06-22 Thread Gordon Ross
lstat64("/tank/ws/fubar", 0x080465D0)   Err#89 ENOSYS


Re: [zfs-discuss] ls says: /tank/ws/fubar: Operation not applicable

2010-06-22 Thread Andrew Gabriel

Gordon Ross wrote:
> Anyone know why my ZFS filesystem might suddenly start
> giving me an error when I try to "ls -d" the top of it?
> i.e.: ls -d /tank/ws/fubar
> /tank/ws/fubar: Operation not applicable
>
> zpool status says all is well.  I've tried snv_139 and snv_137
> (my latest and previous installs).  It's an amd64 box.
> Both OS versions show the same problem.
>
> Do I need to run a scrub?  (will take days...)
>
> Other ideas?


It might be interesting to run it under truss, to see which syscall is 
returning that error.
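
Something like this should do it (truss writes to stderr, hence the
redirect; just a sketch, adjust to taste):

  truss ls -d /tank/ws/fubar 2>&1 | grep 'Err#'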


--
Andrew Gabriel


[zfs-discuss] ls says: /tank/ws/fubar: Operation not applicable

2010-06-22 Thread Gordon Ross
Anyone know why my ZFS filesystem might suddenly start
giving me an error when I try to "ls -d" the top of it?
i.e.: ls -d /tank/ws/fubar
/tank/ws/fubar: Operation not applicable

zpool status says all is well.  I've tried snv_139 and snv_137
(my latest and previous installs).  It's an amd64 box.
Both OS versions show the same problem.

Do I need to run a scrub?  (will take days...)

Other ideas?

Thanks,
Gordon


Re: [zfs-discuss] SLOG striping? (Bob Friesenhahn)

2010-06-22 Thread Ross Walker
On Jun 22, 2010, at 8:40 AM, Jeff Bacon  wrote:

>> The term 'stripe' has been so outrageously severely abused in this
>> forum that it is impossible to know what someone is talking about when
>> they use the term.  Seemingly intelligent people continue to use wrong
>> terminology because they think that protracting the confusion somehow
>> helps new users.  We are left with no useful definition of
>> 'striping'.
> 
> "There is no striping." 
> (I'm sorry, I couldn't resist.)

"There is no spoon"




Re: [zfs-discuss] SLOG striping? (Bob Friesenhahn)

2010-06-22 Thread Jeff Bacon
> The term 'stripe' has been so outrageously severely abused in this
> forum that it is impossible to know what someone is talking about when
> they use the term.  Seemingly intelligent people continue to use wrong
> terminology because they think that protracting the confusion somehow
> helps new users.  We are left with no useful definition of
> 'striping'.

"There is no striping." 
(I'm sorry, I couldn't resist.)
