Call for Area SATA II RAID controller testers [Fwd: svn commit: r215234 - head/sys/dev/arcmsr]

2010-11-13 Thread Xin LI

Hi,

I have just committed a new vendor release of the arcmsr(4) driver.  It is
intended for 8.2-RELEASE, so please test it if you can.

Thanks in advance!

(Note: I have a tarball at
http://people.freebsd.org/~delphij/misc/arcmsr.tar.xz which can be used
on 8.x systems; untar it over /usr/src and rebuild the kernel or the
module, depending on your configuration.)
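
For 8.x users, the note above boils down to roughly the following (a sketch
only: the tarball URL is the one from the message, GENERIC stands in for
whatever kernel configuration you actually build, and your tar must
understand xz):

# fetch http://people.freebsd.org/~delphij/misc/arcmsr.tar.xz
# tar -C /usr/src -xJf arcmsr.tar.xz
# cd /usr/src
# make buildkernel KERNCONF=GENERIC && make installkernel KERNCONF=GENERIC

or, if arcmsr is built as a module on your system:

# cd /usr/src/sys/modules/arcmsr && make && make install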

-  Original Message 
Subject: svn commit: r215234 - head/sys/dev/arcmsr
Date: Sat, 13 Nov 2010 08:58:36 +0000 (UTC)
From: Xin LI delp...@freebsd.org
To: src-committ...@freebsd.org, svn-src-...@freebsd.org,
svn-src-h...@freebsd.org

Author: delphij
Date: Sat Nov 13 08:58:36 2010
New Revision: 215234
URL: http://svn.freebsd.org/changeset/base/215234

Log:
  Update to vendor release 1.20.00.19.

  Bug fixes:
* Fixed inquiry data failing comparison at the DV1 step
* Fixed bad range input in bus_alloc_resource for ADAPTER_TYPE_B
* Fixed the arcmsr driver preventing arcsas support for the Areca SAS HBA ARC13x0

  Many thanks to Areca for continuing to support FreeBSD.

  This commit is intended for MFC before 8.2-RELEASE.

  Submitted by:   Ching-Lung Huang ching2048 areca com tw

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Can't use wireless networking after upgrade the world recently

2010-11-13 Thread Bernhard Schmidt
On Saturday 13 November 2010 08:32:05 Yue Wu wrote:
 Hi list,
 
 As the subject says, my wireless networking settings worked fine before
 upgrading to yesterday's stable src, but I don't know why it no longer
 works. I've attached the startup log, loader.conf, rc.conf, and the
 output of ifconfig (truncated the useless parts as needed).
 
 Is any other info needed? Please let me know.
 
 Sorry for my poor English; please let me know if I didn't describe it clearly.

Can you try that with the wlan_amrr module loaded? Adding it to loader.conf 
should be sufficient.
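
For reference, loading the module at boot comes down to one line in
/boot/loader.conf (following the standard <module>_load knob convention):

wlan_amrr_load="YES"

A quick one-off test on the running system, without rebooting, would be
"kldload wlan_amrr".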

-- 
Bernhard


Re: Sense fetching [Was: cdrtools /devel ...]

2010-11-13 Thread Alexander Motin
Brandon Gooch wrote:
 2010/11/5 Alexander Motin m...@freebsd.org:
 Hi.

 I've reviewed the tests that scgcheck runs against the SCSI subsystem. They
 showed a combination of several issues in CAM, ahci(4), and cdrtools itself.
 Several small patches allow us to pass most of those tests:
 http://people.freebsd.org/~mav/sense/

 ahci_resid.patch: Add support for reporting the residual length on data
 underrun. SCSI commands often return results shorter than expected; the
 returned value lets the application know/check how much data it really got.
 It is also important for sense fetching, as ATAPI and USB devices return
 sense as data in response to the REQUEST SENSE command.

 sense_resid.patch: When manually requesting sense data (ATAPI or USB),
 request only as much data as the user asked for (not the fixed structure
 size), and return the respective sense residual length.

 pass_autosence.patch: Unless CAM_DIS_AUTOSENSE is set, always fetch sense
 if not done by the SIM, independently of CAM_PASS_ERR_RECOVER. Since the
 device freeze is released before returning to user level, a user-level
 application by definition can't reliably fetch sense data if some other
 application (like hald) accesses the device at the same time.

 cdrtools.patch: Make libscg (part of cdrtools) on FreeBSD submit the
 wanted sense length to CAM and not clear the sense return buffer. This is
 mostly cosmetic, probably important only for scgcheck.

 Testers and reviewers welcome. I am especially interested in opinions
 about pass_autosence.patch -- maybe we should push sense fetching even
 deeper, to make it work for all cam_periph_runccb() consumers.
 
 Hey mav, sorry to chime in after so long here, but have some of these
 patches been committed (as of r215179)?
 
 Which patches are still applicable for testing? I assume the cdrtools
 patch for sure...

Still uncommitted are pass_autosence.patch and possibly cdrtools.patch.

-- 
Alexander Motin


Re: 8.1-STABLE: problem with unmounting ZFS snapshots

2010-11-13 Thread Andriy Gapon
on 13/11/2010 04:27 Martin Matuska said the following:
 Yes, this is indeed a leak introduced by importing onnv revision 9214
 and it exists in perforce as well - very easy to reproduce.
 
 # mount -t zfs t...@t1 /mnt
 # umount /mnt (-> hang)
 
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6604992
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6810367
 
 This is not compatible with mounting snapshots outside mounted ZFS and I
 was not able to reproduce the errors defined in 6604992 and 6810367
 (they are Solaris-specific). I suggest we comment out this code (from
 head, later MFC and p4 as well).
 
 Patch (should work with HEAD and 8-STABLE):
 http://people.freebsd.org/~mm/patches/zfs/zfs_vfsops.c.patch

Not quite sure, but perhaps it's better to make the logic in the two places
match each other.  That is, I see that the code does a hold on the filesystem
of the covered vnode, but does a rele on the parent ZFS filesystem.
Or is this kind of protection not needed at all on FreeBSD?

-- 
Andriy Gapon


Re: 8.1-STABLE: problem with unmounting ZFS snapshots

2010-11-13 Thread Martin Matuska
No, this is not good for us. Solaris does not allow mounting snapshots on
arbitrary vnodes, as we do; Solaris has them only in .zfs/snapshots. This
allows us to have read-only mounts without even mounting the parent zfs.

Before v15 we were happy with that code and had no issues :-)

I have a very simple testcase where just fixing the VFS_RELE breaks our
forced unmount. Let's say we use the correct VFS_RELE in zfs_vfsops.c:
VFS_RELE(vfsp->mnt_vnodecovered->v_vfsp);

Now let's say you have a mounted filesystem (e.g. md) under /mnt:
/dev/md5 on /mnt (ufs, local)

# mkdir /mnt/test
# mount -t zfs t...@t2 /mnt/test
# umount -f /mnt

Now you will hang because of the second VFS_HOLD. So I stick to my opinion
that this extra protection is more of a problem than a solution in our case
and should be commented out.

On 13.11.2010 11:27, Andriy Gapon wrote:
 on 13/11/2010 04:27 Martin Matuska said the following:
 Yes, this is indeed a leak introduced by importing onnv revision 9214
 and it exists in perforce as well - very easy to reproduce.

 # mount -t zfs t...@t1 /mnt
 # umount /mnt (-> hang)

 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6604992
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6810367

 This is not compatible with mounting snapshots outside mounted ZFS and I
 was not able to reproduce the errors defined in 6604992 and 6810367
 (they are Solaris-specific). I suggest we comment out this code (from
 head, later MFC and p4 as well).

 Patch (should work with HEAD and 8-STABLE):
 http://people.freebsd.org/~mm/patches/zfs/zfs_vfsops.c.patch
 
 Not quite sure, but perhaps it's better to make the logic in each place match
 the other.  That is, I see that the code does hold on a filesystem of a covered
 vnode, but does rele on a parent ZFS filesystem.
 Or is this kind of protection not needed at all for FreeBSD?
 


Re: 8.1-STABLE: problem with unmounting ZFS snapshots

2010-11-13 Thread Andriy Gapon
on 13/11/2010 13:06 Martin Matuska said the following:
 No, this is not good for us. Solaris does not allow mounting of
 snapshots on any vnode, like we do. Solaris has them only in
 .zfs/snapshots. This allows us to have read-only mounts without even
 mounting the parent zfs.
 
 Before v15 we have been happy with that code and had no issues :-)
 
 I have a very simple testcase where just fixing the VFS_RELE breaks our
 forced unmount. Let's say we use the correct VFS_RELE in zfs_vfsops.c:
 VFS_RELE(vfsp->mnt_vnodecovered->v_vfsp);
 
 Now let's say you have a mounted filesystem (e.g. md) under /mnt:
 /dev/md5 on /mnt (ufs, local)
 
 # mkdir /mnt/test
 # mount -t zfs t...@t2 /mnt/test
 # umount -f /mnt
 
 Now you will hang because of the second VFS_HOLD.

Hang here would be bad, I agree.
But I think that the umount shouldn't succeed either, in this case.

 So I stick to my opinion
 that this extra protection is more a problem than a solution in our
 case and it should be commented out.


-- 
Andriy Gapon


Re: 8.1-STABLE: problem with unmounting ZFS snapshots

2010-11-13 Thread Kostik Belousov
On Sat, Nov 13, 2010 at 01:09:55PM +0200, Andriy Gapon wrote:
 on 13/11/2010 13:06 Martin Matuska said the following:
  No, this is not good for us. Solaris does not allow mounting of
  snapshots on any vnode, like we do. Solaris has them only in
  .zfs/snapshots. This allows us to have read-only mounts without even
  mounting the parent zfs.
  
  Before v15 we have been happy with that code and had no issues :-)
  
  I have a very simple testcase where just fixing the VFS_RELE breaks our
  forced unmount. Let's say we use the correct VFS_RELE in zfs_vfsops.c:
  VFS_RELE(vfsp->mnt_vnodecovered->v_vfsp);
  
  Now let's say you have a mounted filesystem (e.g. md) under /mnt:
  /dev/md5 on /mnt (ufs, local)
  
  # mkdir /mnt/test
  # mount -t zfs t...@t2 /mnt/test
  # umount -f /mnt
  
  Now you will hang because of the second VFS_HOLD.
 
 Hang here would be bad, I agree.
 But I think that the umount shouldn't succeed either, in this case.
Normal unmount indeed shall not succeed in this case, because mount
adds a reference to the covered vnode. But forced unmount should be
allowed to proceed.

After unmount, you can use fsid to unmount the lower mount point.
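
A sketch of that recovery, assuming the snapshot mount from the earlier
testcase (the exact fsid is whatever mount -v reports for the lingering
mount, not a literal):

 # umount -f /mnt                 (forced unmount of the covering UFS)
 # mount -v                       (the snapshot mount remains; note its fsid)
 # umount -f <fsid-from-mount-v>  (unmount the now-hidden snapshot mount)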
 
  So I stick to my opinion
  that this extra protection is more a problem than a solution in our
  case and it should be commented out.
 
 
 -- 
 Andriy Gapon




Re: 8.1-STABLE: problem with unmounting ZFS snapshots

2010-11-13 Thread Andriy Gapon
on 13/11/2010 13:21 Kostik Belousov said the following:
 On Sat, Nov 13, 2010 at 01:09:55PM +0200, Andriy Gapon wrote:
 on 13/11/2010 13:06 Martin Matuska said the following:
 No, this is not good for us. Solaris does not allow mounting of
 snapshots on any vnode, like we do. Solaris has them only in
 .zfs/snapshots. This allows us to have read-only mounts without even
 mounting the parent zfs.

 Before v15 we have been happy with that code and had no issues :-)

 I have a very simple testcase where just fixing the VFS_RELE breaks our
 forced unmount. Let's say we use the correct VFS_RELE in zfs_vfsops.c:
 VFS_RELE(vfsp->mnt_vnodecovered->v_vfsp);

 Now let's say you have a mounted filesystem (e.g. md) under /mnt:
 /dev/md5 on /mnt (ufs, local)

 # mkdir /mnt/test
 # mount -t zfs t...@t2 /mnt/test
 # umount -f /mnt

 Now you will hang because of the second VFS_HOLD.

 Hang here would be bad, I agree.
 But I think that the umount shouldn't succeed either, in this case.
 Normal unmount indeed shall not succeed in this case, because mount
 adds a reference to the covered vnode. But forced unmount should be
 allowed to proceed.
 
 After unmount, you can use fsid to unmount the lower mount point.

Ah, I see now, thank you for the explanation.

-- 
Andriy Gapon


Re: Sense fetching [Was: cdrtools /devel ...]

2010-11-13 Thread Brandon Gooch
On Sat, Nov 13, 2010 at 3:34 AM, Alexander Motin m...@freebsd.org wrote:
 Brandon Gooch wrote:
 2010/11/5 Alexander Motin m...@freebsd.org:
 Hi.

 I've reviewed the tests that scgcheck runs against the SCSI subsystem. They
 showed a combination of several issues in CAM, ahci(4), and cdrtools itself.
 Several small patches allow us to pass most of those tests:
 http://people.freebsd.org/~mav/sense/

 ahci_resid.patch: Add support for reporting the residual length on data
 underrun. SCSI commands often return results shorter than expected; the
 returned value lets the application know/check how much data it really got.
 It is also important for sense fetching, as ATAPI and USB devices return
 sense as data in response to the REQUEST SENSE command.

 sense_resid.patch: When manually requesting sense data (ATAPI or USB),
 request only as much data as the user asked for (not the fixed structure
 size), and return the respective sense residual length.

 pass_autosence.patch: Unless CAM_DIS_AUTOSENSE is set, always fetch sense
 if not done by the SIM, independently of CAM_PASS_ERR_RECOVER. Since the
 device freeze is released before returning to user level, a user-level
 application by definition can't reliably fetch sense data if some other
 application (like hald) accesses the device at the same time.

 cdrtools.patch: Make libscg (part of cdrtools) on FreeBSD submit the
 wanted sense length to CAM and not clear the sense return buffer. This is
 mostly cosmetic, probably important only for scgcheck.

 Testers and reviewers welcome. I am especially interested in opinions
 about pass_autosence.patch -- maybe we should push sense fetching even
 deeper, to make it work for all cam_periph_runccb() consumers.

 Hey mav, sorry to chime in after so long here, but have some of these
 patches been committed (as of r215179)?

 Which patches are still applicable for testing? I assume the cdrtools
 patch for sure...

 Still uncommitted are pass_autosence.patch and possibly cdrtools.patch.


OK. The patched kernel and cdrtools have resulted in a working cdrecord
(it burned an ISO successfully) and an endless stream of:

...
(pass0:ata0:0:0:0): Requesting SCSI sense data
(pass0:ata0:0:0:0): SCSI status error
(pass0:ata0:0:0:0): Requesting SCSI sense data
(pass0:ata0:0:0:0): SCSI status error
(pass0:ata0:0:0:0): Requesting SCSI sense data
(pass0:ata0:0:0:0): SCSI status error
(pass0:ata0:0:0:0): Requesting SCSI sense data
(pass0:ata0:0:0:0): SCSI status error
...

ad infinitum until I start cdrecord:

(cd0:ata0:0:0:0): SCSI status error
(cd0:ata0:0:0:0): Requesting SCSI sense data
(cd0:ata0:0:0:0): SCSI status error
(cd0:ata0:0:0:0): READ CAPACITY. CDB: 25 0 0 0 0 0 0 0 0 0
(cd0:ata0:0:0:0): CAM status: SCSI Status Error
(cd0:ata0:0:0:0): SCSI status: Check Condition
(cd0:ata0:0:0:0): SCSI sense: NOT READY asc:4,8 (Logical unit not
ready, long write in progress)
(cd0:ata0:0:0:0): Error 16, Unretryable error
(cd0:ata0:0:0:0): SCSI status error
(cd0:ata0:0:0:0): Requesting SCSI sense data
(cd0:ata0:0:0:0): SCSI status error
(cd0:ata0:0:0:0): READ CAPACITY. CDB: 25 0 0 0 0 0 0 0 0 0
(cd0:ata0:0:0:0): CAM status: SCSI Status Error
(cd0:ata0:0:0:0): SCSI status: Check Condition
(cd0:ata0:0:0:0): SCSI sense: NOT READY asc:4,8 (Logical unit not
ready, long write in progress)
(cd0:ata0:0:0:0): Error 16, Unretryable error
(cd0:ata0:0:0:0): SCSI status error
(cd0:ata0:0:0:0): Requesting SCSI sense data
(cd0:ata0:0:0:0): SCSI status error
(cd0:ata0:0:0:0): READ CAPACITY. CDB: 25 0 0 0 0 0 0 0 0 0
(cd0:ata0:0:0:0): CAM status: SCSI Status Error
(cd0:ata0:0:0:0): SCSI status: Check Condition
(cd0:ata0:0:0:0): SCSI sense: NOT READY asc:4,8 (Logical unit not
ready, long write in progress)
(cd0:ata0:0:0:0): Error 16, Unretryable error
(pass0:ata0:0:0:0): SCSI status error
(pass0:ata0:0:0:0): Requesting SCSI sense data

cdrecord output:

bran...@x300:~$ sudo cdrecord dev=0,0,0
Fedora-14-i686-Live-Desktop.iso
cdrecord: No write mode specified.
cdrecord: Assuming -sao mode.
cdrecord: If your drive does not accept -sao, try -tao.
cdrecord: Future versions of cdrecord may have different drive
dependent defaults.
Cdrecord-ProDVD-ProBD-Clone 3.00 (amd64-unknown-freebsd9.0) Copyright
(C) 1995-2010 Jörg Schilling
scsidev: '0,0,0'
scsibus: 0 target: 0 lun: 0
Using libscg version 'schily-0.9'.
Device type: Removable CD-ROM
Version: 0
Response Format: 2
Capabilities   :
Vendor_info: 'MATSHITA'
Identifikation : 'DVD-RAM UJ-844  '
Revision   : 'RC02'
Device seems to be: Generic mmc2 DVD-R/DVD-RW/DVD-RAM.
Using generic SCSI-3/mmc   CD-R/CD-RW driver (mmc_cdr).
Driver flags   : MMC-3 SWABAUDIO BURNFREE
Supported modes: TAO PACKET SAO
cdrecord: Warning: Cannot read drive buffer.
cdrecord: Warning: The DMA speed test has been skipped.
Starting to write CD/DVD/BD at speed 16 in real SAO mode for single session.
Last chance to quit, starting real write in 0 seconds. Operation starts.
Turning BURN-Free off
cdrecord: WARNING: Drive 

ZFS panic after replacing log device

2010-11-13 Thread Terry Kennedy
I'm posting this to the freebsd-stable and freebsd-fs mailing lists. Followups
should probably happen on freebsd-fs.

I have a ZFS pool configured as:

zpool create data raidz da1 da2 da3 da4 da5 raidz da6 da7 da8 da9 da10 
raidz da11 da12 da13 da14 da15 spare da16 log da0

where da1-16 are WD2003FYYS drives (2TB RE4) and da0 is a 256GB PCI-Express
SSD (name omitted to protect the guilty).

The SSD has been dropping offline randomly - it seems that one or more flash 
modules pop out of their sockets and need to be re-seated frequently for some 
reason.

The most recent time it did that, I replaced the SSD with another one (for some
reason, the manufacturer ties the flash modules to a particular controller, so
just moving the modules results in an offline SSD and an inability to manage it
due to "license limits exceeded" or some such nonsense).

ZFS wasn't happy with the log device being changed and reported it as
corrupted, with the suggested corrective action being to "zpool clear" it. I
did that, then did a "zpool replace data da0 da0", and it claimed to resilver
successfully. I then did a "zpool scrub", and the scrub completed with no
errors. So far, so good.

However, any attempt to write to the array results in a near-immediate panic:

panic: solaris assert: sm->sm_space + size <= sm->sm_size, file:
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c,
line: 93 cpuid = 2

(Screenshot at http://www.tmk.com/transient/zfs-panic.png in case I mis-typed
something).

This is repeatable across reboot / scrub / test cycles. System is 8-STABLE as 
of Fri Nov  5 19:08:35 EDT 2010, on-disk pool is version 4/15, same as the 
kernel.

I know that certain operations on log devices aren't supported until pool
version 19 or thereabouts, but the error messages and zpool command results
gave the impression that what I was doing was supported and worked (when it
didn't). If this is truly a "you can't do that in pool version 15" situation,
perhaps a warning could be added so users don't get fooled into thinking it
worked?

I can give a developer remote console / root access to the box if that would 
help. I have a couple days before I will need to nuke the pool and restore it 
from backups.

Terry Kennedy http://www.tmk.com
te...@tmk.com New York, NY USA