com20020-pci IRQ problem
Hello,

I have a Contemporary Controls PCI20 card and am trying to use the
latest arcnet drivers under Mandrake 8.0 with a 2.4.4 kernel. I do the
following:

    insmod arcnet
    insmod arc-rawmode
    insmod com20020

which all load just fine. However, when I do:

    insmod com20020-pci

I get the following errors:

    init_module: No such device
    Hint: insmod errors can be caused by incorrect module parameters,
    including invalid IO or IRQ parameters

In syslog I get:

    May 17 09:39:08 collector kernel: arcnet: COM20020 PCI support
    May 17 09:39:08 collector kernel: PCI: Found IRQ 10 for device 00:04.0

/proc/pci reports the following for the PCI20:

    Bus 0, device 5, function 0:
      Network controller: Contemporary Controls CCSI PCI20-CXB ARCnet (rev 1).
      IRQ 10.
      Non-prefetchable 32 bit memory at 0x4100 [0x417f].
      I/O at 0x2080 [0x20ff].
      I/O at 0x2400 [0x240f].

/proc/ioports reports for the PCI20:

    2080-20ff : Contemporary Controls CCSI PCI20-CXB ARCnet
    2400-240f : Contemporary Controls CCSI PCI20-CXB ARCnet
      2400-2408 : arcnet (COM20020)

/proc/interrupts registers nothing for IRQ 10 (but I assume this is
normal because the driver never correctly loads?).

I would appreciate any help in getting this module loaded. There are no
IRQ conflicts on IRQ 10, and I tried putting the card in different slots
and on other IRQs, to no avail.

Thanks,
Mike
Re: [patch] optimize o_direct on block device - v3
Hi Andrew,

Thanks again for finding the fix to the problem I reported. Can you
tell me when I might expect this fix to show up in 2.6.20-rc?

Thanks,
Mike

Andrew Morton wrote:
> On Thu, 11 Jan 2007 13:21:57 -0600
> Michael Reed <[EMAIL PROTECTED]> wrote:
>
>> Testing on my ia64 system reveals that this patch introduces a
>> data integrity error for direct i/o to a block device. Device
>> errors which result in i/o failure do not propagate to the
>> process issuing direct i/o to the device.
>>
>> This can be reproduced by doing writes to a fibre channel block
>> device and then disabling the switch port connecting the host
>> adapter to the switch.
>
> Does this fix it?
>
> diff -puN fs/block_dev.c~a fs/block_dev.c
> --- a/fs/block_dev.c~a
> +++ a/fs/block_dev.c
> @@ -146,7 +146,7 @@ static int blk_end_aio(struct bio *bio,
>  		iocb->ki_nbytes = -EIO;
>  
>  	if (atomic_dec_and_test(bio_count)) {
> -		if (iocb->ki_nbytes < 0)
> +		if ((long)iocb->ki_nbytes < 0)
>  			aio_complete(iocb, iocb->ki_nbytes, 0);
>  		else
>  			aio_complete(iocb, iocb->ki_left, 0);
> _
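
The cast is the whole fix: ki_nbytes is an unsigned field (size_t in
this era's struct kiocb, which is why the patch casts rather than just
testing), so the bare comparison "iocb->ki_nbytes < 0" can never be
true and the -EIO stored earlier in blk_end_aio() is handed to
aio_complete() as a huge positive byte count. A minimal standalone
sketch of the pitfall, with illustrative names rather than the kernel's:

#include <stdio.h>

int main(void)
{
	size_t nbytes = (size_t)-5;	/* like storing -EIO in an unsigned field */

	/* Unsigned compare: always false, so the error is silently lost. */
	if (nbytes < 0)
		printf("unsigned compare saw the error\n");	/* never prints */

	/* Cast to signed first, as the patch does: the error is seen. */
	if ((long)nbytes < 0)
		printf("signed compare saw the error: %ld\n", (long)nbytes);

	return 0;
}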
Re: [patch] scsi: use lock per host instead of per device for shared queue tag host
How 'bout a comment in scsi_host.h indicating that the pointer will be
NULL unless initialized by the driver? "Protect shared block queue tag"
is unique to stex. Perhaps have no comment on the variable declaration
in scsi_host.h and explain why you use it in stex.

Mike

Ed Lin wrote:
> The block layer uses a lock to protect the request queue. Every scsi
> device has a unique request queue, and the queue lock is the default
> lock in struct request_queue. This is good for normal cases. But for a
> host with a shared queue tag (e.g. stex controllers), a queue lock per
> device means the shared queue tag is not protected when multiple
> devices are accessed at the same time. This patch is a simple fix for
> this situation, introducing a host queue lock to protect the shared
> queue tag. Without this patch we see various kernel panics (including
> the BUG() and kernel errors in blk_queue_start_tag and
> blk_queue_end_tag of ll_rw_blk.c) when accessing different devices
> simultaneously (e.g. copying a big file from one device to another on
> smp kernels).
>
> This is against kernel 2.6.20-rc5.
>
> Signed-off-by: Ed Lin <[EMAIL PROTECTED]>
> ---
>  drivers/scsi/scsi_lib.c  | 2 +-
>  drivers/scsi/stex.c      | 2 ++
>  include/scsi/scsi_host.h | 3 +++
>  3 files changed, 6 insertions(+), 1 deletion(-)
>
> diff -purN a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> --- a/drivers/scsi/scsi_lib.c	2007-01-23 14:40:28.000000000 -0800
> +++ b/drivers/scsi/scsi_lib.c	2007-01-23 14:46:43.000000000 -0800
> @@ -1574,7 +1574,7 @@ struct request_queue *__scsi_alloc_queue
>  {
>  	struct request_queue *q;
>  
> -	q = blk_init_queue(request_fn, NULL);
> +	q = blk_init_queue(request_fn, shost->req_q_lock);
>  	if (!q)
>  		return NULL;
>  
> diff -purN a/drivers/scsi/stex.c b/drivers/scsi/stex.c
> --- a/drivers/scsi/stex.c	2007-01-23 14:40:28.000000000 -0800
> +++ b/drivers/scsi/stex.c	2007-01-23 14:48:59.000000000 -0800
> @@ -1254,6 +1254,8 @@ stex_probe(struct pci_dev *pdev, const s
>  	if (err)
>  		goto out_free_irq;
>  
> +	spin_lock_init(&host->__req_q_lock);
> +	host->req_q_lock = &host->__req_q_lock;
>  	err = scsi_init_shared_tag_map(host, host->can_queue);
>  	if (err) {
>  		printk(KERN_ERR DRV_NAME "(%s): init shared queue failed\n",
> diff -purN a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
> --- a/include/scsi/scsi_host.h	2007-01-23 14:40:29.000000000 -0800
> +++ b/include/scsi/scsi_host.h	2007-01-23 14:57:04.000000000 -0800
> @@ -508,6 +508,9 @@ struct Scsi_Host {
>  	spinlock_t		default_lock;
>  	spinlock_t		*host_lock;
>  
> +	spinlock_t		__req_q_lock;
> +	spinlock_t		*req_q_lock;	/* protect shared block queue tag */
> +
>  	struct mutex		scan_mutex;	/* serialize scanning activity */
>  
>  	struct list_head	eh_cmd_q;
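
The locking pattern here generalizes: a per-queue lock cannot serialize
access to a tag map owned by the host, so the lock has to live with the
shared resource. A minimal userspace sketch of the same idea, with
pthread mutexes standing in for spinlocks and all names hypothetical:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_TAGS 4

struct host {				/* owns the shared tag map */
	pthread_mutex_t tag_lock;	/* one lock covering all devices */
	bool tag_in_use[MAX_TAGS];
};

struct device {				/* several of these per host */
	struct host *host;
	pthread_mutex_t queue_lock;	/* per-device: cannot protect the map */
};

/* Tag allocation must take the host-wide lock, not dev->queue_lock. */
static int alloc_tag(struct device *dev)
{
	struct host *h = dev->host;
	int tag = -1;

	pthread_mutex_lock(&h->tag_lock);
	for (int i = 0; i < MAX_TAGS; i++) {
		if (!h->tag_in_use[i]) {
			h->tag_in_use[i] = true;
			tag = i;
			break;
		}
	}
	pthread_mutex_unlock(&h->tag_lock);
	return tag;			/* -1 when the shared map is full */
}

int main(void)
{
	struct host h = { .tag_lock = PTHREAD_MUTEX_INITIALIZER };
	struct device a = { .host = &h }, b = { .host = &h };

	/* Two devices draw from one map; the host lock keeps tags distinct. */
	printf("dev a got tag %d\n", alloc_tag(&a));
	printf("dev b got tag %d\n", alloc_tag(&b));
	return 0;
}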
Re: Infinite retries reading the partition table
Luben Tuikov wrote:

...snip...

> This statement in scsi_io_completion() causes the infinite retry loop:
>
>	if (scsi_end_request(cmd, 1, good_bytes, !!result) == NULL)
>		return;

The code in 2.6.19 is "result==0", not "!!result", which is logically
the same as "result!=0". Did you mean to change the logic here? Am I
missing something?

Mike
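
For the record, "!!result" normalizes any non-zero value to 1, so as a
truth value it is equivalent to "result != 0" and the exact inverse of
the "result == 0" that 2.6.19 passes, which is why the two variants
behave oppositely. A trivial illustration:

#include <stdio.h>

int main(void)
{
	int result = 0x08000002;	/* an illustrative non-zero SCSI result */

	printf("!!result    = %d\n", !!result);		/* 1 */
	printf("result != 0 = %d\n", result != 0);	/* 1: same as !! */
	printf("result == 0 = %d\n", result == 0);	/* 0: the inverse */
	return 0;
}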
Re: [PATCH] support O_DIRECT in tmpfs/ramfs
Peter Staubach wrote:
> Hua Zhong wrote:
>> Hi,
>>
>> A while ago there was a discussion about supporting direct-io on tmpfs.
>>
>> Here is a simple patch that does it.
>>
>> 1. A new fs flag FS_RAM_BASED is added and the O_DIRECT flag is ignored
>>    if this flag is set (suggestions on a better name?)
>>
>> 2. Specify FS_RAM_BASED for tmpfs and ramfs.
>>
>> 3. When EINVAL is returned only a fput is done. I changed it to go
>>    through cleanup_all. But there is still a cleanup problem:
>>
>>    If a new file is created and then EINVAL is returned due to O_DIRECT,
>>    the file is still left on the disk. I am not exactly sure how to fix
>>    it other than adding another fs flag so we could check O_DIRECT
>>    support at a much earlier stage. Comments on how to fix it?
>
> This would seem to create two different sets of O_DIRECT semantics,
> wouldn't it? I think that it would be possible to develop an application
> using one of these FS_RAM_BASED file systems as the testbed, but then be
> surprised when the application failed to work on other file systems such
> as ext3.

As I'm ignorant with regard to what is needed for "compliant" support
of O_DIRECT on tmpfs, what are the issues with actually implementing
the proper semantics, including the alignment and any transfer length
restrictions?

My $.02 is that the implementation should be fully compliant with the
current semantics or it shouldn't be implemented. And I think it should
be implemented.

Mike

> ps
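
For context, the alignment and transfer length restrictions mentioned
above are the usual O_DIRECT contract: the user buffer, file offset,
and I/O size must all be suitably aligned, or the kernel returns
EINVAL. A minimal sketch of a compliant O_DIRECT write; the 4096-byte
alignment is a conservative assumption, as the real requirement depends
on the filesystem and device:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 4096;	/* block-sized and block-aligned */
	void *buf;

	/* O_DIRECT needs an aligned user buffer; plain malloc() is not enough. */
	if (posix_memalign(&buf, 4096, len))
		return 1;
	memset(buf, 0xab, len);

	int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");	/* tmpfs of this era fails here with EINVAL */
		return 1;
	}
	if (write(fd, buf, len) != (ssize_t)len)
		perror("write");
	close(fd);
	free(buf);
	return 0;
}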
Re: REGRESSION: 2.6.20-rc3-git4: EIO not returned to direct i/o application following disk error
More info.

Linux duck 2.6.20-rc3-git4 #10 SMP PREEMPT Fri Jan 5 10:58:11 CST 2007 ia64 ia64 ia64 GNU/Linux

For this test I used an Emulex host adapter, but the problem is not
unique to it. It also occurs when the targets are connected to either
LSI or QLogic adapters.

Using O_DIRECT, once the targets are removed the application fails to
terminate until it completes its write phase and then attempts to
read/verify the written data. It then reports data miscompares. It
should get an EIO and fail instead of continuing to have the write (and
read) operations succeed. It used to get EIO.

As this is easily reproducible, I'm happy to try any changes which
might address the problem.

duck /root# ./disktest -cvwbB -n 10 -u 16 /dev/sde1
Tue Jan  9 15:21:39 2007: ./disktest: options: c v w b B n u
./disktest: setting O_DIRECT
Tue Jan  9 15:21:39 2007: ./disktest: /dev/sde1: maxiou 4096, secsize 64
Tue Jan  9 15:21:39 2007: ./disktest: /dev/sde1: unique id is 0, random number seed is 0
Tue Jan  9 15:21:39 2007: ./disktest: /dev/sde1: starting block 0, using direct i/o.
Tue Jan  9 15:21:39 2007: ./disktest: running 10 passes.
Tue Jan  9 15:21:39 2007: ./disktest: /dev/sde1: testing 12799 i/o units, 16 sectors in length, min io size 1, pass = 1
Tue Jan  9 15:21:39 2007: ./disktest: /dev/sde1: 0x6473 - sequential writes
Tue Jan  9 15:21:42 2007: ./disktest: /dev/sde1: 0x6473 - sequential reads
Tue Jan  9 15:21:44 2007: ./disktest: /dev/sde1: 0x6473 - random writes
Tue Jan  9 15:21:44 2007: ./disktest: /dev/sde1: 0x6473 - sequential reads
Tue Jan  9 15:21:47 2007: ./disktest: /dev/sde1: 0x6473 - 256 random reads
Tue Jan  9 15:21:47 2007: ./disktest: /dev/sde1: testing 12799 i/o units, 16 sectors in length, min io size 1, pass = 2
Tue Jan  9 15:21:47 2007: ./disktest: /dev/sde1: 0x6473 - sequential writes

-- portdisable

Tue Jan  9 15:22:39 2007: ./disktest: /dev/sde1: 0x6473 - sequential reads
Tue Jan  9 15:22:39 2007: ./disktest: /dev/sde1: data compare error number 1 at 0x2c4000: word 0 of block 0 (0)
act = bad0bad0deaddead exp =

and from the console:

duck /root# lpfc 0001:00:02.0: 0:1305 Link Down Event x2 received Data: x2 x20 x110
rport-2:0-2: blocked FC remote port time out: removing target and saving binding
rport-2:0-3: blocked FC remote port time out: removing target and saving binding
lpfc 0001:00:02.0: 0:0203 Devloss timeout on WWPN 10:0:0:6:2b:8:b:9c NPort x130100 Data: x208 x7 x5
sd 2:0:0:0: SCSI error: return code = 0x0001
end_request: I/O error, dev sde, sector 179986
lpfc 0001:00:02.0: 0:0203 Devloss timeout on WWPN 10:0:0:6:2b:8:b:9d NPort x13 Data: x208 x7 x4
scsi 2:0:0:0: rejecting I/O to dead device
scsi 2:0:0:0: rejecting I/O to dead device
scsi 2:0:0:0: rejecting I/O to dead device
scsi 2:0:0:0: rejecting I/O to dead device
scsi 2:0:0:0: rejecting I/O to dead device
(huge numbers of these occur)

I tried the same test using O_FSYNC instead of O_DIRECT, and the write
error following the portdisable propagated to the test, which exited
with an EIO.

Mike

Michael Reed wrote:
> Testing using 2.6.20-rc3-git4 I observed that my direct i/o test
> application no longer receives an EIO when the fc transport deletes
> a target following a fibre channel switch port disable.
>
> With 2.6.19 EIO is returned and the application terminates.
>
> With 2.6.20, the requested read length is returned with incorrect
> data in the buffer.
>
> (I was playing around with an error recovery patch when I first
> discovered this, and do believe it is not limited in scope to
> targets being removed by the transport.)
>
> This is a serious regression which puts customer data at risk.
>
> (Is there a formal mechanism for filing a bug?)
>
> Mike
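
The application-side expectation here is simple: with O_DIRECT (or
O_FSYNC), a write to a removed device should return -1 with errno set
to EIO rather than claiming the full length. A hedged sketch of the
kind of check a test like disktest presumably performs (disktest's
actual source is not shown here; this is illustrative):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Returns 0 on success, -1 on a hard I/O error that must abort the test. */
static int checked_write(int fd, const void *buf, size_t len)
{
	ssize_t n = write(fd, buf, len);

	if (n == (ssize_t)len)
		return 0;		/* full transfer: good */
	if (n < 0 && errno == EIO) {
		fprintf(stderr, "write: %s -- device failed, aborting\n",
			strerror(errno));
		return -1;		/* the regression: this path is never taken */
	}
	fprintf(stderr, "short or unexpected write: %zd\n", n);
	return -1;
}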
Re: [patch] optimize o_direct on block device - v3
Testing on my ia64 system reveals that this patch introduces a data
integrity error for direct i/o to a block device. Device errors which
result in i/o failure do not propagate to the process issuing direct
i/o to the device.

This can be reproduced by doing writes to a fibre channel block device
and then disabling the switch port connecting the host adapter to the
switch.

Can this patch be adjusted to take this into account? Or pulled?

It was introduced with patch-2.6.19-git22. I believe the commit is
e61c90188b9956edae1105eef361d8981a352fcd.

http://marc.theaimsgroup.com/?l=linux-scsi&m=116803117327546&w=2
http://marc.theaimsgroup.com/?l=linux-scsi&m=116838403700214&w=2

I'm happy to test any suggested fixes.

Thanks,
Mike Reed
SGI

Chen, Kenneth W wrote:
> This patch implements a block device specific .direct_IO method
> instead of going through the generic direct_io_worker for block
> devices.
>
> direct_io_worker is fairly complex because it needs to handle O_DIRECT
> on file systems, where it needs to perform block allocation, hole
> detection, extending files on write, and tons of other corner cases.
> The end result is that it takes tons of CPU time to submit an I/O.
>
> For a block device, the block allocation is much simpler and a tight
> triple loop can be written to iterate each iovec and each page within
> the iovec in order to construct/prepare a bio structure and then
> subsequently submit it to the block layer. This significantly speeds
> up O_DIRECT on block devices.
>
> Signed-off-by: Ken Chen <[EMAIL PROTECTED]>
>
> ---
> This is v3 of the patch; I have addressed all the comments from
> Andrew, Christoph, and Zach. Too many to list here for v2->v3 changes.
> I've created 34 test cases specifically for corner cases and tested
> this patch. (My monkey test code is on
> http://kernel-perf.sourceforge.net/diotest.)
>
> diff -Nurp linux-2.6.19/fs/block_dev.c linux-2.6.19.ken/fs/block_dev.c
> --- linux-2.6.19/fs/block_dev.c	2006-11-29 13:57:37.000000000 -0800
> +++ linux-2.6.19.ken/fs/block_dev.c	2006-12-06 13:16:43.000000000 -0800
> @@ -129,43 +129,188 @@ blkdev_get_block(struct inode *inode, se
>  	return 0;
>  }
>  
> -static int
> -blkdev_get_blocks(struct inode *inode, sector_t iblock,
> -		struct buffer_head *bh, int create)
> +static int blk_end_aio(struct bio *bio, unsigned int bytes_done, int error)
>  {
> -	sector_t end_block = max_block(I_BDEV(inode));
> -	unsigned long max_blocks = bh->b_size >> inode->i_blkbits;
> +	struct kiocb *iocb = bio->bi_private;
> +	atomic_t *bio_count = &iocb->ki_bio_count;
>  
> -	if ((iblock + max_blocks) > end_block) {
> -		max_blocks = end_block - iblock;
> -		if ((long)max_blocks <= 0) {
> -			if (create)
> -				return -EIO;	/* write fully beyond EOF */
> -			/*
> -			 * It is a read which is fully beyond EOF.  We return
> -			 * a !buffer_mapped buffer
> -			 */
> -			max_blocks = 0;
> -		}
> +	if (bio_data_dir(bio) == READ)
> +		bio_check_pages_dirty(bio);
> +	else {
> +		bio_release_pages(bio);
> +		bio_put(bio);
> +	}
> +
> +	/* iocb->ki_nbytes stores error code from LLDD */
> +	if (error)
> +		iocb->ki_nbytes = -EIO;
> +
> +	if (atomic_dec_and_test(bio_count)) {
> +		if (iocb->ki_nbytes < 0)
> +			aio_complete(iocb, iocb->ki_nbytes, 0);
> +		else
> +			aio_complete(iocb, iocb->ki_left, 0);
>  	}
>  
> -	bh->b_bdev = I_BDEV(inode);
> -	bh->b_blocknr = iblock;
> -	bh->b_size = max_blocks << inode->i_blkbits;
> -	if (max_blocks)
> -		set_buffer_mapped(bh);
>  	return 0;
>  }
>  
> +#define VEC_SIZE	16
> +struct pvec {
> +	unsigned short nr;
> +	unsigned short idx;
> +	struct page *page[VEC_SIZE];
> +};
> +
> +#define PAGES_SPANNED(addr, len)	\
> +	(DIV_ROUND_UP((addr) + (len), PAGE_SIZE) - (addr) / PAGE_SIZE);
> +
> +/*
> + * get page pointer for user addr, we internally cache struct page array for
> + * (addr, count) range in pvec to avoid frequent call to get_user_pages. If
> + * internal page list is exhausted, a batch count of up to VEC_SIZE is used
> + * to get next set of page struct.
> + */
> +static struct page *blk_get_page(unsigned long addr, size_t count, int rw,
> +				 struct pvec *pvec)
> +{
> +	int ret, nr_pages;
> +	if (pvec->idx == pvec->nr) {
> +		nr_pages = PAGES_SPANNED(addr, count);
> +		nr_pages = min(nr_pages, VEC_SIZE);
> +		down_read(&current->mm->mmap_sem);
> +		ret = get_user_pages(current, current->mm, addr, nr_pages,
> +				     rw == READ, 0, pvec->page, NULL);
> +		up_read(&current->mm->mmap_sem);
> +		if (ret < 0)
>
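
One detail worth unpacking from the quoted patch: PAGES_SPANNED
computes how many pages a byte range touches as the page index just
past the end of the range minus the page index of its start. A
standalone check of the arithmetic (PAGE_SIZE assumed to be 4096 for
the example):

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define PAGES_SPANNED(addr, len) \
	(DIV_ROUND_UP((addr) + (len), PAGE_SIZE) - (addr) / PAGE_SIZE)

int main(void)
{
	/* 1 byte at the end of a page plus 1 byte into the next: 2 pages */
	printf("%lu\n", PAGES_SPANNED(4095UL, 2UL));	/* prints 2 */
	/* a full page, page-aligned: 1 page */
	printf("%lu\n", PAGES_SPANNED(8192UL, 4096UL));	/* prints 1 */
	return 0;
}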
Re: [patch] optimize o_direct on block device - v3
Andrew Morton wrote:
> On Thu, 11 Jan 2007 13:21:57 -0600
> Michael Reed <[EMAIL PROTECTED]> wrote:
>
>> Testing on my ia64 system reveals that this patch introduces a
>> data integrity error for direct i/o to a block device. Device
>> errors which result in i/o failure do not propagate to the
>> process issuing direct i/o to the device.
>>
>> This can be reproduced by doing writes to a fibre channel block
>> device and then disabling the switch port connecting the host
>> adapter to the switch.
>
> Does this fix it?

Yes it does! Thank you for finding this so quickly.

Mike

> diff -puN fs/block_dev.c~a fs/block_dev.c
> --- a/fs/block_dev.c~a
> +++ a/fs/block_dev.c
> @@ -146,7 +146,7 @@ static int blk_end_aio(struct bio *bio,
>  		iocb->ki_nbytes = -EIO;
>  
>  	if (atomic_dec_and_test(bio_count)) {
> -		if (iocb->ki_nbytes < 0)
> +		if ((long)iocb->ki_nbytes < 0)
>  			aio_complete(iocb, iocb->ki_nbytes, 0);
>  		else
>  			aio_complete(iocb, iocb->ki_left, 0);
> _
[no subject]
unsubscribe linux-kernel