Re: e1000, sshd, and the infamous Corrupted MAC on input

2005-02-02 Thread Matt Mackall
On Wed, Feb 02, 2005 at 10:44:14PM -0500, Ethan Weinstein wrote:
 Hey all,
 
 I've been having quite a time with the e1000 driver running at gigabit
 speeds.  Running it at 100Fdx has never been a problem, which I've
 done for a long time.  Last week I picked up a gigabit switch, and that's
 when the trouble began.  I find that transferring large amounts of data
 using scp invariably ends up with sshd spitting out "Disconnecting:
 Corrupted MAC on input."  After deciding I must have purchased a bum
 switch, I grabbed another model... only to get the same error.
 Finally, I used a crossover cable between the two boxes, which resulted
 in the same error from sshd again.

Well ssh isn't an especially good test as it's hard to debug.

Try transferring large compressed files via netcat and comparing the
results. eg:

host1# nc -l -p 2000 > foo.bz2

host2# nc host1 2000 < foo.bz2

If the md5sums differ, follow up with a cmp -bl to see what changed.

Then we can look at the failure patterns and determine if there's some
data or alignment dependence.

-- 
Mathematics is the supreme nostalgia of our time.


Re: [PATCH] add local bio pool support and modify dm

2005-02-02 Thread Jens Axboe
On Wed, Feb 02 2005, Andrew Morton wrote:
 Dave Olien [EMAIL PROTECTED] wrote:
 
   +extern inline void zero_fill_bio(struct bio *bio)
   +{
   +	unsigned long flags;
   +	struct bio_vec *bv;
   +	int i;
   +
   +	bio_for_each_segment(bv, bio, i) {
   +		char *data = bvec_kmap_irq(bv, flags);
   +		memset(data, 0, bv->bv_len);
   +		flush_dcache_page(bv->bv_page);
   +		bvec_kunmap_irq(data, flags);
   +	}
   +}
 
 heavens.  Why was this made inline?  And extern inline?
 
 It's too big for inlining (and is super-slow anyway) and will cause all
 sorts of unpleasant header file dependencies for all architectures.  bio.h
 now needs to see the implementation of everyone's flush_dcache_page(), for
 example.
 
 
 Something like this?
 
 --- 
 25/include/linux/bio.h~add-local-bio-pool-support-and-modify-dm-uninline-zero_fill_bio
 2005-02-02 18:17:18.225901376 -0800
 +++ 25-akpm/include/linux/bio.h   2005-02-02 18:17:18.230900616 -0800
 @@ -286,6 +286,7 @@ extern void bio_set_pages_dirty(struct b
  extern void bio_check_pages_dirty(struct bio *bio);
  extern struct bio *bio_copy_user(struct request_queue *, unsigned long, 
 unsigned int, int);
  extern int bio_uncopy_user(struct bio *);
 +void zero_fill_bio(struct bio *bio);
  
  #ifdef CONFIG_HIGHMEM
  /*
 @@ -335,18 +336,4 @@ extern inline char *__bio_kmap_irq(struc
 	__bio_kmap_irq((bio), (bio)->bi_idx, (flags))
  #define bio_kunmap_irq(buf,flags)	__bio_kunmap_irq(buf, flags)
  
 -extern inline void zero_fill_bio(struct bio *bio)
 -{
 -	unsigned long flags;
 -	struct bio_vec *bv;
 -	int i;
 -
 -	bio_for_each_segment(bv, bio, i) {
 -		char *data = bvec_kmap_irq(bv, flags);
 -		memset(data, 0, bv->bv_len);
 -		flush_dcache_page(bv->bv_page);
 -		bvec_kunmap_irq(data, flags);
 -	}
 -}
 -
  #endif /* __LINUX_BIO_H */
 diff -puN 
 fs/bio.c~add-local-bio-pool-support-and-modify-dm-uninline-zero_fill_bio 
 fs/bio.c
 --- 
 25/fs/bio.c~add-local-bio-pool-support-and-modify-dm-uninline-zero_fill_bio   
 2005-02-02 18:17:18.227901072 -0800
 +++ 25-akpm/fs/bio.c  2005-02-02 18:17:18.231900464 -0800
 @@ -182,6 +182,21 @@ struct bio *bio_alloc(int gfp_mask, int 
   return bio_alloc_bioset(gfp_mask, nr_iovecs, fs_bio_set);
  }
  
 +void zero_fill_bio(struct bio *bio)
 +{
 +	unsigned long flags;
 +	struct bio_vec *bv;
 +	int i;
 +
 +	bio_for_each_segment(bv, bio, i) {
 +		char *data = bvec_kmap_irq(bv, flags);
 +		memset(data, 0, bv->bv_len);
 +		flush_dcache_page(bv->bv_page);
 +		bvec_kunmap_irq(data, flags);
 +	}
 +}
 +EXPORT_SYMBOL(zero_fill_bio);
 +
  /**
   * bio_put - release a reference to a bio
   * @bio:   bio to release reference to
 _

Yep looks good, thanks Andrew.

-- 
Jens Axboe



Re: [Fastboot] [PATCH] Reserving backup region for kexec based crashdumps.

2005-02-02 Thread Hirokazu Takahashi
Hi Vivek and Eric,

IMHO, why don't we swap not only the contents of the top 640K
but also the kernel working memory for the kdump kernel?

I guess this approach has some good points.

 1. Preallocating the reserved area at boot time is not mandatory,
    and the reserved area can be distributed in small pieces,
    as the original kexec does.

 2. No special linking is required for the kdump kernel.
    Each kdump kernel can be linked in the same way,
    at the address where the original kernel resides.

Am I missing something?


 physical memory
   +---+
   | 640K  +
   |...|   |
   |   | copy
   +---+   |
   |   |   |
   |original-+|
   |kernel |  ||
   |   |  ||
   |...|  ||
   |   |  ||
   |   |  ||
   |   | swap  |
   |   |  ||
   +---+  ||
   |reserved--+
   |area   |  |
   |   |  |
   |kdump  |-+
   |kernel |
   +---+
   |   |
   |   |
   |   |
   +---+



 Hi Eric,
 
 It looks like we are looking at things a little differently. I
 see a portion of the picture in your mind, but obviously not 
 entirely.
 
 Perhaps, we need to step back and iron out in specific terms what 
 the interface between the two kernels should be in the crash dump
 case, and the distribution of responsibility between kernel, user space
 and the user. 
 
 [BTW, the patch was intended as a step in development, put up for
 comment early enough to get agreement on the interface and to think
 the issues through more completely before going too far. Sorry if
 that wasn't apparent.]
 
 When you say "evil intermingling", I'm guessing you mean the
 crashbackup= boot parameter?  If so, then yes, I agree it'd
 be nice to find a way around it that doesn't push the hardcoding
 elsewhere.
 
 Let me explain the interface/approach I was looking at.
 
 1. The first kernel reserves some area of memory for the crash/capture
 kernel, as specified by the [EMAIL PROTECTED] boot-time parameter.
 
 2. The first kernel marks the top 640K of this area as the backup area
 (if the architecture needs it). This is sort of a hardcoding, and this
 space reservation could probably be managed from user space as well,
 as you mention in the mail below.
 
 3. The location of the backup region is exported through /proc/iomem,
 where a user-space utility can read it and pass it to the purgatory
 code to determine where to copy the first 640K (see the sketch after
 this list).
 
 Note that we do not make any additional reservation for the 
 backup region. We carve this out from the top of the already 
 reserved region and export it through /proc/iomem so that 
 the user space code and the capture kernel code need not 
 make any assumptions about where this region is located.
 
 4. Once the capture kernel boots, it needs to know the location of the
 backup region for two purposes:

 a. It should not overwrite the backup region.

 b. There needs to be a way for the capture tool to access the original
    contents of the backed-up region.
 
 The boot-time parameter [EMAIL PROTECTED] has been provided to pass this
 information to the capture kernel. This parameter is valid only for the
 capture kernel and takes effect only if CONFIG_CRASH_DUMP is enabled.
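
 To make the /proc/iomem export in point 3 concrete, here is a minimal
 user-space sketch of how a tool such as kexec-tools might locate the
 backup region. It is illustrative only; in particular the resource
 label "Crash kernel backup" is an assumption, not necessarily the name
 the first kernel registers:

	/* Sketch: scan /proc/iomem for a hypothetical backup-region entry. */
	#include <stdio.h>
	#include <string.h>

	int find_backup_region(unsigned long long *start, unsigned long long *end)
	{
		char line[256];
		FILE *fp = fopen("/proc/iomem", "r");

		if (!fp)
			return -1;

		while (fgets(line, sizeof(line), fp)) {
			/* lines look like "start-end : label" */
			if (strstr(line, "Crash kernel backup") &&
			    sscanf(line, "%llx-%llx", start, end) == 2) {
				fclose(fp);
				return 0;	/* region found */
			}
		}

		fclose(fp);
		return -1;		/* no backup region exported */
	}

 The same numbers could then be handed to the purgatory code (to copy
 the first 640K there) and to the capture kernel's command line.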
 
 
  What is wrong with user space doing all of the extra space
  reservation?
 
 Just for clarity, are you suggesting that kexec-tools create an additional
 segment for the backup region and pass the information to the kernel?
 
 There is no problem in doing the reservation from user space except
 one: how do the user, and in turn the capture kernel, come to know the
 location of the backup region, assuming that the user is going to
 provide the exactmap for the capture kernel to boot with?
 
 Just a thought: would it be a good idea for kexec-tools to create and
 pass the memmap parameters, making the appropriate adjustment for the
 backup region?
 
 I had another question: how is the starting location of the ELF headers
 communicated to the capture tool? Is a parameter segment a good idea,
 or some hardcoding?
 
 Another approach could be that the backup area information is encoded
 in the ELF headers and the capture kernel is booted with a modified
 memmap (the user gets the backup region information from /proc/iomem);
 the capture tool can then extract the backup area information from the
 ELF headers as stored by the first kernel.
 
 Could you please elaborate a little more on what aspect of your view
 differs from the above?
 
 Thanks
 Vivek

Thanks,
Hirokazu Takahashi.



Re: Touchpad problems with 2.6.11-rc2

2005-02-02 Thread Dmitry Torokhov
On Wednesday 02 February 2005 17:27, Peter Osterlund wrote:
 On Wed, 2 Feb 2005, Dmitry Torokhov wrote:
 
  On Wed, 02 Feb 2005 13:52:03 -0800 (PST), Peter Osterlund
  [EMAIL PROTECTED] wrote:
  
  	if (mousedev->touch) {
   +		size = dev->absmax[ABS_X] - dev->absmin[ABS_X];
   +		if (size == 0) size = xres;
 
  Sorry, missed this piece first time around. Since we don't want to
  rely on screen size anymore I think we should set size = 256 *
  FRACTION_DENOM / 2 if device limits are not set up to just report raw
  coords. What do you think?
 
 I think that this case can't happen until we add support for some other
 touchpad that doesn't set the absmin/absmax variables. Both alps and
 synaptics currently set them.
 
 However, the fallback value should definitely not depend on
 FRACTION_DENOM, since this constant doesn't affect the mouse speed at all.

Oh, yes, we divide by FRACTION_DENOM later. So having size = 256 * 2
should undo all scaling and report coordinates one for one, which I think
is a reasonable solution if the device did not set its size.
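
For reference, a simplified and purely hypothetical sketch of the
scaling being discussed; this is not the actual mousedev.c code, and
the constants are illustrative, chosen so that the 256 * 2 fallback
cancels the scaling and reports coordinates one for one:

	/* Hypothetical illustration only; real mousedev.c differs in detail. */
	#define FRACTION_DENOM	128	/* fixed-point denominator, divided out later */

	static int scale_touchpad_delta(int delta, int absmin, int absmax)
	{
		int size = absmax - absmin;

		if (size == 0)		/* device did not report its size */
			size = 256 * 2;	/* fallback: net scale becomes 1:1 */

		/* scale in fixed point, then drop FRACTION_DENOM again */
		return delta * 256 * 2 * FRACTION_DENOM / size / FRACTION_DENOM;
	}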

-- 
Dmitry


Re: [Fastboot] [PATCH] Reserving backup region for kexec based crashdumps.

2005-02-02 Thread Itsuro Oda
Hi,

On 02 Feb 2005 08:24:03 -0700
[EMAIL PROTECTED] (Eric W. Biederman) wrote:
 
 So the kernel+initrd that captures a crash dump will live and execute
 in a reserved area of memory.  It needs to know which memory regions
 are valid, and it needs to know small things like the final register
 state of each cpu. 

Exactly.

Please let me clarify what you are going to do.
1) standard kernel: reserve a small contiguous area for a dump kernel
   (this is not changed from the current code)
2) standard kernel: export the information of valid physical memory
   regions. (/proc/iomem or /proc/cpumem etc.)
3) kexec (system call?): store the information of valid physical memory
   regions as ELF program headers in the reserved area (mentioned in 1);
   see the sketch below.
4) standard kernel: when a panic occurs, append (e.g.) the register
   information as an ELF note after the memory information (if necessary),
   and jump to the new kernel.
5) dump kernel: export all valid physical memory (and the saved register
   information) to the user. (as /dev/oldmem /proc/vmcore ?)
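
As a rough illustration of step 3, each valid physical memory region
could be described by an ordinary ELF program header, one PT_LOAD entry
per region, which is what a crash-dump ELF core would carry. This is
only a sketch of the idea under discussion, not actual kexec-tools
code; the helper name and field choices here are assumptions:

	/* Sketch only: describe one physical memory region as a PT_LOAD
	 * program header, as a crash-dump ELF core might do. */
	#include <elf.h>
	#include <string.h>

	static void fill_region_phdr(Elf64_Phdr *phdr,
				     unsigned long long paddr,
				     unsigned long long len,
				     unsigned long long file_off)
	{
		memset(phdr, 0, sizeof(*phdr));
		phdr->p_type   = PT_LOAD;	/* loadable (dumpable) region */
		phdr->p_flags  = PF_R | PF_W;	/* readable/writable memory */
		phdr->p_offset = file_off;	/* where its data sits in the dump */
		phdr->p_paddr  = paddr;		/* physical start of the region */
		phdr->p_vaddr  = paddr;		/* no better mapping known here */
		phdr->p_filesz = len;
		phdr->p_memsz  = len;
		phdr->p_align  = 0;
	}

The per-cpu register state mentioned in step 4 would then go into a
PT_NOTE segment alongside these PT_LOAD entries.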

Is this correct?  One question: how does the dump kernel know the saved
area of the ELF headers?

One more question: I don't understand what the 640K backup area is.
Please let me know why it is necessary.

Thanks.
-- 
Itsuro ODA [EMAIL PROTECTED]


