Re: [BUG] 2.2.19 -> 80% Packet Loss

2001-06-15 Thread Scott Laird




On Fri, 15 Jun 2001 [EMAIL PROTECTED] wrote:
>
> > You can fix this by upping the socket buffer that ping asks for (look
> > for setsockopt( ... SO_RCVBUF ...)) and then tuning the kernel to
> > allow larger socket buffers.  The file to fiddle with is
> > /proc/sys/net/core/rmem_max.
>
> Currently it is set to 65535. I doubled it several times and each time saw
> no change when I sent it a ping flood with packet size 64590 or higher.
> What sort of magnitude were you thinking?

Did you change both /proc/sys/net/core/rmem_max *and* ping's setsockopt?
Do an strace on ping and see what's happening.


Scott

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/






Re: [BUG] 2.2.19 -> 80% Packet Loss

2001-06-14 Thread Scott Laird


Odds are it's a raw socket receive buffer issue.  Stock pings only ask for
a ~96k socket buffer, which means that they can only hold one ~64k packet
at a time.  So, if you're ever slow grabbing packets out of the buffer,
you're going to drop traffic.

You can fix this by upping the socket buffer that ping asks for (look for
setsockopt( ... SO_RCVBUF ...)) and then tuning the kernel to allow larger
socket buffers.  The file to fiddle with is /proc/sys/net/core/rmem_max.

That doesn't really answer why you'd want to fling that many 64k-ish ping
packets around, though.


Scott

On Thu, 14 Jun 2001 [EMAIL PROTECTED] wrote:

>
> 1. When pinging a machine using kernel 2.2.19 I consistently get an 80%
> packet loss when doing a ping -f with a packet size of 64590 or higher.
>
>
> 2. A "ping -f -s 64589" to a machine running kernel 2.2.19 results in 0%
> packet loss. By incrementing the packetsize by one "ping -f -s 64590"  or
> higher, I consistently see 80% packet loss. ifconfig on the receiving
> machine shows no anomalies.
>
>
> 3. 2.2.19, ping, flood, 64589, 64590, 80%, packet, loss
>
>
> 4. Linux version 2.2.19-7.0.1 ([EMAIL PROTECTED]) (gcc version
> egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)) #1 Tue Apr 10 00:55:03
> EDT 2001
>
>
> 5. There was no oops associated with this.
>
>
> 6. >>EOF
> /bin/ping -f -s 64589
> /bin/ping -f -s 64590
> EOF
>
>
> 7.1  /usr/src/linux-2.2.19/scripts/ver_linux
> Linux orchid 2.2.19-7.0.1 #1 Tue Apr 10 00:55:03 EDT 2001 i686 unknown
>
> Gnu C  2.96
> Gnu make   3.79.1
> binutils   2.10.0.18
> util-linux 2.10r
> modutils   2.3.21
> e2fsprogs  1.18
> Linux C Library> libc.2.2
> Dynamic linker (ldd)   2.2
> Procps 2.0.7
> Net-tools  1.56
> Console-tools  0.3.3
> Sh-utils   2.0
> Modules Loaded autofs nfsd lockd sunrpc 3c59x agpgart usb-uhci
> usbcore
>
>
> 7.2 cat /proc/cpuinfo
> processor : 0
> vendor_id : GenuineIntel
> cpu family: 6
> model : 8
> model name: Pentium III (Coppermine)
> stepping  : 3
> cpu MHz   : 751.725
> cache size: 256 KB
> fdiv_bug  : no
> hlt_bug   : no
> sep_bug   : no
> f00f_bug  : no
> coma_bug  : no
> fpu   : yes
> fpu_exception : yes
> cpuid level   : 2
> wp: yes
> flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov
> pat pse36 mmx fxsr xmm
> bogomips  : 1500.77
>
>
> 7.3 cat /proc/modules
> autofs  9424   3 (autoclean)
> nfsd  182752   8 (autoclean)
> lockd  45264   1 (autoclean) [nfsd]
> sunrpc 61808   1 (autoclean) [nfsd lockd]
> 3c59x  21584   1 (autoclean)
> agpgart18960   0 (unused)
> usb-uhci   18736   0 (unused)
> usbcore43120   1 [usb-uhci]
>
>
> 7.4 No SCSI devices
>
>
> 7.5 /sbin/ifconfig (after sending it several pings with 80% packet loss)
>
> eth0  Link encap:Ethernet  HWaddr 00:60:97:D7:60:E4
>   inet addr:10.0.0.102  Bcast:10.0.0.255  Mask:255.255.255.0
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:3972983 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:5466442 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:100
>   Interrupt:12 Base address:0xa800
>
> loLink encap:Local Loopback
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   UP LOOPBACK RUNNING  MTU:3924  Metric:1
>   RX packets:210 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:210 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>
>
> X. This has only been tested/observed on a 10Mb as well as a newer 100Mb
> LAN. This has also been observed on machines running kernel 2.2.17. Non
> Linux machines (NT and Win98) were tested in the same manner and do not
> show the same symptoms. I reviewed the LKML archives, grep'd in the
> sources and did a string on the kernel binary and was not able to find any
> useful reference to 64590, 64589 or ping losses due to packet size.
>
> --
> Chuck Wolber
> System Administrator
> AltaServ Corporation
> (425)576-1202
> ten.vresatla@wkcuhc
>
> Quidquid latine dictum sit, altum viditur.
>







Re: New gigabit cards

2001-03-28 Thread Scott Laird



On Wed, 28 Mar 2001, Gregory Maxwell wrote:
>
> Asante:
> FriendlyNet GigaNIX 1000TPC (Cu)  $149.99
>

Interesting -- this seems to be the only card of the set that actually has
drivers available for download, although the D-Link card has drivers for
an older GigE card listed.

According to the drivers, the 1000TPC uses the NS DP83820.  According to
the DP83820's datasheet, it has an 8k Tx buffer and a 32k Rx buffer.
That's a bit shy of the 512k-1M that older cards use :-(.  At wire speed,
that means that you'll have to service the NIC's interrupt within ~60 us
on transmit and ~250 us on receive.  That seems rather optimistic.


Scott







Re: Another rsync over ssh hang (repeatable, with 2.4.1 on both ends)

2001-03-02 Thread Scott Laird



On Fri, 2 Mar 2001 [EMAIL PROTECTED] wrote:
> > together to put 2.2.18 on this machine.  I can't guarantee when I'll
> > be able to do this though.
>
> You planned to make more accurate strace on Monday, if I remember correctly.
> Now it is not necessary, Scott's one is enough to understand that
> some problem exists and cannot be explained by buggy 2.2.15.

One data point on my hang -- I increased
/proc/sys/net/core/wmem_{max,default} from 64k to 256k, and then increased
/proc/sys/net/ipv4/tcp_wmem from "4096 16384 131072" to "16384 65536
262144", and the hangs seem to have either stopped or (more likely)
drastically reduced in frequency.  I was able to rsync a couple GB without
stalling.

I can perform more tests, if anyone has anything in particular that they'd
like to see.


Scott







Another rsync over ssh hang (repeatable, with 2.4.1 on both ends)

2001-03-01 Thread Scott Laird


I have a fairly repeatable rsync over ssh stall that I'm seeing between
two Linux boxes, both running identical 2.4.1 kernels.  The stall is
fairly easy to repeat in our environment -- it can happen up to several
times per minute, and usually happens at least once per minute.  It
doesn't really seem to be data-sensitive.  The stall will last until the
session times out *unless* I take one of two steps to "unstall" it.  The
easiest way to do this is to run 'strace -p $PID' against the sending ssh
process.  As soon as the strace is started, rsync starts working again,
but will stall again (even with strace still running) after a short period
of time.

We've seen this bug (or a *very* similar one) with 2.2.16 and 2.4.[01].  I
haven't tried a newer 2.2.x or 2.4.2 or -acX.


One system is a P2/400, the other is a P3/800.  The two boxes are
communicating over a mostly idle Ethernet, through 3 switches.  One end is
an EEPro 100, the other end is an Acenic, although that shouldn't matter.

During a stall, the sending end shows a lot of data stuck in the Recv-Q:

Proto Recv-Q Send-Q Local Address   Foreign Address State
tcp72848  0 ref.lab.ocp.interna:840 ref-0.sys.pnap.net:ssh  ESTABLISHED

The receiving end shows a similar problem, but on the sending queue:

Proto Recv-Q Send-Q Local Address   Foreign Address State
tcp0  28960 ref-0.sys.pnap.net:ssh  ref.lab.ocp.interna:840 ESTABLISHED

Like I said, I don't believe that this is a network issue, because I can
un-stall the rsync by either stracing the *sending* ssh process, or by
putting the sending rsync into the background with ^Z and then popping it
back into the foreground.  I have tcpdumps that I can send, but they look
pretty straightforward to me -- the window fills, so data stops flowing.

Strace doesn't seem to be particularly informative:

<blocked, strace starts>
select(4, [0], [1], NULL, NULL) = 1 (out [1])
write(1, "x"..., 66156) = 66156
...
select(4, [0], [1 3], NULL, NULL)   = 2 (out [1 3])
write(1, "\0\0\0\0\274\2\0\0\0\0\0\0\271\30\0\0\0\0\0\0\274\2\0\0"..., 69526
<blocked again>

Strace on the receiving end shows the obvious -- it's sitting in select
waiting for data to arrive.

According to 'ps l', the ssh process is waiting in 'sock_wait_for_wmem'.

We've tried changing versions of rsync and ssh without any success.  FWIW,
this kernel was compiled with GCC 2.95.2, from Debian potato.


Scott







Re: LILO and serial speeds over 9600

2001-02-12 Thread Scott Laird



On 12 Feb 2001, H. Peter Anvin wrote:
>
> Just checked my own code, and SYSLINUX does indeed support 115200 (I
> changed this to be a 32-bit register ages ago, apparently.)  Still
> doesn't answer the question "why"... all I think you do is increase
> the risk for FIFO overrun and lost characters (flow control on a boot
> loader console is vestigial at the best.)

It's simple -- we want the kernel to have its serial console running at
115200, and we don't want to have to change speeds to talk to the
bootloader.  Some boot processes, particularly fsck, can be *REALLY*
verbose on screwed up systems.  I've seen systems take hours to run fsck,
even on small filesystems, simply because they were blocking on a 9600 bps
console.


Scott




Re: Request: increase in PCI bus limit

2001-01-31 Thread Scott Laird



On Wed, 31 Jan 2001, George wrote:
>
> If someone says 1 bus, give them one bus.
>
> Just make the description say:
>   Add 1 for every PCI
>   Add 1 for every AGP
>   Add 1 for every CardBus
>   Also account for anything else funny in the system.
>
> Then panic on boot if they're wrong (sort of like processor type).

Where do cards with PCI-PCI bridges, like multiport PCI ethernet cards,
fit into this?  I can easily add 3 or 4 extra busses into a box just by
grabbing a couple extra Intel dual-port Ethernet cards.


Scott







Re: Delay in authentication.

2001-01-08 Thread Scott Laird


Is syslog running correctly?  When syslog screws up, it very frequently
results in this sort of problem.


Scott

On Mon, 8 Jan 2001, Chris Meadors wrote:

> On Mon, 8 Jan 2001, Igmar Palsenberg wrote:
> 
> > check /etc/pam.d/login
> 
> No pam.
> 
> > Could be kerberos that is biting you, althrough that doesn't explain the
> > portmap story.
> 
> So no kerberos.
> 
> I just rebuilt the shadow suite (where my login comes from) to be on the
> safe side.  But the problem is still there.
> 
> ldd login shows:
> libshadow.so.0 => /lib/libshadow.so.0 (0x4001a000)
> libcrypt.so.1 => /lib/libcrypt.so.1 (0x40033000)
> libc.so.6 => /lib/libc.so.6 (0x4006)
> /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x4000)
> 
> I'm running glibc-2.2, but this problem also existed in 2.1.x (which I had
> installed when I went to the 2.3 kernels that exposed this problem).
> 
> -Chris
> -- 
> Two penguins were walking on an iceberg.  The first penguin said to the
> second, "you look like you are wearing a tuxedo."  The second penguin
> said, "I might be..." --David Lynch, Twin Peaks
> 







Re: PROBLEM with raid5 and 2.4.0 kernel

2001-01-07 Thread Scott Laird


It works if you compile the kernel with the processor type set to Pentium
II or higher, or disable RAID5.  I've been meaning to report this one, but
2.4.0 was released before I had time to test the last prerelease, and I
haven't had time to test the final release yet.


Scott

On Sun, 7 Jan 2001, Jeff Forbes wrote:

> I have been trying out the new 2.4.0 kernel and am unable to get
> raid5 to work. When I install the raid5 module with
> modprobe raid5
> I get a segmentation fault and the following error appears in the dmesg output:
> 
> raid5: measuring checksumming speed
> 8regs :   806.577 MB/sec
> 32regs:   548.259 MB/sec
> invalid operand: 
> CPU:0
> EIP:0010:[d4862b09]
> EFLAGS: 00010206
> eax: d240fe88   ebx: d23f2f40   ecx: 000f   edx: 8005003b
> esi: d23f   edi: 0001720a   ebp:    esp: d240fe80
> ds: 0018   es: 0018   ss: 0018
> Process modprobe (pid: 748, stackpage=d240f000)
> Stack:   0020 c0113cba 059f 10dd 00017209 0005
> c0257424 0286 0001 c0257423 c0257403 0020 d4864090 
> d4864434
> d486442d 0224 d4864004 0f40 d23f d23f2f40 d23f 
> d23f2f40
> Call Trace: [c0113cba] [d4864090] [d4864434] [d486442d]
> [d4864004] [d4862aec] [d48640fc]
> [d4864674] [d4864117] [d486463c] [d4862000] [d486409c]
> [c012a0ca] [d4862000] [c01149a8]
> [d484f000] [d4862060] [c0108d7f]
> 
> Code: 0f 11 00 0f 11 48 10 0f 11 50 20 0f 11 58 30 0f 18 4e 00 0f
> 
> when I modprobe raid5 again no errors are reported and dmesg has the following
> 
> md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
> md.c: sizeof(mdp_super_t) = 4096
> raid5 personality registered
> 
> Of course raid5 cannot be compiled into the kernel since it also gives the 
> same error as above and dies.
> 
> Any ideas?
> 
> 
> Jeffrey Forbes, Ph.D.
> http://www.stellarhost.com
> 







Re: You've Been Slashdotted

2001-01-04 Thread Scott Laird


On Fri, 5 Jan 2001, Michael D. Crawford wrote:
> 
> You're probably not going to have much luck getting any source off any servers
> tonight.  Might I suggest you pop over to Slashdot and give the clueless some
> clues on getting their new kernels working?  They need help.

Dunno -- my mirror (ftp-mirror.internap.com, or ftp15.us.kernel.org) is
only seeing 1-2 Mbps worth of traffic, out of 10 Mbps available to it.  I
suspect that a lot of mirrors are similar.


Scott



