Re: MII problem, need more eyes

2002-10-23 Thread Mike Silbersack

On Tue, 22 Oct 2002, Bruce Evans wrote:

> This return of 0 is apparently intentional.  The comment before this
> says:
>
> Here the "However" clause is not in RELENG_4.  Returning 0 gets the
> status updated.  I think this is just too expensive to do every second.
> Autonegotiation is only retried every 17 seconds (every 5 seconds in
> RELENG_4).

You're right, it looks like this is new functionality, not just code
refactoring.  I was hoping for a quick fix, and overlooked this.

There are two things I think I can do to reduce the amount of time
taken up...

#1, Don't do the status update every second, only have it run every 10
seconds or so.

#2, Reduce the number of PHY operations.  mii_phy_tick reads the status
register, then nsphy_status rereads it basically immediately.  I'll have
to examine how the other PHY drivers operate in this respect.

Mike "Silby" Silbersack


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-net" in the body of the message



[PATCH] switching to if_xname

2002-10-23 Thread Brooks Davis
As previously discussed, I've created a patch to replace if_unit and
if_name with if_xname in struct ifnet following the example of NetBSD
and OpenBSD.  I've tried to place the more interesting parts of the diff
near the top.

This patch will break ports which use libkvm to read interface
statistics.  This affects at least wmnet and wmnet2 (though both
segfault on current without this patch).  The patches to fix this should
be trivial, since most of those applications probably already support NetBSD
or OpenBSD, and I plan to bump __FreeBSD_version.  Other than those issues
and a generalization of interface globbing in IPFW, this patch should be
a giant no-op from the user perspective.  Real features will come later,
but the API/ABI change needs to happen soon or it's going to be a 6.0
feature.

I'm running this patch on my laptop with a GENERIC kernel without any
problems so far.  For all its size, most of the changes are log() or
printf() calls plus a fairly consistent change to each network driver's
attach function, so this should be generally low impact.

The patch is available at the URL below.  I tried to send a copy to the
list, but it looks like it got silently tossed as too large.  If you
want a copy e-mailed to you, feel free to ask me for one.

http://people.freebsd.org/~brooks/patches/if_xname.diff

Please review.

Thanks,
Brooks

-- 
Any statement of the form "X is the one, true Y" is FALSE.
PGP fingerprint 655D 519C 26A7 82E7 2529  9BF0 5D8E 8BE9 F238 1AD4





Re: Machine becomes non-responsive, only ^T shows it as alive under l oad: IPFW, TCP proxying

2002-10-23 Thread Kevin Stevens
On Wednesday, Oct 23, 2002, at 19:41 US/Pacific, Don Bowman wrote:


I have an application listening on an ipfw 'fwd' rule.
I'm sending ~3K new sessions per second to it. It
has to turn around and issue some of these out as
a proxy, in response to which some of them the destination
host won't exist.

I have RST limiting on. I'm seeing messages like:
Limiting open port RST response from 1312 to 200 packets per second

come out sometimes.

After a while of such operation (~1/2 hour), the machine
becomes unresponsive: the network interfaces no longer respond,
the serial console responds to ^T yielding a status line,
but ^C etc do nothing, and the bash which was there won't
give me a prompt.

^T indicates my bash is running, 0% of CPU in use, etc.

I have no choice but to power-cycle it.

Any suggestions for how one would start debugging this to
find out where it's stuck, and how?


At a guess, you need to tune the state-table retention time down.

KeS





Machine becomes non-responsive, only ^T shows it as alive under load: IPFW, TCP proxying

2002-10-23 Thread Don Bowman

I have an application listening on an ipfw 'fwd' rule.
I'm sending ~3K new sessions per second to it. It
has to turn around and issue some of these out as
a proxy, in response to which some of them the destination
host won't exist.

I have RST limiting on. I'm seeing messages like:
Limiting open port RST response from 1312 to 200 packets per second

come out sometimes.

After a while of such operation (~1/2 hour), the machine
becomes unresponsive: the network interfaces no longer respond,
the serial console responds to ^T yielding a status line,
but ^C etc do nothing, and the bash which was there won't
give me a prompt.

^T indicates my bash is running, 0% of CPU in use, etc.

I have no choice but to power-cycle it.

Any suggestions for how one would start debugging this to
find out where it's stuck, and how?

This is running 4.7 STABLE on a single XEON 2.0 GHz, 1GB
of memory. The bandwidth wasn't that high, varying between
3 and 30Mbps.

Perhaps related, sometimes I get: bge0: watchdog timeout -- resetting

The only NIC which is active is bge0. I have an 'em0' which
is idle (no IP), and an fxp0 (which has an IP but is idle).

--don ([EMAIL PROTECTED] www.sandvine.com)




tcp_input's header prediction and a collapsing send window

2002-10-23 Thread Bill Baumann

I'm experiencing a bug where snd_wnd collapses.  I see snd_wnd approach
zero even though data is sent/received and ack'ed successfully.

After taking a close look at tcp_input, I think I see a scenario where this
could happen.  Say header prediction handles ~2 GB of data without
problems, then a retransmission happens.  snd_wnd starts collapsing as it
should.  The header prediction code is correctly skipped as snd_wnd no
longer matches the advertised window.  We recover from the retransmission,
*BUT* the code that reopens the window is skipped because of rolled-over
sequence numbers.

In the ack processing code (step 6), the variable snd_wl1 tracks the
newest sequence number that we've seen.  It helps prevent snd_wnd from
being reopened on retransmitted data.  If snd_wl1 is greater than the
received sequence number, we skip the update.  This is fine unless we're
2^31 bytes ahead and SEQ_LT says we're behind.

Since snd_wl1 is only updated if the condition is true, we're stuck:
snd_wl1 is only updated within the SYN/FIN processing code and in step 6.

So if we process 2 GB in the header prediction code -- where step 6
never executes -- and then somehow reach step 6, the window update is
skipped: snd_wnd collapses and tcp_output stops sending.


I have a trace mechanism that dumps various tcp_input variables, and it
corroborates this theory.  I have lined it up with tcpdump.  The trace
shows snd_wnd collapsing and snd_wl1 > th_seq even as healthy traffic is
transmitted and received.  The outcome is a halted transmitter.


Possible remedy: update snd_wl1 in the header prediction code.

What do you all think?  Is this real?  Or am I missing something?

Regards,
Bill Baumann





Re: RFC: BSD network stack virtualization

2002-10-23 Thread Julian Elischer


On Thu, 24 Oct 2002, Marko Zec wrote:

> Julian Elischer wrote:
> 
> 
> > 11/ why was ng_bridge unsuitable for your use?
> 
> Both the native and netgraph bridging code, I believe, were designed with
> the presumption that only one "upper" hook is really needed to establish the
> communication to kernel's single network stack. However, this concept
> doesn't hold on a kernel with multiple network stacks. I just picked the
> native bridging code first and extended it to support hooking to multiple
> "upper" hooks. The similar extensions have yet to be applied to ng_bridge, I
> just didn't have time to implement the same functionality in two different
> frameworks.

ng_bridge doesn't really distinguish between hooks to upper and lower
destinations.  It only knows what MAC addresses are seen to come from each
hook, and ensures that packets destined to those addresses are sent to
those hooks.  You can have as many
'upper' hooks as you wish (and there are some uses for that).


> 
> > 12/ can you elaborate on the following:
> >   # fix netgraph interface node naming
> >   # fix the bugs in base networking code (statistics in
> > "native" bridging, additional logic for ng_bridge...)
> 
> When the interface is moved to a different virtual image, its unit number
> gets reassigned, so the interface that was named, say, "vlan3" in the master
> virtual image will become "vlan0" when assigned to the child.  The same
> thing happens when the child virtual image returns the interface to its
> parent.  The naming of netgraph nodes associated with interfaces (ng_iface,
> ng_ether) should be updated accordingly, which is currently not done.
> I also considered virtualizing the netgraph stack; it would also be very
> cool if each virtual image could manage its own netgraph tree.  However, when
> weighing implementation priorities, I concluded that this was something
> that could wait until other more basic things are reworked properly first.
> Therefore in the current implementation it is possible to manage the
> netgraph subsystem only from the "master" virtual image.
> 
> Hope this was enough elaboration for actually testing the code :)


It's difficult because my test machines run -current (my 4.x machines
are dedicated to production purposes), though I may be able to
try with one.


> 
> Have fun,

Thank you..

p.s.
I should drop down to Croatia next time I'm in Budapest.
I'm told it's beautiful.

p.p.s
cute bear cubs on your site!

> 
> Marko
> 
> 





Re: which resources ends with ste interface?

2002-10-23 Thread .
> We have a nightmare situation with DFE-580TX 4-port cards that use the ste
> driver.  The driver seems to just choke.  I'm not sure if it's an issue with
> PCI interrupts or what.  It throttles back the time-outs, but even then,
> after it's been up for days, one of the interfaces will start acting up and
> our LAN seems more like an ISDN line to the file server.

I have a router with

1sw~(2)#ifconfig -l
dc0 dc1 dc2 dc3 xl0 xl1 xl2 ste0 ste1 ste2 ste3 lp0 ppp0 stf0 faith0 vlan0 vlan1 vlan2 
vlan3 vlan4 vlan5 vlan6 vlan7 lo0 lo1 lo2 lo3
where dc0..3 and ste0..3 are the 4-port cards;
total traffic is about 150 GByte/day.
It works.
Thank you for the alert about DFE-580TX.
Hardware selection is like a minefield - you never know where it will explode :-((

dc0:  port 0x7c00-0x7c7f mem 0xdfdffc00-0xdfdf irq 12 at 
device 4.0 on pci2
dc1:  port 0x7800-0x787f mem 0xdfdff800-0xdfdffbff irq 10 at 
device 5.0 on pci2
dc2:  port 0x7400-0x747f mem 0xdfdff400-0xdfdff7ff irq 5 at 
device 6.0 on pci2
dc3:  port 0x7000-0x707f mem 0xdfdff000-0xdfdff3ff irq 11 at 
device 7.0 on pci2
xl0: <3Com 3c905C-TX Fast Etherlink XL> port 0xdc00-0xdc7f mem 0xdf80-0xdff f 
irq 10 at device 10.0 on pci0
using shared irq10.
xl1: <3Com 3c905-TX Fast Etherlink XL> port 0xd800-0xd83f irq 11 at device 12.0 on pci0
using shared irq11.
xl2: <3Com 3c905-TX Fast Etherlink XL> port 0xd400-0xd43f irq 12 at device 13.0 on pci0
using shared irq12.
ste0:  port 0x9c00-0x9c7f irq 10 at device 4.0 on pci3
ste1:  port 0x9800-0x987f irq 5 at device 5.0 on pci3
ste2:  port 0x9400-0x947f irq 11 at device 6.0 on pci3
ste3:  port 0x9000-0x907f irq 12 at device 7.0 on pci3

> Michael F. DeMan
> Director of Technology
> OpenAccess Internet Services
> 1305 11th St., 3rd Floor
> Bellingham, WA 98225
> Tel 360-647-0785 x204
> Fax 360-738-9785
> [EMAIL PROTECTED]

-- 
@BABOLO  http://links.ru/




Re: RFC: BSD network stack virtualization

2002-10-23 Thread Marko Zec
Julian Elischer wrote:

> I'm very impressed. I do however have some questions.
> (I have not read the code yet, just the writeup)
>
> 1/ How do you cope with each machine expecting to have its own loopback
> interface?  Is it sufficient to make lo1 lo2 lo3 etc. and attach them
> to the appropriate VMs?

The creation of the "lo0" interface is done automatically at the time of
creation of each new virtual image instance.  The BSD networking code
generally assumes the "lo0" interface exists at all times, so it just
wouldn't be possible to create a network stack instance without a unique
"lo" ifc.

> 2/ How much would be gained (i.e. is it worth it) to combine this with
> jail?  Can you combine them? (does it work?) Does it make sense?

Userland separation (hiding) between processes running in different virtual
images is actually accomplished by reusing the jail framework.  The
networking work is completely free of jail legacy.  Although I didn't test
it, I'm 100% sure it should be possible to run multiple jails inside each of
the virtual images.  For me it doesn't make much sense though, which is why I
didn't bother with testing...

> 3/ You implemented this in 4.x, which means that we need to reimplement
> it in -current before it has any chance of being 'included'.  Do you
> think that would be a big problem?

I must admit that I do not follow development in -current, so it's hard
to tell how much the network stacks have diverged in the areas affected by
the virtualization work.  My initial intention was to polish the virtualization
code on the 4.x platform - there are still some major chunks of coding yet
to be done, such as removal of virtual images, and patching of the IPv6 and
IPSEC code.  Hopefully this will be in sync with the release of 5.0, so that
I can spend some time porting it to -current.  However, if reasonable demand
is created, I'm prepared to revise that approach...

> 5/ Does inclusion of the virtualisation have any measurable effect on
> throughput for systems that are NOT using virtualisation?  In other
> words, does the non-virtualised code path get much extra work? (do you
> have numbers?) (i.e. does it cost much for the OTHER users if we
> incorporated this into FreeBSD?)

The first thing in my pipeline is doing decent performance/throughput
measurements, but these days I just cannot find enough spare time to do
that properly (I still have a daytime job...).  Preliminary netperf tests
show around a 1-2% drop in maximum TCP throughput on loopback with a
1500-byte MTU, so in my opinion this is a negligible penalty.  Of course,
applications limited by media speed won't experience any throughput
degradation, except perhaps a barely measurable increase in CPU time spent
in interrupt context.

> 6/ I think that your ng_dummy node is cute..
> can I commit it separately? (after porting it to -current..)

Actually, this code is ugly, as I was stupid enough to invent my own queue
management methods, instead of using the existing ones. However, from the
user perspective the code seems to work without major problems, so if you
want to commit it I would be glad...

> 7/ the vmware image is a great idea.
>
> 8/ can you elaborate on the following:
>   * hiding of "foreign" filesystem mounts within chrooted virtual images

Here is a self-explanatory example of hiding "foreign" filesystem mounts:

tpx30# vimage -c test1 chroot /opt/chrooted_vimage
tpx30# mount
/dev/ad0s1a on / (ufs, local, noatime)
/dev/ad0s1g on /opt (ufs, local, noatime, soft-updates)
/dev/ad0s1f on /usr (ufs, local, noatime, soft-updates)
/dev/ad0s1e on /var (ufs, local, noatime, soft-updates)
mfs:22 on /tmp (mfs, asynchronous, local, noatime)
/dev/ad0s2 on /dos/c (msdos, local)
/dev/ad0s5 on /dos/d (msdos, local)
/dev/ad0s6 on /dos/e (msdos, local)
procfs on /proc (procfs, local)
procfs on /opt/chrooted_vimage/proc (procfs, local)
/usr on /opt/chrooted_vimage/usr (null, local, read-only)
tpx30# vimage test1
Switched to vimage test1
%mount
procfs on /proc (procfs, local)
/usr on /usr (null, local, read-only)
%

> 9/ how does VIPA differ from the JAIL address binding?

Actually, the VIPA feature should be considered completely independent of the
network stack virtualization work.  A jail address is usually bound to an
alias address configured on a physical interface.  When this interface goes
down, all the connections using this address drop dead instantly.  VIPA is a
loopback-type internal interface that always remains up regardless of
physical network topology changes.  If the system has multiple physical
interfaces, and an alternative path can be established after a NIC or
network route outage, the connections bound to VIPA will survive.  Anyhow,
the idea is borrowed from IBM's OS/390 TCP/IP implementation, so you can
find more on this concept on
http://www-1.ibm.com/servers/eserver/zseries/networking/vipa.html

> 10/ could you use ng_eiface instead of if_ve?

Most probably yes, but my system crashed each time when trying to 

Re: which resources ends with ste interface?

2002-10-23 Thread Michael DeMan
We have a nightmare situation with DFE-580TX 4-port cards that use the ste
driver.  The driver seems to just choke.  I'm not sure if it's an issue with
PCI interrupts or what.  It throttles back the time-outs, but even then,
after it's been up for days, one of the interfaces will start acting up and
our LAN seems more like an ISDN line to the file server.

On 10/23/02 5:13 PM, ""."@babolo.ru" <"."@babolo.ru> wrote:

> 
> I have a router with 5 ste FX NICs
> and 1 xl TP NIC.
> 
> ste0 is upstream, one ste is not used, and
> all the others are for the home users' net.
> 
> After a down/up, ste0 works well
> for about 1 day or about 5..10 GByte,
> then some received packets are delayed
> for 1..2 seconds, then most received
> packets are delayed for 2..5 seconds,
> and received packets start to drop.
> Transmit is good.
> Measured by ping between directly
> connected hosts and tcpdump on both,
> where the instrument host has about zero traffic
> and no problems (it is a console server).
> 
> The users' ste interfaces work well after an
> ifconfig down/up, and the delays appear
> after massive arp scans; usually
> such a scan stops the interface,
> though its state remains UP.
> 
> I have no instrumented host in that
> network, so I can't say whether tx is
> functioning in that state.
> 
> A good way to see the breakage is
> tcpdump -npi ste0 ether broadcast and not ip >& /some/file &
> (tcsh) and look in the file after the interface stops.
> It ends up with a huge number of arp requests for
> nonexistent hosts.
> I reproduced this breakage:
> 2 sec of intensive arp scanning changes
> the ping time to one of the users
> from the usual 0..10 msec to 2..3 sec for at least
> 10 min after the scan ends.
> 
> I can't reproduce this reaction with an arp scan on ste0;
> the mean ping time does not change during or after the scan.
> But maybe such a scan reduces how long
> ste0 works well.  I can try increasing the arp
> scan time to test, if needed.
> 
> So my question is: how can I find the cause?
> 0tw~(12)#netstat -m
> 391/1056/65536 mbufs in use (current/peak/max):
>   391 mbufs allocated to data
> 390/814/16384 mbuf clusters in use (current/peak/max)
> 1892 Kbytes allocated to network (3% of mb_map in use)
> 0 requests for memory denied
> 0 requests for memory delayed
> 0 calls to protocol drain routines
> 
> I saw 3 times bigger peak values, but
> never saw values near the max.
> 
> 0tw~(13)#uname -a
> FreeBSD tw 4.7-STABLE FreeBSD 4.7-STABLE #2: Wed Oct 16 05:37:50 MSD 2002
> [EMAIL PROTECTED]:/tmp/babolo/usr/src/sys/gw  i386
> 
> dmesg exhausted by multiple
> arp: X attempts to modify permanent entry for Y on ste4
> and
> ipfw: X Deny ...
> strings, so part of /var/log/all instead:
> 
> Oct 23 16:18:39 tw /kernel: Copyright (c) 1992-2002 The FreeBSD Project.
> Oct 23 16:18:39 tw /kernel: Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989,
> 1991, 1992, 1993, 1994
> Oct 23 16:18:39 tw /kernel: The Regents of the University of California. All
> rights reserved.
> Oct 23 16:18:39 tw /kernel: FreeBSD 4.7-STABLE #2: Wed Oct 16 05:37:50 MSD
> 2002
> Oct 23 16:18:39 tw /kernel: [EMAIL PROTECTED]:/tmp/babolo/usr/src/sys/gw
> Oct 23 16:18:39 tw /kernel: Calibrating clock(s) ... TSC clock: 865576478 Hz,
> i8254 clock: 1193259 Hz
> Oct 23 16:18:39 tw /kernel: CLK_USE_I8254_CALIBRATION not specified - using
> default frequency
> Oct 23 16:18:39 tw /kernel: Timecounter "i8254"  frequency 1193182 Hz
> Oct 23 16:18:39 tw /kernel: CLK_USE_TSC_CALIBRATION not specified - using old
> calibration method
> Oct 23 16:18:39 tw /kernel: CPU: VIA C3 Samuel 2 (865.52-MHz 686-class CPU)
> Oct 23 16:18:39 tw /kernel: Origin = "CentaurHauls"  Id = 0x678  Stepping = 8
> Oct 23 16:18:39 tw /kernel: Features=0x803035
> Oct 23 16:18:39 tw /kernel: real memory  = 134152192 (131008K bytes)
> Oct 23 16:18:39 tw /kernel: Physical memory chunk(s):
> Oct 23 16:18:39 tw /kernel: 0x1000 - 0x0009efff, 647168 bytes (158 pages)
> Oct 23 16:18:39 tw /kernel: 0x004d - 0x07fa, 128843776 bytes (31456
> pages)
> Oct 23 16:18:39 tw /kernel: config> di adv0
> Oct 23 16:18:39 tw /kernel: config> di aha0
> Oct 23 16:18:39 tw /kernel: config> di aic0
> Oct 23 16:18:39 tw /kernel: config> di bt0
> Oct 23 16:18:39 tw /kernel: config> di cs0
> Oct 23 16:18:39 tw /kernel: config> di ed0
> Oct 23 16:18:39 tw /kernel: config> di fe0
> Oct 23 16:18:39 tw /kernel: config> di fdc0
> Oct 23 16:18:39 tw /kernel: config> di lnc0
> Oct 23 16:18:39 tw /kernel: config> di sn0
> Oct 23 16:18:39 tw /kernel: config> di sio2
> Oct 23 16:18:39 tw /kernel: config> q
> Oct 23 16:18:39 tw /kernel: avail memory = 125022208 (122092K bytes)
> Oct 23 16:18:39 tw /kernel: bios32: Found BIOS32 Service Directory header at
> 0xc00fdb20
> Oct 23 16:18:39 tw /kernel: bios32: Entry = 0xfdb30 (c00fdb30)  Rev = 0  Len =
> 1
> Oct 23 16:18:39 tw /kernel: pcibios: PCI BIOS entry at 0xdb51
> Oct 23 16:18:39 tw /kernel: pnpbios: Found PnP BIOS data at 0xc00f6f70
> Oct 23 16:18:39 tw /kernel: pnpbios: Entry = f:5fb4  Rev = 1.0
> Oct 23 16:18:39 tw /kernel: Other BIOS signatures found:
> Oct 23

which resources ends with ste interface?

2002-10-23 Thread .

I have a router with 5 ste FX NICs
and 1 xl TP NIC.

ste0 is upstream, one ste is not used, and
all the others are for the home users' net.

After a down/up, ste0 works well
for about 1 day or about 5..10 GByte,
then some received packets are delayed
for 1..2 seconds, then most received
packets are delayed for 2..5 seconds,
and received packets start to drop.
Transmit is good.
Measured by ping between directly
connected hosts and tcpdump on both,
where the instrument host has about zero traffic
and no problems (it is a console server).

The users' ste interfaces work well after an
ifconfig down/up, and the delays appear
after massive arp scans; usually
such a scan stops the interface,
though its state remains UP.

I have no instrumented host in that
network, so I can't say whether tx is
functioning in that state.

A good way to see the breakage is
tcpdump -npi ste0 ether broadcast and not ip >& /some/file &
(tcsh) and look in the file after the interface stops.
It ends up with a huge number of arp requests for
nonexistent hosts.
I reproduced this breakage:
2 sec of intensive arp scanning changes
the ping time to one of the users
from the usual 0..10 msec to 2..3 sec for at least
10 min after the scan ends.

I can't reproduce this reaction with an arp scan on ste0;
the mean ping time does not change during or after the scan.
But maybe such a scan reduces how long
ste0 works well.  I can try increasing the arp
scan time to test, if needed.

So my question is: how can I find the cause?
0tw~(12)#netstat -m
391/1056/65536 mbufs in use (current/peak/max):
391 mbufs allocated to data
390/814/16384 mbuf clusters in use (current/peak/max)
1892 Kbytes allocated to network (3% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

I saw 3 times bigger peak values, but
never saw values near the max.

0tw~(13)#uname -a
FreeBSD tw 4.7-STABLE FreeBSD 4.7-STABLE #2: Wed Oct 16 05:37:50 MSD 2002 
[EMAIL PROTECTED]:/tmp/babolo/usr/src/sys/gw  i386

dmesg exhausted by multiple
arp: X attempts to modify permanent entry for Y on ste4
and
ipfw: X Deny ...
strings, so part of /var/log/all instead:

Oct 23 16:18:39 tw /kernel: Copyright (c) 1992-2002 The FreeBSD Project.
Oct 23 16:18:39 tw /kernel: Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 
1992, 1993, 1994
Oct 23 16:18:39 tw /kernel: The Regents of the University of California. All rights 
reserved.
Oct 23 16:18:39 tw /kernel: FreeBSD 4.7-STABLE #2: Wed Oct 16 05:37:50 MSD 2002
Oct 23 16:18:39 tw /kernel: [EMAIL PROTECTED]:/tmp/babolo/usr/src/sys/gw
Oct 23 16:18:39 tw /kernel: Calibrating clock(s) ... TSC clock: 865576478 Hz, i8254 
clock: 1193259 Hz
Oct 23 16:18:39 tw /kernel: CLK_USE_I8254_CALIBRATION not specified - using default 
frequency
Oct 23 16:18:39 tw /kernel: Timecounter "i8254"  frequency 1193182 Hz
Oct 23 16:18:39 tw /kernel: CLK_USE_TSC_CALIBRATION not specified - using old 
calibration method
Oct 23 16:18:39 tw /kernel: CPU: VIA C3 Samuel 2 (865.52-MHz 686-class CPU)
Oct 23 16:18:39 tw /kernel: Origin = "CentaurHauls"  Id = 0x678  Stepping = 8
Oct 23 16:18:39 tw /kernel: Features=0x803035
Oct 23 16:18:39 tw /kernel: real memory  = 134152192 (131008K bytes)
Oct 23 16:18:39 tw /kernel: Physical memory chunk(s):
Oct 23 16:18:39 tw /kernel: 0x1000 - 0x0009efff, 647168 bytes (158 pages)
Oct 23 16:18:39 tw /kernel: 0x004d - 0x07fa, 128843776 bytes (31456 pages)
Oct 23 16:18:39 tw /kernel: config> di adv0
Oct 23 16:18:39 tw /kernel: config> di aha0
Oct 23 16:18:39 tw /kernel: config> di aic0
Oct 23 16:18:39 tw /kernel: config> di bt0
Oct 23 16:18:39 tw /kernel: config> di cs0
Oct 23 16:18:39 tw /kernel: config> di ed0
Oct 23 16:18:39 tw /kernel: config> di fe0
Oct 23 16:18:39 tw /kernel: config> di fdc0
Oct 23 16:18:39 tw /kernel: config> di lnc0
Oct 23 16:18:39 tw /kernel: config> di sn0
Oct 23 16:18:39 tw /kernel: config> di sio2
Oct 23 16:18:39 tw /kernel: config> q
Oct 23 16:18:39 tw /kernel: avail memory = 125022208 (122092K bytes)
Oct 23 16:18:39 tw /kernel: bios32: Found BIOS32 Service Directory header at 0xc00fdb20
Oct 23 16:18:39 tw /kernel: bios32: Entry = 0xfdb30 (c00fdb30)  Rev = 0  Len = 1
Oct 23 16:18:39 tw /kernel: pcibios: PCI BIOS entry at 0xdb51
Oct 23 16:18:39 tw /kernel: pnpbios: Found PnP BIOS data at 0xc00f6f70
Oct 23 16:18:39 tw /kernel: pnpbios: Entry = f:5fb4  Rev = 1.0
Oct 23 16:18:39 tw /kernel: Other BIOS signatures found:
Oct 23 16:18:39 tw /kernel: ACPI: 000fc3e0
Oct 23 16:18:39 tw /kernel: Preloaded elf kernel "kernel" at 0xc04a9000.
Oct 23 16:18:39 tw /kernel: Preloaded userconfig_script "/boot/kernel.conf" at 
0xc04a90a8.
Oct 23 16:18:39 tw /kernel: VESA: information block
Oct 23 16:18:39 tw /kernel: 56 45 53 41 00 02 50 0b 00 c0 01 00 00 00 8b 0b 
Oct 23 16:18:39 tw /kernel: 00 c0 40 00 01 01 68 0b 00 c0 79 0b 00 c0 83 0b 
Oct 23 16:18:39 tw /kernel: 00 c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
Oct 23 16:18:39 tw /kernel: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
Oct 23 16:18:39 tw /kernel: VESA: 63 mode(s) found
Oct 23 16:18:3

Re: RFC: BSD network stack virtualization

2002-10-23 Thread Julian Elischer
I'm very impressed. I do however have some questions.
(I have not read the code yet, just the writeup)

1/ How do you cope with each machine expecting to have its own loopback
interface?  Is it sufficient to make lo1 lo2 lo3 etc. and attach them
to the appropriate VMs?

2/ How much would be gained (i.e. is it worth it) to combine this with
jail?  Can you combine them? (does it work?) Does it make sense?

3/ You implemented this in 4.x, which means that we need to reimplement
it in -current before it has any chance of being 'included'.  Do you
think that would be a big problem?

5/ Does inclusion of the virtualisation have any measurable effect on
throughput for systems that are NOT using virtualisation?  In other
words, does the non-virtualised code path get much extra work? (do you
have numbers?) (i.e. does it cost much for the OTHER users if we
incorporated this into FreeBSD?)

6/ I think that your ng_dummy node is cute..
can I commit it separately? (after porting it to -current..)

7/ the vmware image is a great idea.

8/ can you elaborate on the following:
  * hiding of "foreign" filesystem mounts within chrooted virtual images

9/ how does VIPA differ from the JAIL address binding?

10/ could you use ng_eiface instead of if_ve?

11/ why was ng_bridge unsuitable for your use?

12/ can you elaborate on the following:
  # fix netgraph interface node naming
  # fix the bugs in base networking code (statistics in
"native" bridging, additional logic for ng_bridge...)






Hostap mode and reboot

2002-10-23 Thread Michael DeMan
Hi,

We're running FBSD 4.6.2 in hostap mode to support a couple of wireless
clients.  The setup is pretty standard but there is one critical problem.

If the hostap machine gets rebooted, the clients do not reconnect and we
must also reboot them before they're back up.

Has anybody else experienced this or know of any workarounds?

Thanks,
- Mike


Michael F. DeMan
Director of Technology
OpenAccess Internet Services
1305 11th St., 3rd Floor
Bellingham, WA 98225
Tel 360-647-0785 x204
Fax 360-738-9785
[EMAIL PROTECTED]







VLAN problems with replies to broadcast

2002-10-23 Thread Charlie Root
We have built a machine with 3 vlans parented off an fxp.  The vlans
are bridged and vlan0 has an IP.  The fxp has no IP and is excluded
from the bridge group.

 ** root@fw ** ~ ** Sat Oct 19 18:38:08
# ifconfig
fxp0: flags=8943 mtu 1500
ether 00:02:b3:5b:dd:98
media: Ethernet autoselect (100baseTX )
status: active
vlan0: flags=8843 mtu 1500
inet 192.168.10.1 netmask 0xff00 broadcast 192.168.10.255
ether 00:02:b3:5b:dd:98
vlan: 5 parent interface: fxp0
lo0: flags=8049 mtu 16384
inet 127.0.0.1 netmask 0xff00
vlan1: flags=8843 mtu 1500
ether 00:02:b3:5b:dd:98
vlan: 10 parent interface: fxp0
vlan2: flags=8843 mtu 1500
ether 00:02:b3:5b:dd:98
vlan: 20 parent interface: fxp0

The fxp is plugged into an SMC Tigerswitch.  The SMC is configured to
pass VLAN's 5, 10 and 20.

Everything works except replies to broadcast packets.

e.g. using tcpdump I observe an arp request coming from the SMC
switch.  tcpdump reports three packets (one tagged with each
VLAN -- it's not clear whether there really are 3 distinct packets or
whether tcpdump is making a best effort to report a broadcast packet).
tcpdump also displays a single reply tagged with the one correct VLAN
(the remote host's traffic is tagged by the SMC).  The remote host does
not receive the reply.  Presumably the SMC is not forwarding the
packet.

The same behaviour is observable for dhcp requests.

Is there some reason why a packet sent in reply from a VLAN interface
might be tagged differently such that the SMC would refuse it?

Has anyone else observed such behaviour?

Can anyone suggest some tests I might try or further reading?

Thank you for your time.


--ericx




Re: BSD network stack virtualization + IEEE 802.1Q

2002-10-23 Thread Marko Zec
Bill Coutinho wrote:

> Sean Chittenden, in FreeBSD-Arch list, pointed me to your "BSD network stack
> virtualization" site.
>
> What I'm trying to achieve is one box with many independent "virtual
> servers" (using the Jail subsystem), but with each vistual server attached
> to a different VLAN using the same physical NIC. This NIC should be
> connected to a switch with the 802.1Q protocol.
>
> My question is: is it possible to associate a "virtual stack" to a VLAN
> number in a 802.1Q enabled net interface, and combine it with the Jail
> subsystem "struct jail"?

Yes, you can do that with the virtualized network stack very easily, but without
using jail(8). Here is a sample step-by-step example, which I hope accomplishes
what you want:

First, we have to create two new virtual images.  Here we also set the hostnames
for the new vimages, which is of course not a mandatory step, but it makes things
more comprehensible:

tpx30# vimage -c my_virtual_node1
tpx30# vimage -c my_virtual_node2
tpx30# vimage my_virtual_node1 hostname node1
tpx30# vimage my_virtual_node2 hostname node2

We then create two vlan interfaces and associate them with the physical ifc and
vlan tags:

tpx30# ifconfig vlan create
vlan0
tpx30# ifconfig vlan create
vlan1
tpx30# ifconfig vlan0 vlan 1001 vlandev fxp0
tpx30# ifconfig vlan1 vlan 1002 vlandev fxp0
tpx30# ifconfig
fxp0: flags=8843 mtu 1500
inet 192.168.201.130 netmask 0xff00 broadcast 192.168.201.255
ether 00:09:6b:e0:d5:fc
media: Ethernet autoselect (10baseT/UTP)
status: active
vlan0: flags=8842 mtu 1500
ether 00:09:6b:e0:d5:fc
vlan: 1001 parent interface: fxp0
vlan1: flags=8842 mtu 1500
ether 00:09:6b:e0:d5:fc
vlan: 1002 parent interface: fxp0
lo0: flags=8049 mtu 16384
inet 127.0.0.1 netmask 0xff00
tpx30#

Further, we move (reassign) the vlan interfaces to the appropriate virtual
images. The vlan ifcs will disappear from the current (master) virtual image:

tpx30# vimage -i my_virtual_node1 vlan0
tpx30# vimage -i my_virtual_node2 vlan1
tpx30# ifconfig
fxp0: flags=8843 mtu 1500
inet 192.168.201.130 netmask 0xff00 broadcast 192.168.201.255
ether 00:09:6b:e0:d5:fc
media: Ethernet autoselect (10baseT/UTP)
status: active
lo0: flags=8049 mtu 16384
inet 127.0.0.1 netmask 0xff00
tpx30#

Now we spawn a new interactive shell in one of the created virtual images. We
can now manage the interfaces in the usual way, start new processes/daemons,
configure ipfw...

tpx30# vimage my_virtual_node1
Switched to vimage my_virtual_node1
node1# ifconfig vlan0 1.2.3.4
node1# ifconfig
vlan0: flags=8843 mtu 1500
inet 1.2.3.4 netmask 0xff00 broadcast 1.255.255.255
ether 00:09:6b:e0:d5:fc
vlan: 1001 parent interface: fxp0@master
lo0: flags=8008 mtu 16384
node1# inetd
node1# exit

Note that you won't be able to change the vlan tag and/or parent interface
inside the virtual image where the vlan interface resides, but only in the
virtual image that contains the physical interface (the "master" vimage in
this example).

Finally, here is the summary output from the vimage -l command issued in the
master virtual image:

tpx30# vimage -l
"master":
37 processes, load averages: 0.00, 0.02, 0.00
CPU usage: 0.26% (0.26% user, 0.00% nice, 0.00% system)
Nice level: 0, no CPU limit, no process limit, child limit: 15
2 network interfaces, 2 child vimages
"my_virtual_node2":
0 processes, load averages: 0.00, 0.00, 0.00
CPU usage: 0.00% (0.00% user, 0.00% nice, 0.00% system)
Nice level: 0, no CPU limit, no process limit
2 network interfaces, parent vimage: "master"
"my_virtual_node1":
1 processes, load averages: 0.00, 0.00, 0.00
CPU usage: 0.24% (0.20% user, 0.00% nice, 0.04% system)
Nice level: 0, no CPU limit, no process limit
2 network interfaces, parent vimage: "master"

Hope this helps,

Marko

