-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi,
I've been playing around with WME to test network performance for
different traffic classes, and I've come across a problem that I can't
quite understand.
I have an application that generates traffic with various TOS values
(BACKGROUND, BEST EFFORT, VOICE, VIDEO). It uses raw sockets to transmit
the IP packets. This all works well as long as ip->ip_len is less than
192 bytes. If ip->ip_len is larger than 192, the call to ieee80211_classify()
(/usr/src/sys/net80211/ieee80211_output.c) classifies the packet as
"BEST EFFORT", regardless of the TOS value my application sets.

Debugging ieee80211_classify(), I see that both ip->ip_tos and ip->ip_len
read as zero when I send a packet with ip->ip_len larger than 192 bytes.
Sniffing the network, I can see that my packets have the correct TOS and
length on the wire, but they don't get the correct WME classification.


- -------------ieee80211_output.c (ieee80211_classify)------------
        if (eh->ether_type == htons(ETHERTYPE_IP)) {
                const struct ip *ip = (struct ip *)
                        (mtod(m, u_int8_t *) + sizeof (*eh));
                /*
                 * IP frame, map the TOS field.
                 */
//added by myself (ip_len is a 16-bit field, so ntohs, not ntohl)
        printf("IP_TOS: %d, IP_LEN: %d\n", ip->ip_tos, ntohs(ip->ip_len));
//end
                switch (ip->ip_tos) {
                case 0x08:
                case 0x20:
                        d_wme_ac = WME_AC_BK;   /* background */
                        break;
                case 0x28:
                case 0xa0:
                        d_wme_ac = WME_AC_VI;   /* video */
                        break;
                case 0x30:                      /* voice */
                case 0xe0:
                case 0x88:                      /* XXX UPSD */
                case 0xb8:
                        d_wme_ac = WME_AC_VO;
                        break;
                default:
                        d_wme_ac = WME_AC_BE;
                        break;
                }

- -----------------------------------------------------

When I use a SOCK_DGRAM socket instead of a raw socket, everything works fine.

I'm running FreeBSD 6.0-STABLE, and my wireless NIC uses an Atheros chipset.

Has anyone got an idea what is going on?

regards,
Geir Egeland
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2 (FreeBSD)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFD6zZAAsOHgqjtXwERAqO6AKDVrEBmrlBvIu5qEx/1WSsYryQTGQCgidwv
6U4vVby9nDjEabmtsPzZoeE=
=r/wF
-----END PGP SIGNATURE-----
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"