[dpdk-dev] [Snort-devel] Reply: A multithreaded DPDK DAQ Module for Snort 3.0

2016-09-16 Thread Zhu, Heqing
Nacht:

What about submitting the DPDK patch to DPDK.org? It is a good piece of work, 
and it makes sense to avoid carrying an extra patch whenever possible. We can 
review the patch on the mailing list.

From: Nacht Z [mailto:nac...@outlook.com]
Sent: Friday, September 16, 2016 10:56 AM
To: snort-devel at lists.sourceforge.net
Subject: [Snort-devel] Re: A multithreaded DPDK DAQ Module for Snort 3.0


I have now made three patches, for DPDK-16.04, DAQ-2.1.0, and 
Snort-3.0.0-a4-201-auto.
Apply dpdk.patch to DPDK and then install DPDK.
Apply daq.patch to DAQ and then install DAQ.
For Snort, we need to first run ./configure, then apply snort.patch in the 
Snort tree, and then install Snort.
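Spelled out as commands, the sequence above would look roughly like this; the directory names, patch-file locations, and the DPDK build target are assumptions, not taken from the patches themselves:

```shell
# Sketch of the build order described above (paths are assumptions).
cd dpdk-16.04
patch -p1 < ../dpdk.patch
make install T=x86_64-native-linuxapp-gcc

cd ../daq-2.1.0
patch -p1 < ../daq.patch
./configure && make && make install

cd ../snort-3.0.0-a4
./configure                    # configure BEFORE patching, as noted above
patch -p1 < ../snort.patch
make && make install
```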

The case you describe is one I hadn't considered. I'll try to solve this problem.


From: Michael Altizer <mialtize at cisco.com>
Sent: 2016-09-15 22:49:34
To: snort-devel at lists.sourceforge.net
Subject: Re: [Snort-devel] A multithreaded DPDK DAQ Module for Snort 3.0

Thanks, NachtZ - this looks like a great start to a multi-threaded DPDK DAQ 
module.  It might be better if you were to offer it as a standalone DAQ module 
for the time being (see https://github.com/Xiche/daq_odp for an example).

Just a warning for anyone trying to just pick this up and use it: like NachtZ 
said, each packet thread will only receive packets from a single interface.  
This means that Snort inspection will be generally ineffectual in an inline 
scenario as any given packet thread will only be looking at one direction of 
the traffic and be fairly confused when it comes to bidirectional protocols 
(say, TCP).

On 09/13/2016 10:18 AM, Nacht Z wrote:

Hello Everyone:

I have implemented a multithreaded DPDK DAQ module for DAQ 2.1.0 and Snort 3.0. 
Here is the project link on GitHub: DPDK_DAQ.
The link is a complete daq-2.1.0 project and a guide on how to install and 
use the module in Snort 3.0.
This module supports multithreading and changes the relationship between 
Snort 3.0's pigs (in fact, "pig" is just another name for a thread in Snort 3.0) 
and NICs. A pig can only have one NIC in the DPDK module, so if you want to run 
multiple NICs, you should use the -z option in Snort 3. Otherwise you can 
effectively use only one NIC.
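As an illustration of the one-pig-per-NIC constraint, an inline run over a single DPDK port pair might be launched like this; the DAQ name, interface spelling, and config path are assumptions rather than values taken from the patch:

```shell
# Hypothetical Snort 3 invocation: two NICs bridged inline, so two
# packet threads (-z 2) are needed, one pig per NIC.
snort -c /usr/local/etc/snort/snort.lua --daq dpdk -Q \
      -i "dpdk0:dpdk1" -z 2
```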
I have also tested the performance using a Spirent TestCenter. I connected 
Snort and the TestCenter like this:

 Spirent Port0   <-->   Snort Port2
       ^                    ^
       |                    |
       |                    |
       v                    v
 Spirent Port1   <-->   Snort Port3

I send packets from port0 to port2 and from port1 to port3. Snort (running in 
inline mode with the BPF filter "not ip") forwards the flows along 
port2 -> port3 -> port1 and port3 -> port2 -> port0 at the same time. On my 
82599ES, I can run at nearly full speed (99%) on a 10G LAN without losing 
packets. (But when I run at 100 speed it will lose 4445/5 packets.)

This project is based on the daq_netmap.c module and Tiwei Bie's project.

Any comments would be appreciated. Thanks a lot!
Best wishes
NachtZ




--
___
Snort-devel mailing list
Snort-devel at lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/snort-devel
Archive: http://sourceforge.net/mailarchive/forum.php?forum_name=snort-devel

Please visit http://blog.snort.org for the latest news about Snort!




[dpdk-dev] i40e: cannot change mtu to enable jumbo frame

2016-02-09 Thread Zhu, Heqing
Helin is still on Chinese New Year vacation. Will the command option below help?

4.5.9. port config - max-pkt-len
Set the maximum packet length:

testpmd> port config all max-pkt-len (value)
This is equivalent to the --max-pkt-len command-line option.
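For reference, the same limit can also be applied at launch time; the core/memory arguments and the 9600-byte value below are illustrative:

```shell
# Equivalent start-up form of "port config all max-pkt-len":
testpmd -l 0-1 -n 4 -- -i --max-pkt-len=9600
```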


-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Julien Meunier
Sent: Tuesday, February 9, 2016 9:36 AM
To: Zhang, Helin ; dev at dpdk.org
Subject: [dpdk-dev] i40e: cannot change mtu to enable jumbo frame

Hello Helin,

I tried to send jumbo frames to an i40e card. However, I observed that all such 
frames are dropped. Moreover, the set_mtu function is not implemented in the 
i40e PMD.

 > testpmd --log-level 8 --huge-dir=/mnt/huge -n 4 -l 2,18 --socket-mem 1024,1024 \
 >   -w 0000:02:00.0 -w 0000:02:00.2 -- -i --nb-cores=1 --nb-ports=2 \
 >   --total-num-mbufs=65536

=============
Configuration
=============

+--------+          +-----------+
|        |          |           |
|  tgen  +----------+  port 0   |
|        |          |           |
|        +----------+  port 1   |
|        |          |           |
+--------+          +-----------+

DPDK: DPDK-v2.2

==========
MTU = 1500
==========
Packet sent from a tgen
 > p = Ether / IP / UDP / Raw(MTU + HDR(Ethernet) - HDR(IP) - HDR(UDP))
 > len(p) = 1514

testpmd> start
PMD: i40e_rxd_to_vlan_tci(): Mbuf vlan_tci: 0, vlan_tci_outer: 0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...
PMD: i40e_update_vsi_stats(): *** VSI[13] stats start ***
PMD: i40e_update_vsi_stats(): rx_bytes:1518
PMD: i40e_update_vsi_stats(): rx_unicast:  1
PMD: i40e_update_vsi_stats(): *** VSI[13] stats end ***
PMD: i40e_dev_stats_get(): *** PF stats start ***
PMD: i40e_dev_stats_get(): rx_bytes:1514
PMD: i40e_dev_stats_get(): rx_unicast:  1
PMD: i40e_dev_stats_get(): rx_unknown_protocol: 1
PMD: i40e_dev_stats_get(): rx_size_1522: 1
PMD: i40e_dev_stats_get(): *** PF stats end ***

   -- Forward statistics for port 0  --
   RX-packets: 1  RX-dropped: 0 RX-total: 1
   TX-packets: 0  TX-dropped: 0 TX-total: 0


PMD: i40e_update_vsi_stats(): *** VSI[14] stats start ***
PMD: i40e_update_vsi_stats(): tx_bytes:1514
PMD: i40e_update_vsi_stats(): tx_unicast:  1
PMD: i40e_update_vsi_stats(): *** VSI[14] stats end ***
PMD: i40e_dev_stats_get(): *** PF stats start ***
PMD: i40e_dev_stats_get(): tx_bytes:1514
PMD: i40e_dev_stats_get(): tx_unicast:  1
PMD: i40e_dev_stats_get(): tx_size_1522: 1
PMD: i40e_dev_stats_get(): *** PF stats end ***

   -- Forward statistics for port 1 --
   RX-packets: 0  RX-dropped: 0 RX-total: 0
   TX-packets: 1  TX-dropped: 0 TX-total: 1



   + Accumulated forward statistics for all ports+
   RX-packets: 1  RX-dropped: 0 RX-total: 1
   TX-packets: 1  TX-dropped: 0 TX-total: 1



=> OK

==========
MTU = 1600
==========
Packet sent
 > p = Ether / IP / UDP / Raw(MTU + HDR(Ethernet) - HDR(IP) - HDR(UDP))
 > len(p) = 1614

testpmd> port config mtu 0 1600
rte_eth_dev_set_mtu: Function not supported
Set MTU failed. diag=-95
testpmd> port config mtu 1 1600
rte_eth_dev_set_mtu: Function not supported
Set MTU failed. diag=-95
testpmd> start
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...
PMD: i40e_update_vsi_stats(): *** VSI[13] stats start ***
PMD: i40e_update_vsi_stats(): rx_bytes:1618
PMD: i40e_update_vsi_stats(): rx_unicast:  1
PMD: i40e_update_vsi_stats(): *** VSI[13] stats end ***
PMD: i40e_dev_stats_get(): *** PF stats start ***
PMD: i40e_dev_stats_get(): rx_bytes:1614
PMD: i40e_dev_stats_get(): rx_unicast:  1
PMD: i40e_dev_stats_get(): rx_unknown_protocol: 1
PMD: i40e_dev_stats_get(): rx_size_big:  1
PMD: i40e_dev_stats_get(): *** PF stats end ***

   -- Forward statistics for port 0  --
   RX-packets: 1  RX-dropped: 0 RX-total: 1
   TX-packets: 0  TX-dropped: 0 TX-total: 0


PMD: i40e_update_vsi_stats(): *** VSI[14] stats start ***
PMD: i40e_update_vsi_stats(): tx_bytes:0
PMD: i40e_update_vsi_stats(): tx_unicast:  0
PMD: i40e_update_vsi_stats(): *** VSI[14] stats end ***
PMD: i40e_dev_stats_get(): *** PF stats start ***
PMD: 

[dpdk-dev] DPDK patch backlog

2015-10-16 Thread Zhu, Heqing
+1

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Stephen Hemminger
Sent: Friday, October 16, 2015 5:44 AM
To: Thomas Monjalon 
Cc: dev at dpdk.org
Subject: [dpdk-dev] DPDK patch backlog

There are currently 428 patches in New state in DPDK patchwork.

Thomas, could you start reducing that backlog?
The simplest solution would be to merge some of the big patch series from Intel 
for the base drivers, then reviewers can focus on the other patches.


[dpdk-dev] which 40G card support DPDK or DPDK 2.0

2015-01-24 Thread Zhu, Heqing
http://www.dpdk.org/doc/nics 

i40e (X710, XL710) can support Intel 40G Ethernet cards - Source code is part 
of DPDK today.
http://www.dpdk.org/browse/dpdk/tree/lib/librte_pmd_i40e/ 


-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Jim Hao Chen
Sent: Friday, January 23, 2015 2:25 PM
To: dev at dpdk.org
Subject: [dpdk-dev] which 40G card support DPDK or DPDK 2.0

Hello:

I am sorry if this question has been asked before, or if this is not the right 
list to ask; I searched the list archive and did not see any answer.

Can anyone share information about which Mellanox 40G NICs on the market 
can/will support DPDK or DPDK 2.0? The DPDK supported-NIC page only lists mlx4 
(ConnectX-3, ConnectX-3 Pro).

We are in the process of acquiring hardware to start DPDK-related projects.

Thanks

Jim Chen

iCAIR/Northwestern University


[dpdk-dev] Avoid stripping vlan tag with I350 VF on a VM

2014-12-01 Thread Zhu, Heqing
Did you try disabling VLAN stripping on the PF side?

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Shiantung Wong
Sent: Monday, December 01, 2014 11:29 AM
To: dev at dpdk.org
Subject: [dpdk-dev] Avoid stripping vlan tag with I350 VF on a VM

My application running on a VM needs to deal with multiple VLANs on packets 
received over an I350 Virtual Function, but I always see the received packets 
without a VLAN tag.
This is my setup:
  - host: 3.13.6-031306-generic
  - QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.7)
  - guest: 2.6.32-358.el6.x86_64
  - dpdk: 1.6

And this is how I set up in the VM:
---
memset(&port_conf, 0, sizeof(port_conf));
if (vlanCnt > 0)
    port_conf.rxmode.hw_vlan_filter = 1;

port_conf.rxmode.jumbo_frame = 1;
port_conf.rxmode.max_rx_pkt_len = 9018;

ret = rte_eth_dev_configure(port, 1, 1, &port_conf);
if (ret < 0) {
    rte_log(RTE_LOG_CRIT, RTE_LOGTYPE_EAL,
            "Error on config port%u, ret=%d\n", (unsigned)port, ret);
    return -1;
}

for (i = 0; i < vlanCnt; i++) {
    ret = rte_eth_dev_vlan_filter(port, vlan[i], 1);
    if (ret < 0) {
        rte_log(RTE_LOG_CRIT, RTE_LOGTYPE_EAL,
                "Error on config port%u, vlan%d ret=%d\n",
                (unsigned)port, vlan[i], ret);
        return -1;
    }
}
---

I also tried to explicitly disable the stripping with the following call, but 
it returns -ENOTSUP:

ret = rte_eth_dev_set_vlan_strip_on_queue(port, 0, 0);

I would appreciate any help on how to set this up.

Thanks,
Shian Wong


[dpdk-dev] [PATCH v4 00/10] VM Power Management

2014-11-21 Thread Zhu, Heqing
Pablo just sent a new patch set. This is a significant effort, and it addresses 
a valid technical problem statement.
I express my support for accepting this feature into the DPDK mainline.

IMHO, the previous *rejection* reasons are not solid. It is important to 
encourage real contributions like this.


-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of O'driscoll, Tim
Sent: Monday, November 10, 2014 10:54 AM
To: Carew, Alan; Thomas Monjalon
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH v4 00/10] VM Power Management

> From: Carew, Alan
> 
> > Did you make any progress in Qemu/KVM community?
> > We need to be sync'ed up with them to be sure we share the same goal.
> > I want also to avoid using a solution which doesn't fit with their plan.
> > Remember that we already had this problem with ivshmem which was 
> > planned to be dropped.
> >
. . .
> 
> Unfortunately, I have not yet received any feedback:
> http://lists.nongnu.org/archive/html/qemu-devel/2014-11/msg01103.html

Just to add to what Alan said above, this capability does not exist in qemu at 
the moment, and based on there having been no feedback on the qemu mailing list 
so far, I think it's reasonable to assume that it will not be implemented in 
the immediate future. The VM Power Management feature has also been designed to 
allow easy migration to a qemu-based solution when this is supported in future. 
Therefore, I'd be in favour of accepting this feature into DPDK now.

It's true that the implementation is a work-around, but there have been similar 
cases in DPDK in the past. One recent example that comes to mind is userspace 
vhost. The original implementation could also be considered a work-around, but 
it met the needs of many in the community. Now, with support for vhost-user in 
qemu 2.1, that implementation is being improved. I'd see VM Power Management 
following a similar path when this capability is supported in qemu.


Tim


[dpdk-dev] DPDK Features for Q1 2015

2014-10-22 Thread Zhu, Heqing


> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Liang, Cunming
> Sent: Wednesday, October 22, 2014 8:06 AM
> To: Zhou, Danny; Thomas Monjalon; O'driscoll, Tim
> Cc: dev at dpdk.org; Fastabend, John R; Ronciak, John
> Subject: Re: [dpdk-dev] DPDK Features for Q1 2015
> 
> > >
> > > This design allows to keep the configuration code in one place: the
> kernel.
> > > In the meantime, we are trying to add a lot of code to configure the
> > > NICs, which looks to be a duplication of effort.
> > > Why should we have two ways of configuring e.g. flow director?

[heqing] There will be multiple choices of DPDK usage model if/after this 
feature is available; the customer can choose DPDK with or without the 
bifurcated driver.

> [Liang, Cunming] The HW sometimes provides more capability than the existing
> abstraction API. In that case (the HW capability is a superset of the
> abstraction wrapper, e.g. flow director), we need to provide another choice.
> Ethtool is good, but it can't expose everything supported by the NIC.
> The bifurcated driver focuses a lot on reusing the existing rx/tx routines.
> We'll send an RFC patch soon if the kernel patches move fast.
> 
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Zhou, Danny
> > Sent: Wednesday, October 22, 2014 10:44 PM
> > To: Thomas Monjalon; O'driscoll, Tim
> > Cc: dev at dpdk.org; Fastabend, John R; Ronciak, John
> > Subject: Re: [dpdk-dev] DPDK Features for Q1 2015
> >
> > Thomas,
> >
> > In terms of the bifurcated driver, it is actually the same thing.
> > Specifically, the bifurcated driver PMD in DPDK depends on kernel
> > code(af_packet and 10G/40G NIC) changes. Once the kernel patches are
> > upstreamed, the corresponding DPDK PMDs patches will be submitted to
> > dpdk.org. John Fastabend and John Ronciak are working with very
> > closely to achieve the same goal.
> >
> > -Danny
> >
> > > -Original Message-
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Thomas
> Monjalon
> > > Sent: Wednesday, October 22, 2014 10:21 PM
> > > To: O'driscoll, Tim
> > > Cc: dev at dpdk.org; Fastabend, John R; Ronciak, John
> > > Subject: Re: [dpdk-dev] DPDK Features for Q1 2015
> > >
> > > Thanks Tim for sharing your plan.
> > > It's really helpful to improve community collaboration.
> > >
> > > I'm sure it's going to generate some interesting discussions.
> > > Please take care to discuss such announce on dev list only.
> > > The announce at dpdk.org list is moderated to keep a low traffic.
> > >
> > > I would like to open discussion about a really important feature,
> > > showed last week by John Fastabend and John Ronciak during LinuxCon:
> > >
> > > > Bifurcated Driver: With the Bifurcated Driver, the kernel will
> > > > retain direct control of the NIC, and will assign specific queue pairs 
> > > > to
> DPDK.
> > > > Configuration of the NIC is controlled by the kernel via ethtool.
> > >
> > > This design allows to keep the configuration code in one place: the
> kernel.
> > > In the meantime, we are trying to add a lot of code to configure the
> > > NICs, which looks to be a duplication of effort.
> > > Why should we have two ways of configuring e.g. flow director?
> > >
> > > Since you at Intel will be supporting your code, I am fine with the
> > > duplication, but I feel it's worth arguing why both should be available
> > > instead of one.
> > >
> > > --
> > > Thomas


[dpdk-dev] multiqueue is supported by vhost user?

2014-09-25 Thread Zhu, Heqing
It is not supported yet. Do you want to send patches to add this support? 

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Zhangkun (K)
Sent: Thursday, September 25, 2014 11:28 AM
To: dev at dpdk.org
Subject: [dpdk-dev] multiqueue is supported by vhost user?

Hi,
  Is multiqueue supported by vhost-user?
  If it is not supported now, will it be supported in the future?


[dpdk-dev] Regarding Crypto Accelerators

2014-09-17 Thread Zhu, Heqing
8086:0435 is also valid for the new generation.

-Sent via Intel smartphone.


-Original Message-
From: De Lara Guarch, Pablo [pablo.de.lara.guarch at intel.com]
Sent: Wednesday, September 17, 2014 04:19 AM Pacific Standard Time
To: Prashant Upadhyaya; dev at dpdk.org
Subject: Re: [dpdk-dev] Regarding Crypto Accelerators


Hi Prashant,

> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Prashant
> Upadhyaya
> Sent: Wednesday, September 17, 2014 7:32 AM
> To: dev at dpdk.org
> Subject: [dpdk-dev] Regarding Crypto Accelerators
>
> Hi,
>
> I am planning to explore the usage of hardware crypto accelerators on Intel
> machine with Quick Assist libraries and DPDK.
> My first question is -- how do I find out whether my machine has the
> hardware crypto accelerators or not.

Use lspci to do so. Search for DH89, and if you get anything from there, that 
means that you have a crypto accelerator.
Anyway, if I am not wrong, the device IDs are 8086:0434 and 8086:0438 (or 
similar).

Regards,
Pablo
>
> Regards
> -Prashant


[dpdk-dev] fedora 19 / 20

2014-03-04 Thread Zhu, Heqing
I think so. 

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Sundeep Singatwaria
Sent: Tuesday, March 04, 2014 4:36 AM
To: dev at dpdk.org
Subject: [dpdk-dev] fedora 19 / 20

Does DPDK support Fedora 19 and/or Fedora 20? Has anyone tested with these 
distributions?

Thanks.



[dpdk-dev] [PATCH] Request for comments on ixgbe TSO support

2013-10-08 Thread Zhu, Heqing
Hi Qinglai, 

>> Besides, as you mentioned, the ixgbe driver doesn't leverage the hardware 
>> receive checksum offloading at all.

On the Rx side, the ixgbe driver supports Rx checksum validation via hardware 
offload. There is a simple example at DPDK/app/test-pmd/csumonly.c that checks 
for IPv4/L4 checksum errors:

/* Update the L3/L4 checksum error packet count */
rx_bad_ip_csum += (uint16_t)((pkt_ol_flags & PKT_RX_IP_CKSUM_BAD) != 0);
rx_bad_l4_csum += (uint16_t)((pkt_ol_flags & PKT_RX_L4_CKSUM_BAD) != 0);

-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of jigsaw
Sent: Saturday, October 05, 2013 3:11 AM
To: Venkatesan, Venky
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH] Request for comments on ixgbe TSO support

Hi Stephen,

Thanks for showing a bigger picture.

GSO is quite a big implementation, so I think it won't be easily ported to 
DPDK. The mbuf would need to be equipped with many fields from the skb to be 
able to deal with GSO.
Do you plan to port GSO to DPDK, or would you rather keep GSO within the scope 
of virtio?

Regarding checksum flags, I was actually also thinking of extending ol_flags, 
but I gave it up because I was worried about the size of the mbuf.
My current patch has to push some work to the user, because the mbuf delivers 
too little information (such as L2 and L3 protocol details).

Besides, as you mentioned, the ixgbe driver doesn't leverage hardware receive 
checksum offloading at all. If this is to be supported, the checksum flags 
need further extension.
(On the other hand, TSO doesn't care about receive checksum offloading.)
Again, do you have plans to extend the cksum flags so that virtio feels more 
comfortable with DPDK?

Hi Venky,

I can either make the commit now as is, or wait until the cksum flags extension 
is in place. If Stephen (or somebody else) has plans for better support of 
cksum offloading or GSO, it is perhaps better to implement TSO on top of that.

BTW, I have another small question. The current TSO patch offloads the TCP/IP 
pseudo-checksum work to the user. Do you think DPDK could provide some utility 
functions for calculating and updating the TCP/IPv4/IPv6 pseudo-checksum?
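For what it's worth, such a helper would be small. Here is a minimal sketch of the TCP/IPv4 pseudo-header sum in plain C; the function name and signature are illustrative, not an existing DPDK API:

```c
#include <assert.h>
#include <stdint.h>

/* Folded (but not complemented) sum of the TCP/IPv4 pseudo-header:
 * source address, destination address, zero byte + protocol, and
 * L4 length. This is the value typically seeded into the TCP
 * checksum field before handing the rest to hardware TX offload. */
static uint16_t ipv4_pseudo_cksum(uint32_t src_ip, uint32_t dst_ip,
                                  uint8_t proto, uint16_t l4_len)
{
    uint32_t sum = 0;

    sum += (src_ip >> 16) & 0xffff;   /* source address, high word */
    sum += src_ip & 0xffff;           /* source address, low word  */
    sum += (dst_ip >> 16) & 0xffff;   /* destination address       */
    sum += dst_ip & 0xffff;
    sum += proto;                     /* zero byte + protocol      */
    sum += l4_len;                    /* TCP header + payload len  */

    /* Fold any carries back into the low 16 bits. */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)sum;
}
```

For example, 192.168.0.1 -> 192.168.0.2, protocol 6 (TCP), L4 length 20 gives 0x816e. An updating variant would recompute only the terms that changed, along the lines of RFC 1624 incremental checksum update.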

thx &
rgds,
-Qinglai


On Fri, Oct 4, 2013 at 9:38 PM, Venkatesan, Venky  wrote:
> Stephen,
>
> Agree on the checksum flag definition. I'm presuming that we should do this 
> on the L3 and L4 checksums separately (that ol_flags field is another one 
> that needs extension in the mbuf).
>
> Regards,
> -Venky
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Stephen Hemminger
> Sent: Friday, October 04, 2013 11:23 AM
> To: jigsaw
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] Request for comments on ixgbe TSO 
> support
>
> On Fri, 4 Oct 2013 20:54:31 +0300
> jigsaw  wrote:
>
>> Hi Stephen,
>>
>>
>> >>This will work for local generated packets but overlapping existing field 
>> >>won't work well for forwarding.
>> So adding a new mss field in mbuf could be the way out? or I 
>> misunderstand something.
>>
>> >> What we want to be able to do is to take offload (jumbo) packets 
>> >> in with from virtio
>> Sorry I don't understand why TSO is connected to virtio. Could you 
>> give more details here?
>> Are you suggesting this TSO patch overlaps your work, or it should be 
>> based on your work?
>
> I am working on a better virtio driver. I already have lots more features
> working, and better offload support is planned.
>
> TSO is a subset of the more generic segment offload (GSO) on Linux.
> With virtio is possible to receive GSO packets as well as send them.
> This feature is negotiated between guest and host.
>
> The idea is that guests can exchange jumbo (64K) packets between themselves
> even with a smaller MTU. This helps in many ways; one example is that only a
> single route lookup is needed.
>
> Another issue is that the current DPDK model of offload flags for checksum is 
> problematic.
> It matches what is available in Intel hardware and is not easily 
> generalizable to other devices.
>
> The current DPDK flag is checksum-bad. I would like to change it to
> checksum-known-good. Then drivers which don't do checksumming would leave it
> 0, but set it to 1 if the receive checksum is known good. Basically, 1 means
> known good, and 0 means unknown (or bad). Higher-level software can then do
> a software checksum if necessary.