Re: signed packages

2014-01-27 Thread Giancarlo Razzolini
On 27-01-2014 01:33, Nicolai wrote:
 All the TLD and other massive outages say otherwise. I can think of
 one project that uses DNSSEC to verify files via TXT lookups. Their
 last DNSSEC outage? 3 days ago. Ed25519 in signify provides a 128-bit
 security level and is decentralized. DNSSEC provides 112 bits at best,
 via a government-controlled hierarchy.

 Nicolai
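
(To make the quoted comparison concrete: the primitive behind signify is
Ed25519 detached signatures. Below is a minimal sketch of a sign/verify
round trip using libsodium, purely for illustration; signify ships its
own Ed25519 implementation, and the message string is an invented
placeholder.)

#include <sodium.h>
#include <stdio.h>

int
main(void)
{
	unsigned char pk[crypto_sign_PUBLICKEYBYTES];
	unsigned char sk[crypto_sign_SECRETKEYBYTES];
	unsigned char sig[crypto_sign_BYTES];
	/* placeholder message; signify signs checksum lists like this */
	const unsigned char msg[] = "SHA256 (pkg.tgz) = ...";

	if (sodium_init() < 0)
		return 1;
	crypto_sign_keypair(pk, sk);
	crypto_sign_detached(sig, NULL, msg, sizeof(msg) - 1, sk);

	/* returns 0 only if sig matches msg under pk */
	if (crypto_sign_verify_detached(sig, msg, sizeof(msg) - 1, pk) == 0)
		printf("signature ok\n");
	return 0;
}
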
I mentioned before that DNSSEC isn't perfect. Even the IETF recognizes
this and has indicated that it will improve the situation; when and how
is a mystery. The thing is that DNSSEC adds security, even with all its
problems: it would be one more thing an attacker has to compromise.
There is no ultimate trust in real life, so why would there be on the
internet? I won't die if they don't add it; I'll just keep doing the
same things and hoping that I haven't been compromised.

Cheers,

-- 
Giancarlo Razzolini
GPG: 4096R/77B981BC



bge(4): IPv6 checksum offload

2014-01-27 Thread Christian Weisgerber
Some bge(4) chips support IPv6 TCP checksum transmit offload.
Unfortunately, I have no idea which.  My best guess is that this
is symmetrical with the receive offload capability:

	if (BGE_IS_5755_PLUS(sc))
		mode |= BGE_RXMODE_IPV6_ENABLE;

So here is an experimental patch to enable it on 5755 and later
chips.  Courageous people who use both IPv6 and bge(4) might want
to try it.  If you do, please let me know about success and failure
and what chip.

$ dmesg | grep ^bge
bge0 at pci2 dev 0 function 0 "Broadcom BCM5761" rev 0x10, BCM5761 A1 (0x5761100)

That's one that works.

I have also verified that UDP checksum offload is indeed broken as
the comment in the driver claims.  For both IPv4 and IPv6, some
packets with checksum 0 are generated.
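
For readers unfamiliar with the rule being violated: in the Internet
checksum used by UDP, a result of 0x0000 is reserved to mean "no
checksum computed" (and IPv6 forbids omitting the checksum entirely),
so a sum that computes to zero must be transmitted as 0xffff. A minimal
sketch of that final folding step, written here only to illustrate the
RFC 768 rule and not taken from the driver:

#include <stdint.h>

/*
 * Fold a 32-bit one's-complement accumulator into a final 16-bit
 * UDP checksum.  Per RFC 768, 0x0000 means "no checksum", so a
 * computed zero is sent as 0xffff; offload engines that emit 0
 * instead produce the broken packets described above.
 */
static uint16_t
udp_cksum_finish(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	sum = ~sum & 0xffff;
	return (sum == 0 ? 0xffff : (uint16_t)sum);
}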


Index: if_bge.c
===================================================================
RCS file: /cvs/src/sys/dev/pci/if_bge.c,v
retrieving revision 1.346
diff -u -p -r1.346 if_bge.c
--- if_bge.c	30 Dec 2013 18:47:45 -0000	1.346
+++ if_bge.c	27 Jan 2014 19:43:18 -0000
@@ -2942,8 +2942,11 @@ bge_attach(struct device *parent, struct
 * offloading is enabled. Generating UDP checksum value 0 is
 * a violation of RFC 768.
 */
-	if (sc->bge_chipid != BGE_CHIPID_BCM5700_B0)
+	if (sc->bge_chipid != BGE_CHIPID_BCM5700_B0) {
 		ifp->if_capabilities |= IFCAP_CSUM_IPv4 | IFCAP_CSUM_TCPv4;
+		if (BGE_IS_5755_PLUS(sc))
+			ifp->if_capabilities |= IFCAP_CSUM_TCPv6;
+	}
 
 	if (BGE_IS_JUMBO_CAPABLE(sc))
 		ifp->if_hardmtu = BGE_JUMBO_MTU;
-- 
Christian naddy Weisgerber  na...@mips.inka.de



Re: bge(4): IPv6 checksum offload

2014-01-27 Thread Brad Smith

On 27/01/14 3:30 PM, Christian Weisgerber wrote:

Some bge(4) chips support IPv6 TCP checksum transmit offload.
Unfortunately, I have no idea which.  My best guess is that this
is symmetrical with the receive offload capability:

 if (BGE_IS_5755_PLUS(sc))
 mode |= BGE_RXMODE_IPV6_ENABLE;


After taking a look around at the various datasheets I was
going to propose more or less the same diff. The only difference
I had was moving it outside of the if (sc->bge_chipid !=
BGE_CHIPID_BCM5700_B0) check, as 5755 and newer ASICs are newer
than the BCM5700 anyway.
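
Concretely, the placement described above would look something like the
sketch below (the same attach-time logic rearranged; illustrative, not a
tested diff):

	/*
	 * Every 5755+ ASIC postdates the BCM5700_B0 errata chip, so
	 * the IPv6 capability test need not nest inside that check.
	 */
	if (sc->bge_chipid != BGE_CHIPID_BCM5700_B0)
		ifp->if_capabilities |= IFCAP_CSUM_IPv4 | IFCAP_CSUM_TCPv4;
	if (BGE_IS_5755_PLUS(sc))
		ifp->if_capabilities |= IFCAP_CSUM_TCPv6;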


So here is an experimental patch to enable it on 5755 and later
chips.  Courageous people who use both IPv6 and bge(4) might want
to try it.  If you do, please let me know about success and failure
and what chip.

$ dmesg | grep ^bge
bge0 at pci2 dev 0 function 0 "Broadcom BCM5761" rev 0x10, BCM5761 A1 (0x5761100)

That's one that works.

I have also verified that UDP checksum offload is indeed broken as
the comment in the driver claims.  For both IPv4 and IPv6, some
packets with checksum 0 are generated.


Index: if_bge.c
===================================================================
RCS file: /cvs/src/sys/dev/pci/if_bge.c,v
retrieving revision 1.346
diff -u -p -r1.346 if_bge.c
--- if_bge.c	30 Dec 2013 18:47:45 -0000	1.346
+++ if_bge.c	27 Jan 2014 19:43:18 -0000
@@ -2942,8 +2942,11 @@ bge_attach(struct device *parent, struct
 * offloading is enabled. Generating UDP checksum value 0 is
 * a violation of RFC 768.
 */
-	if (sc->bge_chipid != BGE_CHIPID_BCM5700_B0)
+	if (sc->bge_chipid != BGE_CHIPID_BCM5700_B0) {
 		ifp->if_capabilities |= IFCAP_CSUM_IPv4 | IFCAP_CSUM_TCPv4;
+		if (BGE_IS_5755_PLUS(sc))
+			ifp->if_capabilities |= IFCAP_CSUM_TCPv6;
+	}

 	if (BGE_IS_JUMBO_CAPABLE(sc))
 		ifp->if_hardmtu = BGE_JUMBO_MTU;







Re: em(4): Don't count RX overruns and missed packets as input errors

2014-01-27 Thread David Gwynne

On 26 Jan 2014, at 11:31 am, Brad Smith b...@comstyle.com wrote:

 On 31/12/13 5:50 AM, Mike Belopuhov wrote:
 On 31 December 2013 09:46, Brad Smith b...@comstyle.com wrote:
 On 31/12/13 3:14 AM, Mark Kettenis wrote:
 
 Date: Tue, 31 Dec 2013 01:28:04 -0500
 From: Brad Smith b...@comstyle.com
 
 Don't count RX overruns and missed packets as input errors. They're
 expected to increment when using MCLGETI.
 
 OK?
 
 
 These may be expected, but they're still packets that were not
 received.  And it is useful to know about these, for example when
 debugging TCP performance issues.
 
 
 Well, do we want to keep just the missed packets, or both? Part of the
 diff was inspired by this commit when I was looking at what counters
 were incrementing...
 
 for bge(4)..
 
 revision 1.334
 date: 2013/06/06 00:05:30;  author: dlg;  state: Exp;  lines: +2 -4;
 dont count rx ring overruns as input errors. with MCLGETI controlling the
 ring we expect to run out of rx descriptors as a matter of course, its not
 an error.
 
 ok mikeb@
 
 
 
 it does screw up statistics big time.  does the mpc counter follow rx_overruns?
 why did we add them both up previously?
 
 Yes, it does. I can't say why exactly, but before MCLGETI it was
 unlikely to have RX overruns in most environments.

it's not obvious?

rx rings are usually massively over-provisioned. eg, my myx has 512 entries in
its rx ring, but its interrupt mitigation is currently configured for
approximately 16k interrupts a second. if you're sustaining 1M pps, you can
divide that by the interrupt rate to figure out the average usage of the rx
ring: 1,000,000 / 16,000 is about 62 descriptors per interrupt. 512 is roughly
an order of magnitude more than you need for that workload.
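
The arithmetic above, spelled out as a small standalone sketch (the
rates are the assumed figures from the paragraph, not measurements):

#include <stdio.h>

/*
 * Ring-occupancy estimate: sustained packet rate divided by the
 * mitigated interrupt rate gives the average number of rx
 * descriptors consumed per interrupt.
 */
int
main(void)
{
	double pps = 1000000.0;		/* assumed sustained packets/second */
	double intrs = 16000.0;		/* assumed interrupts/second */
	int ring = 512;			/* rx descriptors in the ring */

	double per_intr = pps / intrs;
	printf("%.1f descriptors per interrupt\n", per_intr);	/* 62.5 */
	printf("%.1fx ring headroom\n", ring / per_intr);	/* ~8.2x */
	return 0;
}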

if you were hitting the ring limits before MCLGETI, then that means you don't
have enough cpu to process the pps. increasing the ring size would make it
worse, because you'd spend even more time freeing mbufs for packets you were
already too far behind to deal with.

 
 




Re: em(4): Don't count RX overruns and missed packets as input errors

2014-01-27 Thread Brad Smith
On Tue, Jan 28, 2014 at 01:21:46PM +1000, David Gwynne wrote:
 
 On 26 Jan 2014, at 11:31 am, Brad Smith b...@comstyle.com wrote:
 
  On 31/12/13 5:50 AM, Mike Belopuhov wrote:
  On 31 December 2013 09:46, Brad Smith b...@comstyle.com wrote:
  On 31/12/13 3:14 AM, Mark Kettenis wrote:
  
  Date: Tue, 31 Dec 2013 01:28:04 -0500
  From: Brad Smith b...@comstyle.com
  
   Don't count RX overruns and missed packets as input errors. They're
  expected to increment when using MCLGETI.
  
  OK?
  
  
  These may be expected, but they're still packets that were not
  received.  And it is useful to know about these, for example when
  debugging TCP performance issues.
  
  
  Well, do we want to keep just the missed packets, or both? Part of the
  diff was inspired by this commit when I was looking at what counters
  were incrementing...
  
  for bge(4)..
  
  revision 1.334
  date: 2013/06/06 00:05:30;  author: dlg;  state: Exp;  lines: +2 -4;
  dont count rx ring overruns as input errors. with MCLGETI controlling the
  ring we expect to run out of rx descriptors as a matter of course, its not
  an error.
  
  ok mikeb@
  
  
  
  it does screw up statistics big time.  does the mpc counter follow
  rx_overruns?
  why did we add them both up previously?
  
  Yes, it does. I can't say why exactly, but before MCLGETI it was
  unlikely to have RX overruns in most environments.
 
 it's not obvious?
 
 rx rings are usually massively over-provisioned. eg, my myx has 512 entries
 in its rx ring, but its interrupt mitigation is currently configured for
 approximately 16k interrupts a second. if you're sustaining 1M pps, you can
 divide that by the interrupt rate to figure out the average usage of the rx
 ring: 1,000,000 / 16,000 is about 62 descriptors per interrupt. 512 is
 roughly an order of magnitude more than you need for that workload.
 
 if you were hitting the ring limits before MCLGETI, then that means you
 don't have enough cpu to process the pps. increasing the ring size would
 make it worse, because you'd spend even more time freeing mbufs for packets
 you were already too far behind to deal with.

Ya, I don't know why I said "I can't say why exactly": I was thinking about
what you said regarding having a lot of buffers allocated, and that's why I
said it was unlikely to have RX overruns.

Since this was changed for bge(4), the other drivers using MCLGETI should be
changed as well, if there is consensus about not adding the RX overruns to
the interface's input errors counter. But I think kettenis also has a point
that this information is useful; it's just that we don't have any way of
exposing that info to userland.
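
One hypothetical shape for keeping the information without counting it
as an error is a driver-private counter, sketched below. The struct and
field names are invented for illustration; nothing like this exists in
the tree, and how to surface it to userland is exactly the open
question:

#include <stdint.h>

/*
 * Hypothetical sketch only: record ring overruns in a driver-private
 * counter instead of folding them into if_ierrors.  The names here
 * are invented; exposing such a counter to userland would still
 * require a new interface.
 */
struct rx_ring_stats {
	uint64_t	rx_overruns;	/* ring ran dry; expected w/ MCLGETI */
};

static inline void
rx_ring_overrun(struct rx_ring_stats *st)
{
	st->rx_overruns++;		/* note it, but not as an input error */
}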
