stricter bioctl -c option parsing

2021-02-08 Thread Stefan Sperling
bioctl(8) only looks at the first character of the -c option argument
and ignores any trailing characters in the argument. Following the
addition of the RAID1C discipline this behaviour can lead to confusion.

This command with a typo ("C1" vs "1C") attempts to create a CRYPTO volume:

# bioctl -cC1 -l /dev/sd1d,/dev/sd2d softraid0
bioctl: not exactly one partition

With the patch below, "C1" is instead rejected as an invalid raid level:

# /tmp/bioctl -cC1 -l /dev/sd1d,/dev/sd2d softraid0
bioctl: Invalid RAID level

ok?

diff 5ee6e27345c7d0c6d947bafadc8160eb9da3f73b /usr/src
blob - 24d4042d6d6dd2ee997eac511b0816ad47441d35
file + sbin/bioctl/bioctl.c
--- sbin/bioctl/bioctl.c
+++ sbin/bioctl/bioctl.c
@@ -133,6 +133,8 @@ main(int argc, char *argv[])
func |= BIOC_CREATERAID;
if (strcmp(optarg, "1C") == 0) {
cr_level = 0x1C;
+   } else if (strlen(optarg) != 1) {
+   errx(1, "Invalid RAID level");
} else if (isdigit((unsigned char)*optarg)) {
cr_level = strtonum(optarg, 0, 10, &errstr);
if (errstr != NULL)
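
For illustration, a minimal stand-alone sketch of how the -c argument is
treated with the patch applied (a hypothetical parse_level() helper, not
the actual bioctl.c code; the single-letter CRYPTO handling is assumed
from context):

    #include <ctype.h>
    #include <err.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Hypothetical helper mirroring the stricter parsing: accept "1C"
     * for RAID1C, otherwise insist on exactly one character, so a typo
     * such as "C1" is rejected instead of silently becoming CRYPTO.
     */
    static int
    parse_level(const char *arg)
    {
        const char *errstr;
        int level;

        if (strcmp(arg, "1C") == 0)
            return 0x1C;                        /* RAID1C */
        if (strlen(arg) != 1)
            errx(1, "Invalid RAID level");      /* e.g. the "C1" typo */
        if (isdigit((unsigned char)*arg)) {
            level = strtonum(arg, 0, 10, &errstr);
            if (errstr != NULL)
                errx(1, "Invalid RAID level");
            return level;
        }
        return *arg;                            /* e.g. 'C' for CRYPTO */
    }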



Re: diff: tcp ack improvement

2021-02-08 Thread Alexander Bluhm
On Wed, Feb 03, 2021 at 11:20:04AM +0100, Claudio Jeker wrote:
> Just commit it. OK claudio@
> If people see problems we can back it out again.

This has huge impact on TCP performance.

http://bluhm.genua.de/perform/results/2021-02-07T00%3A01%3A40Z/perform.html

For a single TCP connection between two OpenBSD boxes, throughput
drops by 77% from 3.1 GBit/sec to 710 MBit/sec.  But with 100
parallel connections the overall throughput increases by 5%.

Sending from Linux to OpenBSD increases by 72% from 3.5 GBit/sec
to 6.0 GBit/sec.

Socket splicing from Linux to Linux via OpenBSD with 10 parallel
TCP connections increases by 25%, from 1.8 GBit/sec to 2.3 GBit/sec.

It seems that sending fewer ACK packets improves performance if the
machine is limited by the CPU.  But the TCP stack of OpenBSD sends
77% slower if it does not receive enough ACKs.  This has no impact
if we are measuring the combined throughput of many parallel
connections.  The Linux packet sending algorithm looks unaffected
by our more delayed ACKs.

I think 77% slower between two OpenBSDs is not acceptable.
Do others see that, too?

bluhm



Re: diff: tcp ack improvement

2021-02-08 Thread Theo de Raadt
Yes it is unacceptable.

Alexander Bluhm  wrote:

> On Wed, Feb 03, 2021 at 11:20:04AM +0100, Claudio Jeker wrote:
> > Just commit it. OK claudio@
> > If people see problems we can back it out again.
> 
> This has huge impact on TCP performance.
> 
> http://bluhm.genua.de/perform/results/2021-02-07T00%3A01%3A40Z/perform.html
> 
> For a single TCP connection between two OpenBSD boxes, throughput
> drops by 77% from 3.1 GBit/sec to 710 MBit/sec.  But with 100
> parallel connections the overall throughput increases by 5%.
> 
> Sending from Linux to OpenBSD increases by 72% from 3.5 GBit/sec
> to 6.0 GBit/sec.
> 
> Socket splicing from Linux to Linux via OpenBSD with 10 parallel
> TCP connections increases by 25%, from 1.8 GBit/sec to 2.3 GBit/sec.
> 
> It seems that sending fewer ACK packets improves performance if the
> machine is limited by the CPU.  But the TCP stack of OpenBSD sends
> 77% slower if it does not receive enough ACKs.  This has no impact
> if we are measuring the combined throughput of many parallel
> connections.  The Linux packet sending algorithm looks unaffected
> by our more delayed ACKs.
> 
> I think 77% slower between two OpenBSDs is not acceptable.
> Do others see that, too?
> 
> bluhm
> 



change rpki-client repository code

2021-02-08 Thread Claudio Jeker
Split the repository code into two parts:

- fetch of the trust anchors (the certs referenced by TAL files)
- fetch of the MFT files of a repository

While the two things kind of look similar there are some differences.

- TA files are loaded via rsync or https URI (only one file needs to be
  loaded)
- MFT files need everything inside the repository to be loaded since they
  reference other files (.roa, .cer, .crl). These repositories are
  synced once with rsync and many MFTs may be part of a repo. Also these
  repositories can be synced via rsync or RRDP.

To simplify these diverse options it is time to split the code up.
Introduce a ta_lookup() along with repo_lookup(). Refactor the repo_lookup
code into subfunctions repo_alloc() and repo_fetch() (both are also used
by ta_lookup()). Use the caRepository URI to figure out the base URI.
Simplify rsync_uri_parse() into rsync_base_uri() which clips off excess
directories from the URI (else thousands of individual rsync calls would
be made against the RIR's CA repos).
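
As an illustration of the clipping, a minimal sketch of the idea, assuming
the base URI is the host plus the first path component (the real
rsync_base_uri() in rpki-client may differ in details):

    #include <err.h>
    #include <string.h>

    /*
     * Sketch: reduce an rsync URI to "rsync://host/module" so one rsync
     * invocation covers the whole module instead of one call per
     * deeply nested directory.
     */
    static char *
    base_uri(const char *uri)
    {
        const char *p;
        char *out;
        size_t len;

        if (strncmp(uri, "rsync://", 8) != 0)
            return NULL;
        if ((p = strchr(uri + 8, '/')) == NULL)     /* end of host */
            return NULL;
        p = strchr(p + 1, '/');                     /* end of module */
        len = (p == NULL) ? strlen(uri) : (size_t)(p - uri);
        if ((out = strndup(uri, len)) == NULL)
            err(1, NULL);
        return out;
    }

With that, rsync://host/module/a/b/c.mft and rsync://host/module/d/e.roa
map to the same base and therefore to the same repository entry.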

The big change is that the layout of the cache directory is changed.
The cache will now have two base directories:
- ta/ (for all trust anchors)
- rsync/ (for all other repositories)

-- 
:wq Claudio

Index: extern.h
===
RCS file: /cvs/src/usr.sbin/rpki-client/extern.h,v
retrieving revision 1.42
diff -u -p -r1.42 extern.h
--- extern.h8 Feb 2021 09:22:53 -   1.42
+++ extern.h8 Feb 2021 13:44:22 -
@@ -392,9 +392,7 @@ void proc_parser(int) __attribute__((n
 
 /* Rsync-specific. */
 
-int rsync_uri_parse(const char **, size_t *,
-   const char **, size_t *, const char **, size_t *,
-   enum rtype *, const char *);
+char   *rsync_base_uri(const char *);
 voidproc_rsync(char *, char *, int) __attribute__((noreturn));
 
 /* Logging (though really used for OpenSSL errors). */
Index: main.c
===
RCS file: /cvs/src/usr.sbin/rpki-client/main.c,v
retrieving revision 1.98
diff -u -p -r1.98 main.c
--- main.c  5 Feb 2021 12:26:52 -   1.98
+++ main.c  8 Feb 2021 13:50:20 -
@@ -78,11 +78,12 @@
  * An rsync repository.
  */
 struct repo {
-   char*repo;  /* repository rsync URI */
-   char*local; /* local path name */
-   char*notify; /* RRDB notify URI if available */
-   size_t   id; /* identifier (array index) */
-   int  loaded; /* whether loaded or not */
+   char*repouri;   /* CA repository base URI */
+   char*local; /* local path name */
+   char*uris[2];   /* URIs to fetch from */
+   size_t   id;/* identifier (array index) */
+   int  uriidx;/* which URI is fetched */
+   int  loaded;/* whether loaded or not */
 };
 
 size_t entity_queue;
@@ -284,33 +285,12 @@ entityq_add(struct entityq *q, char *fil
 }
 
 /*
- * Look up a repository, queueing it for discovery if not found.
+ * Allocate a new repository by extending the repo table.
  */
-static const struct repo *
-repo_lookup(const char *uri)
+static struct repo *
+repo_alloc(void)
 {
-   const char  *host, *mod;
-   size_t   hostsz, modsz, i;
-   char*local;
-   struct repo *rp;
-   struct ibuf *b;
-
-   if (!rsync_uri_parse(&host, &hostsz,
-   &mod, &modsz, NULL, NULL, NULL, uri))
-   errx(1, "%s: malformed", uri);
-
-   if (asprintf(&local, "%.*s/%.*s", (int)hostsz, host,
-   (int)modsz, mod) == -1)
-   err(1, "asprintf");
-
-   /* Look up in repository table. */
-
-   for (i = 0; i < rt.reposz; i++) {
-   if (strcmp(rt.repos[i].local, local))
-   continue;
-   free(local);
-   return &rt.repos[i];
-   }
+   struct repo *rp;
 
rt.repos = reallocarray(rt.repos,
rt.reposz + 1, sizeof(struct repo));
@@ -320,28 +300,99 @@ repo_lookup(const char *uri)
rp = &rt.repos[rt.reposz++];
memset(rp, 0, sizeof(struct repo));
rp->id = rt.reposz - 1;
-   rp->local = local;
 
-   if ((rp->repo = strndup(uri, mod + modsz - uri)) == NULL)
-   err(1, "strdup");
+   return rp;
+}
 
-   if (!noop) {
-   if (asprintf(&local, "%s", rp->local) == -1)
-   err(1, "asprintf");
-   logx("%s: pulling from network", local);
-   if ((b = ibuf_dynamic(256, UINT_MAX)) == NULL)
-   err(1, NULL);
-   io_simple_buffer(b, &rp->id, sizeof(rp->id));
-   io_str_buffer(b, local);
-   io_str_buffer(b, rp->repo);
-   ibuf_close(&rsyncq, b);
-   free(local);

Re: diff: tcp ack improvement

2021-02-08 Thread Alexander Bluhm
On Mon, Feb 08, 2021 at 07:03:59PM +0100, Jan Klemkow wrote:
> On Mon, Feb 08, 2021 at 03:42:54PM +0100, Alexander Bluhm wrote:
> > On Wed, Feb 03, 2021 at 11:20:04AM +0100, Claudio Jeker wrote:
> > > Just commit it. OK claudio@
> > > If people see problems we can back it out again.
> > 
> > This has huge impact on TCP performance.
> > 
> > http://bluhm.genua.de/perform/results/2021-02-07T00%3A01%3A40Z/perform.html
> > 
> > For a single TCP connection between two OpenBSD boxes, throughput
> > drops by 77% from 3.1 GBit/sec to 710 MBit/sec.  But with 100
> > parallel connections the overall throughput increases by 5%.
> 
> For single connections our kernel is limited to sending out at most 4 TCP
> segments.  I don't see that, because I just measured with 10 and 30
> streams in parallel.
> 
> FreeBSD disabled it 20 years ago.
> https://github.com/freebsd/freebsd-src/commit/d912c694ee00de5ea0f46743295a0fc603cab562

TCP_MAXBURST was added together with SACK in rev 1.12 of tcp_output.c
to our code base.


revision 1.12
date: 1998/11/17 19:23:02;  author: provos;  state: Exp;  lines: +239 -14;
NewReno, SACK and FACK support for TCP, adapted from code for BSDI
by Hari Balakrishnan (h...@lcs.mit.edu), Tom Henderson (t...@cs.berkeley.edu)
and Venkat Padmanabhan (padma...@cs.berkeley.edu) as part of the
Daedalus research group at the University of California,
(http://daedalus.cs.berkeley.edu). [I was able to do this on time spent
at the Center for Information Technology Integration (citi.umich.edu)]


> I would suggest removing the whole feature.

Sending 4 segments per call to tcp_output() cannot scale.  Bandwidth
increases, window size grows, but the segment size has been 1500 bytes
for decades.
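
A rough back-of-the-envelope illustration, using the single-connection
figures above and ignoring window updates and timer-driven sends:

    3.1 Gbit/s / (1500 bytes * 8 bits)          ~= 258,000 segments/sec
    258,000 segments/sec / 4 segments per call  ~= 65,000 tcp_output() calls/sec

So roughly 65,000 ACK-driven calls per second are needed to sustain the
old rate on one connection; with more delayed ACKs the sender runs out
of triggers for the next 4-segment burst.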

With this diff on top of jan's delay ACK behavior I get 4.1 GBit/sec
over a single TCP connection using tcpbench -S100.  Before both
changes it was only 3.0.

I recommend removing TCP_MAXBURST like FreeBSD did.

bluhm

> Index: tcp.h
> ===
> RCS file: /cvs/src/sys/netinet/tcp.h,v
> retrieving revision 1.21
> diff -u -p -r1.21 tcp.h
> --- tcp.h 10 Jul 2019 18:45:31 -  1.21
> +++ tcp.h 8 Feb 2021 17:52:38 -
> @@ -105,8 +105,6 @@ struct tcphdr {
>  #define  TCP_MAX_SACK  3   /* Max # SACKs sent in any segment */
>  #define  TCP_SACKHOLE_LIMIT 128  /* Max # SACK holes per connection */
>  
> -#define  TCP_MAXBURST  4   /* Max # packets after leaving Fast Rxmit */
> -
>  /*
>   * Default maximum segment size for TCP.
>   * With an IP MSS of 576, this is 536,
> Index: tcp_output.c
> ===
> RCS file: /cvs/src/sys/netinet/tcp_output.c,v
> retrieving revision 1.129
> diff -u -p -r1.129 tcp_output.c
> --- tcp_output.c  25 Jan 2021 03:40:46 -  1.129
> +++ tcp_output.c  8 Feb 2021 17:53:07 -
> @@ -203,7 +203,6 @@ tcp_output(struct tcpcb *tp)
>   int idle, sendalot = 0;
>   int i, sack_rxmit = 0;
>   struct sackhole *p;
> - int maxburst = TCP_MAXBURST;
>  #ifdef TCP_SIGNATURE
>   unsigned int sigoff;
>  #endif /* TCP_SIGNATURE */
> @@ -1120,7 +1119,7 @@ out:
>   tp->last_ack_sent = tp->rcv_nxt;
>   tp->t_flags &= ~TF_ACKNOW;
>   TCP_TIMER_DISARM(tp, TCPT_DELACK);
> - if (sendalot && --maxburst)
> + if (sendalot)
>   goto again;
>   return (0);
>  }



Re: diff: tcp ack improvement

2021-02-08 Thread Claudio Jeker
On Mon, Feb 08, 2021 at 07:46:46PM +0100, Alexander Bluhm wrote:
> On Mon, Feb 08, 2021 at 07:03:59PM +0100, Jan Klemkow wrote:
> > On Mon, Feb 08, 2021 at 03:42:54PM +0100, Alexander Bluhm wrote:
> > > On Wed, Feb 03, 2021 at 11:20:04AM +0100, Claudio Jeker wrote:
> > > > Just commit it. OK claudio@
> > > > If people see problems we can back it out again.
> > > 
> > > This has huge impact on TCP performance.
> > > 
> > > http://bluhm.genua.de/perform/results/2021-02-07T00%3A01%3A40Z/perform.html
> > > 
> > > For a single TCP connection between two OpenBSD boxes, throughput
> > > drops by 77% from 3.1 GBit/sec to 710 MBit/sec.  But with 100
> > > parallel connections the overall throughput increases by 5%.
> > 
> > For single connections our kernel is limited to sending out at most 4 TCP
> > segments.  I don't see that, because I just measured with 10 and 30
> > streams in parallel.
> > 
> > FreeBSD disabled it 20 years ago.
> > https://github.com/freebsd/freebsd-src/commit/d912c694ee00de5ea0f46743295a0fc603cab562
> 
> TCP_MAXBURST was added together with SACK in rev 1.12 of tcp_output.c
> to our code base.
> 
> 
> revision 1.12
> date: 1998/11/17 19:23:02;  author: provos;  state: Exp;  lines: +239 -14;
> NewReno, SACK and FACK support for TCP, adapted from code for BSDI
> by Hari Balakrishnan (h...@lcs.mit.edu), Tom Henderson (t...@cs.berkeley.edu)
> and Venkat Padmanabhan (padma...@cs.berkeley.edu) as part of the
> Daedalus research group at the University of California,
> (http://daedalus.cs.berkeley.edu). [I was able to do this on time spent
> at the Center for Information Technology Integration (citi.umich.edu)]
> 
> 
> > I would suggest removing the whole feature.
> 
> Sending 4 segments per call to tcp_output() cannot scale.  Bandwidth
> increases, window size grows, but the segment size has been 1500 bytes
> for decades.
> 
> With this diff on top of jan's delay ACK behavior I get 4.1 GBit/sec
> over a single TCP connection using tcpbench -S100.  Before both
> changes it was only 3.0.
> 
> I recommend removing TCP_MAXBURST like FreeBSD did.
> 

I agree that this maxburst limit is no longer adequate. TCP New Reno
RFC6582 has the following:

   In Section 3.2, step 3 above, it is noted that implementations should
   take measures to avoid a possible burst of data when leaving fast
   recovery, in case the amount of new data that the sender is eligible
   to send due to the new value of the congestion window is large.  This
   can arise during NewReno when ACKs are lost or treated as pure window
   updates, thereby causing the sender to underestimate the number of
   new segments that can be sent during the recovery procedure.
   Specifically, bursts can occur when the FlightSize is much less than
   the new congestion window when exiting from fast recovery.  One
   simple mechanism to avoid a burst of data when leaving fast recovery
   is to limit the number of data packets that can be sent in response
   to a single acknowledgment.  (This is known as "maxburst_" in ns-2
   [NS].)  Other possible mechanisms for avoiding bursts include rate-
   based pacing, or setting the slow start threshold to the resultant
   congestion window and then resetting the congestion window to
   FlightSize.  A recommendation on the general mechanism to avoid
   excessively bursty sending patterns is outside the scope of this
   document.
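
A hedged sketch of the last alternative quoted above (deflating the
congestion window to FlightSize on leaving fast recovery), using a
stand-in structure rather than OpenBSD's real struct tcpcb:

    /* Stand-in for the two congestion-control fields involved. */
    struct cc {
        unsigned long snd_cwnd;        /* congestion window */
        unsigned long snd_ssthresh;    /* slow start threshold */
    };

    /*
     * On exiting fast recovery, remember the earned window in ssthresh
     * and pull cwnd back to what is actually in flight, so the next ACK
     * cannot release an arbitrarily large burst.
     */
    static void
    deflate_on_recovery_exit(struct cc *tp, unsigned long flightsize)
    {
        tp->snd_ssthresh = tp->snd_cwnd;
        if (flightsize < tp->snd_cwnd)
            tp->snd_cwnd = flightsize;
    }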

While I agree that bursts need to be limited, I think the implementation of
TCP_MAXBURST is bad. Since FreeBSD removed the code I guess nobody really
ran into issues with additional packet loss because of the bursts. So go
ahead and remove it. OK claudio@

-- 
:wq Claudio



Re: diff: tcp ack improvement

2021-02-08 Thread Theo de Raadt
Claudio Jeker  wrote:

> On Mon, Feb 08, 2021 at 07:46:46PM +0100, Alexander Bluhm wrote:
> > On Mon, Feb 08, 2021 at 07:03:59PM +0100, Jan Klemkow wrote:
> > > On Mon, Feb 08, 2021 at 03:42:54PM +0100, Alexander Bluhm wrote:
> > > > On Wed, Feb 03, 2021 at 11:20:04AM +0100, Claudio Jeker wrote:
> > > > > Just commit it. OK claudio@
> > > > > If people see problems we can back it out again.
> > > > 
> > > > This has huge impact on TCP performance.
> > > > 
> > > > http://bluhm.genua.de/perform/results/2021-02-07T00%3A01%3A40Z/perform.html
> > > > 
> > > > For a single TCP connection between two OpenBSD boxes, throughput
> > > > drops by 77% from 3.1 GBit/sec to 710 MBit/sec.  But with 100
> > > > parallel connections the overall throughput increases by 5%.
> > > 
> > > For single connections our kernel is limited to sending out at most 4 TCP
> > > segments.  I don't see that, because I just measured with 10 and 30
> > > streams in parallel.
> > > 
> > > FreeBSD disabled it 20 years ago.
> > > https://github.com/freebsd/freebsd-src/commit/d912c694ee00de5ea0f46743295a0fc603cab562
> > 
> > TCP_MAXBURST was added together with SACK in rev 1.12 of tcp_output.c
> > to our code base.
> > 
> > 
> > revision 1.12
> > date: 1998/11/17 19:23:02;  author: provos;  state: Exp;  lines: +239 -14;
> > NewReno, SACK and FACK support for TCP, adapted from code for BSDI
> > by Hari Balakrishnan (h...@lcs.mit.edu), Tom Henderson 
> > (t...@cs.berkeley.edu)
> > and Venkat Padmanabhan (padma...@cs.berkeley.edu) as part of the
> > Daedalus research group at the University of California,
> > (http://daedalus.cs.berkeley.edu). [I was able to do this on time spent
> > at the Center for Information Technology Integration (citi.umich.edu)]
> > 
> > 
> > > I would suggest removing the whole feature.
> > 
> > Sending 4 segments per call to tcp_output() cannot scale.  Bandwidth
> > increases, window size grows, but the segment size has been 1500 bytes
> > for decades.
> > 
> > With this diff on top of jan's delay ACK behavior I get 4.1 GBit/sec
> > over a single TCP connection using tcpbench -S100.  Before both
> > changes it was only 3.0.
> > 
> > I recommend removing TCP_MAXBURST like FreeBSD did.
> > 
> 
> I agree that this maxburst limit is no longer adequate. TCP New Reno
> RFC6582 has the following:
> 
>In Section 3.2, step 3 above, it is noted that implementations should
>take measures to avoid a possible burst of data when leaving fast
>recovery, in case the amount of new data that the sender is eligible
>to send due to the new value of the congestion window is large.  This
>can arise during NewReno when ACKs are lost or treated as pure window
>updates, thereby causing the sender to underestimate the number of
>new segments that can be sent during the recovery procedure.
>Specifically, bursts can occur when the FlightSize is much less than
>the new congestion window when exiting from fast recovery.  One
>simple mechanism to avoid a burst of data when leaving fast recovery
>is to limit the number of data packets that can be sent in response
>to a single acknowledgment.  (This is known as "maxburst_" in ns-2
>[NS].)  Other possible mechanisms for avoiding bursts include rate-
>based pacing, or setting the slow start threshold to the resultant
>congestion window and then resetting the congestion window to
>FlightSize.  A recommendation on the general mechanism to avoid
>excessively bursty sending patterns is outside the scope of this
>document.
> 
> While I agree that bursts need to be limited, I think the implementation of
> TCP_MAXBURST is bad. Since FreeBSD removed the code I guess nobody really
> ran into issues with additional packet loss because of the bursts. So go
> ahead and remove it. OK claudio@

that makes sense.  ok deraadt



Re: diff: tcp ack improvement

2021-02-08 Thread Jan Klemkow
On Mon, Feb 08, 2021 at 03:42:54PM +0100, Alexander Bluhm wrote:
> On Wed, Feb 03, 2021 at 11:20:04AM +0100, Claudio Jeker wrote:
> > Just commit it. OK claudio@
> > If people see problems we can back it out again.
> 
> This has huge impact on TCP performance.
> 
> http://bluhm.genua.de/perform/results/2021-02-07T00%3A01%3A40Z/perform.html
> 
> For a single TCP connection between two OpenBSD boxes, throughput
> drops by 77% from 3.1 GBit/sec to 710 MBit/sec.  But with 100
> parallel connections the overall throughput increases by 5%.

For single connections our kernel is limited to sending out at most 4 TCP
segments.  I don't see that, because I just measured with 10 and 30
streams in parallel.

FreeBSD disabled it 20 years ago.
https://github.com/freebsd/freebsd-src/commit/d912c694ee00de5ea0f46743295a0fc603cab562

I would suggest removing the whole feature.

bye,
Jan

Index: tcp.h
===
RCS file: /cvs/src/sys/netinet/tcp.h,v
retrieving revision 1.21
diff -u -p -r1.21 tcp.h
--- tcp.h   10 Jul 2019 18:45:31 -  1.21
+++ tcp.h   8 Feb 2021 17:52:38 -
@@ -105,8 +105,6 @@ struct tcphdr {
 #define TCP_MAX_SACK 3   /* Max # SACKs sent in any segment */
 #define TCP_SACKHOLE_LIMIT 128  /* Max # SACK holes per connection */
 
-#define TCP_MAXBURST 4   /* Max # packets after leaving Fast Rxmit */
-
 /*
  * Default maximum segment size for TCP.
  * With an IP MSS of 576, this is 536,
Index: tcp_output.c
===
RCS file: /cvs/src/sys/netinet/tcp_output.c,v
retrieving revision 1.129
diff -u -p -r1.129 tcp_output.c
--- tcp_output.c25 Jan 2021 03:40:46 -  1.129
+++ tcp_output.c8 Feb 2021 17:53:07 -
@@ -203,7 +203,6 @@ tcp_output(struct tcpcb *tp)
int idle, sendalot = 0;
int i, sack_rxmit = 0;
struct sackhole *p;
-   int maxburst = TCP_MAXBURST;
 #ifdef TCP_SIGNATURE
unsigned int sigoff;
 #endif /* TCP_SIGNATURE */
@@ -1120,7 +1119,7 @@ out:
tp->last_ack_sent = tp->rcv_nxt;
tp->t_flags &= ~TF_ACKNOW;
TCP_TIMER_DISARM(tp, TCPT_DELACK);
-   if (sendalot && --maxburst)
+   if (sendalot)
goto again;
return (0);
 }



Mention cvschroot in anoncvs.html

2021-02-08 Thread Daniel Jakots
Hi,

My usual mirror apparently died. stsp kindly pointed out cvschroot from
the cvsutils package to switch to a new mirror. Here's a diff to add
this hint where I looked for how to do it.
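
For example (assumed invocation; the mirror address below is a
placeholder), run from the top of the checkout:

    $ cd /usr/src
    $ cvschroot anoncvs@anoncvs.example.org:/cvs

cvschroot rewrites the CVS/Root files in the tree so that the next
update talks to the new server.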

Comments? OK?


Index: anoncvs.html
===
RCS file: /cvs/www/anoncvs.html,v
retrieving revision 1.512
diff -u -p -r1.512 anoncvs.html
--- anoncvs.html18 Oct 2020 02:37:42 -  1.512
+++ anoncvs.html8 Feb 2021 22:34:46 -
@@ -202,6 +202,9 @@ $ cd /usr/src
 $ cvs -d anon...@anoncvs.ca.openbsd.org:/cvs -q up -Pd
 
 
+
+If you want to switch to a new server, you can use cvschroot from the cvsutils package.
+
 Getting the ports and xenocara trees
 
 



Re: uhidpp(4): logitech hid++ device driver

2021-02-08 Thread Anindya Mukherjee
Hi, I have a Logitech M570 which seems to be handled by this new driver.
However I don't see any battery level information.

dmesg:
uhidpp0 at uhidev4 device 1 mouse "M570" serial ef-81-ff-80

sysctl kern.version:
kern.version=OpenBSD 6.8-current (GENERIC.MP) #313: Fri Feb  5 18:31:44 MST 2021
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP

hw.sensors.uhidpp0 does not seem to exist.

Regards,
Anindya



iked(8): roadwarrior client support

2021-02-08 Thread Tobias Heider
This diff adds support for automatic installation of virtual IPs and
routes for iked roadwarrior clients.

Requesting addresses from the peer has been supported for some time now
but is by itself pretty useless.
This change adds a new 'iface' config option to specify an interface on
which the address should be added.
In addition to addresses, iked will also automatically add routes for
each flow specified with 'from dynamic'.

Below is a client config that combines all of these features:

ikev2 "roadw-client" active \
from dynamic to any \
peer $IP \
request address any \
iface lo0

After a successful handshake, iked should set the received IP on lo0
and add a new default route from this new IP (matching 'from dynamic to any').
Depending on the server config the addresses and routes may be IPv4, IPv6
or both.  On exit, all added routes and addresses are automatically removed 
(make sure to test with the latest iked commit or this won't work).
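
One way to verify the result after a successful handshake (standard
tools; the exact output depends on what the server hands out):

    # ifconfig lo0
    # route -n show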

Tests and comments greatly appreciated.

diff --git a/sbin/iked/Makefile b/sbin/iked/Makefile
index 6b8e3013aec..01a04b95dc4 100644
--- a/sbin/iked/Makefile
+++ b/sbin/iked/Makefile
@@ -4,7 +4,7 @@ PROG=   iked
 SRCS=  ca.c chap_ms.c config.c control.c crypto.c dh.c \
eap.c iked.c ikev2.c ikev2_msg.c ikev2_pld.c \
log.c ocsp.c pfkey.c policy.c proc.c timer.c util.c \
-   imsg_util.c smult_curve25519_ref.c
+   imsg_util.c smult_curve25519_ref.c vroute.c
 SRCS+= eap_map.c ikev2_map.c
 SRCS+= parse.y
 MAN=   iked.conf.5 iked.8
diff --git a/sbin/iked/config.c b/sbin/iked/config.c
index 3bafad89963..eb4edb46bd1 100644
--- a/sbin/iked/config.c
+++ b/sbin/iked/config.c
@@ -117,6 +117,7 @@ config_free_sa(struct iked *env, struct iked_sa *sa)
config_free_fragments(&sa->sa_fragments);
config_free_proposals(&sa->sa_proposals, 0);
config_free_childsas(env, &sa->sa_childsas, NULL, NULL);
+   sa_configure_iface(env, sa, 0);
sa_free_flows(env, &sa->sa_flows);
 
if (sa->sa_addrpool) {
diff --git a/sbin/iked/iked.c b/sbin/iked/iked.c
index c1b5fb7a28c..8052fdc2b39 100644
--- a/sbin/iked/iked.c
+++ b/sbin/iked/iked.c
@@ -199,6 +199,8 @@ main(int argc, char *argv[])
 
proc_listen(ps, procs, nitems(procs));
 
+   vroute_init(env);
+
if (parent_configure(env) == -1)
fatalx("configuration failed");
 
@@ -265,10 +267,10 @@ parent_configure(struct iked *env)
 * proc - run kill to terminate its children safely.
 * dns - for reload and ocsp connect.
 * inet - for ocsp connect.
-* route - for using interfaces in iked.conf (SIOCGIFGMEMB)
+* wroute - for using interfaces in iked.conf (SIOCAIFGMEMB)
 * sendfd - for ocsp sockets.
 */
-   if (pledge("stdio rpath proc dns inet route sendfd", NULL) == -1)
+   if (pledge("stdio rpath proc dns inet wroute sendfd", NULL) == -1)
fatal("pledge");
 
config_setstatic(env);
@@ -454,6 +456,14 @@ parent_dispatch_ikev2(int fd, struct privsep_proc *p, 
struct imsg *imsg)
struct iked *env = p->p_ps->ps_env;
 
switch (imsg->hdr.type) {
+   case IMSG_IF_ADDADDR:
+   case IMSG_IF_DELADDR:
+   return (vroute_getaddr(env, imsg));
+   case IMSG_VROUTE_ADD:
+   case IMSG_VROUTE_DEL:
+   return (vroute_getroute(env, imsg));
+   case IMSG_VROUTE_CLONE:
+   return (vroute_getcloneroute(env, imsg));
case IMSG_CTL_EXIT:
parent_shutdown(env);
default:
diff --git a/sbin/iked/iked.conf.5 b/sbin/iked/iked.conf.5
index 695f0efb618..43f2424b6bc 100644
--- a/sbin/iked/iked.conf.5
+++ b/sbin/iked/iked.conf.5
@@ -659,6 +659,9 @@ included.
 .It Ic access-server Ar address
 The address of an internal remote access server.
 .El
+.It Ic iface Ar interface
+Configure requested addresses and routes on the specified
+.Ar interface .
 .It Ic tag Ar string
 Add a
 .Xr pf 4
diff --git a/sbin/iked/iked.h b/sbin/iked/iked.h
index e748ee6020e..d270210ccde 100644
--- a/sbin/iked/iked.h
+++ b/sbin/iked/iked.h
@@ -153,6 +153,7 @@ struct iked_flow {
unsigned int flow_dir;  /* in/out */
int  flow_rdomain;
struct iked_addr flow_prenat;
+   int  flow_fixed;
 
unsigned int flow_loaded;   /* pfkey done */
 
@@ -236,6 +237,7 @@ struct iked_lifetime {
 struct iked_policy {
unsigned int pol_id;
char pol_name[IKED_ID_SIZE];
+   unsigned int pol_iface;
 
 #define IKED_SKIP_FLAGS 0
 #define IKED_SKIP_AF1
@@ -751,6 +753,7 @@ struct iked {
struct event sc_pfkeyev;
uint8_t  sc_cer

Re: uhidpp(4): logitech hid++ device driver

2021-02-08 Thread Anton Lindqvist
Hi,

On Mon, Feb 08, 2021 at 02:50:39PM -0800, Anindya Mukherjee wrote:
> Hi, I have a Logitech M570 which seems to be handled by this new driver.
> However I don't see any battery level information.
> 
> dmesg:
> uhidpp0 at uhidev4 device 1 mouse "M570" serial ef-81-ff-80
> 
> sysctl kern.version:
> kern.version=OpenBSD 6.8-current (GENERIC.MP) #313: Fri Feb  5 18:31:44 MST 
> 2021
> dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> 
> hw.sensors.uhidpp0 does not seem to exist.
> 
> Regards,
> Anindya
> 

Could you try the following diff and send me the complete dmesg? I've
also prepared an amd64 kernel with the patch applied:

https://www.basename.se/tmp/bsd.uhidpp.battery

diff --git sys/dev/usb/uhidpp.c sys/dev/usb/uhidpp.c
index b041d86fecd..27f6137ec06 100644
--- sys/dev/usb/uhidpp.c
+++ sys/dev/usb/uhidpp.c
@@ -29,7 +29,7 @@
 #include 
 #include 
 
-/* #define UHIDPP_DEBUG */
+#define UHIDPP_DEBUG
 #ifdef UHIDPP_DEBUG
 
 #define DPRINTF(x...) do { \
@@ -84,12 +84,16 @@ int uhidpp_debug = 1;
 #define HIDPP_GET_LONG_REGISTER0x83
 
 #define HIDPP_REG_ENABLE_REPORTS   0x00
+#define HIDPP_REG_BATTERY_STATUS   0x07
 #define HIDPP_REG_PAIRING_INFORMATION  0xb5
 
 #define HIDPP_NOTIF_DEVICE_BATTERY_STATUS  (1 << 4)
 #define HIDPP_NOTIF_RECEIVER_WIRELESS  (1 << 0)
 #define HIDPP_NOTIF_RECEIVER_SOFTWARE_PRESENT  (1 << 3)
 
+/* Number of battery levels supported by HID++ 1.0 devices. */
+#define HIDPP_BATTERY_NLEVELS  7
+
 /* HID++ 1.0 error codes. */
 #define HIDPP_ERROR0x8f
 #define HIDPP_ERROR_SUCCESS0x00
@@ -112,7 +116,6 @@ int uhidpp_debug = 1;
  * greater than zero which is reserved for notifications.
  */
 #define HIDPP_SOFTWARE_ID  0x01
-#define HIDPP_SOFTWARE_ID_MASK 0x0f
 #define HIDPP_SOFTWARE_ID_LEN  4
 
 #define HIDPP20_FEAT_ROOT_IDX  0x00
@@ -154,8 +157,8 @@ int uhidpp_debug = 1;
 
 /* Feature access report used by the HID++ 2.0 (and greater) protocol. */
 struct fap {
-   uint8_t feature_index;
-   uint8_t funcindex_clientid;
+   uint8_t feature_idx;
+   uint8_t funcidx_swid;
uint8_t params[HIDPP_REPORT_LONG_PARAMS_MAX];
 };
 
@@ -185,6 +188,8 @@ struct uhidpp_notification {
 struct uhidpp_device {
uint8_t d_id;
uint8_t d_connected;
+   uint8_t d_major;
+   uint8_t d_minor;
struct {
struct ksensor b_sens[UHIDPP_NSENSORS];
uint8_t b_feature_idx;
@@ -237,8 +242,10 @@ struct uhidpp_notification 
*uhidpp_claim_notification(struct uhidpp_softc *);
 int uhidpp_consume_notification(struct uhidpp_softc *, struct uhidpp_report *);
 int uhidpp_is_notification(struct uhidpp_softc *, struct uhidpp_report *);
 
-int hidpp_get_protocol_version(struct uhidpp_softc  *, uint8_t, int *, int *);
+int hidpp_get_protocol_version(struct uhidpp_softc  *, uint8_t, uint8_t *,
+uint8_t *);
 
+int hidpp10_get_battery_status(struct uhidpp_softc *, uint8_t, uint8_t *);
 int hidpp10_get_name(struct uhidpp_softc *, uint8_t, char *, size_t);
 int hidpp10_get_serial(struct uhidpp_softc *, uint8_t, uint8_t *, size_t);
 int hidpp10_get_type(struct uhidpp_softc *, uint8_t, const char **);
@@ -520,7 +527,7 @@ void
 uhidpp_device_connect(struct uhidpp_softc *sc, struct uhidpp_device *dev)
 {
struct ksensor *sens;
-   int error, major, minor;
+   int error;
uint8_t feature_type;
 
MUTEX_ASSERT_LOCKED(&sc->sc_mtx);
@@ -529,30 +536,40 @@ uhidpp_device_connect(struct uhidpp_softc *sc, struct 
uhidpp_device *dev)
if (dev->d_connected)
return;
 
-   error = hidpp_get_protocol_version(sc, dev->d_id, &major, &minor);
+   error = hidpp_get_protocol_version(sc, dev->d_id,
+   &dev->d_major, &dev->d_minor);
if (error) {
-   DPRINTF("%s: protocol version failure: device_id=%d, 
error=%d\n",
+   DPRINTF("%s: protocol version failure: device_id=%d, "
+   "error=%d\n",
__func__, dev->d_id, error);
return;
}
 
DPRINTF("%s: device_id=%d, version=%d.%d\n",
-   __func__, dev->d_id, major, minor);
+   __func__, dev->d_id, dev->d_major, dev->d_minor);
 
-   error = hidpp20_root_get_feature(sc, dev->d_id,
-   HIDPP20_FEAT_BATTERY_IDX,
-   &dev->d_battery.b_feature_idx, &feature_type);
-   if (error) {
-   DPRINTF("%s: battery feature index failure: device_id=%d, "
-   "error=%d\n", __func__, dev->d_id, error);
-   return;
-   }
+   if (dev->d_major >= 2) {
+   error = hidpp20_root_get_feature(sc, dev->d_id,
+   HIDPP20_FEAT_BATTERY_IDX,
+   &dev->d_battery.b_feature_idx, &feature_type);
+  
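
For reference, a hedged guess at how such a raw HID++ 1.0 level in the
0..HIDPP_BATTERY_NLEVELS range could be scaled to the percentage reported
through the sensor framework; the driver's actual conversion may differ:

    #include <stdint.h>

    /* Assumed linear scaling of a 0..7 battery level to percent. */
    static int
    battery_percentage(uint8_t level)
    {
        if (level > 7)
            level = 7;
        return (level * 100) / 7;
    }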

Re: some Ryzen, AMD 500 Chipset, Navi 10 and Kingston pcidev

2021-02-08 Thread Stuart Henderson
On 2021/02/08 10:54, Jonathan Gray wrote:
> On Sun, Feb 07, 2021 at 07:58:52PM +0100, Sven Wolf wrote:
> > Hi,
> > 
> > I've added some Ryzen 3xxx, AMD 500 Chipset, Navi 10 and Kingston ids to
> > pcidev. I've taken the description from the Linux PCI device ids
> > https://pci-ids.ucw.cz/v2.2/pci.ids

The names don't match the entries in pci.ids:

> > +product AMD 17_3X_DEV_1  0x1440  17h/3xh Device 24
> > +product AMD 17_3X_DEV_2  0x1441  17h/3xh Device 24
> > +product AMD 17_3X_DEV_3  0x1442  17h/3xh Device 24
> > +product AMD 17_3X_DEV_4  0x1443  17h/3xh Device 24
> > +product AMD 17_3X_DEV_5  0x1444  17h/3xh Device 24
> > +product AMD 17_3X_DEV_6  0x1445  17h/3xh Device 24
> > +product AMD 17_3X_DEV_7  0x1446  17h/3xh Device 24
> > +product AMD 17_3X_DEV_8  0x1447  17h/3xh Device 24
> 
> "Device 24"?
> 
> I would expect desktop Ryzen 3xxx to be 17h/71h.

pci.ids has these as "Matisse Device 24: Function 0..7"

> > +product AMD 17_3X_IOMMU  0x1481  17h/3xh IOMMU
> > +product AMD 17_3X_PCIE_1  0x1482  17h/3xh PCIE
> > +product AMD 17_3X_GPP_1  0x1483  17h/3xh GPP
> > +product AMD 17_3X_GPP_2  0x1484  17h/3xh GPP
> >  product AMD 17_3X_CCP  0x1486  17h/3xh Crypto
> >  product AMD 17_3X_HDA  0x1487  17h/3xh HD Audio

1480  Starship/Matisse Root Complex
1462 7c37  X570-A PRO motherboard
1481  Starship/Matisse IOMMU
1482  Starship/Matisse PCIe Dummy Host Bridge
1483  Starship/Matisse GPP Bridge
1484  Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
1485  Starship/Matisse Reserved SPP
1486  Starship/Matisse Cryptographic Coprocessor PSPCPP


> >  product AMD 400SERIES_PCIE_2   0x43c7  400 Series PCIE
> >  product AMD 400SERIES_AHCI 0x43c8  400 Series AHCI
> >  product AMD 400SERIES_XHCI 0x43d0  400 Series xHCI
> > +product AMD 500SERIES_AHCI 0x43eb  500 Series AHCI
> > +product AMD 500SERIES_XHCI 0x43ee  500 Series xHCI
> > +product AMD 500SERIES_PCIE_1   0x43e9  500 Series PCI Bridge
> > +product AMD 500SERIES_PCIE_2   0x43ea  500 Series PCI Bridge

missing in pci.ids, but so is 43d0.
The new entries here are out of order.

> > +product ATI NAVI10_9   0x1478  Radeon RX5000
> > +product ATI NAVI10_10  0x1479  Radeon RX5000
> 
> Where are these from?  They are not matched in amdgpu.

1478  Navi 10 XL Upstream Port of PCI Express Switch
1479  Navi 10 XL Downstream Port of PCI Express Switch



Re: some Ryzen, AMD 500 Chipset, Navi 10 and Kingston pcidev

2021-02-08 Thread Sven Wolf

Hi,

thanks for your fast response.
I've chosen "AMD 17_3X_IOMMU" instead of "Starship/Matisse" because, from
my point of view, the other Ryzen Zen 2 PCI devices are also named
17_3X (e.g. PCI_PRODUCT_AMD_17_3X_RC 0x1480 /* 17h/3xh Root Complex */).


Maybe I'm wrong, but we should choose the same descriptions for all of
the Ryzen/Zen devices. We should either take the code name, e.g.
Starship/Matisse, or the vendor codes, e.g. 17h/3x
(http://developer.amd.com/wp-content/resources/56323-PUB_0.74.pdf), but
we should not mix them.


Regarding the NAVI10 entry: in my case it's an RX5700.
For me it's not clear when we choose the architecture name and when we
choose the product name in pcidevs :(

I find both of them in pcidevs, e.g.:
define  PCI_PRODUCT_ATI_RADEON_HD7990  0x679b  /* Radeon HD 7990 */
define  PCI_PRODUCT_ATI_TAHITI_5  0x679f  /* Tahiti */


Here is the lspci output from Linux:
00:00.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse Root Complex [1022:1480]
00:00.2 IOMMU [0806]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse IOMMU [1022:1481]
00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse GPP Bridge [1022:1483]
00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse GPP Bridge [1022:1483]
00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse GPP Bridge [1022:1483]
00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 
Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus 
Controller [1022:790b] (rev 61)
00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC 
Bridge [1022:790e] (rev 51)
00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse 
Device 24: Function 0 [1022:1440]
00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse 
Device 24: Function 1 [1022:1441]
00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse 
Device 24: Function 2 [1022:1442]
00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse 
Device 24: Function 3 [1022:1443]
00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse 
Device 24: Function 4 [1022:1444]
00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse 
Device 24: Function 5 [1022:1445]
00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse 
Device 24: Function 6 [1022:1446]
00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse 
Device 24: Function 7 [1022:1447]
01:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co 
Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
02:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device 
[1022:43ee]
02:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 
Device [1022:43eb]
02:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device 
[1022:43e9]
03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device 
[1022:43ea]
03:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device 
[1022:43ea]
03:08.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device 
[1022:43ea]
03:09.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device 
[1022:43ea]
05:00.0 Non-Volatile memory controller [0108]: Kingston Technology 
Company, Inc. A2000 NVMe SSD [2646:2263] (rev 03)
06:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6 AX200 
[8086:2723] (rev 1a)
07:00.0 Ethernet controller [0200]: Intel Corporation Ethernet 
Controller I225-V [8086:15f3] (rev 02)
08:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 
10 XL Upstream Port of PCI Express Switch [1002:1478] (rev c1)
09:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 
10 XL Downstream Port of PCI Express Switch [1002:1479]
0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. 
[AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] 
[1002:731f] (rev c1)
0a:00.1