Re: printf.1: fix incorrect conversion of apostrophe

2019-06-01 Thread Anthony J. Bentley
Stephen Gregoratto writes:
> In the escape sequences section of printf.1, the <single quote>
> character is represented using "\e\'".  In UTF-8 mode, mandoc converts
> this to an acute accent. To fix this I explicitly used "\(aq" as per the
> Accents section of mandoc_char(7), although using "\e'" works as well.

You're correct, thanks.



printf.1: fix incorrect conversion of apostrophe

2019-06-01 Thread Stephen Gregoratto
In the escape sequences section of printf.1, the <single quote>
character is represented using "\e\'".  In UTF-8 mode, mandoc converts
this to an acute accent. To fix this I explicitly used "\(aq" as per the
Accents section of mandoc_char(7), although using "\e'" works as well.
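
(For reference: mandoc_char(7) maps "\(aq" to the apostrophe, U+0027,
whereas the accent "\'" renders as U+00B4 ACUTE ACCENT in UTF-8
output, which is what produced the wrong glyph here.)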

Index: printf.1
===
RCS file: /cvs/src/usr.bin/printf/printf.1,v
retrieving revision 1.31
diff -u -p -r1.31 printf.1
--- printf.1	13 Mar 2018 14:53:05 -0000	1.31
+++ printf.1	2 Jun 2019 04:47:48 -0000
@@ -95,7 +95,7 @@ Write a <carriage return> character.
 Write a <tab> character.
 .It Cm \ev
 Write a <vertical tab> character.
-.It Cm \e\'
+.It Cm \e\(aq
 Write a <single quote> character.
 .It Cm \e\e
 Write a backslash character.
-- 
Stephen Gregoratto
PGP: 3FC6 3D0E 2801 C348 1C44 2D34 A80C 0F8E 8BAB EC8B



Re: mtx_enter_try(9) & recursion

2019-06-01 Thread Visa Hankala
On Sat, Jun 01, 2019 at 07:04:23PM -0300, Martin Pieuchot wrote:
> On 01/06/19(Sat) 23:22, Mark Kettenis wrote:
> > > Date: Sat, 1 Jun 2019 17:32:52 -0300
> > > From: Martin Pieuchot 
> > > 
> > > Currently it isn't safe to call mtx_enter_try(9) if you're already
> > > holding the mutex.  That means it isn't safe to call that function
> > > in hardclock(9), like with `windup_mtx'.  That's why the mutex needs
> > > to be initialized as IPL_CLOCK.
> > > 
> > > I'm working on removing the SCHED_LOCK() from inside hardclock(9).
> > > That leads me to wonder if I should initialize all mutexes to IPL_SCHED,
> > > possibly blocking clock interrupts, or if we should change the mutex API
> > > to allow mtx_enter_try(9) to deal with recursion.
> > > 
> > > The diff below removes the recursion check for mtx_enter_try(9).
> > > 
> > > Comments?  Oks?
> > 
> > My initial reaction is that if you're trying to lock when you already
> > have the lock, there is something wrong with your locking strategy and
> > that this is something we don't want.
> 
> Could you elaborate?  Are you saying that preventing hardclock(9) from
> running is the way to move forward to unlock its internals?  Why isn't
> that strategy wrong?
> 
> In the `windup_mtx' case, does it matter if the mutex is taken by
> another CPU or by myself?  What's the problem when CPU0 is the one
> holding the lock?

mutex(9) is not and should not become recursive. Recursive locking
works when it is voluntary. If recursion were allowed with interrupts,
the CPU could re-enter the critical section at any moment, possibly
seeing inconsistent state or breaking assumptions made by the original
entry.
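
To illustrate the hazard (a minimal sketch with made-up names, not
code from the tree): if mtx_enter_try(9) silently succeeded on
recursion, an interrupt handler could observe a half-updated critical
section on the CPU it interrupted.

/* Invariant: a == b, only ever updated under counters_mtx,
 * which is (wrongly) initialized below the interrupt's IPL. */
struct mutex counters_mtx;
uint64_t a, b;

void
update(void)
{
	mtx_enter(&counters_mtx);
	a++;
	/* <- interrupt fires here */
	b++;
	mtx_leave(&counters_mtx);
}

int
intr_handler(void *arg)
{
	/* If recursion were allowed, this would "succeed" on the
	 * interrupted CPU and see a == b + 1, breaking the
	 * invariant the original entry relies on. */
	if (mtx_enter_try(&counters_mtx)) {
		/* ... */
		mtx_leave(&counters_mtx);
	}
	return (1);
}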



Re: vmd(8) i8042 device implementation questions

2019-06-01 Thread Katherine Rohl
Couple questions:

> This means no interrupt will be injected. I'm not sure if that's what you 
> want.
> See vm.c: vcpu_exit_inout(..). It looks like you may have manually asserted 
> the
> IRQ in this file, which is a bit different than what we do in other devices. 
> That
> may be okay, though.

The device can assert zero, one, or two IRQs depending on the state of the 
input ports. Are we capable of asserting two IRQs at once through 
vcpu_exit_i8042?

> For this IRQ, if it's edge triggered, please assert then deassert the line.
> The i8259 code should handle that properly. What you have here is a level
> triggered interrupt (eg, the line will stay asserted until someone
> does a 1 -> 0 transition below). Same goes for the next few cases.

Would asserting the IRQs through the exit function handle this for me if that’s 
possible?
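
For reference, a sketch of the assert-then-deassert pulse using the
vcpu_assert_pic_irq()/vcpu_deassert_pic_irq() helpers that other vmd
devices use; the wrapper and the *_pending flags below are made up
for illustration:

static void
i8042_pulse_irq(uint32_t vm_id, uint32_t vcpu_id, int irq)
{
	/* edge-triggered: raise the line, then drop it again */
	vcpu_assert_pic_irq(vm_id, vcpu_id, irq);
	vcpu_deassert_pic_irq(vm_id, vcpu_id, irq);
}

/* keyboard (IRQ 1) and aux/mouse (IRQ 12) can both be pending */
if (kbd_pending)
	i8042_pulse_irq(vm_id, vcpu_id, 1);
if (aux_pending)
	i8042_pulse_irq(vm_id, vcpu_id, 12);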

> Also, please bump the revision in the vcpu struct for send/receive
> as we will be sending a new struct layout now.

Where exactly? The file revision?



Re: mtx_enter_try(9) & recursion

2019-06-01 Thread Martin Pieuchot
On 01/06/19(Sat) 23:22, Mark Kettenis wrote:
> > Date: Sat, 1 Jun 2019 17:32:52 -0300
> > From: Martin Pieuchot 
> > 
> > Currently it isn't safe to call mtx_enter_try(9) if you're already
> > holding the mutex.  That means it isn't safe to call that function
> > in hardclock(9), like with `windup_mtx'.  That's why the mutex needs
> > to be initialized as IPL_CLOCK.
> > 
> > I'm working on removing the SCHED_LOCK() from inside hardclock(9).
> > That leads me to wonder if I should initialize all mutexes to IPL_SCHED,
> > possibly blocking clock interrupts, or if we should change the mutex API
> > to allow mtx_enter_try(9) to deal with recursion.
> > 
> > The diff below removes the recursion check for mtx_enter_try(9).
> > 
> > Comments?  Oks?
> 
> My initial reaction is that if you're trying to lock when you already
> have the lock, there is something wrong with your locking strategy and
> that this is something we don't want.

Could you elaborate?  Are you saying that preventing hardclock(9) from
running is the way to move forward to unlock its internals?  Why isn't
that strategy wrong?

In the `windup_mtx' case, does it matter if the mutex is taken by
another CPU or by myself?  What's the problem when CPU0 is the one
holding the lock?



Pump my sched: fewer SCHED_LOCK() & kill p_priority

2019-06-01 Thread Martin Pieuchot
Diff below exists mainly for documentation and test purposes.  If
you're not interested in how to break the scheduler internals into
pieces, don't read further and go straight to testing!

- First change is to stop calling tsleep(9) at PUSER.  That makes
  it clear that all "sleeping priorities" are smaller than PUSER.
  That's important to understand for the diff below.  `p_priority'
  is currently a placeholder for both the "sleeping priority" and the
  "runqueue priority".  This diff splits them into separate fields.

- When a thread goes to sleep, the priority argument of tsleep(9) is
  now recorded in `p_slpprio'.  This argument can be considered as part
  of the sleep queue.  Its purpose is to place the thread into a higher
  runqueue when awoken.

- Currently, for stopped threads, `p_priority' corresponds to `p_usrpri'.
  So setrunnable() has been untangled to place SSTOP and SSLEEP threads
  in the preferred queue without having to use `p_priority'.  Note that
  `p_usrpri' is still recalculated *after* having called setrunqueue().
  This is currently fine because setrunnable() is called with SCHED_LOCK()
  held, but it will become racy once we split it.

- A new field, `p_runprio', has been introduced.  It should be considered
  as part of the per-CPU runqueues.  It indicates where a thread is
  currently placed.

- `spc_curpriority' is now updated at every context-switch.  That means
   need_resched() won't be called after comparing an out-of-date value.
   At the same time, `p_usrpri' is initialized to the highest possible
   value for idle threads.

- resched_proc() was calling need_resched() in the following conditions:
   - If the SONPROC thread has a higher priority than the currently
 running thread (itself).
   - Twice in setrunnable() when we know that p_priority <= p_usrpri.
   - If schedcpu() considered that a thread, after updating its prio,
 should preempt the one running on the CPU pointed to by `p_cpu'.

  The diff below simplifies all of that by calling need_resched() when:
   - A thread is inserted in a CPU runqueue at a higher priority than
 the one SONPROC.
   - schedcpu() decides that a thread in SRUN state should preempt the
 one SONPROC.

- `p_estcpu', `p_usrpri' and `p_slptime', which represent the "priority"
  of a thread, are now updated while holding a per-thread mutex.  As a
  result schedclock() and donice() no longer take the SCHED_LOCK(),
  and schedcpu() almost never takes it (see the sketch after this list).

- With this diff, top(1) and ps(1) will report the "real" `p_usrpri'
  value when displaying priorities.  This is helpful to understand
  what's happening:

load averages:  0.99,  0.56,  0.25   two.lab.grenadille.net 23:42:10
70 threads: 68 idle, 2 on processor                        up  0:09
CPU0:  0.0% user,  0.0% nice, 51.0% sys,  2.0% spin,  0.0% intr, 47.1% idle
CPU1:  2.0% user,  0.0% nice, 51.0% sys,  3.9% spin,  0.0% intr, 43.1% idle
Memory: Real: 47M/1005M act/tot Free: 2937M Cache: 812M Swap: 0K/4323M

  PID  TID PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
81000   145101  72    0    0K 1664K sleep/1   bored     1:15 36.96% softnet
47133   244097  73    0 2984K 4408K sleep/1   netio     1:06 35.06% cvs
64749   522184  66    0  176K  148K onproc/1  -         0:55 28.81% nfsd
21615   602473 127    0    0K 1664K sleep/0   -         7:22  0.00% idle0
12413   606242 127    0    0K 1664K sleep/1   -         7:08  0.00% idle1
85778   338258  50    0 4936K 7308K idle      select    0:10  0.00% ssh
22771   575513  50    0  176K  148K sleep/0   nfsd      0:02  0.00% nfsd



- The removal of `p_priority' and the change that makes mi_switch()
  always update `spc_curpriority' might introduce some changes in
  behavior, especially with kernel threads that were not going through
  tsleep(9).  We currently have some situations where the priority of
  the running thread isn't correctly reflected.  This diff changes that,
  which should make it easier to see where the problems are.
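
A sketch of the per-thread mutex idea mentioned above (the mutex name
and the priority formula are illustrative only, this is not the
actual diff):

void
schedclock(struct proc *p)
{
	mtx_enter(&p->p_mtx);	/* hypothetical per-thread mutex */
	p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);
	/* illustrative recalculation of the user priority */
	p->p_usrpri = PUSER + p->p_estcpu / 4;
	mtx_leave(&p->p_mtx);
}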

I'd be interested in comments/tests/reviews before continuing in this
direction.  Note that at least parts of this diff are required to split
the accounting apart from the SCHED_LOCK() as well.

I'll also work on exporting scheduler statistics unless somebody wants
to beat me to it :)

This has been tested on amd64 and sparc64 and includes the
mtx_enter_try(9) diff I just sent.

Index: arch/amd64/amd64/genassym.cf
===
RCS file: /cvs/src/sys/arch/amd64/amd64/genassym.cf,v
retrieving revision 1.40
diff -u -p -r1.40 genassym.cf
--- arch/amd64/amd64/genassym.cf	17 May 2019 19:07:15 -0000	1.40
+++ arch/amd64/amd64/genassym.cf	1 Jun 2019 16:27:46 -0000
@@ -32,7 +32,6 @@ export	VM_MIN_KERNEL_ADDRESS
 
 struct proc
 member p_addr
-member p_priority
 member p_stat
 member p_wchan
 member P_MD_REGS   p_md.md_regs
Index: arch/hppa/hppa/genassym.cf
===

Re: mtx_enter_try(9) & recursion

2019-06-01 Thread Mark Kettenis
> Date: Sat, 1 Jun 2019 17:32:52 -0300
> From: Martin Pieuchot 
> 
> Currently it isn't safe to call mtx_enter_try(9) if you're already
> holding the mutex.  That means it isn't safe to call that function
> in hardclock(9), like with `windup_mtx'.  That's why the mutex needs
> to be initialized as IPL_CLOCK.
> 
> I'm working on removing the SCHED_LOCK() from inside hardclock(9).
> That leads me to wonder if I should initialize all mutexes to IPL_SCHED,
> possibly blocking clock interrupts, or if we should change the mutex API
> to allow mtx_enter_try(9) to deal with recursion.
> 
> The diff below removes the recursion check for mtx_enter_try(9).
> 
> Comments?  Oks?

My initial reaction is that if you're trying to lock when you already
have the lock, there is something wrong with your locking strategy and
that this is something we don't want.

> Index: kern/kern_lock.c
> ===
> RCS file: /cvs/src/sys/kern/kern_lock.c,v
> retrieving revision 1.69
> diff -u -p -r1.69 kern_lock.c
> --- kern/kern_lock.c	23 Apr 2019 13:35:12 -0000	1.69
> +++ kern/kern_lock.c	1 Jun 2019 18:26:39 -0000
> @@ -251,6 +251,8 @@ __mtx_init(struct mutex *mtx, int wantip
>  }
>  
>  #ifdef MULTIPROCESSOR
> +int  _mtx_enter_try(struct mutex *, int);
> +
>  void
>  mtx_enter(struct mutex *mtx)
>  {
> @@ -263,7 +265,7 @@ mtx_enter(struct mutex *mtx)
>   LOP_EXCLUSIVE | LOP_NEWORDER, NULL);
>  
>   spc->spc_spinning++;
> - while (mtx_enter_try(mtx) == 0) {
> + while (_mtx_enter_try(mtx, 0) == 0) {
>   CPU_BUSY_CYCLE();
>  
>  #ifdef MP_LOCKDEBUG
> @@ -278,7 +280,7 @@ mtx_enter(struct mutex *mtx)
>  }
>  
>  int
> -mtx_enter_try(struct mutex *mtx)
> +_mtx_enter_try(struct mutex *mtx, int try)
>  {
>   struct cpu_info *owner, *ci = curcpu();
>   int s;
> @@ -292,7 +294,7 @@ mtx_enter_try(struct mutex *mtx)
>  
>   owner = atomic_cas_ptr(&mtx->mtx_owner, NULL, ci);
>  #ifdef DIAGNOSTIC
> - if (__predict_false(owner == ci))
> + if (!try && __predict_false(owner == ci))
>   panic("mtx %p: locking against myself", mtx);
>  #endif
>   if (owner == NULL) {
> @@ -310,6 +312,12 @@ mtx_enter_try(struct mutex *mtx)
>   splx(s);
>  
>   return (0);
> +}
> +
> +int
> +mtx_enter_try(struct mutex *mtx)
> +{
> + return _mtx_enter_try(mtx, 1);
>  }
>  #else
>  void
> 
> 



Re: sysupgrade(8): Adding ability to check if new release available

2019-06-01 Thread Andrew Klaus
Please ignore my last patch, since I had mixed up the -l and -c flags 
from syspatch. This new patch will work for both releases and snapshots 
as well.


When running on a snapshot, it still outputs the SHA256.sig transfer and 
"Signature Verified" to stdout, which isn't ideal, but it's a start.


Output:

# sysupgrade.sh -c -r

# sysupgrade.sh -c -r
New release available: 6.6

# sysupgrade.sh -c -s
SHA256.sig   100% |*|  2141   00:00

Signature Verified
New snapshot available.

# sysupgrade.sh -c -s
SHA256.sig   100% |*|  2141   00:00

Signature Verified



Andrew


Index: sysupgrade.8
===
RCS file: /cvs/src/usr.sbin/sysupgrade/sysupgrade.8,v
retrieving revision 1.8
diff -u -p -u -p -r1.8 sysupgrade.8
--- sysupgrade.8	9 May 2019 21:09:37 -0000	1.8
+++ sysupgrade.8	1 Jun 2019 21:04:50 -0000
@@ -14,7 +14,7 @@
 .\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING 
OUT OF

 .\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 .\"
-.Dd $Mdocdate: May 9 2019 $
+.Dd $Mdocdate: June 1 2019 $
 .Dt SYSUPGRADE 8
 .Os
 .Sh NAME
@@ -22,7 +22,7 @@
 .Nd upgrade system to the next release or a new snapshot
 .Sh SYNOPSIS
 .Nm
-.Op Fl fkn
+.Op Fl cfkn
 .Op Fl r | s
 .Op Ar installurl
 .Sh DESCRIPTION
@@ -48,6 +48,8 @@ triggering a one-shot upgrade using the
 .Pp
 The options are as follows:
 .Bl -tag -width Ds
+.It Fl c
+Show if there's an available upgrade; suitable for cron(8).
 .It Fl f
 Force an already applied upgrade.
 The default is to upgrade to latest snapshot only if available.
Index: sysupgrade.sh
===
RCS file: /cvs/src/usr.sbin/sysupgrade/sysupgrade.sh,v
retrieving revision 1.21
diff -u -p -u -p -r1.21 sysupgrade.sh
--- sysupgrade.sh	14 May 2019 14:27:49 -0000	1.21
+++ sysupgrade.sh	1 Jun 2019 21:04:50 -0000
@@ -33,7 +33,7 @@ ug_err()

 usage()
 {
-   ug_err "usage: ${0##*/} [-fkn] [-r | -s] [installurl]"
+   ug_err "usage: ${0##*/} [-cfkn] [-r | -s] [installurl]"
 }

 unpriv()
@@ -73,10 +73,12 @@ RELEASE=false
 SNAP=false
 FORCE=false
 KEEP=false
+CRON=false
 REBOOT=true

-while getopts fknrs arg; do
+while getopts cfknrs arg; do
case ${arg} in
+   c)  CRON=true;;
f)  FORCE=true;;
k)  KEEP=true;;
n)  REBOOT=false;;
@@ -118,6 +120,14 @@ else
URL=${MIRROR}/${NEXT_VERSION}/${ARCH}/
 fi

+if $CRON && $RELEASE; then
+   set +e
+	if unpriv -f SHA256.sig ftp -Vmo /dev/null ${URL}SHA256.sig 2>/dev/null; then
+   echo "New release available: ${NEXT_VERSION}"
+   fi
+   exit 0
+fi
+
 if [[ -e ${SETSDIR} ]]; then
eval $(stat -s ${SETSDIR})
[[ $st_uid -eq 0 ]] ||
@@ -150,8 +160,13 @@ unpriv -f SHA256 signify -Ve -p "${SIGNI
 rm SHA256.sig

 if cmp -s /var/db/installed.SHA256 SHA256 && ! $FORCE; then
-   echo "Already on latest snapshot."
+   if ! $CRON; then
+   echo "Already on latest snapshot."
+   fi
exit 0
+elif $CRON; then
+   echo "New snapshot available."
+	exit 0
 fi

 # INSTALL.*, bsd*, *.tgz



On 2019-06-01 1:47 a.m., Andrew Klaus wrote:
This adds the ability to check if you're running the latest release, 
without actually upgrading. I'd like to use this functionality when 
writing an Ansible module for sysupgrade soon. I already have one for 
syspatch that's been accepted today.


This follows the same usage (-l) as syspatch(8) to list if an update is 
available.


Andrew

Index: sysupgrade.sh
===
RCS file: /cvs/src/usr.sbin/sysupgrade/sysupgrade.sh,v
retrieving revision 1.21
diff -u -p -u -r1.21 sysupgrade.sh
--- sysupgrade.sh	14 May 2019 14:27:49 -0000	1.21
+++ sysupgrade.sh	1 Jun 2019 07:28:10 -0000
@@ -33,7 +33,7 @@ ug_err()

  usage()
  {
-    ug_err "usage: ${0##*/} [-fkn] [-r | -s] [installurl]"
+    ug_err "usage: ${0##*/} [-fkln] [-r | -s] [installurl]"
  }

  unpriv()
@@ -73,12 +73,14 @@ RELEASE=false
  SNAP=false
  FORCE=false
  KEEP=false
+LIST=false
  REBOOT=true

-while getopts fknrs arg; do
+while getopts fklnrs arg; do
  case ${arg} in
  f)    FORCE=true;;
  k)    KEEP=true;;
+    l)  LIST=true;;
  n)    REBOOT=false;;
  r)    RELEASE=true;;
  s)    SNAP=true;;
@@ -116,6 +118,16 @@ if $SNAP; then
  URL=${MIRROR}/snapshots/${ARCH}/
  else
  URL=${MIRROR}/${NEXT_VERSION}/${ARCH}/
+fi
+
+if ${LIST} && ${RELEASE}; then
+    set +e
+    if unpriv -f SHA256.sig ftp -Vmo /dev/null ${URL}SHA256.sig 2>/dev/null; then
+    echo "Release available: ${NEXT_VERSION}."
+    else
+    echo "Already on latest release."
+    fi
+  

mtx_enter_try(9) & recursion

2019-06-01 Thread Martin Pieuchot
Currently it isn't safe to call mtx_enter_try(9) if you're already
holding the mutex.  That means it isn't safe to call that function
in hardclock(9), like with `windup_mtx'.  That's why the mutex needs
to be initialized as IPL_CLOCK.

I'm working on removing the SCHED_LOCK() from inside hardclock(9).
That leads me to wonder if I should initialize all mutexes to IPL_SCHED,
possibly blocking clock interrupts, or if we should change the mutex API
to allow mtx_enter_try(9) to deal with recursion.

The diff below removes the recursion check for mtx_enter_try(9).

Comments?  Oks?

Index: kern/kern_lock.c
===
RCS file: /cvs/src/sys/kern/kern_lock.c,v
retrieving revision 1.69
diff -u -p -r1.69 kern_lock.c
--- kern/kern_lock.c	23 Apr 2019 13:35:12 -0000	1.69
+++ kern/kern_lock.c	1 Jun 2019 18:26:39 -0000
@@ -251,6 +251,8 @@ __mtx_init(struct mutex *mtx, int wantip
 }
 
 #ifdef MULTIPROCESSOR
+int	_mtx_enter_try(struct mutex *, int);
+
 void
 mtx_enter(struct mutex *mtx)
 {
@@ -263,7 +265,7 @@ mtx_enter(struct mutex *mtx)
LOP_EXCLUSIVE | LOP_NEWORDER, NULL);
 
spc->spc_spinning++;
-   while (mtx_enter_try(mtx) == 0) {
+   while (_mtx_enter_try(mtx, 0) == 0) {
CPU_BUSY_CYCLE();
 
 #ifdef MP_LOCKDEBUG
@@ -278,7 +280,7 @@ mtx_enter(struct mutex *mtx)
 }
 
 int
-mtx_enter_try(struct mutex *mtx)
+_mtx_enter_try(struct mutex *mtx, int try)
 {
struct cpu_info *owner, *ci = curcpu();
int s;
@@ -292,7 +294,7 @@ mtx_enter_try(struct mutex *mtx)
 
owner = atomic_cas_ptr(&mtx->mtx_owner, NULL, ci);
 #ifdef DIAGNOSTIC
-   if (__predict_false(owner == ci))
+   if (!try && __predict_false(owner == ci))
panic("mtx %p: locking against myself", mtx);
 #endif
if (owner == NULL) {
@@ -310,6 +312,12 @@ mtx_enter_try(struct mutex *mtx)
splx(s);
 
return (0);
+}
+
+int
+mtx_enter_try(struct mutex *mtx)
+{
+   return _mtx_enter_try(mtx, 1);
 }
 #else
 void



Re: PCI interrupt functions

2019-06-01 Thread Mark Kettenis
> From: "Theo de Raadt" 
> Date: Fri, 31 May 2019 15:34:18 -0600
> 
> > On arm64, pci_intr_handle_t is a pointer to an opaque struct.
> 
> That's a subtle trap.  How would someone realize the order is wrong...
> 
> Would it not be better if this was done like the other architectures,
> where the pci_intr_handle_t is a structure, not a pointer.
> 
> On arm64, the pci subsystems seem to use structures which are
> the same, and additional fields could be added easily if there
> was a need later.

Yes.  I suppose we anticipated that we would need different structs
for different host bridge drivers, but so far that hasn't happened.
I'm fairly confident that what we have now is sufficient for most
other hardware that we would like to support.
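
For reference, the struct-based handle then looks roughly like this
(field names as used in the diff below; the actual typedef lives in
the arm64 machine-dependent headers):

struct pci_intr_handle {
	pci_chipset_tag_t	ih_pc;
	pcitag_t		ih_tag;
	int			ih_intrpin;
	int			ih_type; /* PCI_INTX, PCI_MSI or PCI_MSIX */
};
typedef struct pci_intr_handle pci_intr_handle_t;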

This actually allows me to remove some duplicated code.

ok?


Index: dev/fdt/dwpcie.c
===
RCS file: /cvs/src/sys/dev/fdt/dwpcie.c,v
retrieving revision 1.13
diff -u -p -r1.13 dwpcie.c
--- dev/fdt/dwpcie.c	31 May 2019 10:35:49 -0000	1.13
+++ dev/fdt/dwpcie.c	1 Jun 2019 18:00:30 -0000
@@ -224,9 +224,6 @@ pcireg_t dwpcie_conf_read(void *, pcitag
 void   dwpcie_conf_write(void *, pcitag_t, int, pcireg_t);
 
int	dwpcie_intr_map(struct pci_attach_args *, pci_intr_handle_t *);
-int	dwpcie_intr_map_msi(struct pci_attach_args *, pci_intr_handle_t *);
-int	dwpcie_intr_map_msix(struct pci_attach_args *, int,
-   pci_intr_handle_t *);
 const char *dwpcie_intr_string(void *, pci_intr_handle_t);
 void   *dwpcie_intr_establish(void *, pci_intr_handle_t, int,
int (*)(void *), void *, char *);
@@ -453,8 +450,8 @@ dwpcie_attach(struct device *parent, str
 
sc->sc_pc.pc_intr_v = sc;
sc->sc_pc.pc_intr_map = dwpcie_intr_map;
-   sc->sc_pc.pc_intr_map_msi = dwpcie_intr_map_msi;
-   sc->sc_pc.pc_intr_map_msix = dwpcie_intr_map_msix;
+   sc->sc_pc.pc_intr_map_msi = _pci_intr_map_msi;
+   sc->sc_pc.pc_intr_map_msix = _pci_intr_map_msix;
sc->sc_pc.pc_intr_string = dwpcie_intr_string;
sc->sc_pc.pc_intr_establish = dwpcie_intr_establish;
sc->sc_pc.pc_intr_disestablish = dwpcie_intr_disestablish;
@@ -903,21 +900,9 @@ dwpcie_conf_write(void *v, pcitag_t tag,
sc->sc_io_bus_addr, sc->sc_io_size);
 }
 
-#define PCI_INTX   0
-#define PCI_MSI1
-#define PCI_MSIX   2
-
-struct dwpcie_intr_handle {
-   pci_chipset_tag_t   ih_pc;
-   pcitag_tih_tag;
-   int ih_intrpin;
-   int ih_type;
-};
-
 int
 dwpcie_intr_map(struct pci_attach_args *pa, pci_intr_handle_t *ihp)
 {
-   struct dwpcie_intr_handle *ih;
int pin = pa->pa_rawintrpin;
 
if (pin == 0 || pin > PCI_INTERRUPT_PIN_MAX)
@@ -926,68 +911,18 @@ dwpcie_intr_map(struct pci_attach_args *
if (pa->pa_tag == 0)
return -1;
 
-   ih = malloc(sizeof(struct dwpcie_intr_handle), M_DEVBUF, M_WAITOK);
-   ih->ih_pc = pa->pa_pc;
-   ih->ih_tag = pa->pa_intrtag;
-   ih->ih_intrpin = pa->pa_intrpin;
-   ih->ih_type = PCI_INTX;
-   *ihp = (pci_intr_handle_t)ih;
-
-   return 0;
-}
-
-int
-dwpcie_intr_map_msi(struct pci_attach_args *pa, pci_intr_handle_t *ihp)
-{
-   pci_chipset_tag_t pc = pa->pa_pc;
-   pcitag_t tag = pa->pa_tag;
-   struct dwpcie_intr_handle *ih;
-
-   if ((pa->pa_flags & PCI_FLAGS_MSI_ENABLED) == 0 ||
-   pci_get_capability(pc, tag, PCI_CAP_MSI, NULL, NULL) == 0)
-   return -1;
-
-   ih = malloc(sizeof(struct dwpcie_intr_handle), M_DEVBUF, M_WAITOK);
-   ih->ih_pc = pa->pa_pc;
-   ih->ih_tag = pa->pa_tag;
-   ih->ih_type = PCI_MSI;
-   *ihp = (pci_intr_handle_t)ih;
-
-   return 0;
-}
-
-int
-dwpcie_intr_map_msix(struct pci_attach_args *pa, int vec,
-pci_intr_handle_t *ihp)
-{
-   pci_chipset_tag_t pc = pa->pa_pc;
-   pcitag_t tag = pa->pa_tag;
-   struct dwpcie_intr_handle *ih;
-   pcireg_t reg;
-
-   if ((pa->pa_flags & PCI_FLAGS_MSI_ENABLED) == 0 ||
-   pci_get_capability(pc, tag, PCI_CAP_MSIX, NULL, &reg) == 0)
-   return -1;
-
-   if (vec > PCI_MSIX_MC_TBLSZ(reg))
-   return -1;
-
-   ih = malloc(sizeof(struct dwpcie_intr_handle), M_DEVBUF, M_WAITOK);
-   ih->ih_pc = pa->pa_pc;
-   ih->ih_tag = pa->pa_tag;
-   ih->ih_intrpin = vec;
-   ih->ih_type = PCI_MSIX;
-   *ihp = (pci_intr_handle_t)ih;
+   ihp->ih_pc = pa->pa_pc;
+   ihp->ih_tag = pa->pa_intrtag;
+   ihp->ih_intrpin = pa->pa_intrpin;
+   ihp->ih_type = PCI_INTX;
 
return 0;
 }
 
 const char *
-dwpcie_intr_string(void *v, pci_intr_handle_t ihp)
+dwpcie_intr_string(void *v, pci_intr_handle_t ih)
 {
-   struct dwpcie_intr_handle *ih = (struct dwpcie_intr_handle *)ihp;
-
-   switch (ih->ih_type) {
+   switch (ih.ih_type) {
case PCI_MSI:
   

Re: Reduce the scope of SCHED_LOCK()

2019-06-01 Thread Martin Pieuchot
On 01/06/19(Sat) 15:54, Mark Kettenis wrote:
> > Date: Sat, 25 May 2019 15:57:44 -0300
> > From: Martin Pieuchot 
> > 
> > On 12/05/19(Sun) 18:17, Martin Pieuchot wrote:
> > > People started complaining that the SCHED_LOCK() is contended.  Here's a
> > > first round at reducing its scope.
> > > 
> > > Diff below introduces a per-process mutex to protect time accounting
> > > fields accessed in tuagg().  tuagg() is principally called in mi_switch()
> > > where the SCHED_LOCK() is currently held.  Moving these fields out of
> > > its scope allows us to drop some SCHED_LOCK/UNLOCK dances in accounting
> > > path.
> > > 
> > > Note that hardclock(9) still increments p_{u,s,i}ticks without holding a
> > > lock.  I doubt it's worth doing anything so this diff doesn't change
> > > anything in that regard.
> > 
> > Updated diff:
> > 
> > - Use struct assignment instead of timespecadd() to initialize
> >   `tu_runtime', pointed out by visa@.
> > - Do not use mtx_enter(9)/leave(9) in libkvm's version of FILL_PROC(),
> >   pointed out by guenther@ and lteo@.
> > 
> > Oks?
> 
> I fear that what was committed isn't quite right:

Yes.  I just reverted the diff.  The problem is that hardclock(9) can be
called with any mutex held.  So as long as schedclock() needs to grab
the SCHED_LOCK() to update `p_estcpu' and `p_usrpri' we'll have lock
ordering problems.
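
Boiled down, the two orders witness saw (a simplified sketch, not the
actual code):

/* Path A: mi_switch() -> tuagg(): sched_lock, then ps_mtx. */
void
path_a(struct process *pr)
{
	int s;

	SCHED_LOCK(s);
	mtx_enter(&pr->ps_mtx);		/* sched_lock -> ps_mtx */
	mtx_leave(&pr->ps_mtx);
	SCHED_UNLOCK(s);
}

/* Path B: ps_mtx is already held when the clock interrupt runs
 * schedclock(), which grabs sched_lock: the opposite order,
 * ps_mtx -> sched_lock.  With another CPU in path A at the same
 * time, this can deadlock. */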

I'm already working on a fix (:



Re: Reduce the scope of SCHED_LOCK()

2019-06-01 Thread Mark Kettenis
> Date: Sat, 25 May 2019 15:57:44 -0300
> From: Martin Pieuchot 
> 
> On 12/05/19(Sun) 18:17, Martin Pieuchot wrote:
> > People started complaining that the SCHED_LOCK() is contended.  Here's a
> > first round at reducing its scope.
> > 
> > Diff below introduces a per-process mutex to protect time accounting
> > fields accessed in tuagg().  tuagg() is principally called in mi_switch()
> > where the SCHED_LOCK() is currently held.  Moving these fields out of
> > its scope allows us to drop some SCHED_LOCK/UNLOCK dances in accounting
> > path.
> > 
> > Note that hardclock(9) still increments p_{u,s,i}ticks without holding a
> > lock.  I doubt it's worth doing anything so this diff doesn't change
> > anything in that regard.
> 
> Updated diff:
> 
> - Use struct assignment instead of timespecadd() to initialize
>   `tu_runtime', pointed out by visa@.
> - Do not use mtx_enter(9)/leave(9) in libkvm's version of FILL_PROC(),
>   pointed out by guenther@ and lteo@.
> 
> Oks?

I fear that what was committed isn't quite right:

witness: lock order reversal:
 1st 0x800022dce100 &pr->ps_mtx (&pr->ps_mtx)
  2nd 0x81e8d4a0 &sched_lock (&sched_lock)
lock order "&sched_lock"(sched_lock) -> "&pr->ps_mtx"(mutex) first seen at:
#0  witness_checkorder+0x449
#1  mtx_enter+0x34
#2  tuagg+0x27
#3  mi_switch+0x10f
#4  sleep_finish+0x81
#5  tsleep+0xc7
#6  main+0x5c0
#7  longmode_hi+0x95
lock order "&pr->ps_mtx"(mutex) -> "&sched_lock"(sched_lock) first seen at:
#0  witness_checkorder+0x449
#1  __mp_lock+0x5f
#2  schedclock+0x69
#3  hardclock+0xe5
#4  lapic_clockintr+0x3f
#5  Xresume_lapic_ltimer+0x26
#6  sched_exit+0xc2
#7  exit1+0x563
#8  sys_exit+0x17
#9  syscall+0x2d5
#10 Xsyscall+0x128

I suspect this could be fixed by marking the mutex as IPL_SCHED?  Or
is this particular one harmless because schedlock is special?



Re: ftp.html: adjust mirror minimum space

2019-06-01 Thread Reyk Floeter
On Sat, Jun 01, 2019 at 12:18:33PM +0200, Theo Buehler wrote:
> On Sat, Jun 01, 2019 at 12:05:09PM +0200, Reyk Floeter wrote:
> > Hi,
> > 
> > a fresh rsync over night revealed that the minimum space for mirrors
> > should be adjusted.
> > 
> > OK?
> 
> I don't know whether the size is correct or whether it should be bumped
> further, but please note that ftp.html is generated. Edit
> www/build/mirrors/ftp.html.end instead and run 'make ftp' from
> www/build/:
> 
> $ head -1 ftp.html
> 
> 

Forget about this diff, we were mirroring from the wrong rsync source
which included old releases that aren't on the LF1 and LF2 anymore.
We'll switch to an LF2.

Background: ungleich glarus is sponsoring a mirror to once again have
a full and up-to-date mirror in Switzerland.  These are the people
behind many nice things including digital glarus, datacenterlight, and
hack4glarus.

Thanks to Nico and the ungleich team!

The mirror in its current state (old releases will be deleted or moved
to OpenBSD-unsupported):

https://mirror.ungleich.ch/pub/OpenBSD/

Reyk



Re: ftp.html: adjust mirror minimum space

2019-06-01 Thread Theo Buehler
On Sat, Jun 01, 2019 at 12:05:09PM +0200, Reyk Floeter wrote:
> Hi,
> 
> a fresh rsync over night revealed that the minimum space for mirrors
> should be adjusted.
> 
> OK?

I don't know whether the size is correct or whether it should be bumped
further, but please note that ftp.html is generated. Edit
www/build/mirrors/ftp.html.end instead and run 'make ftp' from
www/build/:

$ head -1 ftp.html


> 
> Reyk
> 
> Index: ftp.html
> ===
> RCS file: /cvs/www/ftp.html,v
> retrieving revision 1.794
> diff -u -p -u -p -r1.794 ftp.html
> --- ftp.html	30 May 2019 21:05:37 -0000	1.794
> +++ ftp.html	1 Jun 2019 09:57:55 -0000
> @@ -1155,7 +1155,7 @@ Mirrors must carry the following:
>  In addition, mirrors must use a second-level mirror as their upstream.
>  
>  
> -As of 6.1, the minimum space required is approximately 700GB.
> +As of 6.5, the minimum space required is approximately 1.3TB.
>  However, to reduce problems when snapshot packages are updated, it is
>  recommended to use the rsync options --delete-delay 
> --delay-updates
>  which will use additional space.
> 



ftp.html: adjust mirror minimum space

2019-06-01 Thread Reyk Floeter
Hi,

a fresh rsync over night revealed that the minimum space for mirrors
should be adjusted.

OK?

Reyk

Index: ftp.html
===
RCS file: /cvs/www/ftp.html,v
retrieving revision 1.794
diff -u -p -u -p -r1.794 ftp.html
--- ftp.html	30 May 2019 21:05:37 -0000	1.794
+++ ftp.html	1 Jun 2019 09:57:55 -0000
@@ -1155,7 +1155,7 @@ Mirrors must carry the following:
 In addition, mirrors must use a second-level mirror as their upstream.
 
 
-As of 6.1, the minimum space required is approximately 700GB.
+As of 6.5, the minimum space required is approximately 1.3TB.
 However, to reduce problems when snapshot packages are updated, it is
 recommended to use the rsync options --delete-delay --delay-updates
 which will use additional space.



sysupgrade(8): Adding ability to check if new release available

2019-06-01 Thread Andrew Klaus
This adds the ability to check if you're running the latest release, 
without actually upgrading. I'd like to use this functionality when 
writing an Ansible module for sysupgrade soon. I already have one for 
syspatch that's been accepted today.


This follows the same usage (-l) as syspatch(8) to list if an update is 
available.


Andrew

Index: sysupgrade.sh
===
RCS file: /cvs/src/usr.sbin/sysupgrade/sysupgrade.sh,v
retrieving revision 1.21
diff -u -p -u -r1.21 sysupgrade.sh
--- sysupgrade.sh	14 May 2019 14:27:49 -0000	1.21
+++ sysupgrade.sh	1 Jun 2019 07:28:10 -0000
@@ -33,7 +33,7 @@ ug_err()

 usage()
 {
-   ug_err "usage: ${0##*/} [-fkn] [-r | -s] [installurl]"
+   ug_err "usage: ${0##*/} [-fkln] [-r | -s] [installurl]"
 }

 unpriv()
@@ -73,12 +73,14 @@ RELEASE=false
 SNAP=false
 FORCE=false
 KEEP=false
+LIST=false
 REBOOT=true

-while getopts fknrs arg; do
+while getopts fklnrs arg; do
case ${arg} in
f)  FORCE=true;;
k)  KEEP=true;;
+   l)  LIST=true;;
n)  REBOOT=false;;
r)  RELEASE=true;;
s)  SNAP=true;;
@@ -116,6 +118,16 @@ if $SNAP; then
URL=${MIRROR}/snapshots/${ARCH}/
 else
URL=${MIRROR}/${NEXT_VERSION}/${ARCH}/
+fi
+
+if ${LIST} && ${RELEASE}; then
+   set +e
+	if unpriv -f SHA256.sig ftp -Vmo /dev/null ${URL}SHA256.sig 2>/dev/null; then
+   echo "Release available: ${NEXT_VERSION}."
+   else
+   echo "Already on latest release."
+   fi
+   exit
 fi

 if [[ -e ${SETSDIR} ]]; then