Re: rti_info and defines

2014-01-17 Thread Claudio Jeker
On Wed, Jan 08, 2014 at 12:02:25PM +0100, Martin Pieuchot wrote:
> I find it really difficult to understand and work with the code of
> rtsock.c because of the following defines:
> 
> 
>   /* Sleazy use of local variables throughout file, warning */
>   #define dst    info.rti_info[RTAX_DST]
>   #define gate   info.rti_info[RTAX_GATEWAY]
>   ...
> 
> But since this code is a mess, simply removing these defines makes it
> harder to understand.  So the diff below introduces other defines to
> make it clear that we manipulate a rt_addrinfo structure while
> preserving the readability:
> 
>   #define rti_dst    rti_info[RTAX_DST]
>   #define rti_gate   rti_info[RTAX_GATEWAY]
>   ...
> 
> I converted rtsock.c and route.c and there's no object change with this
> diff.  I'll happily convert the remaining files after putting this in.
> 
> Comments, ok?
> 
> Index: net/route.c
> ===
> RCS file: /home/ncvs/src/sys/net/route.c,v
> retrieving revision 1.147
> diff -u -p -r1.147 route.c
> --- net/route.c   20 Oct 2013 13:21:57 -  1.147
> +++ net/route.c   8 Jan 2014 10:47:40 -
> @@ -325,7 +325,7 @@ rtalloc1(struct sockaddr *dst, int flags
>   int  s = splsoftnet(), err = 0, msgtype = RTM_MISS;
>  
>   bzero(&info, sizeof(info));
> - info.rti_info[RTAX_DST] = dst;
> + info.rti_dst = dst;
>  
>   rnh = rt_gettable(dst->sa_family, tableid);
>   if (rnh && (rn = rnh->rnh_matchaddr((caddr_t)dst, rnh)) &&
> @@ -346,13 +346,13 @@ rtalloc1(struct sockaddr *dst, int flags
>   }
>   /* Inform listeners of the new route */
>   bzero(&info, sizeof(info));
> - info.rti_info[RTAX_DST] = rt_key(rt);
> - info.rti_info[RTAX_NETMASK] = rt_mask(rt);
> - info.rti_info[RTAX_GATEWAY] = rt->rt_gateway;
> + info.rti_dst = rt_key(rt);
> + info.rti_mask = rt_mask(rt);
> + info.rti_gate = rt->rt_gateway;

I don't like this at all. Why replace the obvious rti_info[RTAX_DST] with
an rti_dst macro? This makes the code even more obscure and harder to
understand.

So everything gets more confusing just so that rtsock.c can become a bit
less fucked up. I think this is backwards; the crazy defines in rtsock
should just be removed.

-- 
:wq Claudio
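
For illustration, here is a minimal, self-contained C sketch of the two
macro styles being debated, using toy stand-ins for struct rt_addrinfo and
the RTAX_* constants rather than the real <net/route.h> definitions:

#include <stdio.h>

/* Toy stand-ins for the real RTAX_* indices and struct rt_addrinfo. */
enum { RTAX_DST, RTAX_GATEWAY, RTAX_NETMASK, RTAX_MAX };

struct sockaddr { int sa_family; };

struct rt_addrinfo {
	struct sockaddr *rti_info[RTAX_MAX];
};

/* Old rtsock.c style: silently captures a local variable named "info". */
#define dst		info.rti_info[RTAX_DST]

/* Proposed style: reads like a member name, expands to the array slot. */
#define rti_dst		rti_info[RTAX_DST]

int
main(void)
{
	struct sockaddr sa = { 2 };
	struct rt_addrinfo info = { { NULL } };

	dst = &sa;		/* old style; only works where "info" is in scope */
	info.rti_dst = &sa;	/* proposed style; works on any rt_addrinfo */

	printf("%d\n", info.rti_info[RTAX_DST]->sa_family);
	return 0;
}

The point of contention is visible in main(): the old macro only works when
the local variable happens to be called "info", while the proposed one works
on any rt_addrinfo but hides the array indexing.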



Re: Request for Funding our Electricity

2014-01-17 Thread William Ahern
On Fri, Jan 17, 2014 at 08:38:05PM -0700, Theo de Raadt wrote:
> > I do use emulators, specifically for ARM, because it's just easier for me.
> > And one of my co-workers is a contributor to the Hercules emulator.
> 
> Then you know it is not sufficient for our needs, yet we keep getting
> the same message from some people.  The emulators are too slow, or they
> need to be run on super fast xeons and suddenly draw even more power.
> The suggestion is totally out of touch.

I don't know that personally. I do believe that the particular anecdote I
replied to is an insufficient premise to support the avowed need mentioned
in your ruBSD talk, namely the ability to stress core services like memory
management in diverse ways.

But I'm content taking your word for it. And I'm not trying to argue with
you. Obviously the issue is far more complex than an interview and anecdote
let on.

> > > Finally, we have people who want to work on those architectures.  You
> > > prefer they quit?
> > 
> > No, I don't prefer they quit.
> 
> But you've instructed us to power the machines off and move to emulators.

I never argued any such thing.

> > So, please don't misunderstand me. I'm not questioning why you guys use so
> > much power with old hardware.
> 
> It is not a lot of power; that is a myth.

It is a lot of power considering that my modern, 4-core Haswell Xeon 1U
servers draw less than 50W at maximum load. I used to run OpenBSD on Sparc
and Alpha, and they drew more power than that at idle.

But that's beside the point, because I'm not attacking OpenBSD's
infrastructure setup.

 
> > I'm not writing the code, so it's not my place to question.
> 
> You said it yourself, it is not your place to question.  Yet, that
> is precisely what you are doing.

I disagree. I merely made a point about an anecdote. I apologize if my quip
about "coolness factor" struck a nerve.

> > And, FWIW, I love the idea of a CD subscription service. I often end up
> > forgetting to buy a CD. I upgrade most of my systems remotely (with a 13
> > year track record of never losing a machine--thanks!), so I never have to
> > actually use the CD.
> 
> Why do you need a subscription?  You can go order the ones you are
> missing (right now), and even save postage since a whole bunch will
> arrive at once.  There is no need to set up the additional overhead of
> managing subscriptions for people like you.
>
> Wow, so many crazy suggestions.

I never suggested a CD service. Somebody else suggested it and I
thought--apparently erroneously--that it received a favorable comment from
someone on the OpenBSD team.

In any event I just discovered the monthly donation subscription on the
Foundation website and have signed up for a $20 monthly donation. So the CD
subscription is less of a useful idea than it initially appeared.



Re: Request for Funding our Electricity

2014-01-17 Thread Theo de Raadt
> I do use emulators, specifically for ARM, because it's just easier for me.
> And one of my co-workers is a contributor to the Hercules emulator.

Then you know it is not sufficient for our needs, yet we keep getting
the same message from some people.  The emulators are too slow, or they
need to be run on super fast xeons and suddenly draw even more power.
The suggestion is totally out of touch.

> > Finally, we have people who want to work on those architectures.  You
> > prefer they quit?
> 
> No, I don't prefer they quit.

But you've instructed us to power the machines off and move to emulators.

> So, please don't misunderstand me. I'm not questioning why you guys use so
> much power with old hardware.

It is not a lot of power; that is a myth.

The power bill is around $1500/month, to run 2.5 racks of equipment
with really good air conditioning.  Relative to this, 1 full rack in a
Calgary datacenter is over $1000/month.  Considering this is 2.5 racks,
the current operation is VERY COST EFFECTIVE RELATIVE TO THE
ALTERNATIVES.

Has anyone come up with an offer for 3 free racks in Calgary?  NO.
Even if someone would, would it make sense?  NO.

> I'm not writing the code, so it's not my place to question.

You said it yourself, it is not your place to question.  Yet, that
is precisely what you are doing.

> And, FWIW, I love the idea of a CD subscription service. I often end up
> forgetting to buy a CD. I upgrade most of my systems remotely (with a 13
> year track record of never losing a machine--thanks!), so I never have to
> actually use the CD.

Why do you need a subscription?  You can go order the ones you are
missing (right now), and even save postage since a whole bunch will
arrive at once.  There is no need to set up the additional overhead of
managing subscriptions for people like you.

Wow, so many crazy suggestions.



Re: Request for Funding our Electricity

2014-01-17 Thread William Ahern
On Fri, Jan 17, 2014 at 07:33:01PM -0700, Theo de Raadt wrote:
> > > You may argue that, since the kernel has a workaround for this issue,
> > > this is a moot point. But if some developer has a better idea for the
> > > kernel heuristic, how can the new code be tested, if not on the real
> > > hardware?
> > > 
> > 
> > The problem with this story is that the purported reason for supporting old
> > architectures is to shake out bugs. How do the bugs get shaken out? By
> > exercising shared, core functionality in distinctive ways.
> > 
> > Idiosyncrasies such as the above are not the type of thing that helps shake
> > out core bugs.
> 
> You've missed the point.
> 
> These idiosyncrasies must be stepped over, so that we can have working
> platforms different from x86, to then go discover the core bugs!
> 
> Luckily we have people in our group who support such other
> architectures in our tree, to give us this capability.
> 
> Let's face it.  OpenBSD has this as a bug reducing mechanism
> available, and most other systems do not anymore, having decided to
> chase only the market-chosen architectures.  It is a true many-eyes
> "machined" solution.
> 
> What other community has users who commonly run upstream software on
> 64-bit big-endian strict alignment platform with register windows
> adjusting the frames in odd ways, or 32-bit big-endian ones with mutex
> alignment requirements, or a pile of other requirements.
> 
> Quite frankly, I am not alone in being sick of people who don't use
> emulators, stepping in to tell us we should use emulators.

I do use emulators, specifically for ARM, because it's just easier for me.
And one of my co-workers is a contributor to the Hercules emulator.
 
> Finally, we have people who want to work on those architectures.  You
> prefer they quit?

No, I don't prefer they quit. I donate to OpenBSD because you guys do the
hard work. And the golden rule of open source is that he who does the work
gets to make the decisions about how he's going to go about doing that work.

So, please don't misunderstand me. I'm not questioning why you guys use so
much power with old hardware. I'm not writing the code, so it's not my place
to question. And while emulators might, arguably, be more efficient in some
abstract sense, what matters is how the work is being done today. And if you
say using real hardware is easier for your workflow, so be it.

And, FWIW, I love the idea of a CD subscription service. I often end up
forgetting to buy a CD. I upgrade most of my systems remotely (with a 13
year track record of never losing a machine--thanks!), so I never have to
actually use the CD.



Re: Request for Funding our Electricity

2014-01-17 Thread Theo de Raadt
> OTOH, there's a strong case to be made for simply inventing crazy
> architectures out of whole cloth and writing an emulator for them.

I am looking forward to seeing yours.  How long do I have to wait?



Re: Request for Funding our Electricity

2014-01-17 Thread Theo de Raadt
> > You may argue that, since the kernel has a workaround for this issue,
> > this is a moot point. But if some developer has a better idea for the
> > kernel heuristic, how can the new code be tested, if not on the real
> > hardware?
> > 
> 
> The problem with this story is that the purported reason for supporting old
> architectures is to shake out bugs. How do the bugs get shaken out? By
> exercising shared, core functionality in distinctive ways.
> 
> Idiosyncrasies such as the above are not the type of thing that helps shake
> out core bugs.

You've missed the point.

These idiosyncrasies must be stepped over, so that we can have working
platforms different from x86, to then go discover the core bugs!

Luckily we have people in our group who support such other
architectures in our tree, to give us this capability.

Let's face it.  OpenBSD has this as a bug reducing mechanism
available, and most other systems do not anymore, having decided to
chase only the market-chosen architectures.  It is a true many-eyes
"machined" solution.

What other community has users who commonly run upstream software on
64-bit big-endian strict alignment platforms with register windows
adjusting the frames in odd ways, or 32-bit big-endian ones with mutex
alignment requirements, or a pile of other requirements.

Quite frankly, I am not alone in being sick of people who don't use
emulators, stepping in to tell us we should use emulators.



Finally, we have people who want to work on those architectures.  You
prefer they quit?  You think their experience and the time they spend
will be better spent somewhere else, that they will continue to be
valuable additions in some other role?  First you are wrong, and
secondly, who gave you the moral authority to try to reassign their
time?

Why is there this effort to convince us to do less?



Re: Request for Funding our Electricity

2014-01-17 Thread William Ahern
On Fri, Jan 17, 2014 at 11:32:41PM +, Miod Vallat wrote:
> >And it's not a full emulator if it doesn't emulate the
> > bugs.
> 
> It's almost bedtime in Europe. Do you mind if I tell you a bedtime
> story?
> 
> Years ago, a (back then) successful company selling high-end Unix-based
> workstations, having been designing its own systems and core components
> for years, started designing a new generation of workstations.

> Assuming someone would write an emulator for that particular system:
> - if the ``unreliable read'' behaviour is not emulated, according to
>   your logic, it's a bug in the emulator, which has to be fixed.
> - if the behaviour is emulated, how can we know it is correctly
>   emulated, since even the designers of the chip did not spend enough
>   time tracking down the exact conditions leading to the misbehaviour
>   (and which bogus value would be put on the data bus).
> 
> You may argue that, since the kernel has a workaround for this issue,
> this is a moot point. But if some developer has a better idea for the
> kernel heuristic, how can the new code be tested, if not on the real
> hardware?
> 

The problem with this story is that the purported reason for supporting old
architectures is to shake out bugs. How do the bugs get shaken out? By
exercising shared, core functionality in distinctive ways.

Idiosyncrasies such as the above are not the type of thing that helps shake
out core bugs.

So there are two ways to resolve this discrepancy: either it simply makes
more sense to shift to emulated environments for older hardware; or one of
the primary reasons also includes actually running on creaky, old
hardware--the coolness factor.

I suspect the coolness factor looms large. And there's nothing wrong with
that. OTOH, there's a strong case to be made for simply inventing crazy
architectures out of whole cloth and writing an emulator for them.



no XS_NO_CCB for vax/ncr(4) or sparc/si(4)

2014-01-17 Thread David Gwynne
can anyone compile or even test this on a sparc or vax for me?

cheers,
dlg

Index: ncr5380sbc.c
===
RCS file: /cvs/src/sys/dev/ic/ncr5380sbc.c,v
retrieving revision 1.30
diff -u -p -r1.30 ncr5380sbc.c
--- ncr5380sbc.c    17 Jul 2011 22:46:48 -  1.30
+++ ncr5380sbc.c    18 Jan 2014 01:14:57 -
@@ -88,6 +88,9 @@
 #include 
 #include 
 
+static void *  ncr5380_io_get(void *);
+static void    ncr5380_io_put(void *, void *);
+
 static void    ncr5380_sched(struct ncr5380_softc *);
 static void    ncr5380_done(struct ncr5380_softc *);
 
@@ -362,14 +365,17 @@ ncr5380_init(sc)
 
for (i = 0; i < SCI_OPENINGS; i++) {
sr = &sc->sc_ring[i];
-   sr->sr_xs = NULL;
+   sr->sr_flags = SR_FREE;
timeout_set(&sr->sr_timeout, ncr5380_cmd_timeout, sr);
}
for (i = 0; i < 8; i++)
for (j = 0; j < 8; j++)
sc->sc_matrix[i][j] = NULL;
 
+   scsi_iopool_init(&sc->sc_iopool, sc, ncr5380_io_get, ncr5380_io_put);
+
sc->sc_link.openings = 2;   /* XXX - Not SCI_OPENINGS */
+   sc->sc_link.pool = &sc->sc_iopool;
sc->sc_prevphase = PHASE_INVALID;
sc->sc_state = NCR_IDLE;
 
@@ -585,6 +591,44 @@ out:
  */
 
 
+void *
+ncr5380_io_get(void *xsc)
+{
+   struct ncr5380_softc *sc = xsc;
+   struct sci_req *sr = NULL;
+   int s;
+
+   /*
+* Find lowest empty slot in ring buffer.
+* XXX: What about "fairness" and cmd order?
+*/
+
+   s = splbio();
+   for (i = 0; i < SCI_OPENINGS; i++) {
+   if (sc->sc_ring[i].sr_flags == SR_FREE) {
+   sr = &sc->sc_ring[i];
+   sr->sr_flags = 0;
+   sc->sc_ncmds++;
+   break;
+   }
+   }
+   splx(s);
+
+   return (sr);
+}
+
+void
+ncr5380_io_put(void *xsc, void *xsr)
+{
+   struct ncr5380_softc *sc = xsc;
+   struct sci_req *sr = xsr;
+   int s;
+
+   s = splbio();
+   sr->sr_flags = SR_FREE;
+   splx(s);
+}
+
 /*
  * Enter a new SCSI command into the "issue" queue, and
  * if there is work to do, start it going.
@@ -622,22 +666,8 @@ ncr5380_scsi_cmd(xs)
}
}
 
-   /*
-* Find lowest empty slot in ring buffer.
-* XXX: What about "fairness" and cmd order?
-*/
-   for (i = 0; i < SCI_OPENINGS; i++)
-   if (sc->sc_ring[i].sr_xs == NULL)
-   goto new;
-
-   xs->error = XS_NO_CCB;
-   scsi_done(xs);
-   NCR_TRACE("scsi_cmd: no openings\n", 0);
-   goto out;
-
-new:
/* Create queue entry */
-   sr = &sc->sc_ring[i];
+   sr = xs->io;
sr->sr_xs = xs;
sr->sr_target = xs->sc_link->target;
sr->sr_lun = xs->sc_link->lun;
Index: ncr5380var.h
===
RCS file: /cvs/src/sys/dev/ic/ncr5380var.h,v
retrieving revision 1.12
diff -u -p -r1.12 ncr5380var.h
--- ncr5380var.h    25 Mar 2010 13:18:03 -  1.12
+++ ncr5380var.h    18 Jan 2014 01:14:57 -
@@ -71,6 +71,7 @@ struct sci_req {
 #define SR_SENSE    2   /* We are getting sense */
 #define SR_OVERDUE  4   /* Timeout while not current */
 #define SR_ERROR    8   /* Error occurred */
+#define SR_FREE     16  /* We are free */
int sr_status;  /* Status code from last cmd */
 
struct timeout  sr_timeout;
@@ -144,6 +145,8 @@ struct ncr5380_softc {
/* Ring buffer of pending/active requests */
struct  sci_req sc_ring[SCI_OPENINGS];
int sc_rr;  /* Round-robin scan pointer */
+
+   struct scsi_iopool sc_iopool;
 
/* Active requests, by target/LUN */
struct  sci_req *sc_matrix[8][8];



Re: lpd: race condition

2014-01-17 Thread Todd C. Miller
On Fri, 17 Jan 2014 21:49:53 +0100, Tobias Stoeckmann wrote:

> lpd wants to verify that it doesn't open a symbolic link, checking with
> lstat(), then open()ing the file.  The only reason I can see that the
> code does not simply use O_NOFOLLOW is a different return value if
> it encounters a symlink (maybe I am wrong here, would like to get feedback
> on this assumption).

All that symlink nonsense is to support "lpr -s".  If you allow a
symlink in the spool you need to make sure it still points to the
same thing that was printed with lpr.

> I suggest skipping lstat() and checking the fstat() result later on,
> in line with current lpd behaviour -- lpd will just _always_ do the
> file check.

I don't think that will work since fdev and fino are only set for
"lpr -s" as far as I can tell.  So you still need to be able to
tell whether the file you opened is a link...

 - todd



Re: Request for Funding our Electricity

2014-01-17 Thread Miod Vallat
>And it's not a full emulator if it doesn't emulate the
> bugs.

It's almost bedtime in Europe. Do you mind if I tell you a bedtime
story?

Years ago, a (back then) successful company selling high-end Unix-based
workstations, having been designing its own systems and core components
for years, started designing a new generation of workstations.

As part of their design, they created a dedicated memory controller,
which turned out to fit their hardware so well that it was reused on
four other workstation motherboard designs.

That memory controller had, among many registers, an arbitration
register, used to configure the relative priority of onboard devices, as
well as expansion slots, to acquire the data bus. Proper setting of this
register is necessary to allow on-board devices and expansion slots to
correctly perform DMA, while still allowing cache writeback to run and
whatnot.

The proper value for that register had to be decided at runtime.

The recommended logic was to rely upon the minimal initialization done
by the firmware, and then clear some bits and set some others depending
upon what on-board devices would be present on the particular
motherboard artwork, and what would be found in the various expansion
slots.

However, it turned out that, on the first few revisions of the memory
controller, reading from this particular register was not reliable at
all. Sometimes, one would read the correct value, and sometimes, one
would read a completely wrong value, depending upon the recent activity
occurring on the data bus.

The hardware engineers could not figure out what exactly caused this.
Most importantly, they could not figure out a reliable workaround to get
the correct value out of this register.

So they asked the software guys for help. And the company's homemade
SVR4-based Unix grew a complex logic to decide, once and for all, which
value to write to the register, without having to rely upon the previous
value. And they told the hardware guys that it was ok not to worry about
this issue anymore.

OpenBSD runs on these systems, but we are not lucky enough to have all
the necessary hardware documentation, and, for some of the bits in this
register, we simply don't know when to set them, and when not to set
them. Instead, the OpenBSD kernel still reads that register, several
times, and has an ugly heuristic to decide when the value read is
likely to be correct. And then we only flip the bits we know for certain
we can tinker with. It's the best we can do.

Assuming someone would write an emulator for that particular system:
- if the ``unreliable read'' behaviour is not emulated, according to
  your logic, it's a bug in the emulator, which has to be fixed.
- if the behaviour is emulated, how can we know it is correctly
  emulated, since even the designers of the chip did not spend enough
  time tracking down the exact conditions leading to the misbehaviour
  (and which bogus value would be put on the data bus).

You may argue that, since the kernel has a workaround for this issue,
this is a moot point. But if some developer has a better idea for the
kernel heuristic, how can the new code be tested, if not on the real
hardware?

Miod



sgi/mvme68k tests for sbic(4)

2014-01-17 Thread David Gwynne
this gets rid of XS_NO_CCB in sbic(4) by moving to iopools. i dont
have an arch that uses this so i mostly want compile testers, but
if someone actually has the hardware that would be great.

this change is mildly interesting because it demonstrates the
flexibility of iopools at sharing resources between consumers. the
command resources are shared at the kernel level, rather than on a
controller or for a target like you see on most hardware. iopools
will guarantee fair access to the command descriptors across all
controllers.
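
As a rough illustration of that sharing model (a toy sketch only, with
hypothetical names and a pthread mutex standing in for splbio(), not the
real scsi_iopool(9) code), an io_get/io_put pair handing out slots from a
fixed ring looks like this:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical request slot, loosely modelled on the sci_req ring below. */
#define NSLOTS		4
#define SLOT_FREE	0x10

struct toy_req {
	int	flags;
};

static struct toy_req	ring[NSLOTS];
static pthread_mutex_t	ring_mtx = PTHREAD_MUTEX_INITIALIZER;

/* io_get: hand out the lowest free slot, or NULL if the ring is full. */
static void *
toy_io_get(void *cookie)
{
	struct toy_req *sr = NULL;
	int i;

	pthread_mutex_lock(&ring_mtx);		/* stands in for splbio() */
	for (i = 0; i < NSLOTS; i++) {
		if (ring[i].flags == SLOT_FREE) {
			sr = &ring[i];
			sr->flags = 0;
			break;
		}
	}
	pthread_mutex_unlock(&ring_mtx);
	return sr;
}

/* io_put: mark the slot free again so the next consumer can reuse it. */
static void
toy_io_put(void *cookie, void *io)
{
	struct toy_req *sr = io;

	pthread_mutex_lock(&ring_mtx);
	sr->flags = SLOT_FREE;
	pthread_mutex_unlock(&ring_mtx);
}

int
main(void)
{
	void *a, *b;
	int i;

	for (i = 0; i < NSLOTS; i++)
		ring[i].flags = SLOT_FREE;

	a = toy_io_get(NULL);
	b = toy_io_get(NULL);
	printf("got %p and %p\n", a, b);
	toy_io_put(NULL, b);
	toy_io_put(NULL, a);
	return 0;
}

In the diff below, scsi_iopool_init() registers the driver's own get/put
pair with the midlayer; iopools then hand out the command slots to every
consumer, which is what removes the need for the XS_NO_CCB error path.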

ok?

Index: wd33c93.c
===
RCS file: /cvs/src/sys/dev/ic/wd33c93.c,v
retrieving revision 1.4
diff -u -p -r1.4 wd33c93.c
--- wd33c93.c   2 Jul 2012 18:17:43 -   1.4
+++ wd33c93.c   17 Jan 2014 23:06:20 -
@@ -142,8 +142,12 @@ u_char wd33c93_stp2syn(struct wd33c93_so
 void   wd33c93_setsync(struct wd33c93_softc *, struct wd33c93_tinfo *);
 
 struct pool wd33c93_pool;  /* Adapter Control Blocks */
+struct scsi_iopool wd33c93_iopool;
 int wd33c93_pool_initialized = 0;
 
+void * wd33c93_io_get(void *);
+void   wd33c93_io_put(void *, void *);
+
 /*
  * Timeouts
  */
@@ -204,6 +208,7 @@ wd33c93_attach(struct wd33c93_softc *sc,
sc->sc_link.adapter = adapter;
sc->sc_link.openings = 2;
sc->sc_link.luns = SBIC_NLUN;
+   sc->sc_link.pool = &wd33c93_iopool;
 
bzero(&saa, sizeof(saa));
saa.saa_sc_link = &sc->sc_link;
@@ -226,6 +231,9 @@ wd33c93_init(struct wd33c93_softc *sc)
/* All instances share the same pool */
pool_init(&wd33c93_pool, sizeof(struct wd33c93_acb), 0, 0, 0,
"wd33c93_acb", NULL);
+   pool_setipl(&wd33c93_pool, IPL_BIO);
+   scsi_iopool_init(&wd33c93_iopool, NULL,
+   wd33c93_io_get, wd33c93_io_put);
++wd33c93_pool_initialized;
}
 
@@ -574,17 +582,8 @@ wd33c93_scsi_cmd(struct scsi_xfer *xs)
if (sc->sc_nexus && (flags & SCSI_POLL))
panic("wd33c93_scsicmd: busy");
 
-   s = splbio();
-   acb = (struct wd33c93_acb *)pool_get(&wd33c93_pool,
-   PR_NOWAIT | PR_ZERO);
-   splx(s);
-
-   if (acb == NULL) {
-   xs->error = XS_NO_CCB;
-   scsi_done(xs);
-   return;
-   }
-
+   acb = xs->io;
+   memset(acb, 0, sizeof(*acb));
acb->flags = ACB_ACTIVE;
acb->xs = xs;
acb->timeout = xs->timeout;
@@ -823,14 +822,6 @@ wd33c93_scsidone(struct wd33c93_softc *s
wd33c93_sched(sc);
}
 
-   /* place control block back on free list. */
-   if ((xs->flags & SCSI_POLL) == 0) {
-   s = splbio();
-   acb->flags = ACB_FREE;
-   pool_put(&wd33c93_pool, acb);
-   splx(s);
-   }
-
scsi_done(xs);
 }
 
@@ -1436,11 +1427,6 @@ wd33c93_poll(struct wd33c93_softc *sc, s
}
 
if ((xs->flags & ITSDONE) != 0) {
-   s = splbio();
-   acb->flags = ACB_FREE;
-   pool_put(&wd33c93_pool, acb);
-   splx(s);
-
return (0);
}
 
@@ -2298,6 +2284,17 @@ wd33c93_watchdog(void *arg)
timeout_add_sec(&sc->sc_watchdog, 60);
 }
 
+void *
+wd33c93_io_get(void *null)
+{
+   return (pool_get(&wd33c93_pool, PR_NOWAIT));
+}
+
+void
+wd33c93_io_put(void *null, void *io)
+{
+   pool_put(&wd33c93_pool, io);
+}
 
 #ifdef SBICDEBUG
 void



Re: Request for Funding our Electricity

2014-01-17 Thread Theo de Raadt
> Regarding the "less architecture support to save electricity"
> argument, I'm not sure one follows the other. Computing power has
> grown to a point that emulators are perfectly valid - particularly for
> older systems.

And that is based upon real experience you have with the emulators?

I rather doubt it.  I believe you are spouting.




Re: Request for Funding our Electricity

2014-01-17 Thread Theo de Raadt
>That's a bug to be filed against an emulator. And it's easier to do
>that *now* when the older hardware is around to test for bug
>compatibility. And it's not a full emulator if it doesn't emulate the
>bugs.

We are an operating system project.  We have a full set of tasks ahead
of ourselves.  We are not people writing or improving emulators.

In our experience, all of them are subtly erroneous in their behavior.
At best.  Members of our group have experience with just about all of
them.

> And I must admit the resistance to this is weird.

I am going to make a guess here.  You've never relied on the emulators
yourselves.  Yet you are acting like a know-it-all.  You sure have advice
for us, don't you.

You feel you can tell a group with our success what processes we are
supposed to move to.  You are very out of place.

Imagine you told us a lot about your life, and we gave you advice.




Re: Request for Funding our Electricity

2014-01-17 Thread Kevin Lyda
On Fri, Jan 17, 2014 at 8:23 PM, Christopher Ahrens  wrote:
> *Instructions are executed as they should be, not how they actually work

That's a bug to be filed against an emulator. And it's easier to do
that *now* when the older hardware is around to test for bug
compatibility. And it's not a full emulator if it doesn't emulate the
bugs.

> *instructions will, at best, take two instructions on the host if
>  the architectures and endianness match; if not:
>   The instruction has to be matched against a lookup table and if there
>   is a single equivalent instruction to do the same thing and you have
>   the same endianness, that is three processor cycles.  If it's
>   different endianness, then you now have between 32 and 128 more
>   instructions (convert to the host endianness then back for 16 to
>   64-bit archs)

All true, but kind of meaningless for faster newer machines. Following
Moore's law, a current machine is likely at least 256 times faster
than a 12 year old machine. And nearly every older architecture has a
machine that is 12 years old.

If you support older architectures for the full lifespan of that arch,
you're going to get to a point where all the hardware versions of that
machine are out of production. You'll eventually have a choice between an
emulator or nothing. The last machine of arch X running OpenBSD will
not be running on the OpenBSD Foundation racks.

And note I'm talking about emulators, not architecture optimised
virtual machines. They're probably not ideal for coding device drivers
(and even that's not completely true), but they're fine for doing
userland and higher level kernel development. You'll find endianness,
alignment, cross-arch pointer and int/float size bugs with an emulator
just as easily as you can with hardware.

The two remote bugs that were found in OpenBSD were both ones that
were high enough up the stack that they could be debugged / hacked at
on an emulator.  And as machines get faster/cheaper you'd have the
option of running a small network and running network fuzz testing within
a single machine.

And I must admit the resistance to this is weird. My point was that
the "use less electricity means less ports" argument was wrong. That
emulators provide a way forward with all architectures that
*increases* developer interest (unlike removing them, which reduces it).
I'm not saying switch to all emulators all the time for all
development *today*, I'm saying think about going that direction now
when it's easier (hw bug compatibility testing, etc).

It's a lot easier to ask for $X/year if there's a plan for X to reduce.

Emulators are hardly some radical view - this is exactly what OpenBSD
supports and advertises for the oldest hardware it supports. Am I
really saying something new by pointing out to all older archs, "this
is your future"?

> Please continue to do this.  Cash, code and correct docs help OpenBSD,
> dreaming doesn't.

Yelling at the forward march of time doesn't help either. Diodes don't
live forever.

Kevin

-- 
Kevin Lyda
Galway, Ireland
US Citizen overseas? We can vote.
Register now: http://www.votefromabroad.org/



Re: Request for Funding our Electricity

2014-01-17 Thread Christopher Ahrens

Kevin Lyda wrote:

Regarding the "less architecture support to save electricity"
argument, I'm not sure one follows the other. Computing power has
grown to a point that emulators are perfectly valid - particularly for
older systems.

I think a push to package and maintain emulators for many of these
older architectures would be beneficial in many ways. There's some
amount of this already - there are instructions for the simh simulator
for the VAX arch for instance. The obvious benefits I could see would
be:

1) You could spin up builds on them w/ little to no effect on electricity usage.
2) Even if the OpenBSD foundation's arch X machine dies, there would
still be infrastructure to maintain the port.
3) It would widen the pool of possible developers if people could
spin up older architectures in an emulator.
4) It would make OpenBSD a valuable tool for accessing older media and
documenting older architectures.

I know emulators are not perfect, so a physical machine would be
superior.  But if there was some encouragement for emulators for archs
I think those would be useful benefits.




Even if emulators did work, you still have a couple of problems:

*Instructions are executed as they should be, not how they actually work
*instructions will, at best, take two instructions on the host if
 the architectures and endianness match; if not:
  The instruction has to be matched against a lookup table and if there
  is a single equivalent instruction to do the same thing and you have
  the same endianness, that is three processor cycles.  If it's
  different endianness, then you now have between 32 and 128 more
  instructions (convert to the host endianness then back for 16 to
  64-bit archs)
  Now if there isn't an equivalent instruction (welcome to the
  difference between CISC and RISC machines) you are probably going to
  have to run two all the way up to a couple dozen instructions to
  emulate just one, plus you still have the same problem with
  endianness as before
*assuming all the above works, you are still tripling the effort in
 debugging because now you have to determine if the bug is in the
 emulated environment, the emulator itself, or the host OS.
*Even if the above still works perfectly, you will still miss all the
 bugs caused by memory alignment (the host will fix any of that), which
 are among the most common we find, or the host ends up adding new ones.
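
As a toy illustration of the endianness point in the list above (an
assumption-laden sketch, not a measurement of any real emulator), loading a
single 32-bit big-endian guest word on a little-endian host already turns
one native load into four byte reads plus shifts:

#include <stdint.h>
#include <stdio.h>

/* Load a 32-bit big-endian guest value on a little-endian host:
 * four byte reads plus shifts instead of a single native load. */
static uint32_t
load_be32(const uint8_t *guest_mem)
{
	return ((uint32_t)guest_mem[0] << 24) |
	       ((uint32_t)guest_mem[1] << 16) |
	       ((uint32_t)guest_mem[2] << 8)  |
	        (uint32_t)guest_mem[3];
}

int
main(void)
{
	uint8_t mem[4] = { 0xde, 0xad, 0xbe, 0xef };

	printf("guest word: 0x%08x\n", (unsigned)load_be32(mem));
	return 0;
}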

But all this is ignoring the real purpose of running on real hardware,
which is that the same code runs on all the boxes, so if one of them
outputs something different from the other machines, we know something
is wrong.

The only way to reduce our power for the older archs is if someone were
able to re-build the entire system on more power-efficient,
bug-compatible chips.


Support for multiple archs brings interest and exposes bad code in
ways limited arch support does not.


Exactly


Dropping that to save electricity
is not a valid reason with today's compute power.

Anyway, it's been a long time since I did stuff with OpenBSD, but I
think it would be a shame to drop such support. So I'll back up my
words with some cash.  And if I get a roundtuit, perhaps some code or
docs as well.


Please continue to do this.  Cash, code and correct docs help OpenBSD,
dreaming doesn't.



Kevin




And now to paraphrase Theo:
Shut up, donate, and hack.



lpd: race condition

2014-01-17 Thread Tobias Stoeckmann
Hi,

lpd wants to verify that it doesn't open a symbolic link, checking with
lstat(), then open()ing the file.  The only reason I can see that the
code does not simply use O_NOFOLLOW is a different return value if
it encounters a symlink (maybe I am wrong here, would like to get feedback
on this assumption).

Either way, an attacker could create a regular file, wait for the
lstat() to happen, and replace it with a symlink right after the call,
before open() is called.

I suggest skipping lstat() and checking the fstat() result later on,
in line with current lpd behaviour -- lpd will just _always_ do the
file check.

Also, don't assume that everything is alright if fstat() fails.
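
A minimal userland sketch of that open-then-fstat ordering (using plain
open(2) rather than lpd's safe_open(), and with the recorded fdev/fino
faked in main() purely for illustration; the real code also runs between
PRIV_START/PRIV_END):

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Open a spool data file and verify, on the opened descriptor, that it
 * is still the exact file recorded earlier (fdev/fino).  Checking after
 * open() leaves no window for a symlink swap between check and use.
 */
static int
open_checked(const char *file, dev_t fdev, ino_t fino)
{
	struct stat stb;
	int fd;

	if ((fd = open(file, O_RDONLY)) < 0)
		return -1;				/* ERROR */
	if (fstat(fd, &stb) != 0 ||
	    stb.st_dev != fdev || stb.st_ino != fino) {
		close(fd);
		return -2;				/* ACCESS */
	}
	return fd;
}

int
main(int argc, char *argv[])
{
	struct stat stb;
	int fd;

	if (argc < 2 || stat(argv[1], &stb) != 0)
		return 1;
	/* Pretend these came from the control file written at lpr time. */
	fd = open_checked(argv[1], stb.st_dev, stb.st_ino);
	printf("open_checked: %d\n", fd);
	if (fd >= 0)
		close(fd);
	return 0;
}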


Tobias

Index: printjob.c
===
RCS file: /var/www/cvs/src/usr.sbin/lpr/lpd/printjob.c,v
retrieving revision 1.49
diff -u -p -r1.49 printjob.c
--- printjob.c  10 Dec 2013 16:38:04 -  1.49
+++ printjob.c  17 Jan 2014 20:35:01 -
@@ -539,17 +539,16 @@ print(int format, char *file)
int n, fi, fo, p[2], stopped = 0, nofile;
 
PRIV_START;
-   if (lstat(file, &stb) < 0 || (fi = safe_open(file, O_RDONLY, 0)) < 0) {
+   if ((fi = safe_open(file, O_RDONLY, 0)) < 0) {
PRIV_END;
return(ERROR);
}
PRIV_END;
/*
-* Check to see if data file is a symbolic link. If so, it should
-* still point to the same file or someone is trying to print
-* something he shouldn't.
+* Check if expected file was opened. If not, someone is
+* trying to print something he shouldn't.
 */
-   if (S_ISLNK(stb.st_mode) && fstat(fi, &stb) == 0 &&
+   if (fstat(fi, &stb) != 0 ||
(stb.st_dev != fdev || stb.st_ino != fino))
return(ACCESS);
if (!SF && !tof) {  /* start on a fresh page */
@@ -876,18 +875,16 @@ sendfile(int type, char *file)
int sizerr, resp;
 
PRIV_START;
-   if (lstat(file, &stb) < 0 || (f = safe_open(file, O_RDONLY, 0)) < 0) {
+   if ((f = safe_open(file, O_RDONLY, 0)) < 0) {
PRIV_END;
return(ERROR);
}
PRIV_END;
/*
-* Check to see if data file is a symbolic link. If so, it should
-* still point to the same file or someone is trying to print something
-* he shouldn't.
+* Check if expected file was opened. If not, someone is
+* trying to print something he shouldn't.
 */
-   if (S_ISLNK(stb.st_mode) && fstat(f, &stb) == 0 &&
-   (stb.st_dev != fdev || stb.st_ino != fino))
+   if (fstat(f, &stb) != 0 || (stb.st_dev != fdev || stb.st_ino != fino))
return(ACCESS);
if ((amt = snprintf(buf, sizeof(buf), "%c%lld %s\n", type,
(long long)stb.st_size, file)) >= sizeof(buf) || amt == -1)



Re: signed packages

2014-01-17 Thread sven falempin
On Fri, Jan 17, 2014 at 12:28 PM, Marc Espie  wrote:
>
> On Fri, Jan 17, 2014 at 06:23:53PM +0100, Marc Espie wrote:
> > On Fri, Jan 17, 2014 at 12:09:31PM -0500, sven falempin wrote:
> > >
> > >Awesome.
> > > * the one on the client openBSD
> > > * the one on the builder
> > >is there a new make command in ports to sign ? like make sign ? make
> > >resign ?
> >
> > See signify(1), pkg_add(1), pkg_create(1), bsd.port.mk(5) (look for
> > SIGNING_PARAMETERS).
> >
> > Packages can be signed during build, or later.
> > There's no new command, pkg_create(1) is used for creating signed
packages.
>
> Note that things are still WILDLY changing.  I assume that by now,
> lots of people have noticed the signed stuff.   This is still a moving
> target (working quite well IMO).



I read the manuals, and well, I am still unsure:

if I put SIGNER=bob in the package configuration,

then it will be signed with

/etc/signify/bob.sec

Having to read 4 different manual pages to get this is strange :p

--
-
() ascii ribbon campaign - against html e-mail
/\


Re: signed packages

2014-01-17 Thread Marc Espie
On Fri, Jan 17, 2014 at 06:23:53PM +0100, Marc Espie wrote:
> On Fri, Jan 17, 2014 at 12:09:31PM -0500, sven falempin wrote:
> > 
> >Awesome.
> > * the one on the client openBSD
> > * the one on the builder
> >is there a new make command in ports to sign ? like make sign ? make
> >resign ?
> 
> See signify(1), pkg_add(1), pkg_create(1), bsd.port.mk(5) (look for
> SIGNING_PARAMETERS).
> 
> Packages can be signed during build, or later.
> There's no new command, pkg_create(1) is used for creating signed packages.

Note that things are still WILDLY changing.  I assume that by now,
lots of people have noticed the signed stuff.   This is still a moving
target (working quite well IMO).



Re: signed packages

2014-01-17 Thread Marc Espie
On Fri, Jan 17, 2014 at 12:09:31PM -0500, sven falempin wrote:
> 
>Awesome.
> * the one on the client openBSD
> * the one on the builder
>is there a new make command in ports to sign ? like make sign ? make
>resign ?

See signify(1), pkg_add(1), pkg_create(1), bsd.port.mk(5) (look for
SIGNING_PARAMETERS).

Packages can be signed during build, or later.
There's no new command, pkg_create(1) is used for creating signed packages.



Re: signed packages

2014-01-17 Thread sven falempin
Awesome.

To keep OUR control, one shall either create an FTP mirror, re-sign all
packages and update the key, or generate packages and sign them with his
own key, and also update the one on his openBSD client.

Where are those keys?
 * the one on the client openBSD
 * the one on the builder

Is there a new make command in ports to sign? Like make sign? make resign?

+


On Fri, Jan 17, 2014 at 6:26 AM, Marc Espie  wrote:

> It's probably time to talk about it.
>
> Yes, we are now distributing signed packages.  A lot of people have
> probably
> noticed because there was a key mismatch on at least one batch of signed
> packages.
>
> Obviously, we haven't finished testing yet.
>
> Don't read too much into that.  "Signed packages" just mean you can use
> an insecure medium, such as ftp, to download packages: if the key matches,
> it means the package hasn't been tampered with since it was signed.
>
> The cryptographic framework used to sign packages is called signify(1),
> mostly written by Ted Unangst, with a lot of feedback from (mostly) Theo
> and I.
>
> The signing framework in pkg_add/pkg_create is much older than that; it
> was written for x509 a few years ago, but signify(1) will probably be more
> robust and way simpler.  In particular, there's no "chain-of-trust", so
> you keep complete control over the sources YOU trust.
>
> Signatures should be transparent in use: the package is opened, the
> packing-list signature is checked, and then files are checksummed while
> extracted against the packing-list embedded checksums (there are provisions
> to ensure any dangerous meta-data is also encoded in the packing-list as
> @mode/@user/@group annotations).
>
> So, barring problems, you shouldn't even notice signatures.
>
>


-- 
-
() ascii ribbon campaign - against html e-mail
/\


Re: Request for Funding our Electricity

2014-01-17 Thread Gregory Edigarov

On 01/17/2014 06:08 PM, Kevin Lyda wrote:

Regarding the "less architecture support to save electricity"
argument, I'm not sure one follows the other. Computing power has
grown to a point that emulators are perfectly valid - particularly for
older systems.


You still seem like you do not understand the issue and why they need to use
hardware in the first place.
So follow my hands and read my lips, I will explain it slowly, and sorry, Theo,
if what I say sounds like I am repeating you.

Support for different archs, even non-mainstream ones, helps developers provide
more bug-free code on i386, amd64, and the other "mainstream" architectures,
just because some code errors are more visible on those "non-mainstream"
architectures.

Virtualization is absolutely not an option here, because those -- let's call
them "debug" -- architectures should run on real hardware to be able to
check the code.

So, having OpenBSD running on as many archs as possible helps developers
provide us, the users, with much cleaner and much more bug-free code.

 


--
With best regards,
 Gregory Edigarov



Re: Request for Funding our Electricity

2014-01-17 Thread Kevin Lyda
Regarding the "less architecture support to save electricity"
argument, I'm not sure one follows the other. Computing power has
grown to a point that emulators are perfectly valid - particularly for
older systems.

I think a push to package and maintain emulators for many of these
older architectures would be beneficial in many ways. There's some
amount of this already - there are instructions for the simh simulator
for the VAX arch for instance. The obvious benefits I could see would
be:

1) You could spin up builds on them w/ little to no effect on electricity usage.
2) Even if the OpenBSD foundation's arch X machine dies, there would
still be infrastructure to maintain the port.
3) It would widen the pool of possible developers if people could
spin up older architectures in an emulator.
4) It would make OpenBSD a valuable tool for accessing older media and
documenting older architectures.

I know emulators are not perfect, so a physical machine would be
superior.  But if there was some encouragement for emulators for archs
I think those would be useful benefits.

Support for multiple archs brings interest and exposes bad code in
ways limited arch support does not. Dropping that to save electricity
is not a valid reason with today's compute power.

Anyway, it's been a long time since I did stuff with OpenBSD, but I
think it would be a shame to drop such support. So I'll back up my
words with some cash.  And if I get a roundtuit, perhaps some code or
docs as well.

Kevin

-- 
Kevin Lyda
Galway, Ireland
US Citizen overseas? We can vote.
Register now: http://www.votefromabroad.org/



Re: pkg_add (pkg.conf): option to require signed packages

2014-01-17 Thread Marc Espie
On Fri, Jan 17, 2014 at 09:59:03AM +0100, Sébastien Marie wrote:
> On Thu, Jan 16, 2014 at 10:02:22AM +, Stuart Henderson wrote:
> > On 2014/01/16 08:53, Sébastien Marie wrote:
> > > Hi,
> > > 
> > > Does it make sense to have an option to require packages to be signed?
> > 
> > It makes more sense to just enable that by default, when we are happy
> > with the infrastructure and usage.
> > 
> 
> I saw "enable by default" more as a long-term goal. The patch would
> permit testing it easily...

Enable by default is trivial to do. Look around the code that says
"check_signature" in OpenBSD/PkgAdd.pm, I'm sure you can figure out the
change.



signed packages

2014-01-17 Thread Marc Espie
It's probably time to talk about it.

Yes, we are now distributing signed packages.  A lot of people have probably
noticed because there was a key mismatch on at least one batch of signed
packages.

Obviously, we haven't finished testing yet.

Don't read too much into that.  "Signed packages" just mean you can use
an insecure medium, such as ftp, to download packages: if the key matches,
it means the package hasn't been tampered with since it was signed.

The cryptographic framework used to sign packages is called signify(1),
mostly written by Ted Unangst, with a lot of feedback from (mostly) Theo
and I.

The signing framework in pkg_add/pkg_create is much older than that; it
was written for x509 a few years ago, but signify(1) will probably be more
robust and way simpler.  In particular, there's no "chain-of-trust", so
you keep complete control over the sources YOU trust.

Signatures should be transparent in use: the package is opened, the 
packing-list signature is checked, and then files are checksummed while
extracted against the packing-list embedded checksums (there are provisions
to ensure any dangerous meta-data is also encoded in the packing-list as
@mode/@user/@group annotations).

So, barring problems, you shouldn't even notice signatures.
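
As a rough illustration of that verify-while-extracting flow (a toy sketch
only: a trivial rolling sum stands in for the real signify(1) signature and
the packing-list checksums, and the file list is hard-coded):

#include <stdio.h>
#include <string.h>

/* Toy stand-in for a cryptographic digest: NOT what pkg_add really uses. */
static unsigned long
toy_digest(const char *data, size_t len)
{
	unsigned long sum = 5381;
	size_t i;

	for (i = 0; i < len; i++)
		sum = sum * 33 + (unsigned char)data[i];
	return sum;
}

struct plist_entry {
	const char	*name;
	const char	*contents;	/* stands in for the extracted bytes */
	unsigned long	 digest;	/* recorded in the signed packing-list */
};

int
main(void)
{
	/* Pretend the packing-list signature itself was already checked. */
	struct plist_entry plist[] = {
		{ "bin/hello",   "hello world\n", 0 },
		{ "man/hello.1", "manual page\n", 0 },
	};
	size_t i, n = sizeof(plist) / sizeof(plist[0]);

	/* Package build time: record each file's digest in the packing-list. */
	for (i = 0; i < n; i++)
		plist[i].digest = toy_digest(plist[i].contents,
		    strlen(plist[i].contents));

	/* Extract time: re-checksum each file as it is unpacked. */
	for (i = 0; i < n; i++) {
		unsigned long d = toy_digest(plist[i].contents,
		    strlen(plist[i].contents));
		printf("%s: %s\n", plist[i].name,
		    d == plist[i].digest ? "ok" : "MISMATCH");
	}
	return 0;
}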



Re: pkg_add (pkg.conf): option to require signed packages

2014-01-17 Thread Sébastien Marie
On Thu, Jan 16, 2014 at 10:02:22AM +, Stuart Henderson wrote:
> On 2014/01/16 08:53, Sébastien Marie wrote:
> > Hi,
> > 
> > Does it make sense to have an option to require packages to be signed?
> 
> It makes more sense to just enable that by default, when we are happy
> with the infrastructure and usage.
> 

I saw "enable by default" more as a long-term goal. The patch would
permit testing it easily...

But I am confident about your choices. 
Thanks.
-- 
Sébastien Marie