Re: Proposal: Deprecate (or rename) extsrc/

2022-01-08 Thread Alistair Crooks
On Sat, 8 Jan 2022 at 00:29, matthew green  wrote:

> > I propose that we deprecate or remove the "extsrc/" tree,
> > as its name conflicts with "external/" under name completion.
>
> yes, please.
>
>
Count me in as well - the name completion collision has always annoyed me.

Thanks,
Al


Re: style change: explicitly permit braces for single statements

2020-07-18 Thread Alistair Crooks
Just to get back to the original subject - I fully support {} around
single statements - I have been doing that in my own code for ages.

It would be great to have that codified (ha!)

On Thu, 16 Jul 2020 at 12:01, Rhialto  wrote:

> On Thu 16 Jul 2020 at 13:08:49 -0400, Ted Lemon wrote:
> > It sounds like we need a better tool.  FWIW, when actually working on
> > code, I've found that 120 is a better width than 80 -- with 80, there are
> > just too many line breaks.  But I don't mean to say that your
> > preference is wrong -- what sucks is that we have to compromise, instead
> > of having tools that present the text the way that we want to consume
> > it without changing anything in the underlying file. E.g. web browsers
> > just reflow the text when you change the window width. Why don't we
> > have this for code editors?
>
> I have seen an editor (I think it was google's Android development
> environment) that even went so far as to recognize some particular
> boilerplate Java code fragments, and abbreviated them. You could unfold
> them if you wanted though.
>
> I wasn't sure if I liked that or hated it.
>
> -Olaf.
> --
> Olaf 'Rhialto' Seibert -- rhialto at falu dot nl
> ___  Anyone who is capable of getting themselves made President should on
> \X/  no account be allowed to do the job.   --Douglas Adams, "THGTTG"
>


Re: To test presence of CVE-2018-6922 ( TCP vulnerability) in NetBSD5.1

2018-08-11 Thread Alistair Crooks
On Fri, 10 Aug 2018 at 08:01, Ripunjay Tripathi wrote:

> Thanks for the link.
>
> On Fri, Aug 10, 2018 at 3:19 PM Maxime Villard  wrote:
>
>> Le 10/08/2018 à 11:18, Ripunjay Tripathi a écrit :
>> > I am trying to test presence of CVE-2018-6922 [...]
>>
>> NetBSD 5 is not supported anymore, and NetBSD 6 is about to reach EOL. So
>> there is no way this is ever going to be fixed in NetBSD 5.
>>
> I know that. I am interested in understanding if someone has already
> tested its presence, OR could help me in my attempts to reproduce this.
> I also need to fix this, therefore wanted to be sure if my understanding
> of the tcp_input() code is correct.
>

I think you are mistaken - there is no need for a fix - see yesterday's
conversation on tech-net, as maxv mentioned, and this from 14 years ago:

https://mail-index.netbsd.org/netbsd-announce/2004/03/04/.html

I know the code in question is opaque, but its effects should be obvious
when running the exploit code.

Regards,
Alistair

PS. CERT-CC were informed that NetBSD was not affected in advance of
publication, but haven't updated their list of vendors to include that.


Re: Generic crc32c support in kernel?

2016-08-13 Thread Alistair Crooks
I think we're talking at cross-purposes.

The zlib function calculates crc32 and was the cause of fun in the
bootblocks; Jaromir was talking about adding the crc32c hashes, used in
ext2fs, iscsi and sctp.

On 12 August 2016 at 23:38, Martin Husemann  wrote:

> On Fri, Aug 12, 2016 at 11:30:15PM +0200, Joerg Sonnenberger wrote:
> > Can we please stop with that bogus myth? The crc32 was only a very small
> > part of the issue and the real issue, the seeking in the kernel image,
> > is still not resolved.
>
> True, avoiding the seeking would cut down the number of crc32
> calculations significantly - but still the time difference is remarkable.
>
> Martin
>
>


Re: Generic crc32c support in kernel?

2016-08-12 Thread Alistair Crooks
Yeah, a lib would be the best place for this - ext2fs, iscsi and sctp
all use crc32c, and there will probably be more users. It would also be
good to get this shared with userland via src/common.

On 12 August 2016 at 12:24, Jaromír Doleček wrote:

> Hi,
>
> I'm working on ext2fs code now, ext3/ext4 uses crc32c almost exclusively
> to checksum many filesystem structures, so I'll need this soon to write
> data properly and thus support generally used filesystem options.
>
> We do have crc32 support in libkern, so I was thinking of doing
> something along those lines.
>
> I noticed we have something in sctp_crc32.c, also there is another
> implementation in iscsi code. I also found FreeBSD has consolidated and
> optimized this a bit, they use one shared implementation for both places.
> Even there, it's just C code - contrary to Linux, which also has code to
> use the Intel CPU instructions if available. So I've been thinking I could
> either extract this into some shared code, or just create a third fork
> within the kernel specially for ext2fs.
>
> What is the general opinion - is there any interest in having crc32c
> support consolidated into a lib, and should I put some effort into making
> this shared code? It adds some kernel bloat, so I'm slightly reluctant.
>
> Jaromir
>


Re: FWIW: sysrestrict

2016-07-23 Thread Alistair Crooks
ISTM that your sysrestrict suffers from one of the same drawbacks as
pledge/tame/name-du-jour - the restrictions are burned into the
binary at compile/link time. That might be fine for system binaries
that are built locally (though some people download distributions from
the project servers) - but what about anything more than the basics,
like an apache with loadable modules? How do you specify the modular
restrictions? How do we make it so that an apache binary can
successfully have its restriction set "expanded" to allow modules to
do their job, when that is exactly what sysrestrict is trying to prevent?

I'd be much happier with a variant of seccomp-bpf, or even using lua
to do the same job (if it was performant, JIT-enabled and safe to do
such a thing, I expect not :().

My main problem is that simply outlawing system calls is a very
coarse-grained hammer. I may want a binary to be able to open files
for writing in /tmp, but not open any files in /etc for writing. Or
reading files in my home directory, except for anything in ~/.ssh or
~/.gnupg. How does sysrestrict cope with this?

Thanks,
Alistair

On 23 July 2016 at 14:50, Paul Goyette  wrote:
> I would assume that the checking of syscall restrictions would be done
> within the kauth(9) framework?
>
>
> On Sat, 23 Jul 2016, Maxime Villard wrote:
>
>> Eight months ago, I shared with a few developers the code for a kernel
>> interface [1] that can disable syscalls in user processes.
>>
>> The idea is the following: a syscall bitmap is embedded into the ELF
>> binary
>> itself (in a note section, like PaX), and each time the binary performs a
>> syscall, the kernel checks whether the syscall in question is allowed in
>> the bitmap.
>>
>> In details:
>> - the ELF section is a bitmap of 64 bytes, which means 512 bits, the
>>   number of syscalls. 0 means allowed, 1 means restricted.
>> - in the proc structure, 64 bytes are present, just a copy of the
>>   ELF section.
>> - when a syscall is performed, the kernel calls sysrestrict_enforce
>>   with the proc structure and the syscall number, and gives a look
>>   at the bitmap to make sure it is allowed. If it isn't, the process
>>   is killed.
>> - a new syscall is added, sysrestrict, so that programs can restrict
>>   a syscall at runtime. This might be useful, particularly if a
>>   program calls a syscall once and wants to make sure it is not
>>   allowed any longer.
>> - a userland tool (that I didn't write) can add and update such an ELF
>>   section in the binary.
>>
>> This interface has the following advantages over most already-existing
>> implementations:
>> - it is system-independent, it could almost be copied as-is in FreeBSD.
>> - it is syscall-independent, we don't need to patch each syscall.
>> - it does not require binaries to be recompiled.
>> - the performance cost is low, if not non-existent.
>>
>> I've never tested this code. But in case it inspires or motivates someone.
>>
>> [1] http://m00nbsd.net/garbage/sysrestrict/
>>
>>
>>
>
> +------------------+--------------------------+------------------------+
> | Paul Goyette     | PGP Key fingerprint:     | E-mail addresses:      |
> | (Retired)        | FA29 0E3B 35AF E8AE 6651 | paul at whooppee.com   |
> | Kernel Developer | 0786 F758 55DE 53BA 7731 | pgoyette at netbsd.org |
> +------------------+--------------------------+------------------------+
>


Re: Lightweight support for instruction RNGs

2015-12-22 Thread Alistair Crooks
Yeah, we keep coming back to this assumption that nothing can go
wrong with the random output in userland because it is passed through
whitening filters on its way. The analysis of the recent Juniper
backdoor should give you an idea of why relying on that kind of
reasoning is unsound - Juniper had multiple levels of whitening in
their product, and have still had to go through a particularly
embarrassing episode. It's worrying to me that we've had to bolt the
stable door once already in this area.

http://www.wired.com/2015/12/researchers-solve-the-juniper-mystery-and-they-say-its-partially-the-nsas-fault


On 21 December 2015 at 16:38, Thor Lancelot Simon <t...@panix.com> wrote:
> On Mon, Dec 21, 2015 at 09:28:40AM -0800, Alistair Crooks wrote:
>> I think there's some disconnect here, since we're obviously talking
>> past each other.
>>
>> My concern is the output from the random devices into userland. I
>
> Yes, then we're clearly talking past each other.  The "output from the
> random devices into userland" is generated using the NIST SP800-90
> CTR_DRBG.  You could key it with all-zeroes and the statistical properties
> of the output would differ in no detectable way* from what you got if
> you keyed it with pure quantum noise.
>
> If you want to run statistical tests that mean anything, you need to
> feed them input from somewhere else.  Feeding them the output of the
> CTR_DRBG can be nothing but -- at best -- security theater.
>
>  [*maybe some day we will have a cryptanalysis of AES that allows us to
>detect such a difference, but we sure don't now]
>
> Thor
>


Re: Lightweight support for instruction RNGs

2015-12-21 Thread Alistair Crooks
I think there's some disconnect here, since we're obviously talking
past each other.

My concern is the output from the random devices into userland. I
couldn't care less about the in-kernel sources, except as they operate
together to produce bits in that output. I'm talking about running the
tests on entropy as collected from the kernel, not sampling the
individual sources. I need to be convinced that the collective output
is not predictable in some way. That the output from random/urandom is
as unpredictable as it can be. That the interactions of various
sources does not produce something that is more predictable rather
than less. How do you know that certain inputs are not going to skew
more or less from certain patterns when run through the output
mechanism?

What I'm trying to get at is that some people seem to think "more
entropy is good", and I don't believe it's as simple as that.

As an aside, I think that we all need to remember, on some platforms,
our entropy pool is not as full as we'd like it to be. Take this VM,
for example, hosted on a machine with an SSD as storage:

% sudo rndctl -l
Password:
Source          Bits Type  Flags
cd0                5 disk  estimate, collect, v, t, dt
wd0          4266255 disk  estimate, collect, v, t, dt
fd0                0 disk  estimate, collect, v, t, dt
cpu0          266677 vm    estimate, collect, v, t, dv
wm0                0 net   v, t, dt
pms0               0 tty   estimate, collect, v, t, dt
pckbd0             0 tty   estimate, collect, v, t, dt
system-power       0 power estimate, collect, v, t, dt
autoconf         104 ???   estimate, collect, t, dt
printf             0 ???   collect
callout          577 skew  estimate, collect, v, dv
%

The entropy collected from wd0 is predictable, as is cd0 (it's an ISO
image in the host file system on the same ssd storage). That leaves us
with cpu, autoconf and callout. For DSA signatures, we need good
entropy, as we do for any ephemeral https connection. And here is
where I'm starting to grow concerned.

Now to get to the tin foil hat bit - since the source code for RDRAND
is not available, we can't review it. How does adding that into the
mix, on an otherwise entropy-free system, make me more secure? By
trusting various digests and linear shifts to obscure it. How do we
know that there is no bias when running through these? We don't, we
just assume there is none.

I want to get rid of all these assumptions, hopes, and calls for me to
get checked in to various rest cures -- by automating the testing of
the output from the random device. The best way I've found so far is
by running dieharder; if there are other ways, or similar packages,
I'd love to hear about them.



On 19 December 2015 at 18:33, Thor Lancelot Simon <t...@panix.com> wrote:
> On Sat, Dec 19, 2015 at 05:23:58PM -0800, Alistair Crooks wrote:
>> On 19 December 2015 at 17:10, Thor Lancelot Simon <t...@panix.com> wrote:
>> > On Sat, Dec 19, 2015 at 04:54:20PM -0800, Alistair Crooks wrote:
>> >> The point is to see if RDRAND plus other inputs does not regress to
>> >> produce an output that is, in some way, "predictable". And while
>> >> running dieharder does not guarantee this, it may show up something
>> >> unusual. Given that there's previous history in this area, I'd
>> >> consider it a prudent thing to do.
>> >
>> > You understand, I hope, how the relevant machinery works.  Samples are
>> > mixed together in a very large state pool which is a collection of LFSRs,
>> > then hashed with SHA1 before output.
>> >
>> > We then use _that_ output to key the NIST CTR_DRBG.
>>
>> And to just expect that everything is mixed in as hoped, with nothing
>> being missed out because of coding errors, is something I should be
>> embracing, and feel better "just because"? Thanks, but we've been
>> bitten that way once before. I want to make sure we're not bitten that
>> way once again. I can't believe there's pushback on this. Lessons
>> learned, etc.
>
> There's pushback because you're suggesting doing something that's pure
> security theater, with no value.  We've had several bugs in the RNG; we've
> never had a bug in the RNG that would have actually caused Dieharder
> failures!
>
> The construction is not amenable to being tested in the way you suggest
> unless one just wants to *say* one tested it without doing any test that
> is meaningful -- in other words, engage in security theater.  I will not
> do that.
>
> So, I'm asking you what I asked you before: where exactly do you want
> the test rig hooked up to this thing?  I'll remind you:
>
> * The RDRAND values should be expected to *pass* the tests even
> 

Re: Lightweight support for instruction RNGs

2015-12-19 Thread Alistair Crooks
On 19 December 2015 at 17:10, Thor Lancelot Simon <t...@panix.com> wrote:
> On Sat, Dec 19, 2015 at 04:54:20PM -0800, Alistair Crooks wrote:
>> The point is to see if RDRAND plus other inputs does not regress to
>> produce an output that is, in some way, "predictable". And while
>> running dieharder does not guarantee this, it may show up something
>> unusual. Given that there's previous history in this area, I'd
>> consider it a prudent thing to do.
>
> You understand, I hope, how the relevant machinery works.  Samples are
> mixed together in a very large state pool which is a collection of LFSRs,
> then hashed with SHA1 before output.
>
> We then use _that_ output to key the NIST CTR_DRBG.

And to just expect that everything is mixed in as hoped, with nothing
being missed out because of coding errors, is something I should be
embracing, and feel better "just because"? Thanks, but we've been
bitten that way once before. I want to make sure we're not bitten that
way once again. I can't believe there's pushback on this. Lessons
learned, etc.

> I would not expect the LFSR pool output to pass dieharder, no matter
> what input is mixed into it.  Conversely, it's nuts to expect the output
> of SHA1 to fail it, and the same goes for the CTR_DRBG.
>
> There would appear to be no suitable place to connect the test rig you're
> suggesting.  If the statistical properties of the output of either SHA1
> or AES (the core of the CTR_DRBG) detectably differ according to the _input_
> to those functions, everyone in the world's got a heck of a problem.  If
> they don't, then we have nowhere we can attach dieharder to see what you
> want to look for.
>
> The only reasonable thing to run dieharder on is the output of the CPU
> instruction itself.  But one does not need to be in the kernel to do that
> (for the Intel instructions in any case; we could perhaps add a tap to
> dispense the output of the VIA "xstorrng").

I do know there is distrust of RDRAND in general, and I want to make
sure that the combined output is not skewed in any way. Simply saying
that you hope that the output is random is not something I can accept.
Frankly, it's not something you should be trusting either. Just run
the damn thing - what's the problem with that?

>> As to how it's hooked up - either present dieharder with a file of
>> random bytes which you've generated using your new rng (via the -f
>> filename parameter), or use one of the generators - you run it with -g
>> -1 to produce a list of possible generators:
>
> There is no "new rng".  This still makes no sense to me at all.

You have changes you want to make to our rng inputs. There is an
existing rng. It's not hard to work out what I mean here. Just as it's
not hard to run a before and after.

I know you don't expect running it to show anything. I'm not sure what
I expect. But that's not a good reason not to run any automatic test
that we can.


Re: Lightweight support for instruction RNGs

2015-12-19 Thread Alistair Crooks
Have you tried running this with pkgsrc/math/dieharder? I'd be
interested to see the results (the current version in pkgsrc -- 3.31.1
-- is much better than the previous one, and displays its results in a
much more useful way than previously). Not the be-all and end-all, but
still worthwhile running it.

Best,
Alistair

On 19 December 2015 at 16:37, Thor Lancelot Simon  wrote:
> I was playing with code for a RDRAND/RDSEED entropy source and it
> just felt like -- much like opencrypto is poorly suited for crypto
> via unprivileged CPU instructions -- our rndsource interface is
> a little too heavy for CPU RNGs implemented as instructions.
>
> I came up with the attached, which mixes in entropy from a new
> "cpu_rng" each time samples are added to the global rndpool.
>
> On the downside, this means we don't get statistics for this source.
>
> On the upside, the cost of the stats-keeping vastly exceeds the cost
> of the entropy generation, at least for the Intel implementation; I'm
> less sure about VIA.
>
> Another downside is that you can't turn this source on and off; but
> on the upside, it's never used _except_ when samples from other sources
> are being mixed in at the same time, so that should not be a cause for
> security concern.
>
> Needless to say, the real benefit here is that we get up to 64 bits
> of additional entropy along with every other sample, without paying
> any additional locking or synchronization overhead.
>
> I've tested the RDRAND code.  The RDSEED code is almost identical but
> I don't have a CPU with RDSEED handy.  The VIA code is lifted almost
> verbatim from the existing PadLock driver, but I no longer have a VIA
> board to test with -- I'd appreciate help with that.
>
> If this looks like a good idea, before committing it I'll make
> cpu_rng_init do a little more work -- specifically, an entropy
> test is probably in order.
>
> Thor


Re: Lightweight support for instruction RNGs

2015-12-19 Thread Alistair Crooks
The point is to see if RDRAND plus other inputs does not regress to
produce an output that is, in some way, "predictable". And while
running dieharder does not guarantee this, it may show up something
unusual. Given that there's previous history in this area, I'd
consider it a prudent thing to do.

As to how it's hooked up - either present dieharder with a file of
random bytes which you've generated using your new rng (via the -f
filename parameter), or use one of the generators - you run it with -g
-1 to produce a list of possible generators:

> dieharder -g -1
#=#
#dieharder version 3.31.1 Copyright 2003 Robert G. Brown  #
#=#
#Id Test Name   | Id Test Name   | Id Test Name   #
#=#
|   000 borosh13|001 cmrg|002 coveyou |
|   003 fishman18   |004 fishman20   |005 fishman2x   |
|   006 gfsr4   |007 knuthran|008 knuthran2   |
|   009 knuthran2002|010 lecuyer21   |011 minstd  |
|   012 mrg |013 mt19937 |014 mt19937_1999|
|   015 mt19937_1998|016 r250|017 ran0|
|   018 ran1|019 ran2|020 ran3|
|   021 rand|022 rand48  |023 random128-bsd   |
|   024 random128-glibc2|025 random128-libc5 |026 random256-bsd   |
|   027 random256-glibc2|028 random256-libc5 |029 random32-bsd|
|   030 random32-glibc2 |031 random32-libc5  |032 random64-bsd|
|   033 random64-glibc2 |034 random64-libc5  |035 random8-bsd |
|   036 random8-glibc2  |037 random8-libc5   |038 random-bsd  |
|   039 random-glibc2   |040 random-libc5|041 randu   |
|   042 ranf|043 ranlux  |044 ranlux389   |
|   045 ranlxd1 |046 ranlxd2 |047 ranlxs0 |
|   048 ranlxs1 |049 ranlxs2 |050 ranmar  |
|   051 slatec  |052 taus|053 taus2   |
|   054 taus113 |055 transputer  |056 tt800   |
|   057 uni |058 uni32   |059 vax |
|   060 waterman14  |061 zuf ||
#=#
|   200 stdin_input_raw |201 file_input_raw  |202 file_input  |
|   203 ca  |204 uvag|205 AES_OFB |
|   206 Threefish_OFB   |207 XOR (supergenerator)|208 kiss|
|   209 superkiss   |||
#=#
|   400 R_wichmann_hill |401 R_marsaglia_multic. |402 R_super_duper   |
|   403 R_mersenne_twister  |404 R_knuth_taocp   |405 R_knuth_taocp2  |
#=#
|   500 /dev/random |501 /dev/urandom||
#=#
#=#

Best,
Alistair

On 19 December 2015 at 16:46, Thor Lancelot Simon <t...@panix.com> wrote:
> On Sat, Dec 19, 2015 at 04:42:54PM -0800, Alistair Crooks wrote:
>> Have you tried running this with pkgsrc/math/dieharder? I'd be
>> interested to see the results (the current version in pkgsrc -- 3.31.1
>> -- is much better than the previous one, and displays its results in a
>> much more useful way than previously). Not the be-all and end-all, but
>> still worthwhile running it.
>
> I have to ask, how would you hook it up and what exactly would be the
> point?  If you want to run RDRAND output through dieharder, you don't
> need to use the kernel for that -- the instruction is not a privileged
> instruction.
>
> And suppose you had a nasty tainted CPU where RDRAND actually just
> fed you the output of AES-256 with a key known to the adversary.  That
> output will pass statistical tests just fine -- all of them -- since
> if the output of the cipher is distinguishable from truly random data
> without knowledge of the key, that's a pretty good indicator there's
> a problem with the cipher...
>
> Thor
>


Re: hf/sf [Was Re: CVS commit: pkgsrc/misc/raspberrypi-userland]

2013-11-12 Thread Alistair Crooks
On Mon, Nov 11, 2013 at 02:23:11PM -0500, Thor Lancelot Simon wrote:
 It seems to me this is largely a tempest in a teapot that could be
 dealt with by a simple table, somewhere obvious on the web site,
 showing the mapping necessary to download a working kernel and
 binaries for each common CPU (or SoC).

And that table is... where, exactly?

My point is that this should have been done first.  A simple list of
mappings, kept up to date by the people who work on or commit this
stuff.  Not difficult, and immensely helpful to anyone trying to use
NetBSD for anything.

In lieu of that, the whole arm hf/sf setup just stinks from a
usability PoV.

NetBSD isn't alone in this - Linux is going through this whole thing
too, but at least the goal of the strategy there appears coherent. 
We had nothing documented here until this tempest in a teapot was
brewed - we have only had a list of the possible variants in the last
day or so.  This sucks, and has to change.  Much more communication
needed.

Regards,
Alistair


Re: hf/sf [Was Re: CVS commit: pkgsrc/misc/raspberrypi-userland]

2013-11-11 Thread Alistair Crooks
On Mon, Nov 11, 2013 at 10:18:29AM -0500, Thor Lancelot Simon wrote:
 On Sun, Nov 10, 2013 at 08:38:27PM +0100, Alistair Crooks wrote:
  
  But in the big picture, having hf and sf versions of a platform's
  userland, in the year 2013, is, well, sub-optimal.  I don't think the
 
 That seems wrong to me.  I don't think in this case you have
 a platform -- you have two platforms that happen to be able to run
 some of the same code, but which do not share an ABI.
 
 To put it this way, NetBSD on mips can run Ultrix binaries.  Should
 we simply declare Ultrix ECOFF the lowest common denominator and run
 everything else in emulation rather than native?

Not sure I follow your logic there - certainly not saying that we should
do that.

What I am asking for is a much better way of people describing the
design decisions they've taken, and for them to attempt the radical
step of documenting these decisions, and publishing them, so that
people can understand why these decisions were taken.  This would go a
long way towards alleviating the WTF moments that we've all been
experiencing just recently.

To put this another way - someone has a Beaglebone - what userland
should they be looking for - hf, sf?  Beyond that - earm or arm?  How
do people find out what chip is in an embedded appliance?  What web
page documents the choices of ARM NetBSD userland right now, let alone
how to work out where to get them once they know they want a hf earm?
How would they specify that in building packages from pkgsrc?

I'm concerned that you think that what we have right now is workable.

Regards,
Alistair



Re: hf/sf [Was Re: CVS commit: pkgsrc/misc/raspberrypi-userland]

2013-11-11 Thread Alistair Crooks
On Sun, Nov 10, 2013 at 01:48:12PM -0800, Matt Thomas wrote:
 
 On Nov 10, 2013, at 1:39 PM, Alistair Crooks a...@pkgsrc.org wrote:
 
  On Sun, Nov 10, 2013 at 01:20:41PM -0800, Matt Thomas wrote:
  Exactly.  with hf, floating point values are passed in floating point
  registers.  That can not be hidden via a library (this works on x86
  since the stack has all the arguments).  
  
  Thanks, I understand.  But...  there has to be a different way of
  doing this that does not require such wholesale changes, especially
  when they were made without discussion.
  
  + use virtual registers which get mapped onto the real thing, either
  through compilation or JIT
 
 Doesn't help since there are also FP instructions.

Encode the FP instructions in a similar manner, then.
 
  + optimise for one passing scheme, and translate the other dynamically
 
 We already have a libc_vfp.so for earm which will use real FP
 instructions to do the softfloat ops.

And this is documented ... where?
 
  + have both sets of passing conventions in a fat binary, and select
  accordingly
 
 ELF doesn't really support fat binaries.

ELF doesn't really support versioning, but everyone uses it for that.
 
  I'm sure there are way more than I've outlined above, and that others
  have much better ideas than I have.
  
  At the moment, this has been optimised for the kernel architecture,
  with the userlevel changes assumed to be collateral damage.  Since the
  users are what matters, that needs to be changed.
 
 I strongly disagree with that.  I specifically chose to use different machine
 arches so that the hard/soft float binary packages would be separate.  
 From using soft/hard float userlands on PPC, I already knew that mixing
 them was wrong.  

Firstly, you need to publicly document design choices, and get people's
feedback. I suspect that what you've done is the right thing - by keeping
everything secret, though, we have to have these kind of discussions in
retrospect, and there's a lot of ill-will generated, and people wonder
what you were thinking when you did this.

Secondly, you have created new NetBSD ports for each of these hard/soft
float binary packages. They need port maintainers. They need port pages
and mailing lists. They need a whole support ecosystem so that people
know what is out there, what they can use. They need to be added to the
bulk build system, and the regression test systems, assuming they can be
emulated well.

  How do you propose to fix this (interim) mess for pkgsrc?  This is a
  real issue for us, and you should send your proposal to
  tech-...@netbsd.org.
 
 Is it just the multiplicity of arm packages or something else?

I have no indication before I buy a SoC if it is supported by NetBSD. 
There is no mapping table from SoC to distribution to use.  It is
difficult to see which kernel to use (witness the emails on various
lists from very talented and clueful people saying things don't work,
when they are using the wrong kernel).  I need to know what
kernel to put on a SoC.  I need to know whether it's earm, and whether
it's sf or hf.  I need to know before I buy that SoC.  I need to know
which SoCs are in which appliance.

So it's not just the large amount of chips out there, it's mapping
them onto NetBSD distributions in advance.

Moving on from that, pkgsrc needs to be updated to deal with all the new
types of hf/sf/earm/arm that people will encounter. I'm more than happy
to work with you to get something that people can use.

Best,
Alistair



hf/sf [Was Re: CVS commit: pkgsrc/misc/raspberrypi-userland]

2013-11-10 Thread Alistair Crooks
On Sun, Nov 10, 2013 at 04:56:04AM +, Jun Ebihara wrote:
 Module Name:  pkgsrc
 Committed By: jun
 Date: Sun Nov 10 04:56:04 UTC 2013
 
 Modified Files:
   pkgsrc/misc/raspberrypi-userland: Makefile
 
 Log Message:
 support earmhf.
 ONLY_FOR_PLATFORM=  NetBSD-*-*arm*
 oked by jmcneill.

Thanks for doing this, Jun-san.

But in the big picture, having hf and sf versions of a platform's
userland, in the year 2013, is, well, sub-optimal.  I don't think the
ramifications of the change were considered in enough detail, and we
need to discuss it, before we have to start growing new architectures
in pkgsrc for this and that.

Can't we lean on what was done for i386/i387 twenty years ago, and
use a userland library to decide whether to use softfloat in the
absence of hardware?

So let's discuss...

Thanks,
Alistair

PS.  I'm going to take a lack of response to this mail (by Nov 18th)
to indicate that the hf/sf stuff should be backed out.  All interested
parties please respond.  Thanks!


Re: MACHINE_ARCH on NetBSD/evbearmv6hf-el current

2013-10-26 Thread Alistair Crooks
On Sat, Oct 26, 2013 at 11:10:52AM -0700, Matt Thomas wrote:
 
 On Oct 26, 2013, at 10:54 AM, Izumi Tsutsui tsut...@ceres.dti.ne.jp wrote:
 
  By static MACHINE_ARCH, or dynamic sysctl(3)?
  If dynamic sysctl(3) is prefered, which node?
  
  hw.machine_arch
  
  which has been defined for a long long time.
  
  Yes, defined before the sf vs hf issue arose, and
  you have changed the definition (i.e. make it dynamic)
  without public discussion.  That's the problem.
 
 It was already dynamic (it changes for compat_netbsd32).

Whether or when it's dynamic or not, it would be great if you could
fix it so that binary packages can be used.

And Tsutsui-san is right - public discussion needs to take place, and
consumers made aware, before these kind of changes are made.

Thanks,
Alistair


Re: iscsi?

2013-02-27 Thread Alistair Crooks
On Thu, Feb 28, 2013 at 01:27:57AM -0500, Mouse wrote:
  [iSCSI initiator support?]
 
  I'm pretty sure iscsi-initiator support is only provided for NetBSD
  through the netbsd-iscsi-initiator package in the net category of the
  pkgsrc tree.  Because it relies on the fuse interface to get its work
  done, I am also pretty sure the earliest version of NetBSD on which
  it will work is NetBSD-5.
 
 Hm, fuse...does this mean iSCSI disk can be used only for a filesystem,
 rather than just appearing as another disk drive?  That's what the fuse
 documentation makes it look like; it looks as though it operates at the
 filesystem layer, not the disk-device layer.  Am I missing something?

No, it presents as a block device; it just uses fuse as transport.

There's more information on running the iscsi initiator using a cgd on
top:

http://www.netbsd.org/docs/encrypted-iscsi.html

If you want the in-kernel iscsi initiator, it arrived in NetBSD 6.0.

Regards,
Alistair


Re: SAS scsibus target numbering

2012-07-26 Thread Alistair Crooks
On Fri, Jul 27, 2012 at 07:35:19AM +1000, matthew green wrote:
 
  I have a (mpt) SAS with seven discs connected.
  The discs attach as sd0..sd6, but the SCSI target numbers are 0..5 and 7.
  It appears to me that someone is skippig ID 6 for the controller.
  It doesn't hurt too much, but it took me a while to find out why detaching 
  targets 2, 3, 4 and 5 worked and 6 didn't (of course, 7 worked).
  
  Is there a reason for this behaviour?
 
 it's usual for the SCSI HBA to assign a targetID for itself.
 
 besides having to use a different number, is there any real reason
 you want to change them?  perhaps a better solution would be to
 patch scsictl such that we can detach via autoconf name as well
 as scsi addresses.  maybe we should do this anyway..

Depending on LSI firmware (I think), the LSI may re-number disks
depending on how they present themselves, uuid, etc.  Some people like
this persistence (mainly Winders users, I imagine); it causes problems
for others.

To get around this, at work I used lsiutil to clear the NVRAM
settings.  Otherwise, switching out all the LSI-attached disks in one
of the Open Connect Appliances made the drives present themselves as
da33 to da64, rather than da1 to da32 - under FreeBSD, obviously.
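For reference, detaching by SCSI address with today's scsictl(8) looks like this (illustrative only - the bus and target numbers are made up, and detach-by-autoconf-name as suggested above does not exist yet):

```
# detach the disk at target 2, lun 0 on scsibus0
scsictl scsibus0 detach 2 0
# target 6 is the HBA's own ID, so there is nothing there to detach
```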

Regards,
Alistair


Adding a new file system

2012-06-09 Thread Alistair Crooks
I recently started writing a file system for some specific use-cases,
and had to make a number of changes. There was a request that I write
up the steps that needed to be taken, and so this tries to outline
them.

For the purposes of this write-up, we'll call the new file system NEWfs

1. the file system code itself - src/sys/fs/NEWfs/*

2. add the location and file-system related definition to 
   src/sys/conf/files
   include fs/NEWfs/files.NEWfs

3. construct a new kernel config file in src/sys/amd64/conf/NEWkernel
   include arch/amd64/conf/GENERIC
   file-system NEWFS

4. add string definitions to src/sys/sys/mount.h
   #define MOUNT_NEWFS   "newfs"   /* New file system */

5. add new vnodetagtype (VT_NEWFS) and string representation in
   VNODE_TAGS in the same respective places to src/sys/sys/vnode.h

6. add and populate src/sbin/mount_NEWfs

7. add and populate src/sbin/newfs_NEWfs

8. add new FSTYPE_DEFN src/sys/sys/disklabel.h
   x(NEWFS, 30, "NewFS", NULL, "newfs")   /* NEW file system */

9. add another definition to src/sys/sys/disk.h
   #define DKW_PTYPE_NEWFS   "newfs"

10. add the DKW definition from src/sys/sys/disk.h to the code in
src/common/lib/libutil/getfstypename.c
case FS_NEWFS:
return DKW_PTYPE_NEWFS;

Then I had to add some code so that I could build my new file system
with the rump infrastructure:

11. add the source directory for NEWfs to 
src/sys/rump/fs/lib/Makefile.rumpfscomp

12. add and populate src/sys/rump/fs/lib/libNEWfs

13. add and populate src/usr.bin/puffs/rump_NEWfs

14. add and populate src/sys/modules/NEWfs

Having built and installed all header files with make includes, and
all libraries and programs with make install, I could newfs a file
system image, and use rump to mount the image on a directory.

Regards,
Alistair
Sat Jun  9 00:28:53 PDT 2012


Re: Adding a new file system

2012-06-09 Thread Alistair Crooks
On Sat, Jun 09, 2012 at 05:29:02PM +0900, Izumi Tsutsui wrote:
 agc@ wrote:
 
  I recently started writing a file system for some specific use-cases,
  and had to make a number of changes. There was a request that I write
  up the steps that needed to be taken, and so this tries to outline
  them.
  :
  1. the file system code itself - src/sys/fs/NEWfs/*
  :
  7. add and populate src/sbin/newfs_NEWfs
 
 and makefs(8) for distribution?

Yes, you're absolutely right.  I had been about to reply that my new
file system was special, was not general purpose, but specifically
crafted for simplicity, speed and efficiency; however, being able to
take a directory structure and simply construct an image from it
would be very useful.  Thanks for the insight.
 
 It's efficient to design kernel and newfs sources
 to share them with makefs too.
 
 v7fs is a good example.
 (and it's so annoying to write makefs -t msdos
  with current implementations..)

Heh.  I contrasted the source code in v7fs and sysvbfs and came to the
conclusion that v7fs was larger - it is spread over many files,
though, and maybe the endian translation takes up some more space too.

Thanks once again,
Alistair



Re: GSOC 2012 project clarification

2012-04-02 Thread Alistair Crooks
On Mon, Apr 02, 2012 at 08:48:35PM -0400, Matthew Mondor wrote:
   So by
   generating some pseudo random numbers we can erase the previous secure
   data.
  
  I'm not sure that pseudo-random numbers help security in the general
  case, compared to just zeros. For a plain harddisk, either one is
  good enough. For a SSD, both are useless. A difference would be
  if the device was an encrypted disk because all-0 would be a perfect
  known plaintext. It should be configurable.
 
 For reference, perhaps see what rm(1) -P option does (and GNU's
 shred(1)), which is a commonly used technique: overwrite with 0xff,
 overwrite with 0x00, then with some pseudo-random data.  I'm not sure
 if the last step is necessary, but it's generally recommended to not
 just overwrite with 0x00 but also with 0xff first.
 
 rm(1) tells where to read more:
  The -P option attempts to conform to U.S. DoD 5220-22.M, National
  Industrial Security Program Operating Manual (NISPOM) as updated by
 ...
 
 Some hardware also support the feature, and as a second step it might
 be nice to be able to use this feature where available...

There are a number of scenarios which make any attempt to overwrite
data blocks less efficient than they would otherwise seem.  Even
without taking SSDs into account, there's all of the sector sparing
which takes place in drives manufactured in the last 15 years.

Neither would I advocate all of Peter Gutmann's 35 passes

http://en.wikipedia.org/wiki/Gutmann_method

(for fairly obvious reasons); I wouldn't even recommend the 7 passes
most US government agencies mandate these days. All of this is way over
the top.

But I'd like the students to look into things a bit more, and tell us
what they propose should be best practice in 2012 and for the next 3-5
years, say.

Regards,
Alistair



Re: Implementing mount_union(8) into vfs (for -o union)?

2012-01-28 Thread Alistair Crooks
On Sat, Jan 28, 2012 at 09:47:16PM +0200, Alan Barrett wrote:
 On Sat, 28 Jan 2012, Julian Fagir wrote:
 I've just been trying to mount a tmpfs over a read-only root 
 file system.  Unfortunately, this won't work just by mounting a 
 tmpfs with option union over the root file system. You'd have to 
 create a tmpfs, and mount that one with mount_union(8) over the 
 root file system, which is again not possible.
 
 I read your message twice and I still don't know what you mean. 
 Could you give examples of the commands that you use, and the 
 errors.

An example of using unionfs, rather than union dirs, is in
src/distrib/embedded/mkimage - which unfortunately has too many tmpfs
incantations, but does work well for usermode.  More should happen in
this area in future.  I think that I used mount_union as union dirs
didn't handle whiteouts properly, but it's been a while, and I didn't
write things down from that period.

One thing I did find was that some log files are opened O_WRONLY|O_APPEND,
and so need to be copied manually into the upper layer first, or the
corresponding upper-layer entry is not created.

Regards,
Alistair


Re: Implementing mount_union(8) into vfs (for -o union)?

2012-01-28 Thread Alistair Crooks
On Sat, Jan 28, 2012 at 10:33:06PM +0100, Julian Fagir wrote:
 There is no src/distrib/embedded?

sorry, src/distrib/utils/embedded
 


Re: Addition to kauth(9) framework

2011-08-29 Thread Alistair Crooks
On Mon, Aug 29, 2011 at 09:19:11AM -0400, Christos Zoulas wrote:
 On Aug 29,  7:54pm, m...@eterna.com.au (matthew green) wrote:
 -- Subject: re: Addition to kauth(9) framework
 
 | 
 |   In article 20110829003259.913f014a...@mail.netbsd.org,
 |   YAMAMOTO Takashi y...@mwd.biglobe.ne.jp wrote:
 |  hi,
 |  
 |   I'd like to apply the attached patch.
 |   It implements two things:
 |   
 |   - chroot(2)-ed process is given new kauth_cred_t with reference count
 | equal to 1.
 |  
 |  can you find a way to avoid this?
 |  
 |  YAMAMOTO Takashi
 |   
 |   He tried and I think that this is the minimal hook he needs.
 |  
 |  do you mean that we need to unshare the credential unconditionally,
 |  regardless his module is used or not?  why?
 | 
 | maybe it's just me, but i actually have absolutely no problem
 | with chroot unsharing kauth_cred_t by default.  it just seems
 | to have more generic safety aspects.
 
 I share the same sentiment; I don't see the change as a big deal.

Likewise - the whole idea behind chroot is the isolation of
operations, and I can only see the unsharing of kauth_cred_t by
default as helping this.

Maybe I'm missing something here?

Thanks,
Alistair


Re: Seventh Edition(V7) filesystem support.

2011-05-25 Thread Alistair Crooks
On Tue, May 24, 2011 at 10:48:40PM +0900, UCHIYAMA Yasushi wrote:
 This filesystem purpose I intended is that file exchanging with small computer
 (such as H8/300, ARM7TDMI...)system. as alternative of FAT. and also,
 Tri-endian support. It can mount PDP-11 V7 disk image.
 
 http://www.vnop.net/~uch/h8/w/tools/patch/netbsd5.99.24-v7fs110524.patch

This is superb, thank you - I'd love to see this in-tree.

Thanks,
Alistair


Re: enumeration of bicycle sheds

2010-02-18 Thread Alistair Crooks
On Thu, Feb 18, 2010 at 01:18:54PM +, Iain Hibbert wrote:
 actually caused warnings then I would be more inclined to use it. In the
 meantime, I generally prefer #define but am not greatly attached.

whilst bicycles are a - particularly - sore point for me right now,
the reason that used to be proffered for preferring enums over #define
was the ability of symbolic debuggers to grok enums.

regards,
alistair