Re: Git/Mtn for FreeBSD, PGP WoT Sigs, Merkle Hash Tree Based

2019-10-07 Thread grarpamp
On 10/4/19, Igor Mozolevsky  wrote:
> On Fri, 20 Sep 2019 at 22:01, grarpamp  wrote:
>>
>> For consideration...
>> https://lists.freebsd.org/pipermail/freebsd-security/2019-September/010099.html
>>
>> SVN really may not offer much in the way of native
>> internal self authenticating repo to cryptographic levels
>> of security against bitrot, transit corruption and repo ops,
>> external physical editing, have much signing options, etc.
>> Similar to blockchain and ZFS hash merkle-ization,
>> signing the repo init and later points tags commits,
>> along with full verification toolset, is useful function.
>
>
> 
>
> Isn't UNIX(TM) philosophy that a program should do one thing and do it
> well? Just because people can't be bothered to learn to use multiple
> tools to do *multiple* tasks on the same dataset, is not a reason, let
> alone "the reason," to increase any program complexity to orders of
> N^M^K^L so that one "foo checkout" does all the things one wants!

Was r353001 cryptosigned so people can verify it with
a second standalone multiple tool called "PGP", after the
first standalone multiple tool called "repo checkout"?
Was it crypto chained back into a crypto history so they could
treat it as a secure diff (the function of a third standalone multiple
tool "diff a b") instead of as entirely separate (and space wasting
set of) unlinked independent assertions / issuances as to a state?
How much time does that take, each time, over time, versus
perhaps loading a signed set of keys into the repo client config?
Is LOGO and tape better because it is a less complex tool than C and disk?
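The chaining being argued for can be sketched in a few lines of shell; this is a toy illustration of hash-linked history, not git's or monotone's actual object format, and the revision names are made up:

```shell
#!/bin/sh
# Toy sketch: each revision digest covers the revision content plus the
# previous digest, so editing any earlier revision changes every later
# digest downstream of it.
chain() {
    prev=0
    for content in "$@"; do
        prev=$(printf '%s|%s' "$prev" "$content" | sha256sum | cut -d' ' -f1)
    done
    echo "$prev"
}
honest=$(chain "rev1" "rev2" "rev3")
tampered=$(chain "rev1-edited" "rev2" "rev3")
# The two final digests differ, exposing the edit to history:
[ "$honest" != "$tampered" ] && echo "history edit detected"
```

A signature over only the final digest then vouches for the entire chain, which is the space saving over signing each state as a separate unlinked assertion.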

> When crypto invalidates a repo, how would it be different
> from seeing non ASCII characters in plain ASCII files, or sudden
> refusal to compile
> one way or another you'd still need to restore
> from BACKUP

Backup is separate, and indeed a fine practice to keep
for when all sorts of horrors happen.

> crypto IS NOT a substitute for good data keeping
> practices.

Who said that it was? However, it can be a wrapper of
proof / certification / detection / assurance / integrity / test
over them... a good thing to have there, as opposed to nothing.

> Also, what empirical data do you have for repo bitrot/transit
> corruption that is NOT caught by underlying media?

Why are people even bothering to sha-2 or sign iso's, or
reproducible builds? There is some integrity function there.
Else just quit doing those too then.

Many sources people can find, just search...
https://www.zdnet.com/article/dram-error-rates-nightmare-on-dimm-street/
http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
http://www.cs.toronto.edu/~bianca/papers/ASPLOS2012.pdf
https://www.jedec.org/sites/default/files/Barbara_A_summary.pdf
https://en.wikipedia.org/wiki/Data_degradation
https://en.wikipedia.org/wiki/ECC_memory
https://en.wikipedia.org/wiki/Soft_error
Already have RowHammer too, who is researching DiskHammer?
Yes, there do need to be current baseline studies made
in 2020 across all of, say, Google, Amazon, and Facebook global
datacenters... fiber, storage, RAM, etc. The error rate
otherwise passed is surely not zero.

Then note all the users who do not run any media, memory,
or cables capable of detecting and or correcting garbage.
And note the claims, or data, about "checksums / digests / hashes"
that fall short of the at least 2^128 odds that strong
crypto based repositories can provide. Many do not,
and should not, accept less as a sufficient standard.
What is the worth of your data and instructions, produced
with some software, from some repositories, over some hops?
Though error is only part of the entire possible subject, still:
lower some risks there too by raising some crypto bars.

Be sure to expand "external physical editing" as hinted
to include malicious edits, by both local and remote
adversarial actors, and or by those acting outside of
established practice. Some crypto repositories additionally
require compromise of a committer and or distribution
private key to impart trust downstream, all of which leaves
a nice audit trail, instead of just a sneaked-in "vi foo.rcs" or its
binary equivalent.

Cryptographic defense in depth, not prayer.


[Sorry not sure which is better mail list]
___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"


Re: Git/Mtn for FreeBSD, PGP WoT Sigs, Merkle Hash Tree Based

2019-10-07 Thread Igor Mozolevsky
On Mon, 7 Oct 2019 at 08:43, grarpamp  wrote:
>
> On 10/4/19, Igor Mozolevsky wrote:
> > On Fri, 20 Sep 2019 at 22:01, grarpamp  wrote:
> >>
> >> For consideration...
> >> https://lists.freebsd.org/pipermail/freebsd-security/2019-September/010099.html
> >>
> >> SVN really may not offer much in the way of native
> >> internal self authenticating repo to cryptographic levels
> >> of security against bitrot, transit corruption and repo ops,
> >> external physical editing, have much signing options, etc.
> >> Similar to blockchain and ZFS hash merkle-ization,
> >> signing the repo init and later points tags commits,
> >> along with full verification toolset, is useful function.
> >
> >
> > 
> >
> > Isn't UNIX(TM) philosophy that a program should do one thing and do it
> > well? Just because people can't be bothered to learn to use multiple
> > tools to do *multiple* tasks on the same dataset, is not a reason, let
> > alone "the reason," to increase any program complexity to orders of
> > N^M^K^L so that one "foo checkout" does all the things one wants!
>
> Was r353001 cryptosigned so people can verify it with
> a second standalone multiple tool called "PGP", after the
> first standalone multiple tool called "repo checkout"?
> Was it crypto chained back into a crypto history so they could
> treat it as a secure diff (the function of a third standalone multiple
> tool "diff a b") instead of as entirely separate (and space wasting
> set of) unlinked independent assertions / issuances as to a state?
> How much time does that take, each time, over time, versus
> perhaps loading a signed set of keys into the repo client config?

I'm guessing those are rhetorical questions; but you ought to look up
how to do tool chaining in any flavour of UNIX(TM).


> Is LOGO and tape better because it is a less complex tool than C and disk?

For some people, perhaps.




> > crypto IS NOT a substitute for good data keeping
> > practices.
>
> Who said that it was? However, it can be a wrapper of
> proof / certification / detection / assurance / integrity / test
> over them... a good thing to have there, as opposed to nothing.

What is the specific risk model you're mitigating? All you say is
hugely speculative!


> > Also, what empirical data do you have for repo bitrot/transit
> > corruption that is NOT caught by underlying media?
>
> Why are people even bothering to sha-2 or sign iso's, or
> reproducible builds? There is some integrity function there.
> Else just quit doing those too then.

Funny you should say that, Microsoft, for example, don't checksum
their ISOs for the OSes. You missed the point about reproducible
builds entirely: given code A from Alice and package B from Bob,
Charlie can compile package C from A and verify that C is identical to
B, a simple `diff' of binaries is sufficient for that! The problem is
that a lot of the time code A itself is buggy to such a degree that it's
vulnerable to attack (recall Heartbleed, for example). Crappy code is
not mitigated by any layer of additional integrity checking of the
same crappy code!
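The verification step just described can be rehearsed in shell; the two files below are stand-ins for the real artifacts (in practice B and C would be the independently built packages):

```shell
#!/bin/sh
# Sketch of the reproducible-build check: Bob publishes package B,
# Charlie rebuilds package C from Alice's code A, and a plain byte
# comparison settles whether they match.
tmp=$(mktemp -d)
printf 'binary-payload' > "$tmp/B"   # stand-in for Bob's published package
printf 'binary-payload' > "$tmp/C"   # stand-in for Charlie's rebuild of A
if cmp -s "$tmp/B" "$tmp/C"; then
    result=REPRODUCIBLE
else
    result=MISMATCH
fi
echo "$result"
```

Note that the comparison only certifies that B was built from A; it says nothing about whether A itself is sound, which is the Heartbleed point above.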


> Many sources people can find, just search...
> https://www.zdnet.com/article/dram-error-rates-nightmare-on-dimm-street/
> http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
> http://www.cs.toronto.edu/~bianca/papers/ASPLOS2012.pdf
> https://www.jedec.org/sites/default/files/Barbara_A_summary.pdf
> https://en.wikipedia.org/wiki/Data_degradation
> https://en.wikipedia.org/wiki/ECC_memory
> https://en.wikipedia.org/wiki/Soft_error

I don't bother with second-hand rumors on Wikipedia so I'm not even
going to bother looking there, but as for the rest, seriously, you're
quoting a study of DDR1 and DDR2??? I have it on good authority that
when at least one manufacturer moved to a smaller die process for
DDR3 they saw the error rates plummet, to their own surprise (they
were expecting the opposite), and now we're on DDR4, and what's
the die size there? Perhaps you need to look into the error rates of
EDO RAM et al too?

In any event, ECC, integrity checking, etc. are done on the underlying
media to detect, and in some cases correct, errors so that you have to
worry less about them at higher levels; getting so obsessed by it is
just silly, especially advocating for a tool to do it all in one go!
Here's a question to ponder: if code set X, certificate Y, and signed
digest Z are stored on one medium (a remote server in your case), and
your computed digest doesn't match digest Z, which part was corrupt:
X, Y, Z, or your checksumming?


> Already have RowHammer too, who is researching DiskHammer?

And RowHammer has been successfully demonstrated in a production
environment? How exactly are you planning on timing the attack vector
to get RAM cell data when you (a) don't know when that cell will be
occupied by what you want, nor (b) where that cell is going to be in the
first place? Go ask any scientist who works for pharma to explain the
difference between "works in a lab" and "works in the real world"...


> Yes, there does

Re: ktrace/kdump give incorrect message on unlinkat() failure due to capabilities

2019-10-07 Thread John Baldwin
On 9/25/19 10:33 AM, Sergey Kandaurov wrote:
> On Sat, Sep 21, 2019 at 08:43:58PM -0400, Ryan Stone wrote:
>> I have written a short test program that runs unlinkat(2) in
>> capability mode and fails due to not having the write capabilities:
>>
>> https://people.freebsd.org/~rstone/src/unlink.c
>>
>> If I run the binary under ktrace and look at the kdump output, it
>> gives the following incorrect output:
>>
>> 43775 unlink   CALL  unlinkat(0x3,0x7fffe995,0)
>> 43775 unlink   NAMI  "from.QAUlAA0"
>> 43775 unlink   CAP   operation requires CAP_LOOKUP, descriptor holds CAP_LOOKUP
>> 43775 unlink   RET   unlinkat -1 errno 93 Capabilities insufficient
>>
>> The message should instead say that the operation requires
>> CAP_UNLINKAT.  Looking at sys/capsicum.h, I suspect that the problem
>> is related to the strange definition of CAP_UNLINKAT:
>>
>> #define CAP_UNLINKAT (CAP_LOOKUP | 0x1000ULL)
> 
> FYI, with this grep it was able to decode capabilities.
> 
> Index: lib/libsysdecode/mktables
> ===================================================================
> --- lib/libsysdecode/mktables (revision 352685)
> +++ lib/libsysdecode/mktables (working copy)
> @@ -157,7 +157,7 @@
>  gen_table "sigcode"         "SI_[A-Z]+[[:space:]]+0(x[0-9abcdef]+)?"                                "sys/signal.h"
>  gen_table "umtxcvwaitflags" "CVWAIT_[A-Z_]+[[:space:]]+0x[0-9]+"                                    "sys/umtx.h"
>  gen_table "umtxrwlockflags" "URWLOCK_PREFER_READER[[:space:]]+0x[0-9]+"                             "sys/umtx.h"
> -gen_table "caprights"       "CAP_[A-Z_]+[[:space:]]+CAPRIGHT\([0-9],[[:space:]]+0x[0-9]{16}ULL\)"   "sys/capsicum.h"
> +gen_table "caprights"       "CAP_[A-Z_]+[[:space:]]+(CAPRIGHT|[()A-Z_|[:space:]]+CAP_LOOKUP)"       "sys/capsicum.h"
>  gen_table "sctpprpolicy"    "SCTP_PR_SCTP_[A-Z_]+[[:space:]]+0x[0-9]+"                              "netinet/sctp_uio.h" "SCTP_PR_SCTP_ALL"
>  gen_table "cmsgtypesocket"  "SCM_[A-Z_]+[[:space:]]+0x[0-9]+"                                       "sys/socket.h"
>  if [ -e "${include_dir}/x86/sysarch.h" ]; then

CAP_SEEK and CAP_MMAP_X might also be subject to this.  However, I'm not quite
understanding the regex, or at least why the modified portion of the regex isn't
something like this:

(CAPRIGHT\(|\(CAP_LOOKUP)

That is, you currently have [()A-Z_|[:space:]]+ for an expression that I think
will only ever match a single '(' character.
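The decoding confusion can be seen by treating the rights as bit masks. The bit values below are illustrative only, not the actual CAPRIGHT() constants from sys/capsicum.h:

```shell
#!/bin/sh
# Schematic: CAP_UNLINKAT is CAP_LOOKUP plus an extra bit, so a
# descriptor holding only CAP_LOOKUP fails the unlinkat() rights check,
# while a naive per-name decoder that tests for CAP_LOOKUP's bits first
# still reports CAP_LOOKUP as the right in play.
CAP_LOOKUP=$((0x400))                     # illustrative bit value
CAP_UNLINKAT=$((CAP_LOOKUP | 0x1000))     # mirrors the header's composition
held=$CAP_LOOKUP          # what the descriptor holds
needed=$CAP_UNLINKAT      # what unlinkat() requires
if [ $((held & needed)) -eq "$needed" ]; then
    verdict=sufficient
else
    verdict=insufficient  # the extra 0x1000 bit is missing
fi
echo "rights $verdict"
```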

A more general form that might work for CAP_SEEK and CAP_MMAP_X might be
to match on 'CAP_ | 0x
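The proposed pattern can be exercised against a few sample definitions; the lines below are copied in spirit from sys/capsicum.h rather than verbatim, and the CAP_SEEK line shows a composed right that the CAP_LOOKUP-specific alternative would still miss:

```shell
#!/bin/sh
# Run the patched caprights pattern over sample lines: the plain
# CAPRIGHT() form and the CAP_LOOKUP-composed form match, but a right
# composed from something other than CAP_LOOKUP (CAP_SEEK here) does
# not, which is the generalization being discussed.
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#define CAP_READ CAPRIGHT(0, 0x0000000000000001ULL)
#define CAP_UNLINKAT (CAP_LOOKUP | 0x0000000000001000ULL)
#define CAP_SEEK (CAP_SEEK_TELL | 0x0000000000000080ULL)
EOF
matches=$(grep -Ec 'CAP_[A-Z_]+[[:space:]]+(CAPRIGHT|[()A-Z_|[:space:]]+CAP_LOOKUP)' "$hdr")
echo "$matches of 3 lines matched"   # CAP_SEEK is missed
```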


Re: memstick installer doesn't install loader.efi into ESP

2019-10-07 Thread Yuri Pankov
On Mon, Oct 7, 2019, at 5:23 AM, Yuri Pankov wrote:
> Just tried reinstalling the system on my laptop using the latest 
> available memstick snapshot 
> (https://download.freebsd.org/ftp/snapshots/amd64/amd64/ISO-IMAGES/13.0/FreeBSD-13.0-CURRENT-amd64-20191004-r353072-memstick.img),
>  
> using UEFI boot, and default ZFS partitioning, and it didn't boot after 
> installation.  Booting into the installer again, I noticed that ESP is 
> empty.
> 
> Reinstalling again after wiping pool labels and clearing partitions 
> didn't change anything, though I noticed the "/tmp/bsdinstall-esps: no 
> such file or directory" in the installer log.
> 
> Mounting ESP, creating EFI/FreeBSD/ directory, copying /boot/loader.efi 
> there, and creating appropriate Boot variable solves it, of course, but 
> I'm wondering what went wrong.
> 
> Laptop has NVMe drive (nvd0, empty, gpart destroy -F nvd0), SATA drive 
> (ada0, empty, gpart destroy -F ada0), and is booting from USB memstick 
> (da0); no Boot variables defined when booting to installer (other than 
> default ones for the laptop).  Any other details I should provide here?

I think I see the problem: the bootconfig script uses the ZFSBOOT_DISKS
variable, which isn't defined and is always empty, and the auto-detection
must be unreliable in some cases, at least for me, as I reproduced it on
another system with an NVMe device after adding a SATA disk as well
(didn't look into the details).  Should be easy to fix, review incoming.
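For anyone hit by this before the fix lands, the manual recovery described above can be rehearsed as follows. The sketch only builds the file layout in a scratch directory; the real device name, mountpoint, and efibootmgr invocation in the comments are examples for a typical NVMe system, so adjust to the actual hardware:

```shell
#!/bin/sh
# Rehearsal of the manual ESP fix in a scratch directory. On the real
# system the steps would be roughly (device name is an example):
#   mount_msdosfs /dev/nvd0p1 /mnt
#   mkdir -p /mnt/EFI/freebsd
#   cp /boot/loader.efi /mnt/EFI/freebsd/
#   efibootmgr -c -a -l /mnt/EFI/freebsd/loader.efi -L FreeBSD
esp=$(mktemp -d)     # stands in for the mounted ESP
loader=$(mktemp)     # stands in for /boot/loader.efi
mkdir -p "$esp/EFI/freebsd"
cp "$loader" "$esp/EFI/freebsd/loader.efi"
ls "$esp/EFI/freebsd"
```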