Re: [gentoo-dev] Re: RFC: News item for net-firewall/shorewall all-in-one package migration

2015-04-17 Thread Thomas D.
Hi,

thank you all for the feedback.

I read through the news archive and most previous news items don't use
the package category in the title.

I'll propose

 > Title: shorewall is now a single package


I filed a bug for the news item request:
https://bugs.gentoo.org/show_bug.cgi?id=546952


-Thomas





[gentoo-dev] RFC: News item for net-firewall/shorewall all-in-one package migration

2015-04-04 Thread Thomas D.
Hi,

some of you may already know or have noticed that the
net-firewall/shorewall* ebuilds were re-integrated into a single
all-in-one ebuild for easier maintenance.

The package is proxy-maintained.

While preparing the new ebuild I discussed with the proxy-maint team and
shorewall users whether we should create a news item for that change.
Most people participating in the discussion thought that an emerge error
message like

> # emerge -p --update net-firewall/shorewall::gentoo
> 
> These are the packages that would be merged, in order:
> 
> Calculating dependencies... done!
> [ebuild U  ] net-firewall/shorewall-4.6.6.2::gentoo 
> [4.5.21.10-r1::gentoo] USE="doc init%* ipv4%* ipv6%* lite4%* -lite6%" 0 KiB
> [blocks B  ] net-firewall/shorewall-init ("net-firewall/shorewall-init" 
> is blocking net-firewall/shorewall-4.6.6.2)
> [blocks B  ] net-firewall/shorewall-core ("net-firewall/shorewall-core" 
> is blocking net-firewall/shorewall-4.6.6.2)
> 
> Total: 1 package (1 upgrade), Size of downloads: 0 KiB
> Conflict: 2 blocks (2 unsatisfied)
> 
> !!! Multiple package instances within a single package slot have been pulled
> !!! into the dependency graph, resulting in a slot conflict:
> 
> net-firewall/shorewall:0
> 
>   (net-firewall/shorewall-4.6.6.2:0/0::gentoo, ebuild scheduled for merge) 
> pulled in by
> net-firewall/shorewall::gentoo (Argument)
> 
>   (net-firewall/shorewall-4.5.21.10-r1:0/0::gentoo, installed) pulled in by
> =net-firewall/shorewall-4.5.21.10-r1 required by 
> (net-firewall/shorewall-init-4.5.21.10-r1:0/0::gentoo, installed)
> ^   
> 
> 
> It may be possible to solve this problem by using package.mask to
> prevent one of those packages from being selected. However, it is also
> possible that conflicting dependencies exist such that they are
> impossible to satisfy simultaneously.  If such a conflict exists in
> the dependencies of two different packages, then those packages can
> not be installed simultaneously. You may want to try a larger value of
> the --backtrack option, such as --backtrack=30, in order to see if
> that will solve this conflict automatically.
> 
> For more information, see MASKED PACKAGES section in the emerge man
> page or refer to the Gentoo Handbook.
> 
> 
>  * Error: The above package list contains packages which cannot be
>  * installed at the same time on the same system.
> 
>   (net-firewall/shorewall-init-4.5.21.10-r1:0/0::gentoo, installed) pulled in 
> by
> net-firewall/shorewall-init required by @selected
> 
>   (net-firewall/shorewall-core-4.5.21.10-r1:0/0::gentoo, installed) pulled in 
> by
> =net-firewall/shorewall-core-4.5.21.10-r1 required by 
> (net-firewall/shorewall-4.5.21.10-r1:0/0::gentoo, installed)
> 
> 
> For more information about Blocked Packages, please refer to the following
> section of the Gentoo Linux x86 Handbook (architecture is irrelevant):
> 
> https://wiki.gentoo.org/wiki/Handbook:X86/Working/Portage#Blocked_packages

should be clear enough for everyone.


Well, it turns out that not everyone understands the merge conflict and
knows what to do. Multiple users filed bugs and requested a news item;
here are two recent examples:

- https://bugs.gentoo.org/show_bug.cgi?id=544216#c2
- https://bugs.gentoo.org/show_bug.cgi?id=539664#c2


As proxy maintainer I changed my mind today and created a news item,
mostly because it doesn't hurt anyone (there is no negative impact). It
only helps people who don't know what to do... and why shouldn't we help
if we can?

Please review my proposal below:

Just a few notes to explain my choice of words:

1) The news item will tell the user what has changed and why this change
   was made. Interested users can read the bug report for further
   information.

2) The given emerge command should work on all systems for every user:
   there is no need to check exactly which packages they need to remove,
   and nobody gets error messages like
     "--- Couldn't find 'net-firewall/shorewall-lite' to unmerge."
   just because shorewall-lite wasn't installed (see the sketch after
   these notes).

3) The last paragraph should indicate that the new shorewall ebuild is
   "stable" and that users don't have to react immediately: if they don't
   want to upgrade now, they have the next 30-60 days to do so.
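
For reference, here is a minimal sketch of what such a command sequence
could look like. This is only an illustration, not the wording of the
actual news item; the use of portageq, --unmerge and --oneshot is my own
choice:

  # Hypothetical illustration -- unmerge whichever of the old split
  # packages are actually installed (portageq checks the installed-package
  # database, so packages that are absent are silently skipped), then
  # pull in the new all-in-one ebuild.
  for pkg in net-firewall/shorewall-core net-firewall/shorewall-init \
             net-firewall/shorewall-lite net-firewall/shorewall6 \
             net-firewall/shorewall6-lite; do
      portageq has_version / "${pkg}" && emerge --unmerge --quiet "${pkg}"
  done
  emerge --ask --oneshot net-firewall/shorewall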


===
Title: New net-firewall/shorewall all-in-one package
Author: Thomas D. 
Content-Type: text/plain
Posted: 2015-04-
Revision: 1
News-Item-Format: 1.0
Display-If-Installed: net-firewall/shorewall-core
Display-If-Installed: net-firewall/shorewall6
Display-If-Installed: net-firewall/shorewall-lite
Display-If-Installed: net-firewall/shorewall6-lite
Display-If-Installed: net-firewall/sh

Re: [gentoo-dev] rfc: add-on files handling improvements

2015-03-30 Thread Thomas D.
Hi,

William Hubbs wrote:
> I believe, back in the day we started this practice, portage did not
> support --newuse or --changed-use, so there was no way to only update
> packages that had changed or new use flags. In that situation, I
> understand why we installed all of these add-on files unconditionally
> and told users to use INSTALL_MASK if they wanted them not to be on
> their systems.
> 
> However, I feel that we should update our practice now since we have these
> features available to us and to our users.
> 
> In my previous thread about zsh, it was suggested that I could use the
> zsh-completion use flag to control zsh-completion installation, and not
> rdepend on zsh. This is now how pybugz is set up.

Are we talking about an actual problem?

Are these "add-on files" causing real problems?

How many "add-on files" on an average system are really useless (=cruft
files) for most people and how much disk space do they take up?


Do you remember the discussion about "one USE flag per init system"? It
was decided to drop the systemd USE flag when it only controls the
installation of systemd service files, and we didn't want to introduce a
USE flag per init system because that doesn't scale.

Also, imagine you are on OpenRC and decide to switch to systemd. If we
didn't install the service files on every system, the user would have to
re-emerge his/her whole system to switch.

Following your argument, we would add an exception for init systems.

So which add-on files are left? Logrotate! Doesn't the same argument
against USE flags for each init system apply to things like logrotate,
too? If not, at least the "if you switch your init system" argument
applies: if you decide to start using logrotate, you would have to
re-emerge your packages just for a 1 kB file...

Add another exception for logrotate files? :)

I guess that's not what you want. But do you see the problem? How would
you decide for which files you want to add an exception?

Do you want to discuss with cron users if their cronjobs are add-on
files or not?

Some packages provide files for logwatch. I don't expect that many
desktop users will use logwatch. But do you want to start a discussion
with non-desktop users about whether it is worth re-emerging a whole
package for 1 kB of additional files?

Coming back to my first question: Why do you want to change the current
situation? Are these "add-on files" causing any problems nowadays?

Introducing another USE flag to control what INSTALL_MASK already does
doesn't sound like a good way to go to me.
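
For comparison, the existing mechanism is a one-time setting in
/etc/portage/make.conf (the paths below are only examples of typical
add-on files, not a recommendation from this thread):

  # Example: mask typical add-on files globally via INSTALL_MASK.
  # Adjust the paths to whatever you consider cruft on your systems.
  INSTALL_MASK="/etc/logrotate.d"
  INSTALL_MASK="${INSTALL_MASK} /etc/logwatch"
  INSTALL_MASK="${INSTALL_MASK} /usr/share/logwatch"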

But maybe I am not aware of a real problem you have with these "add-on
files"...


-Thomas




Re: [gentoo-dev] Should Gentoo do https by default?

2015-03-27 Thread Thomas D.
Hi,

Hanno Böck wrote:
> Right now a number of Gentoo webpages are by default served over http.
> There is a growing trend to push more webpages to default to https,
> mostly pushed by google. I think this is a good thing and I think
> Gentoo should follow.

+1


> Right now we seem to have a mix:
> * A number of webpages default to http and have optional https
>   (www.gentoo.org)
> * Some with sensitive logins are already https by default (e.g.
>   bugs.gentoo.org), but they don't use hsts, which they should
> * Some with logins are mixed http/login-via-https, which makes them
>   vulnerable to ssl-stripping-attacks (e.g. wiki.gentoo.org)

Don't forget the forums (http://forums.gentoo.org/). Even if you connect
to https://forums.gentoo.org/ it always falls back to HTTP.
Also, all the mail notifications send you to the HTTP version...
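
A quick way to check the current behavior from the command line (just an
illustration; the grep patterns only pick out the relevant response
headers):

  # Does the HTTP site redirect to HTTPS, and does the HTTPS site send HSTS?
  curl -sI http://forums.gentoo.org/  | grep -i '^Location'
  curl -sI https://forums.gentoo.org/ | grep -i '^Strict-Transport-Security'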


-Thomas




RE: [gentoo-dev] Figuring out the solution to in-network-sandbox distcc

2015-01-25 Thread Thomas D.
Hi,

Michał Górny wrote:
> I see two generic approaches possible here:
> 
> 1. proxying distcc from within the build environment, or
> 
> 2. moving distcc-spawned processes back to parent's namespace.
> 
> 
> distcc client/server solution
> -
> 
> The most obvious solution to me is to employ a client/server model
> where a system-wide daemon is running, parsing /etc/distcc/hosts
> and doing all the network activity.
> 
> [...]

It is not only distcc. Please don't forget things like sys-devel/icecream.


-Thomas





Re: [gentoo-dev] Re: RFC: enabling ipc-sandbox & network-sandbox by default

2014-05-15 Thread Thomas D.
Hi,

Ciaran McCreesh wrote:
> Sandboxing isn't about security. It's about catching mistakes.

From Wikipedia
(http://en.wikipedia.org/wiki/Sandbox_%28computer_security%29):
> In computer security, a sandbox is a security mechanism for 
> separating running programs. It is often used to execute untested 
> code, or untrusted programs from unverified third-parties,
> suppliers, untrusted users and untrusted websites

network-sandbox uses the unshare() syscall to do exactly that kind of separation... doesn't it?
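
You can see that kind of separation from a shell with the unshare(1)
wrapper from util-linux (a sketch of the underlying idea, not portage's
actual implementation; run as root):

  # A fresh network namespace only contains a down loopback interface,
  # so any network access from inside it fails -- the same isolation
  # network-sandbox builds on.
  unshare --net ip addr show
  unshare --net ping -c 1 www.gentoo.org   # fails: network is unreachable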

But when I wrote my mail I was referring to Michal's statements in
. He is
explicitly listing "improving security"...


-Thomas



Re: [gentoo-dev] Re: RFC: enabling ipc-sandbox & network-sandbox by default

2014-05-15 Thread Thomas D.
Hi,

Ryan Hill wrote:
> Probably best to make FEATURES=distcc disable network-sandbox
> then. People enabling it are explicitly saying they want to access
> the network.

Do you really think it is good behavior to automatically disable
something you could call a "security feature"? At the very least there
should be a warning, shouldn't there?

Think about a situation where the user just knows "network-sandbox is
important because it will protect my system from unwanted
modifications" (for example, the case where a test suite writes to the
local production database server...) and therefore explicitly enables
that feature by hand.

But the user is *also* using distcc to speed up the compilation/update
time in his/her network.

The user may know that distcc uses the network, but he/she might be
surprised that it won't work together with the network-sandbox feature.
If we now silently disable network-sandbox because the user also set
distcc, he/she might be even more surprised to notice that the local
production database was accessed by emerge even though he/she enabled
the network-sandbox feature to prevent exactly that (but it was
automatically disabled without a warning).

Because this is security relevant and the impact could be a real
problem, I wouldn't even show just a warning the user could miss. If
network-sandbox *and* distcc are both set, emerge should fail,
complaining about the problem.
This is something the user should be aware of and must resolve by hand.

So if we decide to enable the network-sandbox feature by default (which
we should do), users also using distcc must take action.
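
Until then, a distcc user who accepts the trade-off would have to opt out
explicitly, e.g. in make.conf (illustrative only; whether portage should
additionally warn or fail is exactly the open question above):

  # Keep distcc and explicitly opt out of an assumed network-sandbox default.
  FEATURES="${FEATURES} distcc -network-sandbox"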

And if we solve the problem in the future so that both features can be
used together, we should send out a news item telling people who use the
distcc feature "Now you can re-enable the (default) network-sandbox
feature"...


-Thomas




Re: [gentoo-dev] Possibility of overriding user defined INSTALL_MASK from an ebuild?

2014-02-28 Thread Thomas D.
Hi,

Ian Stakenvicius wrote:
> That said, what we could do (if this isn't done already) is have 
> portage automatically elog or ewarn what files are excluded from
> the system on merge time due to the INSTALL_MASK.  At least that
> way, users would be able to see in the log what files were removed,
> so when something they need -is- removed they'll be able to see
> that right away. (note, i've never used INSTALL_MASK, so I've no
> idea what portage reports)

That's already happening.

For example, assume the following INSTALL_MASK is set:

  INSTALL_MASK="/etc/systemd/"
  INSTALL_MASK="${INSTALL_MASK} /lib/systemd/"
  INSTALL_MASK="${INSTALL_MASK} /lib64/systemd/"
  INSTALL_MASK="${INSTALL_MASK} /usr/lib/systemd/"
  INSTALL_MASK="${INSTALL_MASK} /usr/lib64/systemd/"

When you emerge a package, you will see messages like

  [...]

  >>> Installing (1 of 1) sys-fs/udev-210
   * Removing /etc/systemd/
   * Removing /lib/systemd/
   * Removing /lib64/systemd/
   * Removing /usr/lib/systemd/
   * Removing /usr/lib64/systemd/
   * checking 51 files for package collisions
  >>> Merging sys-fs/udev-210 to /

  [...]

If you keep logs, elogv for example will also show this information:

  │ [...]│
  │  │
  │INFO: other   │
  │Removing /etc/systemd/│
  │Removing /lib/systemd/│
  │Removing /lib64/systemd/  │
  │Removing /usr/lib/systemd/│
  │Removing /usr/lib64/systemd/  │

The downside is that this message always appears when you have an
INSTALL_MASK set, even for packages which don't install anything into
the masked paths. So people may tend to ignore this information because
it is always shown :)

If this message were only shown when the merged package is *really*
affected by the INSTALL_MASK, that would be an improvement.
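
A rough sketch of that idea (not a portage patch; the package name and
the image path are examples): before merging, look at the staged image
and only print the notice for masked paths that actually exist there.

  # ${D} is the staged image of the build -- the path below is just an example.
  D=/var/tmp/portage/sys-fs/udev-210/image
  for path in ${INSTALL_MASK}; do
      [ -e "${D}${path}" ] && echo " * Removing ${path}"
  done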


-Thomas




Re: [gentoo-dev] News draft #2 for the udev-210 upgrade (was: 209 upgrade)

2014-02-26 Thread Thomas D.
Hi,

I like your (Alex) new proposal, but I have the following remarks:

> As of sys-fs/udev-210, the options CONFIG_FHANDLE and CONFIG_NET
> are now required in the kernel. A warning will be issued if they
> are missing when you upgrade. See the package's README in
> /usr/share/doc/udev-210/ for more optional kernel options.

Isn't that a chicken/egg problem? I see the NEWS when I have

> Overriding the 80-net-name-slot.rules in /etc/udev/rules.d/ no
> longer works since upstream renamed the file to
> /lib/udev/rules.d/80-net-setup-link.rules.

All I have to do is change the name from "80-net-name-slot.rules" to
"80-net-setup-link.rules", i.e. adjust the override because upstream
renamed the file... right? So a text like

  The most reliable way of disabling the new network interface
  naming scheme is still the kernel parameter "net.ifnames=0". If you
  are using the "net-name-slot.rules" approach, make sure you adapt
  to the new file name before you restart, because upstream renamed
  "80-net-name-slot.rules" to "80-net-setup-link.rules".

sounds better to me than saying "overriding no longer works" and
clarifying only later that this is still possible ;)
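
In other words, carrying an existing override forward would just be a
rename (sketch; assuming the override lives in /etc/udev/rules.d/ as
discussed):

  # Rename the local override to match the new upstream file name so it
  # keeps shadowing the rule shipped in /lib/udev/rules.d/.
  mv /etc/udev/rules.d/80-net-name-slot.rules \
     /etc/udev/rules.d/80-net-setup-link.rules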

I don't see a need for mentioning that the actual configuration is
located in "/lib/systemd/network/..." in the NEWS item.


-Thomas



Re: [gentoo-dev] News draft #2 for the udev-210 upgrade (was: 209 upgrade)

2014-02-25 Thread Thomas D.
Hi,

Rich Freeman wrote:
> On Tue, Feb 25, 2014 at 6:39 AM, Thomas D.  wrote:
>> Also, I cannot belief that I cannot overwrite
>> "/lib/udev/rules.d/80-net-setup-link.rules" via "/etc/udev/rules.d"...
> 
> I don't see why not - from the news item:
> So, to clarify, you can override the new .rules file or the .link file in /etc
> but using the kernel parameter is the most consistent way.

Maybe I am wrong, but when talking about kernel parameter we are talking
only about

   net.ifnames=

right?

So with this parameter we can only disable the new naming, right?
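
For reference, setting that parameter with GRUB2 would look roughly like
this (a sketch; the bootloader and file locations obviously differ per
system):

  # In /etc/default/grub: pass the parameter on the kernel command line
  GRUB_CMDLINE_LINUX="net.ifnames=0"

  # Then regenerate the GRUB configuration:
  grub2-mkconfig -o /boot/grub/grub.cfg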

But as said, I am using udev to name my interfaces -- the new kernel
naming isn't my problem. I don't understand how this should help me.

My fear is that all my routers and servers with multiple interfaces
won't come up anymore after the upgrade, because they lose my custom
names when udev, due to the new rule, doesn't rename them or fails to
do so...


>> Don't get me wrong. Yes, I don't use systemd and I am a happy OpenRC
>> user but I have no problems with systemd (as long as it doesn't affects
>> me). But this upgrade seems to affect non-systemd users.
>>
> 
> The only thing that changed is the location where a config setting is
> stored.  Nobody has to use systemd as a sysvinit replacement.

Have you read the documentation? It is not about locations at all... my
problem is that it seems I have to use a new syntax from systemd-udev
when doing something in "/etc/systemd", but as said: I am using
sys-fs/udev, I don't care about systemd... why should I learn systemd
when I am only using udev?


>> Wasn't Gentoo about choices?
> 
> Well, we generally don't give users a choice in where config files are
> installed.

No, not locations. My choice was not to use systemd. Now a package,
sys-fs/udev, turns into systemd-udev...

Also: if it were not possible to keep sys-fs/udev as it was, I wouldn't
mind that much. But as said, Lars (Polynomial-C) showed us that we don't
need to turn sys-fs/udev into systemd-udev...

So I am asking: why are we doing that to people who don't use systemd?

Polynomial-C doesn't use many patches... no, the magic is in the
ebuild. Upstream still supports the "old" usage... it is the Gentoo
ebuild which turns the package into systemd-udev...

And that's what I meant when I said "give something 'back'": it should
be possible to create an ebuild for both systemd and non-systemd users.
Yes, more maintenance is needed. But taking a package which was working
fine for non-systemd users and transforming it into a systemd package
isn't nice or fair.

You get my point?


-Thomas




Re: [gentoo-dev] News draft #2 for the udev-210 upgrade (was: 209 upgrade)

2014-02-25 Thread Thomas D.
Hi,

line 16 ("renamed the file to
/lib/udev/rules.d/80-net-setup-link.rules") and line 18 ("you can
override in /etc/systemd/network/") doesn't end with punctuation.


Did I get this right? I am using udev to give my interfaces custom names
and I am not a systemd user but to keep my setup working with udev-210 I
have to exclude

  /lib/systemd/network/
  /etc/systemd/

from my INSTALL_MASK *and* I have to configure things in
"/etc/systemd/"? Really?

Also, I cannot believe that I cannot override
"/lib/udev/rules.d/80-net-setup-link.rules" via "/etc/udev/rules.d"...


Anyway:
Don't get me wrong. Yes, I don't use systemd and I am a happy OpenRC
user, but I have no problem with systemd (as long as it doesn't affect
me). But this upgrade seems to affect non-systemd users.

Wasn't Gentoo about choices?

So if it is possible to provide a sys-fs/udev package, as Lars
(Polynomial-C) has shown, which doesn't require non-systemd users to use
files from systemd and to do configuration in "/etc/systemd/", why don't
we provide such a package to non-systemd users?
The package already has an "openrc" USE flag...

Remember that the non-systemd folks in Gentoo are doing a lot to help
you (the systemd folks) add proper systemd support to Gentoo. Now it
seems it is time to give something "back": make sure a change required
for systemd doesn't hurt non-systemd users.

Thanks.


-Thomas




Re: [gentoo-dev] News item draft for >=sys-fs/udev-209 upgrade

2014-02-24 Thread Thomas D.
Hi,

not everyone is using systemd. On my systems for example, I don't have
"/lib/systemd/" (INSTALL_MASK).

The current news item draft raises questions like "When the 'actual
configuration' is in /lib/systemd/network/99-default.link... what will
happen to people without systemd (and an INSTALL_MASK set)?"

It would be nice if the news item and the wiki could cover the upgrade
path for systemd *and* non-systemd users...

Thanks.


-Thomas



Re: [gentoo-dev] Re: RFC: Hosting daily gx86 squashfs images and deltas

2014-01-17 Thread Thomas D.
Hi,

Michał Górny wrote:
> Now, does anyone have an old portage-YYZZ.tar.{bz2,xz} snapshot? I
> need the official one from our mirrors, preferably 3-4 months old.






-- 
Regards,
Thomas




Re: [gentoo-dev] friendly reminder wrt net virtual in init scripts

2013-11-06 Thread Thomas D.
Hi,

Michael Orlitzky wrote:
>> If you are aware about any other know attacks, please share.
> 
> Replay attacks, mentioned in the RFC (or Google). These could be
> mitigated, but no one has bothered.

The OCSP response is signed, and the signature contains a timestamp. If
your clock is right, replay attacks are only possible within the
expected lifespan of the response. Because an OCSP response is expected
to be valid for only x hours, this is not a real problem.

But sadly there are some CAs which serve pre-generated OCSP
responses that are valid for 7 days (like their CRLs). 7 days can be
very long... :(
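
You can see that validity window yourself by querying a responder
directly (illustration; the certificate files and the responder URL are
placeholders you would take from the site's certificate chain):

  # openssl prints the signed thisUpdate/nextUpdate fields of the response,
  # i.e. the window during which a captured response could be replayed.
  openssl ocsp -issuer issuer.pem -cert site.pem \
      -url http://ocsp.example-ca.invalid -text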


> This is a long way of saying "it sends the address of every website
> you visit to a third party."

See Alex's reply. I wanted to make it clear to everyone that the
address isn't the full URL.


>> If you are still really concerned about what OCSP may do to your 
>> privacy, may I ask if you are also concerned about DNS servers? If
>> not, what's the difference between an OCSP responder which you ask
>> for a serial number, which can be resolved to a CN and a DNS server
>> which you ask for a ... CN? :)
> 
> Only two DNS servers are involved; mine and those of the domain I'm
> visiting.

Again, please see Alex's reply. Also, if you are using your *own* DNS
server, you are *special*. But most people will use the DNS server from
their ISP. And I wasn't talking about *special* people who are able to
run everything in their own trusted environment.


>> Also, you are trusting a CA to secure your connections, but you
>> don't trust the same CA due to privacy concerns?
> 
> You're conflating two things here. I trust AES to keep my connection
> safe. I don't send my data to the CA.

CAs not only issue certificates. They should also make sure that they
only issue "secure" certificates:

  - Require a secure signing algorithm
  - Require a secure key size

You could use the best algorithm available. But if the certificate's
private key is shared with others, others are able to decrypt the
captured secure traffic.

The CAB forum for example says that no CA is allowed to create the key
used for any issued customer certificate.

So when you use a pre-populated list of trusted CAs, you are also
expecting that these CAs do their job right.

If you don't expect that, you shouldn't use them.


>> If you don't trust any CA, we don't have to talk about things like
>> OCSP or CRL and revocation...
> 
> Well there we agree. Why would you trust the CAs? You don't know them
> personally and you aren't their customer.
> 
> Do you trust the governments of the USA and China? (Hint: you
> shouldn't.) If the answer is no, then you don't trust the CA system.
> So whether or not you trust them to revoke that authentication is a
> moot point.

Well, that's another discussion. As said before, we don't have to talk
about these things if you don't trust a system called a "web of trust" :)

But because most people "live" in this (broken) system (this is
reality!), do you still think it is a good idea to tell them to disable
OCSP, which actually disables an important feature (again, without OCSP
you cannot check a certificate for revocation in Firefox) and makes them
vulnerable to a new threat?


-- 
Regards,
Thomas





Re: [gentoo-dev] OCSP Was: friendly reminder wrt net virtual in init scripts

2013-11-06 Thread Thomas D.
Hi,

Duncan wrote:
> Meanwhile, another question for Thomas.  Is this "certificate stapling" 
> the same thing google chrome is now doing for the google site, that 
> enabled it to detect the (I think it was) Iranian and/or Chinese CA 
> tampering, allowing them to say a "google" cert was valid that was 
> actually their MitM cert, as appeared in the tech-news a few months ago?  
> Or was that something different?
> 
> I had interpreted (well, I think I read, but either the journalist could 
> have been mixed up too, or maybe I was misinterpreting what I read, 
> either way the effect on my understanding is the same) the "certificate 
> stapling" referred to at the time as indicating that google configured 
> the certs for their own sites into chrome as shipped itself, effectively 
> hard-coding them, NOT as google handling its own OCSP requests, as OCSP 
> cert stapling does.  So now I'm wondering if I interpreted wrong then, or 
> if there's actually two different things being referred to as certificate 
> stapling, here.

No, OCSP Stapling is something else.

I guess you are talking about HSTS and "SSL pinning" [1,2]: in Google
Chrome, they hard-coded some certificate metadata [3] which must be
present in every certificate used for any Google site.

If you connect to a Google site, for example, and this site uses a
certificate from a CA not listed in [3] (depending on the service, they
may also verify against a list of known fingerprints, similar to how EV
SSL works), the connection is terminated and the browser sends some
details to Google so they get notified.



See also:
=
[1]
http://blog.chromium.org/2011/06/new-chromium-security-features-june.html

[2] https://www.imperialviolet.org/2011/05/04/pinning.html

[3] http://www.googblogs.com/uncategorized/changes-to-our-ssl-certificates/


-- 
Regards,
Thomas






Re: [gentoo-dev] friendly reminder wrt net virtual in init scripts

2013-11-06 Thread Thomas D.
Hi,

mingdao wrote:
> Now, if any one of us turned off OCSP as Michael suggested, what should one do
> after turning it back on? Could there now be certificates trusted there which
> should not be?

Well, only your current browser session can be affected. For Firefox:

  History -> Clear Recent History -> Details

In the dialog, just check "Active logins" and click "Clear Now".

This should clear any existing SSL state cache.


For Chrome it is a bit harder, because Chrome doesn't offer such an
option AFAIK (see [1]). Also, it depends on the SSL backend you are using.


PS: To enable OCSP in Chrome, go to chrome://settings/advanced

  Security
Manage Certificates...
  Check for server certificate revocation

It is disabled by default, due to "performance concerns" :(



See also:
=
[1] http://code.google.com/p/chromium/issues/detail?id=90454


-- 
Regards,
Thomas






Re: [gentoo-dev] friendly reminder wrt net virtual in init scripts

2013-11-06 Thread Thomas D.
Hi,

Michael Orlitzky wrote:
> You should disable OCSP anyway. In Firefox, it's under,
> 
>   Edit -> Preferences -> Advanced -> Encryption -> Validation
> 
> The OCSP protocol is itself is vulnerable to MITM attacks, which is cute
> when you consider its purpose.
> 
> Moreover, it sends the address of every website you visit to a third
> party, which is the real reason to disable it IMO.

This is going off-topic, but I cannot leave this statement uncommented,
because to my knowledge it is wrong, or at least hides important
information everyone should know about:

First, if you tell people they should disable OCSP, you should also tell
them the consequences: when you disable OCSP in Firefox, there is *no*
other way to know whether a certificate was revoked or not. This is
because Firefox *never* downloaded any CRLs; furthermore, they removed
the possibility to do so [1,2].

If you don't have the possibility to check a certificate for revocation,
the whole trust system cannot work because there is no way to tell
someone "Yes, it is nice that you trust me (=you trust the CA) and I
said you can trust this certificate (=the CA you trust has signed the
certificate in question) but now I changed my mind (=the CA has revoked
the certificate) so please don't trust this certificate anymore." Please
read "Would you knowingly trust an irrevocable SSL certificate?" [3].
And yes, this is a *real* problem, see [7].


Yes, there is a known MITM attack against OCSP, see [4]. But it is only
possible due to bad default settings: just change your OCSP setting
to *require* a valid answer. In Firefox:

  Edit -> Preferences -> Advanced -> Certificates -> Validation

Make sure

  When an OCSP server connection fails, treat the certificate
  as invalid

is checked (or you can just set "security.OCSP.require" to "TRUE").

If you are aware of any other known attacks, please share.


Regarding your privacy concerns:
No, your OCSP-enabled browser won't share the address (URL) with the
OCSP responder. Your browser uses the site's certificate serial
number to ask the OCSP responder whether the certificate is still valid.
Yes, the company running the OCSP responder is able to log "You [IP,
UA...] requested status for certificates with the serial numbers 0x1,
0x2, 0x3", and because the OCSP responder needs some basic knowledge
about the certificates it provides answers for, the operator may
know that the certificate with the serial number 0x1 has the Common Name
(CN) "www.mysecretsite.invalid", that 0x2 was issued for
"www.mydarksecrets.invalid" or that 0x3 was for "www.facebook.com", but
the operator doesn't know the URL you visited.


I'm not saying OCSP is perfect. For example, an OCSP check delays the
initial SSL handshake, because your browser has to contact the OCSP
responder once it has received the certificate from the server you are
connecting to. Depending on your connection and the OCSP responder, this
may take some time [5].

But the CRL system doesn't work anymore (and never worked in
Firefox, unless you manually added all the CRL distribution points for
your CA and sub-CAs...), because VeriSign and other big SSL companies
provide >20 MB CRLs. Imagine visiting a website on your phone over some
kind of mobile connection and having to fetch 50+ MB of CRLs before the
website opens...

Google, for example, decided some time ago to disable CRL checks too.
They download CRLs for you and are planning to ship these centralized
CRLs with normal updates. See [6].

They are improving OCSP. The next big thing is OCSP stapling [8,9],
which is now supported by all major browsers, and patches are available
for most web servers.
OCSP stapling was developed to save the extra round trip to the OCSP
responder, but OCSP stapling-enabled websites also "increase" your
privacy, because you no longer have to tell the OCSP responder which
certificate (CN) you want to check.
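
You can check whether a server already staples OCSP responses with
openssl's s_client (a sketch; www.example.org is a placeholder host):

  # -status asks the server to staple its OCSP response into the handshake;
  # if it does, the output contains "OCSP Response Status: successful",
  # otherwise "OCSP response: no response sent".
  openssl s_client -connect www.example.org:443 -status < /dev/null 2>/dev/null \
      | grep -i -A 3 'OCSP response'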


If you are still really concerned about what OCSP may do to your
privacy, may I ask whether you are also concerned about DNS servers? If
not, what's the difference between an OCSP responder, which you ask
about a serial number that can be resolved to a CN, and a DNS server,
which you ask about a ... CN? :)
Also, you trust a CA to secure your connections, but you don't
trust the same CA due to privacy concerns?


So please, don't just tell people to turn off OCSP. Tell them why you
think they should do that, but also tell them about the new risks
they will have to deal with, so that they can decide on their own
whether they want to disable OCSP or not.

PS: As long as you trust CAs and don't manage the trust of every
certificate you use on your own, I recommend enabling OCSP in all
your browsers and treating any kind of invalid OCSP response as a hard
failure, because I want to know whether I can trust the certificate
used to secure my communication or not.

If you don't trust any CA, we don't have to talk about things lik